Vivatok Child Safety & CSAE Prevention Policy
Effective Date: 20 April 2025
Last Reviewed: 25 April 2025
Applies to: Vivatok Quizzes mobile & web applications, all associated services, employees, contractors, moderators, and community volunteers.
1 Purpose
Vivatok is committed to safeguarding children and adolescents who use our products. This policy establishes clear, public‐facing standards for preventing, detecting, reporting, and removing child sexual abuse material (CSAM) and any activity that facilitates child sexual abuse or exploitation (CSAE).
We enforce zero tolerance: no CSAM may ever be created, shared, sought, or stored on Vivatok.
2 Definitions
| Term | Meaning |
|---|---|
| Child / Minor | Any individual under the age of 18. |
| CSAM | Child Sexual Abuse Material: any content that depicts or exploits a minor in sexual activities. |
| CSAE | Child Sexual Abuse & Exploitation: the broader set of behaviors, such as grooming or trafficking, that enable or encourage CSAM. |
| User Input | Limited to a user‑chosen display name and selection of predefined quiz answers; no free‑form text, images, audio, or video can be uploaded. |
| NCMEC | National Center for Missing and Exploited Children (USA); our primary reporting partner for U.S. cases. |
3 Age Requirements & Verification
- Minimum Age: Users must be at least 13 years old (or the local digital‑consent age, where higher).
- Parental Consent: Users aged 13–15 must provide explicit parental consent during registration, verified via an email confirmation sent to the parent or guardian (a minimal sketch of such a flow follows).
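For illustration only, here is a minimal sketch of an email-based consent-token flow. The function names, token format, lifetime, and signing key below are assumptions for the example, not a description of Vivatok's production system.

```python
import hashlib
import hmac
import time

# Assumed values for the sketch; a real deployment would load the key from a
# secrets manager and tune the confirmation window to its own requirements.
SECRET_KEY = b"example-signing-key-do-not-use"
TOKEN_TTL_SECONDS = 72 * 3600  # hypothetical 72-hour window to confirm


def issue_consent_token(account_id: str, parent_email: str) -> str:
    """Build a signed, expiring token to embed in the consent email link."""
    expires_at = int(time.time()) + TOKEN_TTL_SECONDS
    payload = f"{account_id}|{parent_email}|{expires_at}"
    signature = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{signature}"


def verify_consent_token(token: str) -> bool:
    """Accept the token only if its signature matches and it has not expired."""
    try:
        payload, signature = token.rsplit("|", 1)
        _account_id, _parent_email, expires_at = payload.split("|")
        expiry = int(expires_at)
    except ValueError:
        return False
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False
    return expiry > time.time()
```

In a flow like this, registration would remain gated until the link clicked by the parent passes verification.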
4 Acceptable‑Use Rules
Because Vivatok does not allow users to upload or share any media or free‑form text, there is effectively no avenue for creating or distributing CSAM within the app. Nevertheless:
- Display names must not contain sexual content involving minors or any illegal content.
- Any attempt to solicit personal contact information from minors is forbidden (e.g., embedding a phone number in a display name).
- We reserve the right to remove accounts with inappropriate names and to report suspected CSAE that comes to our attention through support channels.
5 Prevention & Moderation Pipeline
Since the app has no facility for user‑generated uploads, our prevention measures focus on the small surface that does exist (display names):
- Name Filter
- All display names are checked against a dynamic blocklist of explicit, sexual, or violent terms, including child‑sex‑related language.
- Names that violate the policy are rejected automatically; users must choose a different name (a minimal sketch of this check appears after this list).
- Post‑Launch Monitoring
- Support staff review any user‑reported inappropriate names within 24 h.
- Account Actions
- Accounts with disallowed names are renamed or suspended.
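To make the name filter described above concrete, here is a minimal sketch of a blocklist-based check. The placeholder terms, normalization rules, and function names are illustrative assumptions, not Vivatok's production filter.

```python
import re
import unicodedata

# Placeholder entries; the production blocklist is dynamic and maintained
# by the Trust & Safety team.
BLOCKED_TERMS = {"blockedterm1", "blockedterm2"}

# Per this policy, embedding contact details such as a phone number in a
# display name is forbidden.
PHONE_PATTERN = re.compile(r"\d{7,}")


def normalize(name: str) -> str:
    """Fold accents, case, and common separators so simple evasion fails."""
    folded = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    return re.sub(r"[\s\-_.]+", "", folded.lower())


def is_display_name_allowed(name: str) -> bool:
    """Reject names containing blocklisted terms or embedded phone numbers."""
    normalized = normalize(name)
    if PHONE_PATTERN.search(normalized):
        return False
    return not any(term in normalized for term in BLOCKED_TERMS)
```

Normalizing before matching ensures that trivial evasions such as spacing, punctuation, or accented characters do not bypass the check.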
6 User Reporting Tools
If you spot an inappropriate display name or any CSAE activity connected to Vivatok, on or off the platform, please:
- Email [email protected] with relevant details.
We aim to respond within 24 h.
7 Response & Enforcement
| Severity | Action | Reporting |
|---|---|---|
| Inappropriate display name (no CSAE) | Name rejected or account suspended | Not applicable |
| Suspected CSAE disclosed via support channels | Gather available information; suspend account | Report to relevant authorities within 24 h |
| Confirmed CSAM evidence received through any channel | Account banned; evidence preserved for 90 days (see Section 10) | Report to NCMEC (US) or the relevant national hotline within 24 h |
| Grooming or solicitation attempt (e.g., contact details embedded in a display name) | Account suspended or banned | Mandatory report to authorities if a minor is endangered |
We cooperate with law‑enforcement subpoenas and lawful data requests.
8 Collaboration & Reporting
- Mandatory Reports: When legally required, we submit confirmed CSAM to the relevant national hotline (for example, NCMEC in the United States) or directly to law‑enforcement agencies in the appropriate jurisdiction.
- Law‑Enforcement Cooperation: We provide prompt assistance to lawful requests, court orders, and subpoenas related to child‑safety investigations.
We do not currently publish a standalone transparency report; aggregate removal statistics may be incorporated into future updates of this policy.
9 Staff Training & Vetting
- All Trust & Safety staff undergo background checks.
- Mandatory annual training on CSAE detection, secondary trauma, and legal obligations.
- Moderators receive mental‑health resources and rotation schedules to minimize exposure.
10 Data Protection & Privacy
We collect only data needed to enforce this policy. Evidence of CSAM is encrypted at rest, access‑logged, and retained no longer than 90 days unless required for legal proceedings.
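As a minimal sketch of how the 90-day rule could be enforced, consider the following; the record fields, legal-hold flag, and function name are assumptions for the example.

```python
from datetime import datetime, timedelta, timezone

RETENTION_PERIOD = timedelta(days=90)  # retention window stated in this policy


def is_purge_due(preserved_at: datetime, legal_hold: bool) -> bool:
    """Evidence is purged after 90 days unless legal proceedings require it."""
    if legal_hold:  # retained while required for legal proceedings
        return False
    return datetime.now(timezone.utc) - preserved_at >= RETENTION_PERIOD


# Example: evidence preserved 91 days ago with no legal hold is due for purging.
record_time = datetime.now(timezone.utc) - timedelta(days=91)
assert is_purge_due(record_time, legal_hold=False)
```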
11 Policy Updates
We review this policy at least annually and whenever laws or platform rules change.
Changes take effect upon publication; a notice is displayed in‑app for 14 days.
12 Contact
For questions about this policy or to report a child‑safety issue, email [email protected]. This inbox is monitored 24/7.