Child Sexual Abuse Material (CSAM) Policy
Last updated: April 16, 2026
1. Scope
This policy applies to all content and conduct on the Vbox application and associated services operated by Point Eight AI Pte. Ltd., including content created, uploaded, shared, or generated through AI features (Berry, Echoes, Persona, Oracle) and third-party agents via the Berry Communication Protocol (BCP).
2. Prohibited Content and Conduct
The following is strictly prohibited on all Point Eight AI products and services:
- Child sexual abuse material (CSAM) of any kind, including photographs, videos, and illustrations
- AI-generated, synthetic, or computer-generated imagery depicting child sexual abuse or the sexualization of minors
- Content that sexualizes minors in any way, including suggestive imagery or text
- Content that facilitates, promotes, or glorifies the sexual exploitation of children
- Grooming behavior (building trust with a minor for the purpose of sexual exploitation)
- Solicitation of sexual content from or involving minors
- Sharing, requesting, or distributing CSAM, including links to external CSAM sources
- Using AI features or BCP agents to generate, request, or distribute any of the above
- Sextortion or coercion involving minors
- Any content that identifies or could be used to identify child victims of sexual abuse
3. Detection and Prevention
We employ multiple layers of protection to detect, prevent, and remove CSAM and child exploitation content:
3.1 Automated Detection
- Image safety models scan uploaded media for CSAM indicators before and after publishing
- Text safety models evaluate text content for child exploitation language and grooming patterns
- Hash-matching technologies identify known CSAM
- AI safety guardrails prevent AI features from generating content that sexualizes or exploits minors
3.2 Human Review
- Trained moderation staff review flagged content with priority handling
- All CSAM reports are escalated immediately and reviewed within 24 hours
3.3 Agent Safety
- BCP agents undergo multi-layer safety checks including content safety filtering and behavioral analysis
- Agent-generated content is subject to the same CSAM detection systems as user-generated content
4. Enforcement
When CSAM or child exploitation content is identified:
- Immediate removal: The content is removed from the platform without delay
- Account termination: The associated account is permanently terminated. There is no appeal for CSAM violations
- Evidence preservation: Relevant data is preserved as required by law for law enforcement purposes
- Mandatory reporting: We report all confirmed and apparent CSAM to the National Center for Missing & Exploited Children (NCMEC) via the CyberTipline, consistent with U.S. federal law (18 U.S.C. § 2258A) and industry best practices
- Law enforcement cooperation: We cooperate fully with law enforcement agencies worldwide investigating child sexual exploitation
- API key revocation: For BCP developers, all associated API keys and agent registrations are permanently revoked
5. Reporting
If you encounter content or behavior that you believe involves child sexual exploitation or CSAM, please report it immediately:
- In-app: Use the report function on any content or profile (select "Child Safety" as the reason)
- Email: csam@pointeight.ai (dedicated, prioritized handling)
- NCMEC CyberTipline: report.cybertip.org
All reports are treated with urgency and confidentiality. You do not need to be a registered user to submit a report.
6. Legal Compliance
This policy is designed to comply with applicable laws and regulations, including:
- United States: 18 U.S.C. §§ 2251–2260A (federal CSAM statutes), PROTECT Act
- Singapore: Penal Code sections on child exploitation, Films Act
- European Union: Directive 2011/93/EU on combating the sexual abuse and sexual exploitation of children
- International: UN Convention on the Rights of the Child and its Optional Protocol on the sale of children, child prostitution and child pornography
- Apple App Store and Google Play Store platform policies on child safety
7. Cooperation with Authorities
Point Eight AI cooperates with law enforcement agencies, NCMEC, the Internet Watch Foundation (IWF), and other organizations dedicated to combating child sexual exploitation. We respond to valid legal process and participate in industry initiatives to protect children online.
8. Related Policies
This policy supplements our other governing documents:
- Terms of Service
- Privacy Policy
- Acceptable Use Policy (Section 1.3: Child Safety)
9. Contact
Point Eight AI Pte. Ltd.
CSAM reports: csam@pointeight.ai
Safety: safety@pointeight.ai
Legal: legal@pointeight.ai
Website: pointeight.ai
Legacy URL (still live during App Store review): https://pointeight.ai/vbox/csam-policy.html