Platform: ZOOP, operated by II Corporation Ltd
Effective date: 28 January 2026
Last updated: 26 January 2026
This Governance, Safety & Moderation Policy (the “Policy”) explains how ZOOP works to keep the Platform safe for users, how to report content and behaviour, how we make and explain moderation decisions, and how users can challenge those decisions through internal complaints and other available redress mechanisms.
This Policy should be read together with our Terms of Service, Community Guidelines, Privacy Policy, and Child Safety / Child Abuse Policy (together, the “Platform Policies”).
1. Purpose and scope
1.1 Purpose
ZOOP aims to:
- promote a safe environment and reduce harm;
- enforce the Platform Policies consistently and proportionately;
- provide accessible reporting and complaint mechanisms; and
- comply with applicable laws, including (where relevant) the EU Digital Services Act (DSA).
1.2 Scope
This Policy applies to:
- all users of the Platform (whether registered or not, where relevant);
- all content and interactions made available through the Platform; and
- reports submitted by users and by third parties (including non-users), where the report concerns content on the Platform.
2. Definitions
For the purposes of this Policy:
- “Content” means any information made available through the Platform, including text, images, video, audio, live streams, comments, messages, profiles, usernames, links, and metadata.
- “Illegal content” means content that is illegal under applicable law (including relevant EU and Member State laws where applicable).
- “Policy-violating content” means content that breaches the Platform Policies, even if not illegal.
- “Restriction” means any moderation action affecting Content or an account, including removal, disabling access, restricting visibility, demonetisation, limiting features, suspending or terminating an account, or other measures.
- “Notice” means a report submitted via the reporting channels described in Section 6 (including notices of illegal content, consistent with DSA notice-and-action mechanisms where applicable).
- “Complaint” / “Appeal” means a request for internal review of a Restriction decision under Section 10 (consistent with DSA internal complaint-handling requirements where applicable).
3. Our moderation principles
We moderate Content and enforce our rules guided by these principles:
- Safety and harm reduction: protecting users from abuse, exploitation, and serious harm.
- Consistency and proportionality: taking action that is appropriate to severity, context, intent, and prior conduct.
- Transparency: providing clear information about our rules and meaningful explanations for moderation decisions.
- Fairness and non-arbitrariness: handling reports and complaints diligently, objectively and without discrimination (including during internal complaint-handling).
- Respect for rights: balancing safety with freedom of expression and other fundamental rights, recognising that not all harmful content is illegal and that context matters.
4. What we enforce
4.1 Core rule sources
Users must comply with:
- the Terms of Service;
- the Community Guidelines;
- the Child Safety / Child Abuse Policy (where relevant); and
- applicable laws.
4.2 Examples of common violation categories
Depending on the definitions in the Community Guidelines and Child Safety / Child Abuse Policy, moderation commonly addresses:
- harassment, bullying, and threats;
- hate speech;
- non-consensual intimate imagery;
- impersonation and fraud/scams;
- spam and manipulation (including gaming engagement and rewards systems);
- illegal goods/services;
- sexual exploitation and child sexual abuse material (CSAM);
- content encouraging self-harm or presenting imminent risk of serious harm.
(The Community Guidelines remain the authoritative source for category definitions.)
5. Enforcement measures
If we determine that Content or behaviour is illegal or violates the Platform Policies, we may take one or more of the following measures:
5.1 Content measures
- remove Content or disable access to it;
- restrict visibility (e.g., downranking, limiting recommendations, age-gating where relevant);
- apply warnings, labels, or interstitials (where appropriate and supported by product capabilities).
5.2 Account and feature measures
- limit account features (posting, messaging, commenting, livestreaming, monetisation features, etc.);
- issue warnings or strikes;
- temporarily suspend an account;
- permanently terminate an account (including for severe violations or repeated breaches).
5.3 Factors we consider
We consider, where relevant:
- severity and nature of harm (including imminence);
- whether the issue is alleged illegality, a breach of the Platform Policies, or both;
- context, intent, and whether the content is targeted;
- whether the user has prior violations;
- credible risk indicators (e.g., coordinated harassment, credible threats).
6. Reporting and notice-and-action
6.1 No in-app reporting tools (current platform capability)
At launch, ZOOP may not offer in-app reporting buttons. Reporting is therefore provided through the external online channels described below, which are designed to be accessible, user-friendly, and available by electronic means.
6.2 How to report (electronic channels)
Users and third parties can submit reports using our Help/Support reporting forms:
- Report Illegal Content (Notice & Action): [insert link]
- Report Abuse / Harassment / Hate Speech: [insert link]
- Report Child Safety / CSAM concerns: [insert link]
- Report Impersonation / Fraud / Scams / Spam: [insert link]
- Other Safety Reports: [insert link]
Where appropriate, we may also provide a reporting email address:
- Reporting email (optional): [insert email / link]
These mechanisms are intended to be easy to access and available by electronic means.
6.3 Information a report should include
To help us assess and act efficiently, a report should include (where possible):
- a direct link/URL, unique ID, screenshot, or other identifier of the Content;
- the account/profile identifier (if relevant);
- the reason for the report (e.g., category and explanation);
- any context that may not be obvious from the Content itself (e.g., pattern of harassment);
- if the report concerns alleged illegal content, why the reporter believes it is illegal.
6.4 What happens after you report
We process reports diligently and, where feasible and legally permitted:
- acknowledge receipt; and
- inform the reporter of the outcome, subject to privacy, security and legal limitations.
6.5 Prioritisation and urgent cases
We generally prioritise:
- credible threats and imminent risk of serious harm (violence, self-harm, terrorism);
- child safety concerns (including suspected exploitation);
- targeted harassment/hate speech;
- repeat offenders, coordinated abuse, or high-volume manipulation/spam.
Lower-priority reports are reviewed as capacity allows, but we apply the same rules consistently.
7. Child protection
7.1 Age restriction
The Platform is intended exclusively for users aged 18 years or older, as expressly set out in the Terms of Service.
Any account, activity, or content reasonably suspected to involve an underage user may be subject to immediate review and enforcement action, including suspension or termination, in accordance with the Terms of Service and applicable law.
7.2 Child safety and abuse matters
All matters relating to the protection of minors, including but not limited to child sexual exploitation, child sexual abuse material (CSAM), grooming, solicitation, or any other child safety risks, are governed exclusively by the Child Safety / Child Abuse Policy, which forms an integral part of the Platform Policies.
This Policy does not set out substantive child protection rules and does not replace or duplicate the measures, definitions, escalation procedures, or legal obligations established under the Child Safety / Child Abuse Policy.
7.3 Reporting and escalation
Any notices, reports, or concerns relating to child safety or potential violations involving minors must be submitted through the dedicated child safety reporting channels referenced in this Policy and will be handled in strict accordance with the Child Safety / Child Abuse Policy and applicable law.
Where legally required, ZOOP may escalate relevant matters to competent authorities and cooperate with lawful requests, in line with its legal obligations.
8. How moderation decisions are made
8.1 Human-led review
Because ZOOP is a new platform, moderation decisions are primarily made by trained human reviewers.
8.2 Limited automation for triage (if used)
If we use automated tooling (e.g., basic spam/URL filtering), it is used for triage and workflow support. Automated tools are not used to make final enforcement decisions without human oversight.
8.3 Quality, consistency and reviewer safeguards
We aim to:
- train reviewers on Platform Policies and common edge cases;
- apply proportionality and context-based assessment;
- use escalation paths for complex or high-impact decisions; and
- apply “second look” review where appropriate (e.g., appeals, borderline cases, high-severity actions).
9. Statements of reasons and user notification
9.1 Notice to affected users
When we impose a Restriction on a user’s Content or account, we will provide a statement of reasons where required and legally permitted, explaining:
- what decision we took (e.g., removal, visibility restriction, suspension);
- whether the basis was alleged illegality or incompatibility with our rules;
- the key facts/circumstances underpinning the decision;
- the rule(s) relied on; and
- available redress options (appeal/complaint mechanisms).
9.2 Exceptions
We may limit details in the statement of reasons where necessary to:
- comply with legal obligations;
- prevent abuse of our enforcement systems;
- protect security, victims, reporters, or the integrity of investigations; or
- comply with privacy and data protection laws.
10. Internal complaints and appeals (review of decisions)
10.1 Right to complain/appeal
Where applicable, users may submit a complaint/appeal against certain moderation decisions affecting them, including:
- removal or disabling access to Content;
- visibility restrictions (where communicated as an enforcement measure);
- suspension or termination;
- other significant feature restrictions.
We will make the complaint/appeal mechanism available for at least six months following the decision where required by law.
10.2 How to appeal
Appeals can be submitted via:
- Appeal / Complaint (by email): legal@zoop.com
The appeal should include:
- the decision notice reference (or sufficient details to identify it);
- why the user believes the decision was incorrect; and
- any relevant context or evidence.
10.3 How we handle appeals
We aim to handle appeals:
- in a timely, diligent, and non-arbitrary manner; and
- with reviewer separation where feasible (i.e., a reviewer other than the person who made the original decision).
10.4 Possible outcomes
After review, we may:
- uphold the decision;
- reverse the decision (e.g., reinstate content/account); or
- modify the decision (e.g., reduce restriction scope/duration).
We will notify the user of the outcome, subject to legal constraints.
11. Out-of-court dispute settlement and other redress
Where applicable under the DSA, users may be entitled to refer certain disputes about moderation decisions to certified out-of-court dispute settlement bodies. Information about this option and certified bodies is made available by the European Commission and/or relevant Digital Services Coordinators.
This does not affect a user’s right to seek judicial remedies.
12. Misuse of reporting and complaint channels
To protect the integrity of our safety systems:
- submitting knowingly false, abusive, or bad-faith reports or complaints may lead to warnings and/or restrictions on access to reporting/complaint features; and
- repeated misuse may contribute to account enforcement decisions under the Terms of Service and Community Guidelines.
We may also disregard bulk or automated submissions that are clearly abusive or intended to disrupt Platform operations.
13. Cooperation with authorities and legal requests
ZOOP may cooperate with competent authorities where required by law, including responding to lawful orders and requests. Any disclosure of information is handled in accordance with applicable law, including data protection requirements, and subject to necessity and proportionality.
(Details on government requests, where published, may be included in transparency reporting.)
14. Records, logging and retention (moderation operations)
We maintain operational records to:
- assess reports and appeals;
- support consistency and quality assurance;
- comply with applicable legal obligations (including DSA transparency obligations where relevant); and
- protect users and the Platform from abuse.
Such records may include report metadata, decision outcomes, timestamps, and rationale summaries. Access is restricted to authorised personnel on a need-to-know basis.
Where DSA obligations apply, we may be required to submit certain decision information (e.g., statement-of-reasons data) to the relevant transparency systems/database.
15. Transparency reporting
Where required and appropriate for our service category and size, we publish periodic transparency information about moderation, which may include:
- number of reports received (by category);
- number and types of enforcement actions taken;
- average processing time ranges (where meaningful);
- number of appeals and outcomes.
(Transparency reporting obligations and formats vary by provider category under the DSA.)
16. Data protection
Processing of personal data in connection with reporting, moderation, and appeals is described in our Privacy Policy. We aim to collect and use only what is necessary for safety, legal compliance, and handling reports/appeals.
17. Contact and EU representative
17.1 Policy contact
Questions about this Policy may be submitted via:
- Support / Contact: support@zoop.com
- Legal: legal@zoop.com
17.2 DSA contact / EU representative
Digital Services Act (DSA) – Legal Representative
Pursuant to Article 13 of the DSA, II Corporation Ltd has appointed EDSR as its legal representative. You can contact EDSR regarding matters pertaining to the DSA:
- by email at dsa@edsr.eu
- by writing to EDSR at Avenue Huart Hamoir 71, 1030 Brussels, Belgium
- by phone at +32 2 216 19 71
18. Changes to this Policy
We may update this Policy to reflect legal, operational, or safety changes. Where changes are material, we will provide reasonable advance notice (e.g., via the Policies page, in-app notice if available, or email where appropriate). Continued use of the Platform after the effective date of an update means the updated Policy applies.