ZOOP Community Guidelines

Effective Date: February 26, 2026 – Version 1.0

ZOOP, owned by ii Corporation, is a communication platform that provides infrastructure for expression, creativity, and community‑building. We believe that:

  • Communities should have meaningful control over what is acceptable in their own spaces.
  • The platform should apply only the minimum baseline rules necessary to comply with the law, protect users (especially minors), and maintain the integrity of the service.
  • Beyond that baseline, a technology platform should not censor lawful content or act as an arbiter of truth or acceptable opinion.

As a platform, ZOOP applies a narrow, clearly defined baseline of rules. Above that minimum standard, channels and users control their own content choices.

This Policy explains how we apply these principles in a way that remains consistent with applicable laws, including the EU Digital Services Act (DSA), child‑protection rules, and online safety standards. It should be read together with our Terms of Service, Privacy Policy, and Governance, Safety & Moderation Policy.

In this Policy, user means any account on ZOOP, including individual creators, fans, and business accounts. All users are subject to the same baseline rules and community standards.

1. Our Content Governance Model

1.1 Layer 1 – Platform Baseline (Minimum Standard)

ZOOP intervenes at platform level where content:

  • Is illegal in the relevant jurisdiction(s), including for example:
    • Child sexual abuse material (CSAM), sexualised content involving minors, or exploitation of children (removed immediately and reported to competent authorities).
    • Terrorism or operational support for terrorism, as defined by applicable law.
    • Fraud, phishing, financial scams, and impersonation for financial gain.
    • Distribution of malware, malicious links, or other technical threats.
    • Copyright or trademark infringement, following valid notice and with counter‑notice mechanisms.
    • Non‑consensual intimate imagery (including “revenge porn” and deepfakes of intimate content without consent).
  • Creates an imminent, credible, and severe risk of physical harm, such as concrete, actionable threats against identifiable persons or detailed coordination of violent acts.
  • Compromises the technical integrity of the platform, including large‑scale spam, coordinated fake accounts, artificial manipulation of metrics, or abusive automation.

Content relating to self‑harm and suicide is handled with special care:

  • We remove content that directly encourages self‑harm or provides specific methods in a way intended to promote imitation, particularly where minors are involved.
  • We allow, with appropriate warnings and filters, personal narratives, help‑seeking, and responsible mental‑health discussion, recognising that open conversation can be protective rather than harmful.

Branded content and commercial communications

Branded content and commercial communications must carry clear and prominent disclosure of the underlying commercial relationship. This includes content created in exchange for monetary payment, free products, services, travel, discount codes, or any other material benefit from a third party, and it applies equally to content promoting non‑profit organisations or charitable causes where any material benefit has been received. Disclosure must be visible without any additional user action, appear in the same language as the content, and use unambiguous terms such as “Ad”, “Paid Partnership with [Brand]”, or “Sponsored”. Hashtags such as #collab, #gifted, or #ambassador used alone do not constitute adequate disclosure. Users are solely responsible for compliance with applicable advertising and consumer‑protection law, including the EU Digital Services Act and any national rules governing influencer marketing in their jurisdiction.
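For illustration only, the following Python sketch shows how a caption could be checked against the unambiguous disclosure terms listed above; the helper name and patterns are assumptions, not part of ZOOP’s enforcement tooling:

    import re

    # Illustrative check: does a caption contain one of the unambiguous
    # disclosure terms named in this Policy?
    ADEQUATE_DISCLOSURE_PATTERNS = [
        r"\bAd\b",                      # "Ad"
        r"\bPaid Partnership with .+",  # "Paid Partnership with [Brand]"
        r"\bSponsored\b",               # "Sponsored"
    ]

    def has_adequate_disclosure(caption: str) -> bool:
        """Return True if the caption contains an unambiguous disclosure term."""
        return any(re.search(p, caption) for p in ADEQUATE_DISCLOSURE_PATTERNS)

    # Hashtags used alone are not adequate under this Policy:
    assert not has_adequate_disclosure("New drop! #gifted #collab")
    assert has_adequate_disclosure("Paid Partnership with ExampleBrand")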

AI-generated and AI-edited content

Content that has been fully generated by an artificial intelligence (AI) tool, or materially edited using an AI tool, must be clearly labelled so that users can identify its origin. Specifically:

  • Content that is wholly or substantially produced by an AI system must be labelled “AI Content” in a manner that is visible without any additional user action and in the same language as the content.
  • Content that has been materially altered, enhanced, or edited using AI tools (including but not limited to AI image editing, AI voice cloning, AI video synthesis, and AI writing assistants that make substantive changes) must be labelled “AI Edited Content” in a manner that is visible without any additional user action and in the same language as the content.
  • The label must appear prominently alongside the content (for example, at the start of a post, caption, or video overlay) and must not be hidden, minimised, or presented in a way that requires additional steps to view.
  • Minor AI-assisted functions that do not materially alter the substance of the content (such as spell-checking, grammar correction, or auto-captioning) are excluded from this labelling requirement.
  • Users are solely responsible for ensuring that AI content labels are accurate and applied in compliance with this Policy and any applicable laws, including applicable EU and national rules on AI-generated content and transparency.
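For illustration, the following Python sketch maps the categories above to the required label; the enum and function names are hypothetical and do not describe any actual ZOOP system:

    from enum import Enum

    class AiUsage(Enum):
        NONE = "none"                  # no AI involvement
        MINOR_ASSIST = "minor"         # spell-checking, grammar, auto-captioning
        MATERIAL_EDIT = "edited"       # e.g. AI image editing or voice cloning
        FULLY_GENERATED = "generated"  # wholly or substantially AI-produced

    def required_label(usage: AiUsage) -> str | None:
        """Return the label this Policy requires, or None if no label is needed."""
        if usage is AiUsage.FULLY_GENERATED:
            return "AI Content"
        if usage is AiUsage.MATERIAL_EDIT:
            return "AI Edited Content"
        return None  # minor assistance and unaided content carry no label

    assert required_label(AiUsage.FULLY_GENERATED) == "AI Content"
    assert required_label(AiUsage.MINOR_ASSIST) is None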

We also carry out risk assessments and apply proportionate mitigation measures where required (for example, for systemic risks affecting minors, public health, or democratic processes), relying primarily on product design, user controls, and transparency rather than broad removal of lawful opinions.

ZOOP is not available to users in certain jurisdictions due to legal, regulatory, and sanctions compliance requirements. Users from the following countries will not be permitted to open accounts or execute transactions on the ZOOP Platform: Afghanistan, Belarus, Burundi, Central African Republic, Crimea (Ukraine), Cuba, Democratic People’s Republic of Korea, Democratic Republic of the Congo, Donetsk (Ukraine), Eritrea, Ethiopia, Guinea-Bissau, Haiti, Iran, Iraq, Kherson (Ukraine), Lebanon, Libya, Luhansk (Ukraine), Mali, Myanmar, Nicaragua, Russia, Somalia, South Sudan, Sudan, Venezuela, Yemen, Zaporizhzhia (Ukraine), and Zimbabwe.
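A minimal Python sketch of such a denylist check, using the list above, might look as follows (the helper name is hypothetical and this is not ZOOP’s production logic):

    # Jurisdiction denylist check of the kind described above,
    # applied at account opening.
    RESTRICTED_JURISDICTIONS = {
        "Afghanistan", "Belarus", "Burundi", "Central African Republic",
        "Crimea (Ukraine)", "Cuba", "Democratic People's Republic of Korea",
        "Democratic Republic of the Congo", "Donetsk (Ukraine)", "Eritrea",
        "Ethiopia", "Guinea-Bissau", "Haiti", "Iran", "Iraq",
        "Kherson (Ukraine)", "Lebanon", "Libya", "Luhansk (Ukraine)", "Mali",
        "Myanmar", "Nicaragua", "Russia", "Somalia", "South Sudan", "Sudan",
        "Venezuela", "Yemen", "Zaporizhzhia (Ukraine)", "Zimbabwe",
    }

    def may_open_account(jurisdiction: str) -> bool:
        """Return False where accounts and transactions are not permitted."""
        return jurisdiction not in RESTRICTED_JURISDICTIONS

    assert not may_open_account("Belarus")
    assert may_open_account("Portugal")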

1.2 Layer 2 – Community Standards (Channels and Creators)

Above the platform baseline:

  • Each channel or community may define and apply its own rules for what is acceptable in that space, provided those rules comply with the law and this Policy. These rules are expressed through the choices its owners and admins make about what to publish, whether to allow comments on particular posts, and which users to block or report.
  • Channel owners and admins may moderate their own content and the interactions around it, including deciding whether specific posts accept comments, using reporting tools for posts and comments that may violate this Policy, and using blocking to prevent unwanted direct messages from particular users.
  • Users decide which communities to join or leave in light of these local rules.

ZOOP may still intervene above community standards where necessary to enforce the platform baseline or to comply with legal or safety obligations.

1.3 Layer 3 – Individual Choice (User Control)

Every user has tools to shape their own experience, including:

  • Following or unfollowing channels and creators.
  • Viewing content from followed channels and creators in chronological order.
  • Muting topics, keywords, or channels.
  • Blocking other users and limiting unwanted interactions.

Blocking currently prevents delivery of direct messages from the blocked user but does not stop them from viewing or commenting on your content.

Reporting a user or their content may hide their posts or comments for you, but it does not automatically remove them for everyone.

Users primarily shape their experience by choosing which channels and creators to follow, and by using available blocking and reporting tools. ZOOP does not currently personalise feeds through algorithmic recommendation systems.

Channel‑level control focuses on what content is posted and whether comments are allowed; channels cannot currently make themselves completely undiscoverable on ZOOP.

2. What ZOOP Does Not Do at Platform Level

Without prejudice to the obligations described in Section 1, ZOOP does not:

  • Remove or restrict content solely because it is controversial, offensive, politically sensitive, unpopular, or critical of institutions, governments, companies, or public figures.
  • Maintain general lists of lawful words, topics, or names whose mere mention automatically leads to removal or blocking.
  • Classify, label, or rank content based on alignment with “official narratives” in political, historical, cultural, or scientific debates, except where a specific legal obligation requires it (for example, in relation to clearly illegal disinformation).
  • Penalise reach or monetisation solely on the basis of the lawful political or ideological orientation of the content or creator.

We do not position ourselves as censors of lawful content and we do not remove or restrict lawful expression for narrative, reputational, or political convenience.

Where law or a competent authority requires additional measures (for example, to protect minors, address specific systemic risks, or comply with court orders), we implement those measures as locally, transparently, and proportionately as possible and inform affected users where we are allowed to do so.

3. Algorithms, Recommendations and User Choice

At present, ZOOP primarily displays content from followed channels and creators in chronological order.

If and when recommendation or discovery systems are introduced, they will be designed to operate as neutral infrastructure, focusing on:

  • Objective usage signals: recency, interactions (such as views, reactions, shares), and the relationship between a user and a channel (for example, follows and prior engagement).
  • Explicit user preferences: topics, channels, and content types a user has chosen to follow or prioritise.
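As a purely illustrative sketch, the following Python snippet combines only the objective signals named above into a ranking score; all field names and weights are assumptions rather than a committed design:

    from dataclasses import dataclass
    import time

    @dataclass
    class Post:
        posted_at: float          # Unix timestamp (recency)
        views: int                # interactions
        reactions: int
        shares: int
        author_followed: bool     # user-channel relationship
        topic_prioritised: bool   # explicit user preference

    def score(post: Post, now: float | None = None) -> float:
        now = time.time() if now is None else now
        hours_old = max((now - post.posted_at) / 3600.0, 0.0)
        recency = 1.0 / (1.0 + hours_old)  # newer ranks higher
        engagement = post.views + 3 * post.reactions + 5 * post.shares
        relationship = 2.0 if post.author_followed else 1.0
        preference = 1.5 if post.topic_prioritised else 1.0
        # No editorial or ideological signal enters the calculation.
        return recency * engagement * relationship * preference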

We do not use as ranking criteria:

  • Editorial judgements about the political, moral, or ideological “correctness” of lawful content.
  • The fact that content is controversial, sensitive, or unpopular as a negative factor in itself.
  • Informal pressure from governments, interest groups, or commercial partners that is not backed by a clear legal basis.

Where we need to adjust recommendations for legal or safety reasons (for example, to limit certain content for minors), we use defined, documented criteria and reflect those interventions in our transparency mechanisms.

Users retain meaningful control through:

  • The ability to switch to a purely chronological feed.
  • Clear options to mute topics or channels and manage interests.
  • Access to information about why certain content is being recommended, as these features are developed and rolled out.

4. Moderation, Transparency and Appeals

4.1 How Content Is Flagged

Content may come to our attention through:

  • User reports, via in‑app reporting tools as applicable.
  • Targeted automated detection, focused on high‑risk categories such as CSAM, spam, and malware, in line with legal and safety requirements.
  • Legal notices from public authorities or rights‑holders.

We do not rely on general, continuous monitoring of all conversations, nor do we delegate automatic removal powers to external actors beyond what is required by law.

4.2 Decisions and Notifications

When we assess content at platform level:

  • Where appropriate and necessary, a human moderator reviews the context and applies written rules, citing the relevant section of this Policy or the Terms of Service.
  • We choose the least restrictive effective measure (for example, warning, feature limitation, targeted removal, or suspension in serious or repeated cases).
  • Where feasible, we notify the affected user, stating what action was taken, the legal or policy basis, and how to appeal.

4.3 Right to Appeal

Users can challenge platform‑level moderation decisions by:

  • Submitting an appeal through the channels indicated in the notification.
  • Having the case reviewed by someone other than the original decision‑maker.
  • Receiving a response within a reasonable timeframe, with sufficient explanation to understand the outcome.

We monitor reversal rates, error patterns, and potential bias in decisions, and we adjust training and processes accordingly.

5. Transparency and Reporting

To meet our transparency commitments and applicable legal requirements, we will:

  • Publish periodic transparency reports with aggregate data on content removed or restricted at platform level, main categories of reasons, and appeal outcomes, in line with the DSA and other relevant laws.
  • Provide, where legally permitted, statistical information on requests from public authorities and our rate of compliance.
  • Offer each user an accessible record of significant moderation actions affecting their content or account.

6. Protection of Minors and Additional Safety Measures

Because ZOOP is also available to minors, we implement specific protections for them, such as:

  • Dedicated systems to detect and remove CSAM and other forms of child abuse, in cooperation with competent bodies.
  • Regular risk assessments for harms to minors and adjustments to features, defaults, and reporting tools in line with child‑protection and online safety legislation.

Further details are set out in our Governance, Safety & Moderation Policy and Privacy Policy.

7. Updates to This Policy

We may update this Policy to reflect changes in law, the evolution of the platform, or community governance outcomes:

  • Material changes will be communicated with reasonable advance notice through in‑app notices, email, or a dedicated page.
  • Continued use of ZOOP after the effective date of an update constitutes acceptance of the updated Policy, without prejudice to any rights users may have under applicable law.