Content Moderation Policy

Last Updated: December 23, 2025

1 Introduction and Purpose

At Echos, we are committed to maintaining a safe, respectful, and positive environment for all users. This Content Moderation Policy explains how we enforce the community standards outlined in our Terms of Service and protect our community from harmful content.

This policy applies to all user-generated content on Echos, including text responses, replies, reactions, and images shared within groups. It should be read in conjunction with our Terms of Service (particularly Section 6: User Conduct) and our Privacy Policy.

🎯 Our Commitment

We use a combination of automated systems and human review to identify and remove content that violates our policies, while respecting user privacy and ensuring fair treatment for all community members.

2 Content Standards

The following content is prohibited on Echos, as defined in Section 6.2 of our Terms of Service. This section provides additional detail and examples.

🔞 2.1 Sexual and Adult Content

Prohibited content includes:

  • Nudity or sexually explicit images
  • Sexual solicitation or offers of sexual services
  • Sexually suggestive content involving minors (zero tolerance)
  • Non-consensual intimate imagery
  • Graphic descriptions of sexual acts

⚠️ 2.2 Violence and Dangerous Content

Prohibited content includes:

  • Graphic violence, gore, or mutilation
  • Credible threats of violence against individuals or groups
  • Content promoting self-harm or suicide
  • Instructions for dangerous or illegal activities
  • Content glorifying or inciting violence

🚫 2.3 Hate Speech and Discrimination

Prohibited content includes:

  • Attacks on individuals or groups based on race, ethnicity, nationality, religion, gender, sexual orientation, disability, or other protected characteristics
  • Slurs, dehumanizing language, or hateful stereotypes
  • Symbols, imagery, or rhetoric associated with hate groups
  • Holocaust denial or glorification of genocide

😔 2.4 Harassment and Bullying

Prohibited content includes:

  • Targeted harassment or repeated unwanted contact
  • Bullying, intimidation, or humiliation
  • Doxxing (sharing private information without consent)
  • Coordinated attacks against individuals
  • Content designed to shame, mock, or degrade others

⚖️ 2.5 Illegal Content

Prohibited content includes:

  • Child sexual abuse material (CSAM) – zero tolerance
  • Content promoting illegal drug use or sales
  • Fraud, scams, or financial exploitation
  • Intellectual property infringement
  • Any content that violates applicable laws

🗑️ 2.6 Spam and Deceptive Content

Prohibited content includes:

  • Unsolicited commercial messages or advertising
  • Repetitive or irrelevant content designed to disrupt
  • Misleading or false information presented as fact
  • Impersonation of other users or public figures
  • Phishing attempts or malicious links

3 Prohibited Behaviors

Beyond prohibited content, the following behaviors violate our policies as outlined in Section 6.3 of our Terms of Service:

👀 3.1 Identity and Authenticity

  • Creating fake accounts or impersonating others
  • Operating multiple accounts
  • Providing false information during registration
  • Misrepresenting your identity or affiliations

🔧 3.2 Platform Integrity

  • Using bots, scrapers, or automated tools
  • Attempting to bypass security measures or access controls
  • Interfering with the service or other users' access
  • Exploiting bugs or vulnerabilities

📊 3.3 Data and Privacy

  • Collecting or harvesting user data without consent
  • Sharing content from private groups outside the group
  • Recording or screenshotting content to harass or embarrass others
  • Violating other users' privacy

🚨 3.4 Abuse of Features

  • False or malicious reporting of content or users
  • Coordinated abuse of reporting or blocking features
  • Using group features to harass or exclude users
  • Evading enforcement actions (e.g., creating new accounts after a ban)

4 Automated Content Review

To maintain a safe environment, all content shared on Echos is subject to automated review. Content is visible immediately after posting while our systems process it in the background. The vast majority of policy violations are detected and removed within seconds to minutes. We are committed to reviewing all content within 24 hours of posting.

💬 4.1 Text Content Review

All text posts, including responses and replies, are analyzed using AI-powered moderation to verify compliance with our Terms of Service. This review checks for the following (an illustrative sketch of such a check appears after this list):

  • Hate speech, slurs, and discriminatory language
  • Harassment, bullying, and threatening language
  • Sexual or explicit content
  • Spam and promotional content
  • Other policy violations
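
The sketch below is illustrative only: the category names mirror the list above, and classify_text is a hypothetical stand-in for whatever moderation model or API actually performs the analysis; the 0.8 threshold is likewise an assumption, not a documented setting.

```python
# Illustrative only: `classify_text` is a hypothetical stand-in for the real
# moderation model or API; the category names and 0.8 threshold are assumptions.
from dataclasses import dataclass, field

PROHIBITED_CATEGORIES = ["hate_speech", "harassment", "sexual_content", "spam"]


@dataclass
class TextReview:
    flagged: bool
    categories: list[str] = field(default_factory=list)


def classify_text(text: str) -> dict[str, float]:
    """Hypothetical classifier returning a 0.0-1.0 score per category."""
    raise NotImplementedError("replace with a real moderation model or API")


def review_text(text: str, threshold: float = 0.8) -> TextReview:
    scores = classify_text(text)
    hits = [c for c in PROHIBITED_CATEGORIES if scores.get(c, 0.0) >= threshold]
    # Content that fails the check is flagged, hidden, and queued for human
    # review (Section 4.3); everything else remains visible.
    return TextReview(flagged=bool(hits), categories=hits)
```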

📷 4.2 Image Content Review

All uploaded images are analyzed using image recognition technology to detect:

  • Child sexual abuse material (CSAM)
  • Nudity and sexually explicit content
  • Violence, gore, and graphic content
  • Hate symbols and extremist imagery

🚩 4.3 Automated Flagging

Content that fails automated review is immediately flagged and hidden from all users. Flagged content is queued for human review to determine appropriate action.

Our automated systems are designed to err on the side of caution. If your content is incorrectly flagged, you may request a review through our appeals process.

5 User Reporting

We rely on our community to help identify content that violates our policies. Users can report any content or behavior they believe is inappropriate.

📝 5.1 How to Report

To report content, tap the options menu (•••) on any post or message and select "Report." You will be asked to select a reason for the report. You may optionally provide additional details.

🔒 5.2 Reporter Anonymity

All reports are anonymous. The user whose content you report will never be told who reported them. We take reporter privacy seriously and do not disclose reporter identities.

⚑ 5.3 What Happens After You Report

  • The reported content is immediately hidden from your view
  • The report is prioritized for human review
  • Our Trust & Safety team reviews the content within 24 hours
  • Appropriate action is taken based on our enforcement guidelines

⚠️ 5.4 False Reports

Submitting false or malicious reports is a violation of our policies and may result in enforcement action against your account, including suspension or termination.

6 Flagging Thresholds

We use a tiered system to balance responsive content moderation with protection against abuse of the reporting system.

1️⃣ 6.1 Single Flag

When a user flags content, that content is immediately hidden from the reporting user only. The content remains visible to other group members pending further review or additional flags.

3️⃣ 6.2 Multiple Flags (3+)

When content receives 3 or more flags from different users within the same group, the content is automatically hidden from all group members. The content is then prioritized for human review.
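
The threshold behaviour in 6.1 and 6.2 can be sketched roughly as follows; the data structures and names are illustrative, not a description of Echos internals:

```python
# Illustrative sketch of the flag thresholds in 6.1 and 6.2; not Echos internals.
FLAG_THRESHOLD = 3  # flags from distinct users needed to hide content group-wide


class FlaggedContent:
    def __init__(self, content_id: str) -> None:
        self.content_id = content_id
        self.flagged_by: set[str] = set()   # distinct reporting users
        self.hidden_for_all = False
        self.queued_for_review = False

    def add_flag(self, reporter_id: str) -> None:
        self.flagged_by.add(reporter_id)
        if len(self.flagged_by) >= FLAG_THRESHOLD:
            self.hidden_for_all = True       # hidden from every group member
            self.queued_for_review = True    # prioritized for human review

    def visible_to(self, viewer_id: str) -> bool:
        if self.hidden_for_all:
            return False
        # A single flag hides the content from the reporting user only.
        return viewer_id not in self.flagged_by
```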

🚨 6.3 Priority Escalation

User reports receive higher priority than automated flags:

  • Content reported by a user (rather than auto-flagged only) is escalated for faster review
  • Content flagged by multiple users receives the highest priority and immediate action
  • Content related to child safety or violence is escalated and reviewed immediately

7 Blocking

Users can block other users to prevent unwanted interactions. Blocking is a personal safety tool available to all users.

🛑 7.1 How Blocking Works

When you block another user:

  • You will no longer see their content in any shared groups
  • They will no longer see your content in any shared groups
  • Neither party is notified of the block
  • The block remains in effect until you choose to unblock them

🔄 7.2 Unblocking

You can unblock a user at any time through your account settings. Once unblocked, you will resume seeing each other's content in shared groups.

⚠️ 7.3 Block Threshold Review

If a user is blocked by 3 or more different users, their account is automatically flagged for platform review. Our Trust & Safety team will review the account and may take action including warnings, restrictions, or account termination.
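
A minimal sketch of how 7.1 and 7.3 fit together, assuming a simple in-memory record of blocks (all names are illustrative):

```python
# Illustrative sketch of block visibility (7.1) and the review threshold (7.3).
BLOCK_REVIEW_THRESHOLD = 3

# blocker user id -> set of user ids they have blocked (illustrative storage)
blocks: dict[str, set[str]] = {}


def block(blocker_id: str, blocked_id: str) -> bool:
    """Record a block; return True if the blocked account should go to review."""
    blocks.setdefault(blocker_id, set()).add(blocked_id)
    distinct_blockers = sum(1 for blocked in blocks.values() if blocked_id in blocked)
    return distinct_blockers >= BLOCK_REVIEW_THRESHOLD


def can_see(viewer_id: str, author_id: str) -> bool:
    """Blocking hides content in both directions between the two users."""
    return (author_id not in blocks.get(viewer_id, set())
            and viewer_id not in blocks.get(author_id, set()))
```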

8 Group Administrator Responsibilities

Group administrators play an important role in maintaining healthy group dynamics and ensuring compliance with our policies.

🗑️ 8.1 Content Management

Group administrators can:

  • Flag any content in their group (content is hidden from the group)
  • Request removal of content that violates group standards
  • Set expectations for appropriate content in their group

👥 8.2 Member Management

Group administrators can:

  • Remove members from the group
  • Control who can join the group
  • Set group access and invitation settings

📒 8.3 Escalation to Platform

For serious violations or patterns of problematic behavior, group administrators can report users or content to Echos for platform-level review and action. Reports from administrators receive priority attention.

⚖️ 8.4 Administrator Accountability

Administrators are expected to enforce policies fairly and not abuse their privileges. Administrators who misuse their powers (e.g., unfair removals, enabling policy violations) may have their administrator status revoked and face account action.

9 Human Review Process

All flagged content is reviewed by our Trust & Safety team to ensure accurate and fair enforcement.

⏱️ 9.1 Response Time

We are committed to reviewing all flagged content and user reports within 24 hours. Priority is given to reports involving child safety, violence, or imminent harm.

🔍 9.2 Review Process

When reviewing flagged content, our team:

  • Examines the content in context
  • Reviews the user's history and any prior violations
  • Considers the severity and nature of the violation
  • Determines appropriate enforcement action
  • Documents the decision for consistency

✅ 9.3 Review Outcomes

After review, flagged content may be:

  • Cleared: Flag removed, content restored if appropriate
  • Confirmed: Content removed, enforcement action taken
  • Escalated: Referred for additional review or legal action

10 Enforcement Actions

We use a range of enforcement actions proportional to the severity of the violation. Actions may be applied individually or in combination.

🗑️ 10.1 Content Removal

Violating content is removed and is no longer visible to any user. The content creator is notified of the removal and the reason for it.

💬 10.2 Warning

The user receives a warning explaining the policy violation. Warnings are recorded and considered in future enforcement decisions. Multiple warnings may result in more severe action.

🔒 10.3 Feature Restrictions

Certain features may be temporarily restricted, such as the ability to post images, create new groups, or invite new members. Restrictions are typically 24 hours but may be longer for repeat offenders.

⏸️ 10.4 Temporary Suspension

The user's account is temporarily suspended and they cannot access the service. Suspension periods range from 24 hours to 30 days depending on severity. Users are notified of the suspension reason and duration.

⛔ 10.5 Permanent Ban

The user's account is permanently terminated and they are prohibited from creating new accounts. Permanent bans are reserved for severe violations or repeated offenses. This action may be appealed.

11 Severity Classifications

Violations are classified by severity to ensure consistent and proportional enforcement.

Minor

Examples: Spam, mild inappropriate language, minor policy violations

  • 1st violation: Content removal + warning
  • 2nd violation: 24-hour feature restriction
  • 3rd violation: 7-day suspension

Moderate

Examples: Harassment, adult content, targeted insults

  • 1st violation: Content removal + warning + 24-hour feature restriction
  • 2nd violation: 7-day suspension
  • 3rd violation: Permanent ban

Severe

Examples: Hate speech, threats, doxxing, explicit content

  • 1st violation: Content removal + 30-day suspension
  • 2nd violation: Permanent ban

Critical

Examples: CSAM, credible threats of violence, terrorism

  • Immediate: Permanent ban + referral to law enforcement

Note: The above are guidelines. Actual enforcement may vary based on context, intent, user history, and other factors. We reserve the right to take more severe action when warranted.
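
Read purely as a lookup over the published guidelines (the contextual factors in the note above still apply), the ladder could be sketched as follows; the mapping values come from this section, everything else is illustrative:

```python
# The guideline ladder above as a simple lookup: (severity, offense number) -> action.
# Real enforcement also weighs context, intent, and history (see the note above).
ENFORCEMENT_LADDER = {
    "minor":    ["content removal + warning",
                 "24-hour feature restriction",
                 "7-day suspension"],
    "moderate": ["content removal + warning + 24-hour restriction",
                 "7-day suspension",
                 "permanent ban"],
    "severe":   ["content removal + 30-day suspension",
                 "permanent ban"],
    "critical": ["permanent ban + law enforcement referral"],
}


def guideline_action(severity: str, offense_number: int) -> str:
    ladder = ENFORCEMENT_LADDER[severity]
    # Offenses beyond the listed steps stay at the most severe listed action.
    return ladder[min(offense_number, len(ladder)) - 1]


# Example: guideline_action("moderate", 2) -> "7-day suspension"
```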

12 Appeals Process

Users who believe enforcement action was taken in error may appeal the decision.

📧 12.1 How to Appeal

To appeal an enforcement action, send an email to support@echosapp.dev with:

  • Your account email address
  • The date and nature of the enforcement action
  • Why you believe the decision was incorrect
  • Any supporting context or information

🔄 12.2 Appeal Review

Appeals are reviewed by a different team member than the original decision-maker. We aim to respond to appeals within 48 hours. You will receive an email with the outcome of your appeal.

⚖️ 12.3 Appeal Outcomes

After reviewing an appeal:

  • Upheld: Original decision stands
  • Modified: Enforcement reduced or adjusted
  • Overturned: Action reversed, content/account restored

🚫 12.4 Non-Appealable Actions

Certain violations are not eligible for appeal, including: confirmed CSAM violations, credible threats of violence, and bans for ban evasion. These decisions are final.

13 Account Restrictions and Termination

Serious or repeated violations may result in account restrictions or termination, as described in Section 15 of our Terms of Service.

🔒 13.1 Account Suspension

During a suspension:

  • You cannot access the app or any features
  • Your content remains visible to group members (unless removed)
  • You remain a member of your groups
  • Access is restored automatically when the suspension ends

⛔ 13.2 Account Termination

When an account is permanently terminated:

  • All access to the service is revoked immediately
  • You are removed from all groups
  • Your content may be deleted or anonymized
  • You may not create a new account
  • Any active subscriptions are cancelled

👥 13.3 Impact on Groups

If a group administrator's account is terminated, administrative privileges transfer to another group member (typically the longest-standing member). If all members of a group are terminated, the group is archived.

14 Child Safety

Zero Tolerance Policy

Echos has a zero tolerance policy for child sexual abuse material (CSAM) and any content that sexualizes, exploits, or endangers minors.

🔍 14.1 Detection

All images uploaded to Echos are scanned for known CSAM using industry-standard detection technology. Suspected CSAM is immediately quarantined and reviewed.
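
As a rough illustration of matching uploads against a list of known prohibited images: real deployments rely on perceptual hashing and vetted industry hash lists, so the plain SHA-256 digest and in-memory set below are stand-ins used only to keep the example short.

```python
# Simplified stand-in for industry hash-matching; real systems use perceptual
# hashes and vetted hash lists, not a plain SHA-256 digest in a local set.
import hashlib

known_prohibited_hashes: set[str] = set()   # would be populated from a vetted hash list


def is_known_prohibited(image_bytes: bytes) -> bool:
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in known_prohibited_hashes


def handle_upload(image_bytes: bytes) -> str:
    if is_known_prohibited(image_bytes):
        return "quarantined"   # immediately quarantined and escalated for review
    return "accepted"          # proceeds to the normal automated review (Section 4)
```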

⚖️ 14.2 Reporting to Authorities

In compliance with U.S. law, we report all confirmed CSAM to the National Center for Missing & Exploited Children (NCMEC). We fully cooperate with law enforcement investigations.

⛔ 14.3 Enforcement

Any account found to have uploaded, shared, or solicited CSAM is immediately and permanently terminated without notice. This decision is final and not subject to appeal.

🛑 14.4 Protecting Minors

Users must be at least 13 years old to use Echos. Content or behavior that targets, grooms, or endangers minors in any way will result in immediate account termination and referral to appropriate authorities.

15 Law Enforcement Cooperation

We cooperate with law enforcement agencies in accordance with applicable law and our Privacy Policy.

📋 15.1 Legal Requests

We respond to valid legal requests including subpoenas, court orders, and search warrants. We verify the authenticity and validity of all requests before disclosing any user information.

💾 15.2 Evidence Preservation

Upon receipt of a valid preservation request, we will preserve relevant user data for up to 90 days pending receipt of formal legal process. This includes account information, content, and activity logs.

🚨 15.3 Emergency Disclosure

In emergency situations involving imminent danger of death or serious physical injury, we may disclose information to law enforcement without legal process to the extent permitted by law.

📒 15.4 User Notification

Unless prohibited by law or court order, we will notify users when their information is requested by law enforcement, giving them an opportunity to object.

16 System Reliability and Failsafes

We are committed to ensuring our content moderation systems operate reliably and safely.

⚑ 16.1 Real-Time Moderation

Our automated moderation systems operate in real time. Content that violates our policies is typically removed within seconds of being posted, often before other users have seen it.

🛑 16.2 Failsafe Protections

In the event of a system outage or technical issue affecting our moderation systems, flagged content will remain hidden from users. We design our systems to fail safely, ensuring potentially harmful content is not exposed during outages.
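
A minimal sketch of this fail-closed behaviour, assuming a hypothetical re-check call that can fail during an outage (all names are illustrative):

```python
# Fail-closed sketch: if the moderation backend is unreachable, flagged content
# stays hidden instead of being restored. All names are illustrative.
class ModerationUnavailable(Exception):
    """Raised when the automated review backend cannot be reached."""


def recheck_flag(content_id: str) -> bool:
    """Hypothetical re-check of a flagged item; may raise during outages."""
    raise ModerationUnavailable(content_id)


def flagged_content_visible(content_id: str) -> bool:
    try:
        return recheck_flag(content_id)   # True only if the flag is cleared
    except ModerationUnavailable:
        return False                      # outages never expose flagged content
```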

🔄 16.3 Continuous Improvement

We regularly review and update our moderation systems to improve accuracy, reduce false positives, and adapt to new types of policy violations. User feedback through appeals helps us improve our systems.

17 Policy Updates

This Content Moderation Policy may be updated from time to time to reflect changes in our practices, legal requirements, or platform capabilities.

📣 17.1 Notification of Changes

For material changes to this policy, we will notify users through the app or by email. Non-material changes (such as clarifications or formatting) may be made without notice.

📅 17.2 Effective Date

Changes to this policy take effect on the "Last Updated" date shown at the top of this document. Continued use of Echos after changes become effective constitutes acceptance of the updated policy.

18 Contact and Reporting

If you have questions about this policy or need to report a concern, please contact us.

Trust & Safety Team

support@echosapp.dev

🚨 Urgent Safety Concerns

For urgent safety concerns including imminent threats of violence or child safety issues, please use the in-app reporting feature AND contact local law enforcement. In-app reports for these issues are escalated immediately.

Thank you for helping us keep Echos safe. Together, we can maintain a positive and respectful community for everyone.