Last Updated: December 23, 2025
At Echos, we are committed to maintaining a safe, respectful, and positive environment for all users. This Content Moderation Policy explains how we enforce the community standards outlined in our Terms of Service and protect our community from harmful content.
This policy applies to all user-generated content on Echos, including text responses, replies, reactions, and images shared within groups. It should be read in conjunction with our Terms of Service (particularly Section 6: User Conduct) and our Privacy Policy.
We use a combination of automated systems and human review to identify and remove content that violates our policies, while respecting user privacy and ensuring fair treatment for all community members.
The following content is prohibited on Echos, as defined in Section 6.2 of our Terms of Service. This section provides additional detail and examples.
Prohibited content includes: child sexual abuse material (CSAM) and any content that sexualizes or endangers minors; hate speech; credible threats of violence and content promoting terrorism; doxxing and other privacy violations; harassment and targeted insults; sexually explicit content; and spam.
Beyond prohibited content, behaviors such as harassment of other users, abuse of the reporting system, ban evasion, and misuse of administrator privileges violate our policies as outlined in Section 6.3 of our Terms of Service.
To maintain a safe environment, all content shared on Echos is subject to automated review. Content is visible immediately after posting while our systems process it in the background. The vast majority of policy violations are detected and removed within seconds to minutes. We are committed to reviewing all content within 24 hours of posting.
All text posts, including responses and replies, are analyzed using AI-powered moderation to verify compliance with our Terms of Service. This review checks for the prohibited content categories listed above, including hate speech, harassment, threats, and spam.
All uploaded images are analyzed using image recognition technology to detect sexually explicit content, violent imagery, and known CSAM.
Content that fails automated review is immediately flagged and hidden from all users. Flagged content is queued for human review to determine appropriate action.
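To make the flow concrete, here is a minimal sketch of this pipeline in Python. It is an illustration only, not Echos' actual implementation; the classifier callbacks (check_text, check_image), the Post model, and the review queue are hypothetical stand-ins.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    post_id: str
    text: str | None = None
    image_bytes: bytes | None = None
    hidden: bool = False  # True once flagged: hidden from all users

review_queue: list[str] = []  # post IDs awaiting human review

def moderate(post: Post,
             check_text: Callable[[str], bool],
             check_image: Callable[[bytes], bool]) -> None:
    """Run automated review in the background; the post stays visible meanwhile.

    The check_* callbacks are hypothetical classifiers that return True
    when a violation is detected.
    """
    violation = (post.text is not None and check_text(post.text)) or \
                (post.image_bytes is not None and check_image(post.image_bytes))
    if violation:
        post.hidden = True                 # hide from all users immediately
        review_queue.append(post.post_id)  # queue for human review
```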
Our automated systems are designed to err on the side of caution. If your content is incorrectly flagged, you may request a review through our appeals process.
We rely on our community to help identify content that violates our policies. Users can report any content or behavior they believe is inappropriate.
To report content, tap the options menu (•••) on any post or message and select "Report." You will be asked to select a reason for the report. You may optionally provide additional details.
All reports are anonymous. The user whose content you report will never be told who reported them. We take reporter privacy seriously and do not disclose reporter identities.
Submitting false or malicious reports is a violation of our policies and may result in enforcement action against your account, including suspension or termination.
We use a tiered system to balance responsive content moderation with protection against abuse of the reporting system.
When a user flags content, that content is immediately hidden from the reporting user only. The content remains visible to other group members pending further review or additional flags.
When content receives 3 or more flags from different users within the same group, the content is automatically hidden from all group members. The content is then prioritized for human review.
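As a rough sketch of this threshold logic (the three-flag threshold comes from the policy above; the data model and names are hypothetical):

```python
FLAG_THRESHOLD = 3  # flags from distinct users that hide content group-wide

class FlaggedContent:
    def __init__(self, content_id: str):
        self.content_id = content_id
        self.flagged_by: set[str] = set()   # distinct reporting user IDs
        self.hidden_for: set[str] = set()   # users who no longer see it
        self.hidden_for_all = False
        self.priority_review = False

    def report(self, reporter_id: str) -> None:
        # Hide immediately, but only for the user who reported it.
        self.flagged_by.add(reporter_id)
        self.hidden_for.add(reporter_id)
        # Once 3+ distinct users have flagged it, hide it from the whole
        # group and prioritize it for human review.
        if len(self.flagged_by) >= FLAG_THRESHOLD:
            self.hidden_for_all = True
            self.priority_review = True
```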
User reports receive higher priority in the review queue than automated flags.
Users can block other users to prevent unwanted interactions. Blocking is a personal safety tool available to all users.
When you block another user, the two of you will no longer see each other's content in shared groups.
You can unblock a user at any time through your account settings. Once unblocked, you will resume seeing each other's content in shared groups.
If a user is blocked by 3 or more different users, their account is automatically flagged for platform review. Our Trust & Safety team will review the account and may take action including warnings, restrictions, or account termination.
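The same pattern applies to blocks. A hypothetical sketch, with only the three-blocker threshold taken from the policy:

```python
BLOCK_REVIEW_THRESHOLD = 3  # distinct blockers that trigger platform review

# user ID -> set of user IDs who have blocked them (hypothetical store)
blocked_by: dict[str, set[str]] = {}
flagged_for_review: set[str] = set()

def block(blocker_id: str, target_id: str) -> None:
    """Record a block; flag the target for Trust & Safety review at 3+ blockers."""
    blocked_by.setdefault(target_id, set()).add(blocker_id)
    if len(blocked_by[target_id]) >= BLOCK_REVIEW_THRESHOLD:
        flagged_for_review.add(target_id)

def unblock(blocker_id: str, target_id: str) -> None:
    """Remove a block; an existing review flag is not retracted."""
    blocked_by.get(target_id, set()).discard(blocker_id)
```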
Group administrators play an important role in maintaining healthy group dynamics and ensuring compliance with our policies.
Group administrators can remove content posted in their groups and remove members who violate group standards or our policies.
For serious violations or patterns of problematic behavior, group administrators can report users or content to Echos for platform-level review and action. Reports from administrators receive priority attention.
Administrators are expected to enforce policies fairly and not abuse their privileges. Administrators who misuse their powers (e.g., unfair removals, enabling policy violations) may have their administrator status revoked and face account action.
All flagged content is reviewed by our Trust & Safety team to ensure accurate and fair enforcement.
We are committed to reviewing all flagged content and user reports within 24 hours. Priority is given to reports involving child safety, violence, or imminent harm.
When reviewing flagged content, our team examines the content in context, taking into account intent, user history, and other relevant factors. After review, flagged content may be restored (if no violation is found) or removed, with enforcement action applied as described below.
We use a range of enforcement actions proportional to the severity of the violation. Actions may be applied individually or in combination.
Violating content is removed and no longer visible to any users. The content creator is notified that their content was removed and the reason why.
The user receives a warning explaining the policy violation. Warnings are recorded and considered in future enforcement decisions. Multiple warnings may result in more severe action.
Certain features may be temporarily restricted, such as the ability to post images, create new groups, or invite new members. Restrictions are typically 24 hours but may be longer for repeat offenders.
The user's account is temporarily suspended and they cannot access the service. Suspension periods range from 24 hours to 30 days depending on severity. Users are notified of the suspension reason and duration.
The user's account is permanently terminated and they are prohibited from creating new accounts. Permanent bans are reserved for severe violations or repeated offenses. This action may be appealed.
Violations are classified by severity to ensure consistent and proportional enforcement.
Minor violations (examples: spam, mild inappropriate language, minor policy violations):
1st offense: Removal + warning
2nd offense: 24-hour restriction
3rd offense: 7-day suspension

Moderate violations (examples: harassment, adult content, targeted insults):
1st offense: Removal + warning + 24-hour restriction
2nd offense: 7-day suspension
3rd offense: Permanent ban

Severe violations (examples: hate speech, threats, doxxing, explicit content):
1st offense: Removal + 30-day suspension
2nd offense: Permanent ban

Critical violations (examples: CSAM, credible violence threats, terrorism):
Immediate: Permanent ban + law enforcement referral
Note: The above are guidelines. Actual enforcement may vary based on context, intent, user history, and other factors. We reserve the right to take more severe action when warranted.
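The ladder above can be encoded as a simple lookup table. This is an illustrative encoding of the guidelines, not an enforcement engine; as noted, actual outcomes may vary with context.

```python
# (severity level, offense number) -> guideline action, per the table above.
# Offense counts beyond those listed fall through to the last rung.
ENFORCEMENT_LADDER: dict[str, list[str]] = {
    "minor":    ["removal + warning", "24-hour restriction", "7-day suspension"],
    "moderate": ["removal + warning + 24-hour restriction",
                 "7-day suspension", "permanent ban"],
    "severe":   ["removal + 30-day suspension", "permanent ban"],
    "critical": ["permanent ban + law enforcement referral"],
}

def guideline_action(severity: str, offense_number: int) -> str:
    """Return the guideline action for the Nth offense at a severity level."""
    ladder = ENFORCEMENT_LADDER[severity]
    index = min(offense_number, len(ladder)) - 1  # clamp to the last rung
    return ladder[index]
```

For example, guideline_action("moderate", 2) returns "7-day suspension".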
Users who believe enforcement action was taken in error may appeal the decision.
To appeal an enforcement action, send an email to support@echosapp.dev with your account details, the enforcement action you are appealing, and an explanation of why you believe the decision was made in error.
Appeals are reviewed by a different team member than the original decision-maker. We aim to respond to appeals within 48 hours. You will receive an email with the outcome of your appeal.
After reviewing an appeal, we will either uphold the original decision or reverse it, restoring your content or account as appropriate.
Certain violations are not eligible for appeal, including: confirmed CSAM violations, credible threats of violence, and bans for ban evasion. These decisions are final.
Serious or repeated violations may result in account restrictions or termination, as described in Section 15 of our Terms of Service.
During a suspension, you cannot log in to or use the service. Access is restored automatically when the suspension period ends.
When an account is permanently terminated, the user permanently loses access to the account and its content and is prohibited from creating new accounts.
If a group administrator's account is terminated, administrative privileges transfer to another group member (typically the longest-standing member). If all members of a group are terminated, the group is archived.
Zero Tolerance Policy
Echos has a zero tolerance policy for child sexual abuse material (CSAM) and any content that sexualizes, exploits, or endangers minors.
All images uploaded to Echos are scanned for known CSAM using industry-standard detection technology. Suspected CSAM is immediately quarantined and reviewed.
In compliance with U.S. law, we report all confirmed CSAM to the National Center for Missing & Exploited Children (NCMEC). We fully cooperate with law enforcement investigations.
Any account found to have uploaded, shared, or solicited CSAM is immediately and permanently terminated without notice. This decision is final and not subject to appeal.
Users must be at least 13 years old to use Echos. Content or behavior that targets, grooms, or endangers minors in any way will result in immediate account termination and referral to appropriate authorities.
We cooperate with law enforcement agencies in accordance with applicable law and our Privacy Policy.
We respond to valid legal requests including subpoenas, court orders, and search warrants. We verify the authenticity and validity of all requests before disclosing any user information.
Upon receipt of a valid preservation request, we will preserve relevant user data for up to 90 days pending receipt of formal legal process. This includes account information, content, and activity logs.
In emergency situations involving imminent danger of death or serious physical injury, we may disclose information to law enforcement without legal process to the extent permitted by law.
Unless prohibited by law or court order, we will notify users when their information is requested by law enforcement, giving them an opportunity to object.
We are committed to ensuring our content moderation systems operate reliably and safely.
Our automated moderation systems operate in near real time. Content that violates our policies is typically removed within seconds of being posted, often before other users have seen it.
In the event of a system outage or technical issue affecting our moderation systems, flagged content will remain hidden from users. We design our systems to fail safely, ensuring potentially harmful content is not exposed during outages.
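One way to read "fail safely" is that visibility checks fail closed: when a moderation verdict cannot be retrieved, content is treated as hidden. A hypothetical sketch:

```python
from typing import Callable

def is_visible(content_id: str, fetch_verdict: Callable[[str], str]) -> bool:
    """Fail closed: flagged content, or content whose verdict is unavailable,
    stays hidden.

    fetch_verdict is a hypothetical lookup that returns "clear" or "flagged"
    and raises an exception during an outage.
    """
    try:
        verdict = fetch_verdict(content_id)
    except Exception:
        # Moderation system unreachable: do not expose potentially harmful
        # content; keep it hidden until a verdict can be confirmed.
        return False
    return verdict == "clear"
```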
We regularly review and update our moderation systems to improve accuracy, reduce false positives, and adapt to new types of policy violations. User feedback through appeals helps us improve our systems.
This Content Moderation Policy may be updated from time to time to reflect changes in our practices, legal requirements, or platform capabilities.
For material changes to this policy, we will notify users through the app or by email. Non-material changes (such as clarifications or formatting) may be made without notice.
Changes to this policy take effect on the "Last Updated" date shown at the top of this document. Continued use of Echos after changes become effective constitutes acceptance of the updated policy.
If you have questions about this policy or need to report a concern, please contact us.
Trust & Safety Team
support@echosapp.dev

For urgent safety concerns including imminent threats of violence or child safety issues, please use the in-app reporting feature AND contact local law enforcement. In-app reports for these issues are escalated immediately.
Thank you for helping us keep Echos safe. Together, we can maintain a positive and respectful community for everyone.