Report August 14, 2023

Protecting point-to-point messaging apps: Understanding Telegram, WeChat, and WhatsApp in the United States

By Iria Puyosa

Executive summary

Too often, consideration of point-to-point messaging platforms in the United States is focused on either diaspora or second-language usage, given the global popularity of these platforms. Another common focus is on extremist or unlawful usage. 

In reality, a broad swath of Americans use point-to-point platforms, and their popularity is increasing, though usage remains lower than in other regions of the world. An estimated 69 percent of the United States population currently uses at least one point-to-point messaging app, yet the use and dynamics of this part of the information ecosystem remain understudied.

The Digital Forensic Research Lab (DFRLab) undertook this project to better understand and contextualize point-to-point platform usage in the United States with two goals: first, to analyze the growing use of these platforms in the United States; and, second, to emphasize the growing importance of platform elements that respect and protect rights, such as end-to-end encryption, a technology at the core of designing for data privacy and free speech.

The DFRLab carried out this research project to shed light on the following topics:

  • First, how point-to-point platforms work, their varying degrees of security features, and how they deploy encryption. 
  • Second, how diverse communities use the messaging platforms for different purposes. 
  • Third, how platforms vary in design and in how they enforce their terms of usage. 
  • Finally, how messaging app security is important for protecting and respecting rights—like privacy and freedom of expression—in this digital era.  

We mapped the ecosystem of point-to-point messaging apps in the United States, looking at the more than forty apps available on the market. We assessed the features these apps offer, their registration requirements, and their approach toward encryption.

The messaging apps reviewed are similar in communication features but vary substantially in security, privacy, and content policies. The intersection of technical features, policies, and detection methods around acceptable usage (as defined by the platforms) leads to different models for use. Ultimately, we chose to focus our empirical research on Telegram, WeChat, and WhatsApp because they present distinct product architectures and technical features, and varying policies on usage.

Platforms must balance complex trade-offs to protect their users and ensure app integrity. Messaging apps typically establish policies of acceptable usage, prohibiting some harmful or criminal content, ranging from spam to sexual abuse material and terrorism. Telegram has a permissive content policy, but the platform has been adding restrictions in recent years following pressure from law enforcement in different countries. WhatsApp has a growing list of unacceptable content considered harmful or illegal. WeChat is the most restrictive messaging app regarding acceptable content, banning even political content. All three of these messaging apps prohibit sharing content depicting sexual abuse or calls for violent crimes. 

Messaging app security depends on how encryption is enabled. Almost every messaging app offers data encryption in transit between devices, as is standard in most internet-enabled data exchanges. Additionally, most reliable messaging apps provide end-to-end (E2E) encryption, which protects messages from unauthorized access by third parties, including the platform itself. 
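
To make the distinction concrete, the following is a minimal sketch, in Python with the open-source PyNaCl library (bindings to libsodium), of the basic idea behind E2E encryption: messages are encrypted and decrypted only on the users' devices, so a server that relays them never sees plaintext. The names and message are illustrative and do not reflect any platform's actual implementation; real messaging protocols layer additional protections, such as forward secrecy, on top of this idea.

from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device; private keys never leave it.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"See you at the meeting at 7?")

# The platform's server relays only ciphertext; without Bob's private key,
# neither the server nor any other third party can read the message.

# Bob decrypts on his own device with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
plaintext = receiving_box.decrypt(ciphertext)
assert plaintext == b"See you at the meeting at 7?"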

WhatsApp offers E2E encryption by default, Telegram offers opt-in encryption, and WeChat only offers transport-layer encryption for data in transit. In general, data collection is less extensive in messaging apps than on mainstream social media platforms such as Twitter or Facebook. Few messaging apps conduct extensive monitoring for unacceptable content, since human moderation and automated scanning would violate their own terms of service. However, most messaging apps collect basic usage metadata to monitor platform performance and integrity. Telegram collects minimal usage data, WhatsApp collects sizable usage data, and WeChat extensively captures both usage and content data. As such, Telegram and WeChat are, in many ways, at opposite ends of the spectrum: Telegram is loosely moderated and controlled, while WeChat comprehensively tracks its users, their behavior, and the content they post.

Remarkable differences exist among the three messaging platforms that the DFRLab focused on in this report. Telegram’s design prioritizes making the content of communications available across devices. Its public channels offer large group sizes, ample reach, and many features for reacting to content. WeChat is an all-encompassing app in which interaction with service and official accounts is paramount. Automated monitoring to ensure compliance with its policies on acceptable content is built into the design, in line with Chinese regulations. WhatsApp’s original design aimed to satisfy the needs of direct, individual-to-individual personal communication. Thus, it still favors a balance between privacy and safety, although this may change as the platform embraces other forms of interaction, such as communities, public channels, and business transactions.

Usage of messaging platforms is growing and overwhelmingly lawful and beneficial. The DFRLab observed the following general trends: 

  • Messaging conversations often link to content posted on social media platforms and the open web. 
  • Local communities’ dynamics and information related to transnational issues are intertwined. 
  • Diaspora communities rely on WhatsApp and WeChat for mutual support and exchange of resources. 

The case studies in this report were selected to illustrate a cross section of platforms and communities or uses that have received either extensive news coverage or too little. In our analysis, we found different ways in which misinformation and foreign influence operations spread—or did not spread—on Telegram, WeChat, and WhatsApp. We found that political or ideological topics were more prevalent in messaging interactions among US-born users in public Telegram groups than among foreign-born diaspora communities. Moreover, we observed issues outside our initial scope, including intrusive practices such as business spamming and outright harms such as the unsolicited posting of sexual abuse content in public groups. Upon analysis of public groups and channels on WhatsApp, Telegram, and WeChat, the DFRLab observed the following outlying findings:

  • Misinformation and disinformation about political and health topics were widespread on the public Telegram channels, health-related misinformation was found in WhatsApp public groups, and misleading political narratives were detected on WeChat public accounts. 
  • Individuals and groups in the United States who espouse white supremacist beliefs are active on public Telegram channels in ways that the terms and conditions of larger social media platforms, such as Facebook or Twitter, do not allow. 
  • Public WeChat accounts were instrumentalized to foster narratives aligned with the Chinese Communist Party among various groups. 
  • Pro-Russian influence campaigns were active on public Telegram channels in English and Spanish. 
  • Supporters of former US President Donald Trump used public Telegram channels to boost their political views ahead of the 2022 midterm elections, and they are already sharing content related to the 2024 presidential elections. 
  • Unsolicited sharing of sexual imagery and content derived from sexual exploitation, including child sexual abuse, was found in a few public WhatsApp groups. 
  • Some users with business accounts violate WhatsApp’s acceptable usage policies by engaging in spam, offering prohibited transactions such as cryptocurrencies, or advertising fraudulent products. 

To enforce compliance with their policies and terms of usage, messaging platforms can rely on methods that do not require access to message texts or images: in-app user reporting, analysis of metadata, and analysis of behavioral signals. WhatsApp uses all three methods to enforce its policies. Telegram relies mainly on in-app user reporting, although the platform has capabilities for metadata analysis. WeChat also encourages user reporting, but it additionally deploys automated content scanning for interactions within the app.
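
As an illustration of the latter two methods, the following is a minimal sketch, in Python, of how metadata and behavioral signals alone might flag a likely spam account without anyone reading message content. The field names and thresholds are hypothetical and do not describe any platform's actual detection logic.

from dataclasses import dataclass

@dataclass
class AccountSignals:
    messages_last_hour: int            # volume metadata, not message content
    distinct_recipients_last_hour: int
    account_age_days: int
    user_reports_last_week: int        # signal produced by in-app reporting

def likely_spam(signals: AccountSignals) -> bool:
    """Flag accounts from behavior alone: new accounts blasting many strangers,
    or accounts repeatedly reported by other users."""
    high_volume = (signals.messages_last_hour > 200
                   and signals.distinct_recipients_last_hour > 100)
    newly_created = signals.account_age_days < 3
    heavily_reported = signals.user_reports_last_week >= 5
    return (high_volume and newly_created) or heavily_reported

# A day-old account messaging hundreds of strangers is flagged without
# any of its messages being read.
print(likely_spam(AccountSignals(500, 320, 1, 0)))  # True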

Some organizations working on counterterrorism or child sexual abuse have been asking for privileged access or backdoors for law enforcement and for the deployment of automated scanning in messaging apps. E2E encryption renders automated scanning of content impossible, making it equally impossible for E2E-encrypted apps to implement many common content policies of more open platforms, since they cannot decrypt content shared by their users. Content-dependent preemptive methods, such as server-side or client-side scanning to match content a user is sending against a database, compromise encryption integrity, weaken security, and erode privacy protection. Both server-side and client-side scanning are ineffective for identifying never-seen-before content that is not already part of a database. Currently, hash databases are available for terrorist content and child sexual abuse material posted on social media. Security experts warn that automated content scanning undermines encryption and introduces security vulnerabilities in messaging apps, increasing risks for all users. Conversely, machine-learning procedures applied to metadata and behavioral signals would not compromise encryption and may detect never-before-seen content.
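
To illustrate why database matching cannot catch novel material, here is a minimal sketch, in Python, of the lookup logic. Real systems use perceptual hashing (for example, PhotoDNA) so that slightly altered copies still match; this sketch uses a plain cryptographic hash and a placeholder database purely to show the principle.

import hashlib

# Hypothetical database of hashes of already-identified prohibited files.
known_prohibited_hashes = {
    "placeholder-hash-of-a-previously-identified-file",
}

def matches_known_content(file_bytes: bytes) -> bool:
    """Return True only if this file's hash already appears in the database."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in known_prohibited_hashes

# Never-seen-before material hashes to a value no database contains,
# so scanning cannot flag it, whether it runs on a server or on the device.
print(matches_known_content(b"bytes of a brand-new file"))  # False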

Based upon this investigation, the DFRLab recommends that platforms prioritize the following: 

  • Investing in in-app reporting tools. 
  • Defining robust policies for business and organizational accounts. 
  • Partnering with outside researchers to investigate the spread of harmful content, while establishing protocols for protecting users’ personal data in the process. 
  • Collaborating with counterterrorism hash databases. 
  • Considering impacts on human rights when designing policies and products. 

Likewise, the DFRLab recommends that policymakers prioritize the following: 

  • Enacting data privacy protection legislation. 
  • Avoiding regulations that undermine rights-protecting technologies, such as E2E encryption. 
  • Examining business practices and commercial services offered via messaging apps to identify regulatory gaps. 
  • Promoting digital literacy tailored to the risks faced by users of messaging apps. 

As an underlying ethos, legislators and policymakers should always take into consideration how policies and regulations aiming to govern or control messaging apps could be enforced across countries that maintain different levels of respect for human rights. For instance, a regulation instituted in the United States that mandates platforms keep identification records for their users and deliver that information to law enforcement agencies upon request could be weaponized in authoritarian or autocratic countries where a given messaging app is widely used, increasing the possibility of capture and incarceration of political dissidents. Similarly, requiring messaging apps to build in means for privileged access to E2E encrypted communications in a domestic context would likely open the door for other governments to repurpose the same technical infrastructure for surveillance. 

Ultimately, all actions taken by any company or government have potential impact beyond their intended target, often creating unintentional harm, and this potential must be a persistent consideration in every decision about how an app should operate. 


The Atlantic Council’s Digital Forensic Research Lab (DFRLab) has operationalized the study of disinformation by exposing falsehoods and fake news, documenting human rights abuses, and building digital resilience worldwide.