A solution to moderate unwanted senders on WhatsApp

A Rutgers researcher has developed techniques to help WhatsApp identify unwanted senders in public groups and automatically filter unwanted and spam messages for WhatsApp users.

The study, “Jettisoning Junk Messaging in the Era of End-to-End Encryption: A Case Study of WhatsApp,” will be presented at The Web Conference 2022. The researchers examined 2.6 million messages from 5,051 public policy-related WhatsApp groups in India, analyzing the content, URLs, and patterns of junk messages over time.

WhatsApp is the world’s most popular mobile messaging app, with over 2 billion users.

The prevalence of spam messages – defined as messages that are not of interest or appropriate to a group’s administrators – was much higher than the researchers had expected. Nearly one in ten messages posted to these groups was spam, the study found.

“Eliminating unwanted messages is essential to improving the information consumption of people who are bombarded by spam and to reducing users’ economic concerns,” said Kiran Garimella, assistant professor of library and information science at the Rutgers School of Communication and Information. “Some spammers aim to steal users’ credit card information.”

The study found that the most common spam messages were job ads

Job ads accounted for nearly 30% of the spam in the dataset. Other spam messages included “click and earn” messages, which encourage people to click on a URL and promise a reward. About 7.7% of spam messages offered items for sale, while 7.5% offered a gift in exchange for users signing up for an online service and consisted primarily of a URL to click.

Researchers have developed methods to moderate public WhatsApp groups. Unlike messaging systems such as email and Twitter, WhatsApp cannot read or moderate user content due to end-to-end encryption. While this ensures user privacy, WhatsApp’s inability to moderate content means that spam and unwanted messages posted by unwanted senders can degrade users’ experience of the platform.

According to the study, spammers post in many groups and typically appear and disappear multiple times to avoid detection and removal by administrators.

Spammers broadcast the same spam messages over a few “active” days. Garimella says this strategy could improve the visibility of spam messages by keeping them among a group’s most recent posts.
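The sketch below illustrates the kind of behavioral signal this pattern suggests. It is a hypothetical heuristic, not the study’s own detector: given a log of posts (the field names and thresholds are assumptions for illustration), it flags senders who push the same message into several groups within a few days. Such a check can only be run by whoever actually sees the messages, such as group administrators or researchers with donated data, since WhatsApp itself cannot read group content.

```python
# Hypothetical heuristic (not the authors' method): flag senders who broadcast
# an identical message to many groups within a short "active" window.
from collections import defaultdict
from datetime import timedelta

def flag_broadcast_senders(posts, min_groups=5, window=timedelta(days=3)):
    """posts: iterable of dicts with 'sender', 'group', 'text', 'time' keys."""
    # Bucket posts by (sender, exact message text).
    by_sender_text = defaultdict(list)
    for p in posts:
        by_sender_text[(p["sender"], p["text"])].append(p)

    flagged = set()
    for (sender, _text), items in by_sender_text.items():
        items.sort(key=lambda p: p["time"])
        start = 0
        for end in range(len(items)):
            # Shrink the window so it spans at most `window` days.
            while items[end]["time"] - items[start]["time"] > window:
                start += 1
            groups = {p["group"] for p in items[start:end + 1]}
            if len(groups) >= min_groups:
                flagged.add(sender)
                break
    return flagged
```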

URLs and phone numbers are key indicators of spam

Nearly 90% of spam messages contained a phone number, a URL, or both, compared with 36% of non-spam messages. The researchers created a model to automatically detect spam messages using URLs and phone numbers, which they said can help WhatsApp group administrators quickly flag and remove such messages.

From a user’s perspective, the researchers created a model in which the user’s device encodes a signal indicating whether a message contains a phone number, a URL, both, or neither.
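As a rough illustration of that signal, here is a minimal Python sketch. The regular expressions are simplified assumptions, not the patterns used in the study, and the function only inspects text that the recipient’s device has already decrypted, so it does not require weakening end-to-end encryption.

```python
# Minimal sketch of a phone-number/URL signal; the regexes are illustrative
# assumptions, not the study's actual feature extraction.
import re

URL_RE = re.compile(r"https?://\S+|www\.\S+", re.IGNORECASE)
PHONE_RE = re.compile(r"\+?\d[\d\s\-]{8,}\d")  # rough match for long digit runs

def message_signal(text: str) -> str:
    """Return 'both', 'url', 'phone', or 'neither' for a message."""
    has_url = bool(URL_RE.search(text))
    has_phone = bool(PHONE_RE.search(text))
    if has_url and has_phone:
        return "both"
    return "url" if has_url else ("phone" if has_phone else "neither")

print(message_signal("Earn money now! Visit www.example.com or call +91 98765 43210"))
# -> both
```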

Our methods are very practical and applicable

“WhatsApp can apply them to stop the spread of spam in its groups,” Garimella said, “and our techniques can be used on the platform in a centralized manner while respecting the end-to-end encryption guarantees that WhatsApp offers users to protect their privacy.”

As part of a broader effort to reduce spam in WhatsApp’s public groups, Garimella and his co-authors are sharing their annotated dataset and code with WhatsApp and making them publicly available to other researchers.