By: Alon Cohen Jan 21st, 2025
Updated: June 12, 2025
Aggression on Social Media
Humans often exhibit more aggressive or hostile behavior on social media than in face-to-face interactions. We recently witnessed a public demonstration of this when one of the two most influential individuals on earth decided to poke the other over social media. This out-of-character over-aggressiveness happens for several reasons:
Anonymity and Disinhibition: Social media platforms offer a layer of full or pseudo-anonymity, allowing users to feel less accountable for their actions. This reduced accountability, known as the "online disinhibition effect," is reinforced by the absence of the social cues and immediate feedback present in in-person conversations, such as facial expressions or tone of voice, which ordinarily moderate behavior. Without these cues, people may feel freer to express harsh or critical thoughts without immediately seeing their impact on others.
Distance and Lack of Consequence: The physical and emotional distance of online communication means there is no immediate personal consequence for hurtful words. In a face-to-face conversation, you might see someone's reaction, feel empathy, or face social repercussions, such as losing respect or damaging relationships. On social media, the immediate impact is often invisible, and the consequences can seem intangible or delayed, which can embolden people to express themselves more harshly.
Deindividuation: When individuals are part of a large online community or mob, they may experience deindividuation, losing their sense of personal identity and feeling less responsible for their actions. This can lead to behavior that is more in line with the group's norms, which might be more aggressive or dismissive on platforms where such behavior is typical.
Echo Chambers and Group Polarization: Social media algorithms often create echo chambers, where users are primarily exposed to similar viewpoints, which can lead to group polarization. In these environments, individuals might feel validated in their harsher opinions because they see others expressing similar or even more extreme views. This can escalate the tone of discourse as users attempt to stand out or gain approval within their echo chamber by being more confrontational or provocative.
The Need for Attention: Social media thrives on engagement, and controversial or harsh comments often garner more attention and reactions than polite or moderate ones. For some, the pursuit of likes, shares, or even notoriety can drive them to write in a way that is more likely to provoke a reaction, even a negative one. The platform's reward system can inadvertently encourage this behavior by highlighting contentious content, thereby reinforcing it.
Instant Gratification and Impulse Control: Social media allows for immediate expression of thoughts without the time for reflection that might occur in person. This can lead to impulsive comments in the heat of the moment, which might not reflect one's true character or usual manner of discourse. The immediacy of posting can bypass the typical social filters that would moderate speech in real-life interactions.
These factors, when combined, create an environment where harsh comments are common.
Is that bad for humanity?
While there might be some scenarios in which harsh comments could lead to positive outcomes, the overwhelming evidence suggests the opposite. The potential benefits are often overshadowed by the damage they can cause to individuals' mental health, societal cohesion, and the quality of public discourse. The consensus suggests that harsh comments are not helping humanity, particularly when they contribute to a climate of fear, misunderstanding, or hate. The challenge lies in maintaining open, honest communication while promoting kindness and respect.
Addressing the challenge
Social media platforms can implement a few measures to address these challenges and reduce the “temperature” of everyday social media discourse.
Implementing some or all of the solutions described below could be a good starting point.
THE UPLOADED PICTURE:
If possible, require people to upload a verified picture to reduce anonymity.
SHOWING THE FACE OF THE OTHER SIDE:
Show the commenter, on the same screen, the picture of the poster whose post they are commenting on. This base-level feature signals to commenters that they are responding to a REAL person on the other side.
SHOW HOW THE OTHER SIDE FEELS:
Let's take it to the next level. We can take the verified, uploaded image and use AI to derive additional images that reflect that person's range of emotions. Users should also be able to upload their own emotional photos if they like.
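To make the upload-time step concrete, here is a minimal sketch. The `generate_emotion_variant` function is a hypothetical placeholder for whichever image-editing model a platform chooses (a diffusion-based editor, for example), and the emotion list is illustrative:

```python
# Sketch: pre-generate a poster's emotional image set at upload time.
# `generate_emotion_variant` is a hypothetical placeholder, not a real
# library call; a production system would invoke an image-to-image model.
from pathlib import Path

EMOTIONS = ["neutral", "happy", "sad", "angry", "crying"]

def generate_emotion_variant(source_image: bytes, emotion: str) -> bytes:
    # Placeholder: call an image-editing model here. For the sketch we
    # simply return the original bytes unchanged.
    return source_image

def build_emotion_set(user_id: str, source_image: bytes, out_dir: Path) -> dict[str, str]:
    """Create and store one image per emotion; return emotion -> file path."""
    out_dir.mkdir(parents=True, exist_ok=True)
    paths: dict[str, str] = {}
    for emotion in EMOTIONS:
        variant = generate_emotion_variant(source_image, emotion)
        path = out_dir / f"{user_id}_{emotion}.png"
        path.write_bytes(variant)
        paths[emotion] = str(path)
    return paths
```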
As the commenter writes a response to a post, the AI analyzes the dialog and the comment's sentiment (text or emojis - see the sample UI below) and shows the commenter an image of the poster, as realistic as possible, that reflects the poster's most likely feeling upon reading that comment.
Given the set of ready-to-use “emotional images” created at upload time, no real-time processing is needed beyond text sentiment analysis and selecting the corresponding image, as sketched below.
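Here is a minimal sketch of that real-time path, using NLTK's VADER scorer (a lightweight, lexicon-based analyzer suited to short, informal text). The compound-score thresholds are illustrative assumptions, and the emotion labels match the set generated at upload time above; handling Unicode emojis would need an additional mapping step, which is omitted here.

```python
# Sketch: map a draft comment's sentiment to one of the poster's
# pre-generated emotion images. Uses NLTK's VADER scorer; the compound
# score thresholds below are illustrative, not tuned values.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
_sia = SentimentIntensityAnalyzer()

def pick_emotion(comment_text: str) -> str:
    """Return the emotion the poster would most likely feel reading this."""
    compound = _sia.polarity_scores(comment_text)["compound"]  # -1..+1
    if compound <= -0.6:
        return "crying"
    if compound <= -0.3:
        return "angry"
    if compound < 0.0:
        return "sad"
    if compound >= 0.3:
        return "happy"
    return "neutral"

def image_for_comment(comment_text: str, emotion_images: dict[str, str]) -> str:
    """Look up the pre-generated image path for the predicted emotion."""
    return emotion_images[pick_emotion(comment_text)]
```

A client could call `image_for_comment` on a short typing pause (debounced) and swap the displayed portrait as the draft evolves.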
Using this method, the commenter will see the poster's image change to angry, sad, crying, etc., based on the sentiment of the comment they are writing, in real time. The assumption is that most people, commenters and posters alike, would be more aware and less harsh if this instant emotional feedback made them realize they were about to inflict emotional pain on the reader.
This process works on both sides: the Poster’s and the Commenter's sides.
PREDICT THE OUTCOME BY SIMULATING THE BACK-AND-FORTH TEXT:
The third level involves having the AI display possible back-and-forth sentences a few steps ahead and assess whether the result, after a potential escalation, is beneficial or detrimental to either side, i.e., the poster or the commenter.
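One way to sketch this step, assuming access to any chat-completion LLM behind a hypothetical `llm_reply` helper (a placeholder, not a real API), with the turn count and scoring rule as illustrative choices:

```python
# Sketch: roll the conversation forward a few simulated turns and score
# the likely outcome. `llm_reply` is a hypothetical stand-in for any
# chat-completion API; the prompt design is left out of the sketch.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
_sia = SentimentIntensityAnalyzer()

def llm_reply(dialog: list[str], persona: str) -> str:
    # Placeholder: ask an LLM to write `persona`'s next reply given the
    # dialog so far. Plug in a real chat-completion call here.
    raise NotImplementedError

def simulate_exchange(post: str, draft_comment: str, turns: int = 3) -> float:
    """Simulate `turns` further replies and return the average sentiment
    of the continuation (-1 = hostile escalation, +1 = friendly)."""
    dialog = [post, draft_comment]
    scores = []
    for i in range(turns):
        persona = "poster" if i % 2 == 0 else "commenter"
        reply = llm_reply(dialog, persona)
        dialog.append(reply)
        scores.append(_sia.polarity_scores(reply)["compound"])
    return sum(scores) / len(scores)
```

The platform could then show a gentle warning before posting whenever the simulated score falls below some threshold.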
Conclusion
The solution outlined above aims to cool social media without censorship. It is designed to address and alleviate the problem of harsh comments and harmful escalation on social media platforms.
The solution aims to address the psychological aspects of online interactions. We can perfect the concept by taking into account the following elements:
Privacy and Consent: The above solutions must be implemented with robust privacy policies and user consent mechanisms. Users should have control over their image set and must explicitly agree to the upload and display of their images, although similar results might be achieved by displaying emojis instead of photos.
Cultural Sensitivity: Responses to these features may vary widely across cultures. Deploying the feature differently by region can prevent adverse reactions in certain cultures.
Technical Feasibility: The AI systems can run locally on the device to handle sentiment analysis and behavior prediction across diverse human emotions and expressions. Given how quickly AI is progressing and how capable devices are becoming, this may be more straightforward than it seems (see the sketch after this list).
User Adoption: For these features to gain adoption, they must be user-friendly and must not feel intrusive or manipulative.
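On the feasibility point, local sentiment analysis is already easy to demonstrate: the sketch below runs a small distilled transformer entirely on-device with the Hugging Face `transformers` library. Shipping the same idea on phones would call for a quantized mobile runtime, which is an assumption here.

```python
# Sketch: fully local sentiment analysis with a small distilled model.
# The first call downloads the pipeline's default English sentiment model
# (a DistilBERT fine-tune); after that, inference runs on-device with no
# network access required.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("You clearly have no idea what you're talking about."))
# e.g. [{'label': 'NEGATIVE', 'score': 0.99}]
```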
In conclusion, while these solutions offer promising avenues to mitigate toxicity on social media, they require careful implementation, ongoing evaluation, and possibly adjustments to strike a balance between effectiveness, ethical considerations, and user rights.
What do you think?