Only the big gains

Nukhosa Chüzho
Kohima

2021 Nobel Peace Prize winner Maria Ressa’s criticism of the social media giant Facebook, for its failure to restrain the spread of hate and disinformation and for its disinclination towards facts, has called into question the business model that Facebook and its subsidiaries practise. She concurred with whistleblower Frances Haugen’s assertion that Facebook’s algorithm prioritises the spread of lies laced with anger and hate over facts. Haugen had earlier revealed that “Facebook knew its algorithms and platforms promoted this (Facebook’s role in spreading misinformation after the 2020 election and the impact its products have on teenagers’ mental health) type of harmful content and it failed to deploy internally recommended or lasting countermeasures”.

As was reported in 2018, a British consultancy called Cambridge Analytica had improperly harvested Facebook data to build voter profiles without users’ consent. It used the behavioural data of some 50 million social media users to tilt the outcome of the hotly contested 2016 US presidential election. Facebook executives in India also helped certain political parties in the 2014 general elections; one of them was forced to resign after her role in actively coordinating social media activity with the ruling dispensation, as part of its digital electioneering, was established.

Information held in silos is of less concern. But when aggregated, it becomes inimical to the interests of private individuals (or of the community). For instance, the interoperability of Facebook and WhatsApp, and the ease of accessing both platforms under anonymous identities, allow initiators to clandestinely collect data on thousands of targeted social media users and their attendant political views. Deep division along inter-tribal and inter-state (Nagas across the border) lines is a social construct, exacerbated by divisive and distasteful social media discourses. With big data in hand, intelligence agencies can readily exploit this faultline and engineer further fragmentation within the Naga Political Groups and Civil Society Organizations, and even along tribal lines. In such a scenario, only the owner of the metadata stands to profit (which was nearly the case when Ravi and the NNPGs formed a virtual group to the exclusion of the NSCN-IM, a grouping potent enough to revive fratricide).

The accusation that Facebook “is biased against facts” is empirically evident even on various local Facebook groups and pages. Charges filled with hatred have been exchanged on an almost daily basis on prominent Nagaland-based Facebook pages/groups. Content with the potential to ignite inter-tribal insurrection was openly traded without any censure from Facebook or any remedial measure by the page/group administrators. This unabated digital instigation is held culpable for the increasingly divisive relations and mutual mistrust between the Nagas of Nagaland and the Nagas residing in other states. From an inter-tribal perspective, Facebook has not adopted any proactive approach to stop the spread of hate and disinformation in the ongoing impasse over the shifting of the DC office, Dimapur, even as the discourse on Facebook pages/groups assumes an ugly tribalistic character which, in the local context, is sensitive enough to pit one particular tribe against another and is difficult to rein in.

Social media platforms, particularly Facebook and Instagram, are increasingly being viewed as harmful to the mental health of teenagers. Facebook’s own in-depth research shows significant teen mental-health harms directly attributable to Facebook and its subsidiary, Instagram. The negative impacts of Facebook and Instagram on adolescents include eating disorders, self-harm in pursuit of the perfect body image advertised by fitness influencers, low self-esteem, poor life satisfaction, mood swings and a decline in overall well-being. Facebook’s policy of pursuing profits while completely disregarding these documented harms has prompted the media to compare Facebook with Big Tobacco, which knew its products were carcinogenic yet publicly denied it well into the 21st century.

Adolescent girls are especially susceptible to the toxicity of social media. Instagram exacerbates that vulnerability with the photo-editing and retouching features it offers its users. The problem crops up when its teen users, though fully aware of the app’s ability to retouch content, develop low self-esteem by comparing themselves negatively with their peers and celebrities, judging mainly from the retouched and edited versions of their peers’ body images. Upward social comparison, or comparing oneself to someone who is better in some respect, with the celebrities they follow on Instagram further compounds teens’ already lowered self-esteem.

Low self-esteem, combined with discontent about one’s own body image and possessions, drives quite a few towards self-destructive mental states. The growth of a rebellious generation, as well as the rising rate of suicide among the young, in part trickles down from the harmful effects of social media platforms.

Facebook’s algorithm tracks our daily online behaviour. From reels to news and analyses of current issues, the algorithm automatically curates content according to our likings and surfaces it on our newsfeeds. This places us perilously before two fundamental flaws: (a) spending unnecessary, excessive time on social media, and (b) indoctrination into one-sided political, social, cultural and economic views through a steady feed of ‘alternative’ facts. In the long run, this trend will impair our ability to make reasoned decisions about our views on life.
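To see why such a design favours provocation over accuracy, consider the toy sketch below (in Python). It is purely illustrative and not Facebook’s actual system: the Post structure and the engagement weights are assumptions made up for this example. It simply shows that a feed ranked only on predicted reactions, comments and shares will push the most provocative item to the top, while the truthfulness of a post never enters the score.

# A minimal, hypothetical sketch of engagement-based feed ranking.
# Each post is scored by predicted interactions, so outrage-heavy posts
# that provoke comments and shares float to the top regardless of accuracy.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_likes: float      # model's guess at likes from this user
    predicted_comments: float   # comments keep users engaged longer...
    predicted_shares: float     # ...and shares spread the post further
    is_fact_checked: bool       # accuracy plays no part in the score below

def engagement_score(post: Post) -> float:
    # Weights are illustrative: active interactions (comments, shares)
    # are valued more than a passive like. is_fact_checked is ignored.
    return (1.0 * post.predicted_likes
            + 5.0 * post.predicted_comments
            + 8.0 * post.predicted_shares)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed is simply sorted by predicted engagement, highest first.
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("Calm, sourced explainer on a current issue", 40, 2, 1, True),
        Post("Inflammatory rumour blaming a rival group", 25, 60, 30, False),
    ])
    for post in feed:
        print(f"{engagement_score(post):7.1f}  {post.text}")

Run as written, the inflammatory rumour scores far higher than the sober explainer and is shown first, which is the dynamic Haugen and Ressa describe.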

Facebook itself is a great technological feat. However, being just an abstract machine, it is nearly devoid of ethical accountability: it mints money by analysing the behavioural data of its users while disregarding their well-being and privacy; it has no conscience-driven moral standard to judge right from wrong; it has become a principal accomplice in furthering divisions; and its pre-programmed classifiers fail to identify malicious or hateful content and thereby to prevent violence (especially in the Indian subcontinent). As Haugen has observed, “the path forward is not about breaking up Facebook”; it is about transparency and governance. However, data protection law in India is largely seen as having been drafted in political and business interests, with heavy contributions from these two quarters reported in the drafting of the Data Empowerment and Protection Architecture. As of now, limiting one’s own behaviour within the social media ecosystem, together with vigilant parental oversight of children’s and adolescents’ online activity, presents itself as the only shield against the harmful effects of social media.

 


