
Researchers explore social media to monitor mental illness

Researchers at the University of Ottawa are tapping into social media data to detect and monitor mental illness warning signs.

The French and Canadian research team led by engineering professor Diana Inkpen is mining social media data to identify at-risk individuals by analyzing their mental states through novel algorithms.

Backed by a three-year, $464,100 grant from the Natural Sciences and Engineering Research Council of Canada (NSERC), the project will combine “natural language processing, data mining, social media processing and medical informatics, in both English and French,” according to the university’s announcement.

“We will investigate one application scenario for our predictive model, which will be used to identify at-risk individuals in online communities,” said Inkpen, adding, “the model will also be used by psychologists and psychiatrists to identify variables related to major mental illness.”

With more than 2.2 billion active social media users worldwide (roughly 30% of the global population), screening for mental illness across platforms is a massive undertaking. However, online users have shown that they are more than willing to post personal information about their moods and activities, and that data is just waiting to be tapped.
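As a purely hypothetical illustration of what "tapping" that data could mean at its simplest (this is not the Ottawa team's algorithm, which relies on trained NLP models rather than word lists), a naive screen might just count mood-related keywords in public posts:

```python
# Hypothetical sketch only: a naive keyword screen for mood-related language.
# Real research systems use trained natural language processing models,
# not hand-picked word lists like this one.
MOOD_KEYWORDS = {"hopeless", "exhausted", "anxious", "alone", "worthless"}

def flag_post(text, threshold=2):
    """Return True if a post contains at least `threshold` mood keywords."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & MOOD_KEYWORDS) >= threshold

posts = [
    "Feeling hopeless and alone again today",  # two keywords -> flagged
    "Great game last night!",                  # no keywords -> not flagged
]
flags = [flag_post(p) for p in posts]  # [True, False]
```

Even this toy version shows why the consent questions raised below matter: a single bad day written about online is enough to trip a crude filter.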

Facebook’s launch of different emotional reactions in addition to the “Like” button is one such way in which users’ reactions can be directly analyzed for overall trends in mood and emotion.
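Aggregating those reactions into a mood trend is trivial. A minimal sketch, using made-up reaction counts rather than any real Facebook data, could look like this:

```python
from collections import Counter

# Illustrative, fabricated reaction data for a set of posts.
reactions = ["Like", "Love", "Sad", "Angry", "Like", "Sad", "Sad"]

totals = Counter(reactions)                 # e.g. Counter({'Sad': 3, 'Like': 2, ...})
mood_share = {r: n / len(reactions) for r, n in totals.items()}
# mood_share gives each reaction's proportion of the total, a crude "mood trend".
```

The point is not the code but how little of it is needed: once reactions exist as structured data, turning them into an emotional profile is a few lines of counting.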

Although it seems innocent enough, this Facebook data is being sold for advertising purposes. When people respond strongly with options like “Love,” “Sad,” or “Angry,” outside parties who purchase the data from Facebook use it to identify target audiences and potential customers.

With regard to mental health, the US Government is already proposing screening all adults, including pregnant women, for depression.

According to the US Preventive Services Task Force (USPSTF), “All positive screening results should lead to additional assessment that considers severity of depression and comorbid psychological problems (eg, anxiety, panic attacks, or substance abuse), alternate diagnoses, and medical conditions.”

In this sweeping description, anxiety is considered a mental illness, and the task force’s methods for treatment include “antidepressants or specific psychotherapy approaches (eg, CBT or brief psychosocial counseling), alone or in combination.”

What is concerning is that nothing is written about whether people actually have a choice in being put on antidepressants or sent to counseling, and that raises serious issues of consent.

A person may be suffering from anxiety because of an isolated event, but if they are diagnosed as depressed, they could be automatically sent for treatment without any regard for consent.

“The task force says one key is that appropriate follow-up be available to accurately diagnose those flagged by screening — and then to choose treatments that best address each person’s symptoms with the fewest possible side effects.”

Notice that the USPSTF makes no mention of reviewing options for the “patient” being screened. The only proposal is “treatment” through medication and/or counseling.

And what does the US Government propose as a means for prevention?

The government recommends “collaborative care for the management of depressive disorders as part of a multicomponent, health care system–level intervention that uses case managers to link primary care providers, patients, and mental health specialists.”

This means more trips to mental health experts, more taxpayer money, and all of this under the umbrella of a massive, population-wide screening of American adults with little room for actual in-depth analysis on a case-by-case basis.

While screening social media for mental health may have benefits, it may also inadvertently send someone to be screened for depression, resulting in medication, all because they were having a tough time and decided to write about it on social media.

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co

