Responsibility of Technology in Suicide Prevention

Technology shapes how we shop, socialise, and access healthcare. As we rely on it for our decisions and social interactions, companies like Amazon, Google, and Facebook collect large amounts of data about us, which they use to make decisions about advertising and user experience.

They hold ever more information about our habits and, increasingly, about our thought processes themselves.

We always search on Google for a reason, and those searches are revealing not only when we are looking for fashion or groceries but also when someone is in deep emotional pain. Many people post suicidal content online, and some even live-stream their suicide attempts.

Large companies have a unique opportunity to tackle this problem. From a public health perspective, they control data that can be helpful but can also be harmful.

As social media and other companies begin to "understand" us better, they will have a responsibility to act. Here are the points they need to be mindful of.

  1. Assessment

A careful assessment is required before taking any action on suicidal content. Facebook is already using AI to determine suicide risk, apparently combining automated detection with human reviewers who come in behind it to check. Using tools such as Natural Language Processing and AI is a step in the right direction.

The "knowledge" gained from a large amount of data is only as good as the people interpreting it. Ensuring that AI understands this nuance or there is a way to manage this nuance is critical.

Dealing with concerns such as false positives and misread signals will be key, as will how the resulting information is presented to the user.
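As one way to picture the "AI flags, humans check" approach described above, here is a minimal sketch in Python. The phrase weights, thresholds, and routing rules are purely illustrative assumptions, not Facebook's actual system or a validated screening model.

```python
# Hypothetical sketch of a hybrid "AI flags, human reviews" triage flow.
# The risk phrases, weights, and thresholds below are illustrative only.
from dataclasses import dataclass
from enum import Enum


class Risk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class TriageResult:
    score: float
    risk: Risk
    needs_human_review: bool


# Illustrative phrase weights; a production system would use a trained
# NLP model evaluated for false positives and misread context.
PHRASE_WEIGHTS = {
    "want to end it": 0.9,
    "no reason to go on": 0.7,
    "saying goodbye": 0.5,
}


def triage(post_text: str) -> TriageResult:
    text = post_text.lower()
    score = min(1.0, sum(w for phrase, w in PHRASE_WEIGHTS.items() if phrase in text))
    if score >= 0.8:
        risk = Risk.HIGH
    elif score >= 0.4:
        risk = Risk.MEDIUM
    else:
        risk = Risk.LOW
    # Anything above LOW is queued for a trained human reviewer rather
    # than triggering an automatic intervention.
    return TriageResult(score=score, risk=risk, needs_human_review=risk is not Risk.LOW)


if __name__ == "__main__":
    print(triage("I have no reason to go on and I'm saying goodbye."))
```

The point of such a design is that the algorithm never acts alone: anything above low risk goes to a human reviewer, which is one way to manage false positives and misread signals.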

  2. Intervention and Consent

This is a larger concern on the user side, and engaging with both users and professionals is important. Consent is required for mental health treatment; in the low to medium risk categories, offering education or intervention should require consent just as face-to-face contact would.

What should happen when an algorithm decides you are high risk? Face to face, clinicians sometimes have to decide that a suicidal person requires police intervention. This process has nuance, and the relevant laws and interventions vary at the state and local level.

Not only that, but one has to consider the training of the officer taking the call. Tech companies will have to become familiar with the complexities of deeming someone a danger to themselves.

There are obvious cases, such as someone recording an attempt or making their intent clear in a post; however, many subtler signals go into understanding risk. Having an algorithm decide without informing users does not constitute consent.

For tech companies interested in tackling suicidal ideation in real time, these decisions shouldn't be hidden inside a black box.

For suicide attempts, there should be face-to-face contact, and the intervention should be carried out in partnership with the user, local authorities, and crisis services.

  3. Data Governance/Privacy

The next concern is what happens once the data is collected. Keep in mind that these companies hold large amounts of data but are not healthcare companies. So how should this data be governed, and how is individuals' privacy protected?

Should tech companies scanning our risky behaviours be held to the same standard as "medicine"? If they are going to study symptoms, should they be held to the standards of health privacy laws? If not those standards, how can "big tech" protect privacy? How long should the information be stored? Can the data later be de-identified so that companies can still "learn" from it?

These critical questions are central to the debate. There are no easy answers, but they are questions that need them.

Companies dealing with large amounts of data about your health should be transparent about how they are using it. Many argue that individuals should be compensated for their data if companies are going to "learn" from it.
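On the de-identification question raised above, here is a minimal sketch, assuming a hypothetical record format; the field names and salting scheme are illustrative assumptions, not a description of how any company actually handles this data.

```python
# Hypothetical sketch of de-identifying a stored record before analysis.
import hashlib
from datetime import date


def pseudonymize(user_id: str, salt: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((salt + user_id).encode("utf-8")).hexdigest()


def de_identify(record: dict, salt: str) -> dict:
    """Drop direct identifiers and coarsen quasi-identifiers before analysis."""
    return {
        "subject": pseudonymize(record["user_id"], salt),
        "birth_year": record["date_of_birth"].year,  # keep year, drop exact date
        "country": record["country"],                # keep coarse location only
        "risk_label": record["risk_label"],          # the field being studied
    }


if __name__ == "__main__":
    raw = {
        "user_id": "u-1029",
        "date_of_birth": date(1990, 4, 12),
        "country": "IN",
        "city": "Pune",            # not copied into the de-identified record
        "risk_label": "medium",
    }
    print(de_identify(raw, salt="rotate-this-secret"))
```

Even a sketch like this leaves open questions the article raises: how long the de-identified copy is kept, who can re-link it, and whether the original should be deleted once the "learning" is done.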

  4. Where Do We Go From Here

Large technology companies have an opportunity because they hold immense amounts of data. From a public health standpoint, they could make a huge impact on issues like suicide and other mental health problems.

With that opportunity comes a responsibility to users to protect their rights and privacy. Being uniquely positioned to intervene in suicide risk in real time is critical work.

Tech teams need to work with practitioners to determine how real-time intervention differs from face-to-face intervention, and to ask challenging questions about how best to serve the public while holding personal health data.

Technology companies must continue to ask these challenging questions, and to provide the answers to their users and to society at large.

<div class="paragraphs"><p>Responsibility of Technology in Suicide Prevention</p></div>
IMPORTANT POINTS TO CONSIDER WHILE MAKING THE MOVE FROM CORPORATE TO ENTREPRENEUR
<div class="paragraphs"><p>Responsibility of Technology in Suicide Prevention</p></div>
6 Promising Tech Sectors for Launching your Startup

Get The CEO Magazine to your Door Steps; Subscribe Now

Software Suggestion

No stories found.

Best Place to Work

No stories found.

CEO Profiles

No stories found.

Best Consultants

No stories found.

Tips Start Your Own Business

No stories found.
logo
The CEO Magazine India
www.theceo.in