#SamaritansRadar Twitter App – my thoughts on why this is extremely problematic

(CN: discussion of mental health, suicide, elements of self-harm)

This will be another mental health post, this time centered on the Twitter app that the mental health and suicide prevention charity Samaritans have launched, called Samaritans Radar. You can read about the app on their website at this link: http://www.samaritans.org/radarpress The basic premise is that a user can sign up to Radar, which will then scan the Tweets of the people they follow on Twitter and flag to the user when certain keywords are used that imply mental crisis or suicidal ideation. The user can then view their friend’s Tweet and make a judgment call as to whether it requires further action.

Sounds good, right? Friends can look out for potentially vulnerable friends and step in when it seems they’re having a crisis. Here’s what I and many fellow sufferers of mental health conditions find problematic about this app. Now, I don’t think I have any new input that hasn’t already been said by others, but I would like to have my say and add my voice to those of us protesting the issues around this app.

I do want to say here, in case people don’t read to the end, that Samaritans is a fantastic charity and offers a lot of great help to people in very acute crises. For many people this has meant they were able to resist a suicide attempt when they might not have otherwise. I’ve read that the app is in no way associated with the volunteers who run the 24-hour phone service, so please do not feel discouraged about contacting them if you need to.

Many of an individual’s Twitter followers are strangers, not friends
The app is based on the follow list of the user, not on any choice made by those who may be in crisis. This means that, should I mention suicide or self-harm in my Tweets, any stranger who follows me and also uses the Samaritans app would get an email about it. I find this idea very uncomfortable and invasive: followers whom I don’t even know would get specific emails highlighting the app’s perception of my mental health state and prompting this stranger to do something about it. You may be thinking that support must be good whoever it comes from, right? Well, no, not really. Support can come from myriad sources, but in general, if I were in a suicidal frame of mind and in acute crisis, the type of help I’d be looking for wouldn’t be from a stranger on Twitter who needed an app to alert them that things weren’t well with me. You might also be thinking, “What’s the big deal; they’d see your Tweets if they follow you anyway.” The problem is the idea of being monitored by strangers for what they perceive as signs of suicidal ideation, strangers who are then prompted by an app on what steps to take. I think if you need to be alerted via an app that I may be in crisis, then we’re not close enough for you to provide the type of help I need anyway. The app would be much improved if the mental illness sufferer, i.e. the “target” of this help, were able to provide a definitive list of who could and could not be alerted by the app. Close friends and family being alerted to vocabulary that seems suicidal is a much better idea than any potential follower being able to take some form of control over my mental health, an idea which provides a nice segue into my next point.
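Before moving on: as a rough illustration of what I mean by that kind of list, here is a purely hypothetical sketch of an opt-in alert list. This is not anything Samaritans have built or proposed; the names and logic below are my own invented example of the idea that no alert should go out unless the person being monitored has approved the recipient.

```python
# Purely hypothetical sketch of an opt-in alert list; not a feature Samaritans have
# built or proposed. The person whose Tweets are scanned decides who may be alerted.

# Assumed data: each monitored person names the followers allowed to receive alerts.
ALERT_ALLOWLIST = {
    "@me": {"@my_sister", "@close_friend"},  # chosen by @me, not by the app or by followers
}

def may_alert(monitored_user: str, follower: str) -> bool:
    """Only email a follower the monitored person has explicitly approved."""
    return follower in ALERT_ALLOWLIST.get(monitored_user, set())

print(may_alert("@me", "@close_friend"))     # True: this person was chosen
print(may_alert("@me", "@random_stranger"))  # False: no alert email goes out
```

Nothing about the scanning itself would need to change; the difference is simply that no email goes out unless the person concerned has said it may.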

We must have agency over our own mental health
There is something very unsettling and invasive about the idea of others deciding that I’m suicidal or depressed enough to warrant their intervention and/or the introduction of outsiders without my explicit consent. This is pasted from the Samaritans website, in the link I posted earlier (emphasis mine):

“Twitter’s wider collaboration with Samaritans includes a referral process which works in two ways: Twitter has Samaritans listed within their Help Centre as the go-to organisation for suicide prevention in the UK and ROI. When somebody gets reported as suicidal, the Twitter Trust & Safety team verifies the report and if they deem it accurate, they get in touch with both the reporter and the reported account, to share recommendations and contact information for Samaritans.”

Everyone I’ve spoken with, or whose words I’ve seen on this, would *NOT* appreciate that. If particular Tweets lead someone to deem us suicidal, when we may or may not actually be acutely suicidal at that moment, they would “report” this in the name of help and we’d be contacted by outside sources, whether Twitter or Samaritans or whoever. This removes agency; we have the right to control how we deal with our mental health, and we don’t appreciate this control falling into the hands of strangers on Twitter who decide that we merit intervention without actually talking to us about it. If you’re not close enough to send me a message offering support or asking how I am, then you’re not close enough to justify the involvement of outside elements. Many of us have different ways of coping with fluctuating severity in our mental illnesses; strangers will not know those nuances well enough and will likely try to involve outside elements when they are not actually needed.

Now, I’m aware that to some extent the idea is aimed at people who have lost an element of control and may not be able to help themselves in that moment, but you have to remember that these people have entire lives and other contacts: close friends, family, fellow mental health sufferers on their TL. Preventing their potentially imminent suicide probably isn’t contingent on your report if you are effectively a stranger on Twitter. You also have to bear in mind that many people without mental illness tend to offer support and comfort via platitudes that actually tend to make things worse, but that’s another post altogether. Maybe I’m overreacting and people wouldn’t go to the extent of “reporting” someone to whom they are virtual strangers. But that doesn’t mean it will never happen. Either way, the point still stands: do NOT involve outside elements in someone’s care without at least contacting them first. If you are not a close friend or family member, you will virtually never be justified in involving outside elements without consent, in my opinion.

It’s a “cookie cutter” approach
Using so-called key words and phrases to detect someone’s suicidal ideation is hardly an exact science. People suffering from mental illnesses are as diverse a bunch as any group; we have different ways of coping with our health and with changing circumstances, and different words we use to convey these issues. Some people are very explicit even when not in crisis, whereas some don’t mention things even if a suicide attempt is imminent. Personally, I think I am less open about mental health in general when I’m struggling, so if I were suicidal, the clue would be that I *don’t* mention these key words. I don’t know for sure; what I do know is that there is no set of clues every person puts out when they are suicidal. One person’s “normal” might be another’s “in crisis” and vice versa. I know people who might mention that they are considering methods of suicide at a point when they are not in immediate crisis and don’t need intervention. I know people whose key words when they are suicidal are more likely to be about making plans, finalizing something, getting their affairs in order, or becoming content or relieved (mostly because they have decided to make the attempt). The app’s approach will not pick up on all these nuances; it will give many false positives and miss many genuine positives.

What people also need to realize is that, for mental illness sufferers and mental health advocates, many of these words and phrases are part of our general, everyday vocabulary. I talk about self-harm often: sometimes when I’m doing it or have done it that day, sometimes when I’m worried I’ll relapse soon, and sometimes when I haven’t done it for a long time. Many of us mention suicide a lot, whether we are having suicidal thoughts at that moment or haven’t for a long time. You might say “Well, the user will see the Tweet and realize it’s out of context”, but it’s not as simple as that. If I write “I can’t stop thinking about self-harm”, you have no idea what I mean by that. I could mean I’m fighting the urge from minute to minute and need immediate intervention, or I could mean that I’m in a temptation phase where I know I won’t do anything and that it will soon pass. I often have periods of intense fantasy about self-harm and suicide methods without there being a big danger of me actually doing it. Often the imagining itself is a great comfort. So did that tweet need reporting or not? What if it had said “I can’t stop thinking about suicide”?
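To make this concrete, here is a purely hypothetical sketch of keyword-based flagging. Samaritans haven’t published how Radar actually works, so the keyword list and logic below are my own assumptions; the point is only that matching words, on its own, flags very different Tweets identically and misses wording that, for some people, is a far stronger warning sign.

```python
# Hypothetical sketch of naive keyword flagging, NOT Samaritans Radar's actual code
# (which hasn't been published). It shows why keyword matching alone can't tell
# acute crisis from everyday discussion of mental health.

CRISIS_KEYWORDS = {"suicide", "self-harm", "kill myself"}  # assumed example list

def flag_tweet(text: str) -> bool:
    """Flag the tweet if any keyword appears, with no sense of context."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in CRISIS_KEYWORDS)

examples = [
    "I can't stop thinking about self-harm",                   # crisis, or a passing phase? flagged either way
    "One year free of self-harm today, so proud of myself!",   # recovery milestone: false positive
    "Getting my affairs in order. Feeling strangely calm.",    # a real warning sign for some: missed entirely
]

for tweet in examples:
    print(flag_tweet(tweet), "-", tweet)
```

Real detection may well be more sophisticated than this, but the core problem stays the same: the words alone don’t carry the context.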

The potential for abuse
Finally, the lack of privacy around the app, intrusive and unsettling as it is, creates potential for abuse. Many people have stalkers or trolls following them, who could take the opportunity to harass someone who may already be in a moment of crisis. Also, many people suffer extreme mental distress even in periods where there is little or no risk of them actually attempting or committing suicide, and the last thing anyone needs in those periods is to be harassed or trolled. Even people with good intentions can make things worse by offering platitudes.

Basically, I and many others feel that Samaritans have done very little or no research with actual sufferers of mental illness on Twitter. The app seems to have come from “higher up”, where someone thought it was a good idea without asking the people concerned. It could be much improved if the people being monitored could explicitly choose a list of those who would be contacted when Tweets suggest suicidal ideation. Twitter is a place where many of us who suffer from mental illness find we can be open, and we often already receive an enormous amount of support. My fear is that this level of monitoring and intervention will make people shut off. I and many people I know have already explicitly stated that we do NOT want any of our followers using this app to monitor our Tweets.

If you want to show support to your friends, try sending a message: ask how they are, whether they need to chat or vent, whether they need support. Many people don’t know what to say to someone in crisis, and that’s okay. If you are not a mental health professional, your friends likely don’t expect you to always say the right thing. Sometimes just an offer to listen, or even just sending virtual hugs or a ❤, can be enough. It means we know you care but may not know what to say.

I hope that the better elements of this app can be developed into something genuinely helpful, and that this episode raises awareness of the problems surrounding this kind of monitoring, problems that people without mental illness may never have considered.
