Social networks are powerful tools for data collection. We've seen Australia's Black Dog Institute launch a tool that scans millions of Tweets daily to build a real-time global map of our emotions, and even a tool that scans for profanity.
The major difference between those tools and a new web app -- called Samaritans Radar -- launched by UK suicide prevention charity Samaritans is anonymity. When the data isn't anonymised, the ability to opt into -- and out of -- being monitored becomes a serious privacy concern.
The app works by proxy. When a user downloads and signs up to Samaritans Radar, the app gains access to the Twitter accounts that user follows. It then monitors those accounts, looking for key phrases in public Tweets, including terms such as "depressed", "help me" (probably not parsing for Star Wars fandom), "tired of being alone", "hate myself" and "need someone to talk to".
When it finds one of these phrases, the app alerts the user via email, offering guidance on how to reach out to the person in question; if an account is reported as suicidal, the report will be verified by Twitter's Trust & Safety team, and both the Radar user and the reported account will be contacted by Samaritans.
Obviously the service means well, but it makes too many assumptions. First, it assumes that everyone on Twitter is friends with all their followers, or that they'd be OK with every single one of their followers being alerted that they're suicidal each time they mention being depressed -- even in jest (the charity has acknowledged that its algorithm can't recognise jokes).
Second, it assumes that everyone using the app has the best intentions -- that it won't be used, for example, by trolls or others wishing harm on vulnerable users.
Third, it only monitors public Tweets. How many users will simply make their Tweets private, knowing that someone might be emailed whenever they use one of the key phrases (a full list of which is unavailable)?
It doesn't help that Samaritans seem to profoundly misunderstand Twitter users' concerns. In the service's FAQ, the charity answered the question of intrusiveness and privacy with "Samaritans is all about choice, and so we are giving people the choice to reach out to help others in need, in the same way that we give people choice to Samaritans' helpline if they want to."
So far, the app has had over 3,000 users sign up, with more than 1.6 million accounts being monitored -- and, according to The Register, a mere 4 percent of the Tweets flagged by the app have been validated as genuine.
As noted by IP/ICT lawyer Susan Hall, users can opt out under Section 12(1) of the UK Data Protection Act, which allows individuals to notify a data controller that they do not permit it to make automated decisions that affect them.
In response to the user backlash, Samaritans executive director of policy, research and development Joe Ferns did not directly address any user concerns, instead nebulously promising to take action going forward.
"The App has had a positive response so far, with over 3,000 people signed up as subscribers to date. Since launch, almost 20,000 people have mentioned the App, helping #samaritansradar trend on Twitter for two days. We will take on board any feedback we receive as we develop the App further and are taking very seriously the concerns raised by some Twitter users regarding possible data protection and privacy issues relating to the Application," he wrote.
"We are looking into the details of the issues raised, including working with the relevant regulatory authorities and will continue to take action as needed to address these concerns appropriately going forward."
Meanwhile, a Change.org petition addressed to Twitter states that its signatories have no faith in Samaritans to address user concerns, and asks the social network to deny the app access to user data, effectively shutting it down.
CNET has contacted Samaritans for comment and will update when we receive further information.