Facebook has rolled out a new technology called “proactive detection” artificial intelligence (AI) that scans all of a user’s posts for patterns suggesting suicidal thoughts.
If deemed necessary, it can share helpful information with the user or the user’s friends, or get in touch with local first responders. Responding to a question from the website TechCrunch, a Facebook spokesperson said that users cannot opt out of the feature.
The spokesperson added that the new feature is meant to improve user safety, and that the support resources the social media behemoth offers can be easily dismissed if the user doesn’t wish to view them.
Since an AI is expected to detect suicidal patterns faster than human reviewers, Facebook hopes the deployment will shorten the time it takes to get help to those at risk. This isn’t the first time Facebook has used AI in this manner.
The company had earlier tested the technology to detect problematic posts and to report suicidal thoughts to users’ friends, but that testing was limited to the US.
Now, the social media giant will deploy the AI to scan content from across the globe, except in the European Union, where the General Data Protection Regulation privacy laws make the use of such technology rather complicated.
Facebook will also use the AI to prioritize urgent or high-risk user reports so that moderators can address them quickly. Tools that instantly surface first-responder contact information and local-language resources are also part of the strategy.