Imagine police knocking on your door because you posted a ‘troubling comment’ on a social media website.
Imagine a judge ordering you jailed, sorry, I mean hospitalized, because a computer program found your comments ‘troubling’.
You can stop imagining; this is really happening.
A recent TechCrunch article warns that Facebook’s “Proactive Detection” artificial intelligence (A.I.) will use pattern recognition to scan posts and contact first responders when it deems a person’s comments to express troubling suicidal thoughts.
“Facebook also will use AI to prioritize particularly risky or urgent user reports so they’re more quickly addressed by moderators, and tools to instantly surface local language resources and first-responder contact info.” (Source)
A private corporation deciding who goes to jail? What could possibly go wrong?
Facebook’s A.I. automatically contacts law enforcement
Facebook says it is ‘using pattern recognition to detect posts or live videos where someone might be expressing thoughts of suicide, and to help respond to reports faster.’ Human moderators then review the flagged posts and contact law enforcement.
The company is also ‘dedicating more reviewers from our Community Operations team to review reports of suicide or self harm.’ (Source)
Facebook admits that they have asked the police to conduct more than ONE HUNDRED wellness checks on people.