April 15, 2014

Is it time for a "Stupidity Filter" to help prevent social media blunders and dramas?

This week two airlines were again involved in social media controversies, the latest in an endless series of social media disasters caused by customer attacks, blunders and plain human stupidity:
US Airways attached an "inappropriate" (read: pornographic) picture to a tweet sent by its customer service department in reply to a customer. Damage control is difficult in such cases: the picture stayed online for more than half an hour, enough time to generate a cloud of jokes, irony and angry reactions (search for the term "US Airways Tweet" to get an idea). The airline was quick to apologise, but some reputation damage had already been done. The sad thing is that the airline (or any other business, for that matter) had nothing to do with it; this was, once again, the action of an individual. We do not yet know the motives or reasons for such a thing, but there is no shortage of explanations from self-proclaimed psychologists in Twitter posts.
The second case was the Twitter post of a 14-year-old girl from Rotterdam containing a threat against American Airlines: again the act of an irresponsible teenager (who expects much responsibility from a 14-year-old?) combined with stupidity. The airline was of course not amused by the (irresponsible) joke and contacted the FBI, which resulted in the girl's arrest by the police.

I wonder if it is time for a "Stupidity Filter" for Twitter and other social media, where irresponsible or anti-social people expose their stupidity and complexes to the whole world, sometimes putting even themselves in danger. No new technology is necessary for such a filter: text analysis software is already available, sentiment detection algorithms are getting better and better, and it is not a big deal to check the words used against a list of "sensitive" terms and block the text. For pictures there are solutions too; the other day I came across image analysis software that can infer our behavior from the pictures we publish on our social networks.
How should such a "stupidity filter" work? Blocking content is of course against free (online) speech and expression, but in analogy to the physical world, which we tend to forget in such ethical discussions, people can do things online that get them into trouble, as an old example of such a case involving Domino's Pizza shows. I think a simple solution is this: if the filter observes something "inappropriate" about to be posted on social media, a pop-up warning can appear and inform users that the content they intend to post may get them into trouble.
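To make the idea concrete, here is a minimal sketch of such a filter in Python. It only illustrates the simplest part described above, matching a post against a list of "sensitive" terms and producing a warning instead of posting silently; the term list and the function names are hypothetical examples, not any real social media API.

```python
import re

# Illustrative list only; a real filter would use a curated vocabulary
# and sentiment analysis on top of simple term matching.
SENSITIVE_TERMS = {"bomb", "attack", "threat"}

def flag_sensitive(text):
    """Return the set of sensitive terms found in the text (case-insensitive)."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return words & SENSITIVE_TERMS

def warn_before_posting(text):
    """Simulate the pop-up: return a warning message, or clear the post."""
    hits = flag_sensitive(text)
    if hits:
        return ("Warning: your post contains %s - it may get you into trouble."
                % ", ".join(sorted(hits)))
    return "OK to post."
```

Note that this deliberately warns rather than blocks, matching the pop-up approach above: the final decision stays with the user, which sidesteps part of the free-speech objection.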
The idea is simple yet not easy to implement because of several ethical and free-speech aspects: such filters would also be interesting for various dictators and leaders allergic to ideas different from their own, eager to silence the voices of people exposing their crimes or illegal actions. So such filters must remain exclusively under the control of the social media operator (Twitter, Facebook, YouTube etc.) and beyond the reach of the various modern censors.
We will come back to this some time; for now, good luck to all those trying to repair reputation damage.