A creaking social media

The Guardian ran a piece by Tim Burrows on Facebook’s Safety Check feature, Safety Check: is Facebook becoming fear’s administrator-in-chief? I find the social media platform increasingly fascinating, especially as it comes under critique for its social and technical choices. A creakiness is coming to the fore now.

Ruminating on the notion of community and Facebook’s approach to it in “The Education of Mark Zuckerberg”, The Atlantic quipped that the company has a “move fast, know little attitude” in its quest to “give people community, meaning, and purpose”. This follows the “philosophy of everything we do at Facebook is that our community can teach us what we need to do” (Zuckerberg Files, https://dc.uwm.edu/zuckerberg_files_transcripts/251/), which seems to me to be alarming. What community are they listening to? How are they listening? What’s the process for turning these thoughts into a product (for that is what it is), and is there a critical process for critiquing that process?

Burrows’s article fits into this narrative, questioning when Facebook chooses to turn Safety Check on as part of its ‘grand plan to make users feel “more safe”’ (Burrows, Guardian). But does it really? Or does it encourage a sense of having to mark oneself as ‘safe’ on the service, playing on our emotions?

I think there is a wider question about how social media helps, or doesn’t, in shock events such as an attack or catastrophe. Paul Virilio contends that this is the administration of fear, as discussed in an interview with VICE.

Given the current investigations into fake news and manipulation, critical questions need to be raised about the processes in place.

I was thinking about this sleepily yesterday, having listened to DeepMind being interviewed on Radio 4, saying nice things about the need for oversight of Artificial Intelligence. The speaker seemed to decide that they are the best people to explore this, yet arguably they are the last who should. There was also nary a mention of the human dimension and the technology’s effects on people.

Next year it may be interesting to see whether the critical approach to technology companies continues and, if so, in what direction. Scarily, with some of the larger companies paying for lobbyists (Guardian, CNN, Wired), the policy fight may be long and hard, but I wonder if there are other avenues to explore.