Tarleton Gillespie’s “The Politics of ‘Platforms’” problematizes tech companies’ (specifically YouTube’s) use of the word “platform” to distance themselves from the content hosted on their servers. By naming themselves platforms, these companies shift the focus onto the “User Generated” aspect of the content. This strategy is reminiscent of Google’s claims of “neutrality” regarding its search results, and I agree with Gillespie that the tech sector’s claim to be apolitical isn’t tenable.
I believe that tech companies should be held accountable for the hateful content on their “neutral platforms,” and that failing to do so renders them complicit in the dissemination of hate speech. Two contemporary examples come to mind that illustrate the importance of this accountability.
First, (our favourite topic) Facebook’s Fake News epidemic. By implying that it is not responsible for the stories posted on its website, Facebook suggests that its users should be the ones held responsible for the presence of Fake News on their timelines. The problem I have with this is that Facebook has been made aware of the presence of these harmful stories on its site, while many of its users lack the savvy to distinguish the fake from the real. Rather than implementing any kind of real solution, Facebook released a blog post saying that it is making it easier for users to report stories as “offensive”. AGAIN… putting the responsibility on the users.
Clearly, I’m not the only one who takes issue with Facebook’s lack of action. The German government is planning a law under which Facebook would pay a fine of 500,000 euros for each Fake News post. Since then, Facebook has built a more intuitive way of reporting Fake News that still relies on users to flag the stories, but Facebook then sends them to a third-party fact checker and subsequently de-prioritizes them.
The second example I would like to present is Twitter. Twitter has been called out many times for its inability to provide a solution to the harassment experienced by its users, but it recently rolled out an anti-harassment feature that seems to be addressing at least this one issue with the platform. Twitter isn’t out of the woods yet, however, as there have also been reports of a white-supremacist advertisement appearing as a “promoted tweet” on people’s timelines. While Twitter’s advertising policy states that advertisements cannot promote hate speech, “promoted tweets” can be flagged as such by users. Once again, this brings the issue of accountability into question. I believe that Twitter should be responsible for deleting accounts that incite violence, especially since in many cases there are actual people behind the keyboards disseminating these messages.
The issue I see is that Google has set a standard with its “neutrality” that allows other media companies to hide behind it and absolve themselves of action. However, if, as Gillespie states (p. 358), YouTube can take action against its algorithm, “demoting objectionable content from their ‘Most Viewed’ page,” then I believe that Twitter and Facebook should be taking action against the USERS on their platforms who are spreading even worse content.