Platform Accountability

Tarleton Gillespie’s “The Politics of ‘Platforms’” problematizes tech companies’ (specifically YouTube’s) use of the word “platform” to distance themselves from the content hosted on their servers. By naming themselves platforms, they shift the focus onto the “user-generated” aspect of the content. This strategy is reminiscent of Google’s claims of “neutrality” in its search results, and I agree with Gillespie that the tech sector’s claim of being apolitical isn’t tenable.

I believe that tech companies should be held accountable for the hateful content on their “neutral platforms,” and that failing to do so renders them complicit in the dissemination of hate speech. Two contemporary examples come to mind that illustrate the importance of this accountability.

First, (our favourite topic) Facebook’s Fake News epidemic. By implying that they are not responsible for the stories posted on their website, Facebook suggests that their users should be the ones held responsible for the presence of Fake News on their timelines. The problem I have with this is that Facebook has been made aware of the presence of these harmful stories on its site, while many of its users lack the savvy to distinguish the fake from the real. Rather than implementing any kind of real solution, Facebook released a blog post saying that they are making it easier for users to report stories as “offensive”. Again… putting the responsibility on the users.

Clearly, I’m not the only one who takes issue with Facebook’s lack of action. The German government is planning a law that would see Facebook fined €500,000 for each Fake News post. Since then, Facebook has built a more intuitive way of reporting Fake News that still relies on users to flag the stories, but Facebook then sends flagged stories to a third-party fact checker and subsequently de-prioritizes them.
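To make that flow concrete, here is a minimal sketch of what such a flag-then-fact-check pipeline might look like. Everything in it (the names, the threshold, the penalty values) is my own illustration, not Facebook’s actual system:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical verdicts a third-party fact checker might return.
class Verdict(Enum):
    UNREVIEWED = "unreviewed"
    DISPUTED = "disputed"
    FALSE = "false"

@dataclass
class Story:
    story_id: str
    flags: int = 0                        # number of user reports
    verdict: Verdict = Verdict.UNREVIEWED
    base_score: float = 1.0               # normal feed-ranking score

FLAG_THRESHOLD = 10  # illustrative: reports needed before escalation

def flag_story(story: Story, fact_check_queue: list) -> None:
    """A user reports a story; enough reports send it to fact checkers."""
    story.flags += 1
    if story.flags >= FLAG_THRESHOLD and story.verdict is Verdict.UNREVIEWED:
        fact_check_queue.append(story)

def ranking_score(story: Story) -> float:
    """Disputed or false stories are de-prioritized in the feed, not deleted."""
    penalty = {Verdict.UNREVIEWED: 1.0,
               Verdict.DISPUTED: 0.5,
               Verdict.FALSE: 0.1}
    return story.base_score * penalty[story.verdict]
```

Note that even in this toy version, every step after the initial flag is the platform’s choice; the “users are responsible” framing only describes the first line of the pipeline.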

The second example I would like to present is Twitter. Twitter has been called out many times for its inability to provide a solution to the harassment experienced by its users, but it recently introduced an anti-harassment feature that seems to be addressing at least this one issue with the platform. They aren’t out of the woods yet, however, as there have also been reports of a white-supremacist advertisement appearing as a “promoted tweet” on people’s timelines. While Twitter’s advertising policy states that advertisements cannot promote hate speech, any user can pay to designate their own tweet as “promoted,” which is how content like this slips through. Once again, this brings the issue of accountability into question. I believe that Twitter should be responsible for deleting accounts that incite violence, especially since in many cases these are actual people behind the keyboards disseminating these messages.

The issue I see is the standard that Google has set with its “neutrality,” which allows other media companies to hide behind that standard and absolve themselves of action. However, if, as Gillespie notes (p. 358), YouTube can intervene in its own algorithm, “demoting objectionable content from their ‘Most Viewed’ page,” then I believe that Twitter and Facebook should be taking action against the users on their platforms who are spreading even worse content.
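The kind of demotion Gillespie describes is mechanically simple, which is worth underlining: the intervention is a policy choice, not a technical impossibility. A toy sketch (the `objectionable` flag is my invention, not YouTube’s real schema):

```python
# Toy "Most Viewed" chart: flagged content is excluded before ranking.
videos = [
    {"title": "A", "views": 1_200_000, "objectionable": True},
    {"title": "B", "views": 900_000, "objectionable": False},
    {"title": "C", "views": 750_000, "objectionable": False},
]

def most_viewed(items, limit=10):
    # Drop flagged videos, then rank the rest by view count.
    eligible = [v for v in items if not v["objectionable"]]
    return sorted(eligible, key=lambda v: v["views"], reverse=True)[:limit]

print([v["title"] for v in most_viewed(videos)])  # ['B', 'C']
```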


Comments

  1. I agree that platforms such as Facebook, Twitter, and Google have a certain responsibility, as providers of the platform, to maintain its authenticity and to keep it a safe place for sharing and distributing reliable information.

    I do believe it is important to understand, and to emphasize, the sheer volume of information that flows through these platforms. Google processes about 40,000 search queries per second (3.5 billion per day; 1.2 trillion per year) (http://www.internetlivestats.com/google-search-statistics/). Realistically, every single one of these searches cannot be moderated by humans, so autonomy is given to Google’s algorithms, which autocomplete users’ search queries based on past search queries.
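    As a rough illustration of that mechanism, a frequency-ranked prefix match over logged queries behaves something like this. This is a minimal sketch, not Google’s actual system, which draws on far richer signals:

    ```python
    from collections import Counter

    # Hypothetical log of past queries and how often they were typed.
    past_queries = Counter({
        "weather today": 120,
        "weather tomorrow": 85,
        "weather radar": 60,
        "web development": 40,
    })

    def autocomplete(prefix: str, log: Counter, k: int = 3) -> list[str]:
        """Return the k most frequent past queries starting with `prefix`."""
        matches = [(q, n) for q, n in log.items() if q.startswith(prefix)]
        matches.sort(key=lambda pair: pair[1], reverse=True)
        return [q for q, _ in matches[:k]]

    print(autocomplete("wea", past_queries))
    # ['weather today', 'weather tomorrow', 'weather radar']
    ```

    Nothing in that ranking knows whether a completion is offensive; it simply mirrors what people have typed before.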

    This has led to problematic issues, as we discussed thoroughly in today’s and last week’s classes. That said, I believe there are some factors important to this issue that we have not yet emphasized: first, the overwhelmingly high number of search queries, which makes effective moderation difficult; and second, the likelihood that many internet users lack higher education (only 42% of Americans have a bachelor’s degree or higher, while 84% of Americans use the internet) (https://en.wikipedia.org/wiki/Educational_attainment_in_the_United_States#Gender; https://www.census.gov/hhes/socdemo/education/data/cps/2014/tables.html; http://www.pewinternet.org/2015/06/26/americans-internet-access-2000-2015/).

    Because of this, it doesn’t surprise me that a common search query will finish the sentence “are Jews” with “evil”, since a correlation exists between antisemitism and a lack of education (http://archive.adl.org/antisemitism_survey/survey_iii_chart_education.html). Google responded to critics of this particular autocomplete controversy; however, after a few minutes of playing around with Google’s autocomplete, I found a few other controversial autocompletions:
    “are conservatives racist”
    “are black people smart”
    “why do gay guys lisp”
    “are Asians the smartest race”
    “why are liberals so stupid”
    “is Kwanzaa real”

    These are unarguably insensitive statements that would likely offend the individuals they target.

    I believe that Google was right to remove the “are Jews evil” autocomplete; however, it does raise the issue of censorship. When does Google get to decide that an autocompletion should not be included in its results? Who at Google has the final say? What are the implications? Check out this Wikipedia entry that outlines the many censorship controversies the company has dealt with over the years: https://en.wikipedia.org/wiki/Censorship_by_Google
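    Mechanically, removing a completion is trivial; a denylist filter like the one below (entirely hypothetical) would do it. The hard part is exactly the question above: who decides what goes on the list.

    ```python
    # Hypothetical denylist; writing the filter is easy, governing it is not.
    BLOCKED_COMPLETIONS = {"are jews evil"}

    def filter_suggestions(suggestions: list[str]) -> list[str]:
        """Drop any autocomplete suggestion that appears on the denylist."""
        return [s for s in suggestions if s.lower() not in BLOCKED_COMPLETIONS]

    print(filter_suggestions(["are jews evil", "are jews a race"]))
    # ['are jews a race']
    ```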

    I want to note that I understand the emphasis of this issue is on the autonomy placed upon technological infrastructures that ‘do our job for us’, rather than on censorship or education; however, I believe these aspects of the issue were missing from our discussion and are important to recognize.

