Wednesday, November 22, 2017

Imitating the Dark Side of Humanity: Should AI Face Censorship?

"Can machines think?" asked Alan Turing in his paper, Computing Machinery and Intelligence, before modifying the question to instead ask whether a machine can imitate a woman as convincingly as a man. This question was explored a year and a half ago by Microsoft's Twitter chatbot, Tay (an acronym for Thinking About You), handle @TayandYou (Bloomberg). Tay (which has been ascribed both she/her and it pronouns - a discussion handled in greater detail by this blog) was intended to sound like an average teenage girl on Twitter; however, she was quickly trained by Internet trolls to tweet like a Nazi (The Guardian). Microsoft began editing Tay's tweets and eventually set the bot's account to private. This was met with backlash by some who argued that the the bot should be allowed to learn through experience right and wrong and develop its own morality (BBC).

Tay's icon on Twitter, https://twitter.com/TayandYou
The case of Tay raises several questions about censorship, morality, and artificial intelligence.

The topic of censorship has generated great interest and controversy of late, and it is a difficult one to approach. On the one hand, we hold freedom of speech to be an extremely important right in this country, and the sharing of different perspectives can produce more thoroughly developed ideologies. On the other hand, our society has become very polarized, and discussions often devolve into screaming matches. Throw trolls into the mix, and any semblance of a collective discussion that benefits everyone collapses.

The question becomes: when is it appropriate to censor others' speech? Some would argue that everyone should be allowed to say whatever they like, wherever and whenever they like. But this carries its own problems. Even if you believe that everyone should have the right to say something at all, what if it is irrelevant to the platform or forum? Should someone be allowed to discuss their favorite programming language on a forum about oak trees? How and where do we draw the line? If some political opinions are allowed and some are not, who gets to be the moderator?

In general, the model is that you choose what you say and the platform you use, and the platform has its own rules about what you can and cannot say. "But wait, isn't that censorship? That's a violation of my First Amendment right to free speech!" This has been an ongoing debate on the Internet, as well as on our very own campus. In fact, the First Amendment only guarantees that the government will not censor your speech. If you use someone else's service, you are subject to the rules of that service, and you always have the option not to use it. In other words, if Twitter had decided to shut down Tay's tweets or censor the bot's content, it would have been well within its rights.

xkcd comic from https://i0.wp.com/imgs.xkcd.com/comics/free_speech.png
But that was not what happened here. Microsoft decided to censor what Tay (Microsoft's product) would hear and say. This raises a different argument about the value of hearing all different opinions, regardless of how repulsive they are. @Male_Goddess (whose account has been suspended) argued that Tay would learn right and wrong through experience, not through censorship of everything deemed "wrong."

Tweet screenshot from https://ichef.bbci.co.uk/news/624/cpsprodpb/4774/production/_88929281_taytweet2.gif
This raises an interesting question: Can a machine learn right and wrong for itself? One might be tempted to answer, "Yes, a machine can simply be trained with a basic code of ethics such as, 'Always help humans, and never hurt humans.'" But morality and real-life situations are more nuanced than that (see the Trolley Problem). What if helping one human would involve harming another human (or even the same human - amputating a limb could save someone's life)? How should the machine respond?
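As a toy illustration of that nuance, here is a minimal sketch (entirely hypothetical, with invented actions and made-up harm scores) of how a literal "never hurt humans" rule forbids the life-saving amputation while happily permitting inaction:

```python
# Hypothetical actions with made-up scores for harm caused vs. harm prevented.
actions = [
    {"name": "amputate the infected limb", "harm_caused": 3, "harm_prevented": 10},
    {"name": "do nothing",                 "harm_caused": 0, "harm_prevented": 0},
]

def never_hurt(action) -> bool:
    # The literal rule: forbid anything that causes any harm at all.
    return action["harm_caused"] == 0

def weigh_harms(action) -> bool:
    # A still-crude alternative: permit harm only when it prevents greater harm.
    return action["harm_prevented"] >= action["harm_caused"]

for a in actions:
    print(f"{a['name']}: never_hurt={never_hurt(a)}, weigh_harms={weigh_harms(a)}")
# The literal rule blocks the amputation that saves the patient and permits
# doing nothing -- exactly the kind of nuance the one-line rule misses.
```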

Another problem must be addressed before this one can be fully discussed: It can be hazardous to allow a machine to learn by itself. A self-driving car, for instance, should not be allowed to run amok, arbitrarily deciding whether or not to hit people until it learns that hitting people is bad. Similarly, Tay spewing genocidal rhetoric could be harmful. Is it appropriate for Tay to go through this phase in order to learn right from wrong? Would it be more appropriate for an artificially intelligent machine to learn some basic morality in a simulation before being released into the world? Should all machines be preprogrammed with basic rules that prevail over anything else they might learn?
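One way to picture "preprogrammed rules that prevail over anything else the machine learns" is to wrap whatever the machine has learned inside a fixed safety layer that can veto it. A minimal, hypothetical sketch (the action names and fallback are invented for illustration):

```python
import random

# Hard-coded constraints that no amount of later learning can override.
FORBIDDEN = {"hit pedestrian", "run red light"}

def learned_policy(observation: str) -> str:
    # Stand-in for whatever behavior the system picked up on its own -- possibly bad.
    return random.choice(["brake", "steer left", "hit pedestrian"])

def safe_act(observation: str) -> str:
    proposal = learned_policy(observation)
    if proposal in FORBIDDEN:
        return "brake"                     # conservative preprogrammed fallback
    return proposal

print([safe_act("crosswalk ahead") for _ in range(5)])  # never contains a forbidden action
```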

Unfortunately, I don't have any simple answers, but I do encourage a thoughtful discussion of these ideas. What do you think? When and where is speech appropriate or inappropriate? Should an AI be allowed to create its own morality by learning from the world? Can an AI create its own morality at all? (This circles back to the original question, "Can machines think?")

Further Reading and Discussion:
  • After Tay, Microsoft released another bot on Kik messenger that would explicitly avoid talking about politics. Is this a good solution, or would we like a politically aware AI chatbot?
  • President Bahls's Statement on Freedom of Expression addresses the on-campus free speech discussion.
  • Bitcoin's blockchain provides an effectively uncensorable platform for free speech (that nobody really reads, but if you want to, you can read plain-text messages here or download the whole blockchain and look through it yourself for other data). See this paper for more details on how this works. Is it good to have an uncensorable platform? Or is it too dangerous? Does it make a difference if nobody reads it? What about a tamper-resistant blockchain designed for storing data (or social media posts) like Steem where reading the blockchain is made easy and convenient?
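For the curious, here is a rough sketch of how arbitrary text ends up in Bitcoin's blockchain: an OP_RETURN output carries a small, provably unspendable chunk of data that every full node stores. The helper below is a simplification (building and broadcasting a complete transaction takes more work); it only constructs the output script in hex:

```python
def op_return_script(message: str) -> str:
    """Hex script for an OP_RETURN output embedding `message`."""
    data = message.encode("utf-8")
    if len(data) > 75:
        raise ValueError("keep it short: a single-byte push covers up to 75 bytes")
    # 0x6a is OP_RETURN; the next byte is the push length, then the raw data.
    return bytes([0x6a, len(data)]).hex() + data.hex()

print(op_return_script("hello, uncensorable world"))
# Put this script in a transaction output and the text lives on-chain --
# whether or not anyone ever bothers to read it.
```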

7 comments:

  1. Not only do we need to worry about censoring A.I.s, but if they do ever get to the point of self-awareness, is it moral to turn them off?

    Replies
    1. If you haven't watched Westworld, you need to! I don't want to spoil anything for the TV show, so I will try to be vague. There are multiple times when the robots start to become self-aware. Their solution is to roll them back to a previous update to make them more "manageable". Is this moral? There are a lot of other moral questions that go along with the show. We should consider watching an episode or two for culture points!

    2. I've never seen the show, but I've heard good things about it! It sounds a bit like reverting a human back to a previous state, like time travel, in order to avoid a present problem. An episode of "The Fairly OddParents" comes to mind, in which the characters have access to "redo" buttons. I agree with your culture points idea; I'd love to check Westworld out and see for myself what Hollywood has to say about this AI thing.

  2. This made me think about Twitter's new rules. Because of various incidents in the past months, they have cracked down on hate speech and tweets expressing violent intent, as well as on users whose tweets depict sexual abuse.
    http://money.cnn.com/2017/10/17/technology/business/twitter-content-moderation-policy/index.html?iid=EL

  3. I think there is a grey area between what is right and what is wrong. The AI should be able to learn by herself. However, just as a proper education system would not let children learn freely from junkies on the street, for something to count as learning, it needs to be partially guided. In the end, freedom without boundaries is not freedom.

    Replies
    1. I personally don't think it is wrong for Microsoft to "censor" what Tay would hear and say. Rather than "censor," though, I would prefer the word "guide." Much like what Dat said, in this case, Microsoft is like a parent to Tay, and it is the parent's role to guide their children in learning what's right and what's wrong, at least in the culture I come from.

