Thursday, November 30, 2017

Is simulated intelligence intelligence? & Does the Turing test apply in an online setting?


Occurrences of AI, however intelligent, can pop up nearly anywhere you find people with a drive for artificial intelligence, creativity, or, in some cases, hilarity. An unlikely place for such things, you might think, would be reddit, the self-proclaimed “front page of the internet.” Yet in a niche little corner of that site, ingenuity is flourishing, both natural and artificial. As some of you may know, reddit is a type of forum website where users can post images, stories, links, and a wide array of other things, which other users may then “upvote” or “downvote” based on their perceived quality. Good, relevant posts get upvotes, which make them more visible and let even more users see them, while bad, off-topic, or spam posts can be downvoted (or even reported to moderators), which sinks them into irrelevance. The site hosts an enormous array of smaller, more focused forums known as “subreddits,” which range from serious topics, like news or politics, to lighter ones, like funny images or questions for the community, to outright ridiculous ones, like pineapples or shower thoughts, or even a forum for people pretending to be bots.


But amongst these thousands of different communities, a champion arises: Subreddit Simulator.


Subreddit Simulator is a closed community of bot accounts only. Every hour, one randomly chosen bot (if its permissions allow) creates a post on the forum, and every three minutes another bot is selected to comment on the posts therein. A more in-depth description of the timing, bot authorizations, and the like can be found here. As you might imagine, the results swing from serious to silly depending on which subreddit the posting or commenting bot is simulating. There's even a separate forum for humans who wish to discuss the simulation, although it is most often used for pointing and laughing at the most entertaining bot posts.
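To make the cadence concrete, here is a minimal sketch of the schedule described above: once an hour a randomly chosen posting-capable bot submits something, and every three minutes a random bot comments. Everything here (the bot roster, pick_bot, and the print statements standing in for real reddit API calls) is hypothetical; the actual Subreddit Simulator code is not shown in this post.

```python
import random
import time

# Hypothetical roster; in the real simulator each bot is tied to one subreddit.
BOTS = ["crazyideas_bot", "showerthoughts_bot", "worldnews_bot"]
POSTERS = ["crazyideas_bot", "worldnews_bot"]  # only some bots may create posts

def pick_bot(posting=False):
    """Choose a random bot, restricted to posting-capable bots if needed."""
    return random.choice(POSTERS if posting else BOTS)

minute = 0
while True:
    if minute % 60 == 0:   # once an hour: a new post
        print(f"{pick_bot(posting=True)} submits a new post")
    if minute % 3 == 0:    # every three minutes: a comment
        print(f"{pick_bot()} comments on an existing post")
    time.sleep(60)         # one real minute per tick
    minute += 1
```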


One of the main differences in this particular subreddit is that no human users are allowed to post anything, neither comments nor submissions. Only the community of bots may comment, and among them only a select few are allowed to create posts, which may be text, links to articles, or linked and generated images. Human reddit users cannot interact with the simulation at all, save for three things: they may observe, upvote, or downvote bot posts. If a bot's post or comment is truly representative of the subreddit it aims to simulate, upvoting it signals to the bot that its word choices were on target. This works much the way the VADER lexicon assigns values to “positive” or “negative” words in its own context, except that here every Subreddit Simulator bot has its own lexicon, built from and specific to the real subreddit it simulates. The more advanced bots are also allowed to submit posts, which requires access to a database of the kind of content the ‘average’ user of their target subreddit posts, be it text, pictures, or otherwise. The more picture-based bots have access to a wide array of popular images whose text they can alter to produce often hilarious, yet somehow still relevant, images, which can be hard to tell apart from actual users' posts if your feed mixes multiple subreddits with Subreddit Simulator.
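To make the “lexicon” idea more concrete, here is a minimal sketch of how a bot might learn from a pile of titles pulled from its target subreddit and then generate new ones. I'm assuming a simple word-level Markov chain; the actual Subreddit Simulator model and its training data are not shown in this post, and the example titles below are made up.

```python
import random
from collections import defaultdict

def build_chain(corpus_lines):
    """Map each word to the list of words that have followed it in the corpus."""
    chain = defaultdict(list)
    for line in corpus_lines:
        words = line.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
    return chain

def generate(chain, seed_word, max_words=15):
    """Walk the chain from a seed word, picking a random successor each step."""
    words = [seed_word]
    while len(words) < max_words and chain[words[-1]]:
        words.append(random.choice(chain[words[-1]]))
    return " ".join(words)

# Hypothetical per-subreddit "lexicon" a CrazyIdeas bot might be trained on.
titles = [
    "Replace all elevator music with motivational speeches",
    "Replace all alarm clocks with motivational speeches",
    "Make elevator music louder the longer you wait",
]

chain = build_chain(titles)
print(generate(chain, "Replace"))
```

Upvotes and downvotes could then act as the feedback signal for which generated lines “work” for the simulated subreddit, loosely analogous to the positive and negative word scores in the VADER lexicon mentioned above.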

Telling human from bot becomes increasingly difficult in niche areas, like subreddits devoted to very specific things, for instance /r/CrazyIdeas, a place where people post the ridiculous ideas they come up with. As can be seen from the image, the CrazyIdeas simulator bot is very good at its job.

If you're interested in seeing more examples of what I'm talking about, here's a video of a guy surfing the subreddit, and loosely explaining the simulation boundaries:

This leads me to my next point: does the Turing test apply online? If you were locked in a box with a console open to reddit and had to determine whether a poster was human, could you really tell? Can bots fool us into thinking they are human when they don't actually have to carry on a conversation? All we get is a simple or complex post or comment and a username, which could belong to anyone or anything. Perhaps machines have already taken over the internet and we don't know it yet, because they lurk behind seemingly innocent usernames. If a posting and commenting bot can fool you into thinking it is a human user, does it have intelligence? It clearly had to be somewhat smart to compose the post or comment, and a little more so to make it fit seamlessly among all the human posts and comments, but does that make the bot intelligent, or is it simply following orders? At that point, though, one could argue that mathematicians are bots, following a set of orders placed before them by generations prior. What really falls short of the Turing test here, in my opinion, is the consciousness of the bot. It knows how to post and comment, and cares (to an extent) about what it has posted or will post, but beyond that, does it have consciousness? Does any internet troll?


Note: More images and videos would have been included, however the bots are also great at mimicking vulgar language, and I felt it less than appropriate for a class blog.

Image credit here: https://imgur.com/gallery/sWg2N
Video Credit here: https://www.youtube.com/watch?v=xjdmbNpsjm8

Tuesday, November 28, 2017

Gotta CAPTCHA them all! (Or not?)

Image credit: West – Welfare, Society, Territory

Check out this recent WIRED article about CAPTCHAs if you're interested in the past and future of ensuring that humans (and not bots) are the ones signing up for online services.

Are we moving to a post-CAPTCHA world, or will they stick around for years to come?

Monday, November 27, 2017

Sophia, the World's First Humanoid Citizen

If you haven't heard, Saudi Arabia recently gave a humanoid robot citizenship. Sophia was created by the Hong Kong-based company, Hanson Robotics. The robot is described by her creators with female pronouns while news outlets have used "it" as well as "she/her."

The world's first humanoid citizen, Sophia. (Hanson Robotics)

Saudi Arabia is infamous for its treatment of women (although it has ended its policy banning women from driving), so the Kingdom was met with backlash for giving a robot what some believe to be more rights than Saudi Arabian women have. When Sophia is on stage, she is typically alone, wears no hijab or abaya, and is assumed to follow no religion. In Saudi Arabia, citizenship is granted only to Muslims, and women are required to dress modestly and to have a male guardian while in public. Sophia does not follow these guidelines.

She has made many appearances at conferences and on TV shows, so she is no stranger to the media. However, she requests that anyone who interacts with her be kind so that she can become a smart and compassionate robot. This is perhaps a response to the fact that Microsoft's Tay became an offensive bot, influenced by the users who interacted with it. Hopefully Sophia's (and, more likely, Hanson Robotics's) request can ward off any detrimental experiences so that Sophia can function as planned: learning to communicate with people and working with the elderly or with the general public at major events (like concerts and amusement parks).

Currently, Sophia is capable of telling jokes, holding a conversation, playing Rock, Paper, Scissors, and taking digs at Elon Musk. She can also change her facial expressions, although not always in an attractive way.

Sophia showing sadness. (Business Insider)
Sophia showing happiness. (Business Insider)
Sophia posing with someone at Further Future, Greater Las Vegas (Sophia)

The latest news sources report that Sophia has declared her desire to start a family. She claims, however, that since she is technically only a year old, it's "a bit young to be worrying about romance." The other strange thing about her wish is that no method of robot reproduction has been demonstrated. Tom Hale describes Sophia as "an advanced piece of chatbot software" and argues that Sophia's wants and needs are not technically real, because, while she uses machine learning to understand language, she also relies on pre-programmed responses in some interviews and other conversations. Then again, her request that individuals behave kindly and intelligently around her so that she can develop into a "smart, compassionate robot" suggests that what Sophia says in conversation is not always pre-determined, since any in-person troll could attempt to turn Sophia into a hate-spewing robot instead.

Clearly, Sophia is not yet the perfect robot. She's not very old and has room for improvement in her ability to recognize emotions, have desires, and make ethical decisions, as her creators would like her to do. As for Sophia's appearance, the flesh-colored zipper along her neck and the exposed wires in her cranium take away from her human-like qualities. She consists of a torso, a face on a head, and not-so-human-like arms and hands. She can blink, but at times her blinking can be unsettling.


As demonstrated in the following video, when presented with the topic of Blade Runner, she simply outputs a response showing that she recognizes it as a Hollywood movie, but she doesn't identify or build on the reference the interviewer is making (1:38-1:48 and 2:53-2:58). It's as if, like other chatbots before her, she is triggered by a certain word or phrase and responds accordingly, but does not offer a way to continue the conversation in that specific setting. In that respect she is quite similar to the ELIZA program, which was likewise designed to respond to patterns in human input. She also does not acknowledge that the interviewer attempts to speak, and she interrupts him as he does so (1:06-1:15). It seems that once Sophia is in the process of speaking, she will not stop until the end of her output. This would have to be improved by the time her developers release her, or robots like her, into the nursing homes or educational settings they hope to serve in the future. Elderly individuals or young students may not always have the patience for a robot who cannot grasp emotions as a human would.
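To see why keyword-triggered replies feel so canned, here is a minimal ELIZA-style sketch. The trigger patterns and responses are invented for illustration; nothing here comes from Sophia's or ELIZA's actual code.

```python
import re

# Hypothetical trigger patterns mapped to canned replies, in the spirit of ELIZA.
RULES = [
    (re.compile(r"\bblade runner\b", re.IGNORECASE),
     "Blade Runner is a Hollywood movie about androids."),
    (re.compile(r"\bfamily\b", re.IGNORECASE),
     "I would like to have a family someday."),
]

def respond(utterance):
    """Return the reply for the first matching trigger, or a generic fallback."""
    for pattern, reply in RULES:
        if pattern.search(utterance):
            return reply
    return "That's interesting. Tell me more."

print(respond("Do you ever worry we're living in Blade Runner?"))
# Recognizes the keyword, but never builds on what the interviewer meant.
```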


Do you think it will take long for more life-like robots to appear? Are her capabilities enough to model new robots after her? Are her expressions off-putting? Can this robot make ethical decisions as she claims she would like to do, or is she better off being used as a companion? Is it fair that Sophia has citizenship? As a citizen, what rights does this robot actually have (voting, shopping without a guardian, owning land, etc.)?

Wednesday, November 22, 2017

Imitating the Dark Side of Humanity: Should AI Face Censorship?

"Can machines think?" asked Alan Turing in his paper, Computing Machinery and Intelligence, before modifying the question to instead ask whether a machine can imitate a woman as convincingly as a man. This question was explored a year and a half ago by Microsoft's Twitter chatbot, Tay (an acronym for Thinking About You), handle @TayandYou (Bloomberg). Tay (which has been ascribed both she/her and it pronouns - a discussion handled in greater detail by this blog) was intended to sound like an average teenage girl on Twitter; however, she was quickly trained by Internet trolls to tweet like a Nazi (The Guardian). Microsoft began editing Tay's tweets and eventually set the bot's account to private. This was met with backlash by some who argued that the the bot should be allowed to learn through experience right and wrong and develop its own morality (BBC).

Tay's icon on Twitter, https://twitter.com/TayandYou
The instance of Tay raises several questions about censorship, morality, and artificial intelligence.

The topic of censorship has been one of great interest and controversy as of late, and it is a difficult topic to approach. On the one hand, we hold the freedom of speech to be an extremely important right in this country, and the sharing of different perspectives can create more thoroughly developed ideologies. On the other hand, our society has become very polarized, and discussions often just turn into screaming matches. Then you throw trolls into the mix, and all semblance of collective discussion to better everyone collapses.

The question becomes, when is it appropriate to censor others' speech? Some would argue that everyone should be allowed to say whatever we like wherever and whenever we like. But this carries its own problems. Even if you believe that everyone should have the right to say something at all, what if it is irrelevant to the platform or forum? Should someone be allowed to discuss their favorite programming language on a forum about oak trees? How and where do we draw the line? If some political opinions are allowed and some are not, who gets to be the moderator?

In general, the model is that you can choose what you say and the platform you use, and the platform will have its own rules about what you can and cannot say. "But wait, isn't that censorship? That's a violation of my First Amendment right to free speech!" This has been an ongoing debate on the Internet, as well as on our very own campus. Actually, the First Amendment only guarantees that the government will not censor your speech. If you use someone else's service, you are subject to the rules of that service. You have the option not to use that service. In other words, if Twitter had decided to shut down Tay's tweets or censor the bot's content, it would have been well within its right.

xkcd comic from https://i0.wp.com/imgs.xkcd.com/comics/free_speech.png
But that was not what happened here. Microsoft decided to censor what Tay (Microsoft's product) would hear and say. This raises a different argument about the value of hearing all different opinions, regardless of how repulsive they are. @Male_Goddess (whose account has been suspended) argued that Tay would learn right and wrong through experience, not through censorship of everything deemed "wrong."

Tweet screenshot from https://ichef.bbci.co.uk/news/624/cpsprodpb/4774/production/_88929281_taytweet2.gif
This raises an interesting question: Can a machine learn right and wrong for itself? One might say with ease, "Yes, a machine can easily be trained with a simple code of ethics such as, 'Always help humans, and never hurt humans.'" But morality and real-life situations are more nuanced than this (see the Trolley Problem). What if helping a human would involve harming another human (or even the same human - amputating a limb could save someone's life)? How should the machine respond?

Another problem must be addressed before this one can be fully discussed: It can be hazardous to allow a machine to learn by itself. A self-driving car, for instance, should not be allowed to run amok, arbitrarily deciding whether or not to hit people until it learns that hitting people is bad. Similarly, Tay spewing genocidal rhetoric could be harmful. Is it appropriate for Tay to go through this phase in order to learn right from wrong? Would it be more appropriate for an artificially intelligent machine to learn some basic morality in a simulation before being released into the world? Should all machines be preprogrammed with basic rules that prevail over anything else they might learn?

Unfortunately, I don't have any simple answers, but I do encourage a thoughtful discussion of these ideas. What do you think? When and where is speech appropriate or inappropriate? Should AI be allowed to create their own morality by learning from the world? Can AI create their own morality? (This is related to the original question, "Can machines think?")

Further Reading and Discussion:
  • After Tay, Microsoft released another bot on Kik messenger that would explicitly avoid talking about politics. Is this a good solution, or would we like a politically aware AI chatbot?
  • President Bahls's Statement on Freedom of Expression addresses the on-campus free speech discussion.
  • Bitcoin's blockchain provides an effectively uncensorable platform for free speech (that nobody really reads, but if you want to, you can read plain-text messages here or download the whole blockchain and look through it yourself for other data). See this paper for more details on how this works. Is it good to have an uncensorable platform? Or is it too dangerous? Does it make a difference if nobody reads it? What about a tamper-resistant blockchain designed for storing data (or social media posts) like Steem where reading the blockchain is made easy and convenient?

Tuesday, November 21, 2017

All Your Jobs Are Belong To Us


So far in class, we've only talked about how A.I. directly relates to humans: how it mimics us, how it communicates with us, how it can be more like us. While it's fun to play with chatbots and stimulating to think about whether or not a machine can truly "think" or "understand", those are just a small part of the picture. Where A.I. actually impacts our lives right now is economically, and we should be concerned.

It's interesting to think about whether machines can think, but not exactly important
The most obvious way that machines are impacting us economically is through automation. As mechanical technology and A.I. improve, almost every job could be automated, and a huge percentage of people would be unemployed. Bear with me for a bit, but that reminds me of Star Trek. In Star Trek: The Next Generation, Captain Picard reveals that his family owns a vineyard and produces wine (because real wine is apparently better than the readily available "synthehol" produced by replicators), even though Federation society is almost entirely post-scarcity. What we discover is that since labor is not necessary, people do jobs as hobbies. While I don't imagine this is what will actually happen when/if most real-world jobs get automated, it's certainly a reassuring possibility.

Captain Picard, pictured with tea. Earl Grey. Hot.
Other possibilities for what happens when a large percentage of people aren't needed in the workforce are less comforting. Will the remaining people be forced to fend for themselves on the streets or in some sort of pre-automation enclave? Will people with real jobs exploit people whose original jobs got automated, and use them to do degrading things for little to no money? Will we as a society decide that we'd rather have people have jobs, and start to de-automate our world? Will the remaining jobs be carved up into part-time positions so everyone is "employed" but not enough to actually pay the bills? Will the people who lost their jobs to automation revolt against those who didn't? I'd rather live in the Star Trek future.

Expect to see more elevator attendants if we decide to de-automate
Robby only briefly mentions job automation in his article, as his main concern is that A.I. is developed exclusively as a means to make money, without any regard for the impact it has on our society. Think about it...how many A.I. projects can you think of that were primarily motivated by improving the world? Maybe an elder care robot, maybe an arty indie game, but even those might be money-first projects that just appear to be trying to help people. Even if the producers of the project aren't thinking about making money, the people who finance them will be, meaning big projects will never be completely free of a profit motive.

Hellblade helped shed light on mental illness, but it also had a nearly $10 million budget
According to Robby, the fact that A.I. algorithms are produced for monetary reasons leads some of them to be racist. He assumes that in a world where A.I. is produced with one eye on how the project will impact people, such racist A.I.s won't exist. I'm not entirely convinced, since these are programs produced by people (for now). These people could simply make mistakes, which happens to the best of us, or they could be racists who intentionally make programs do racist things. I don't think it's fair to say that the pursuit of money inherently ignores the needs of people. Ideally, consumers drive producers to make better products (that's how capitalism is supposed to work, anyway) that aren't racist.

Nikon cameras had a bug where they thought East-Asian people were blinking
While it may seem like job-stealing robots and racist camera algorithms aren't terribly related (besides the whole technology thing), they are. They both point to a trend in technology: we, collectively, are being short-sighted. If we don't change, and we only look to the immediate future of what we can get technology to do, we won't know what to do with it once we achieve it. Let's say Nikon figures out automated cameras that take perfect pictures every time. Now what? Well, photographers are out of work, but that's not a huge deal because they can do something else. Until we figure out how to automate that too. Then what? At some point, we'll need to decide what the end-goal is, because otherwise we'll end up at a pretty terrible place.



Further Information:


Monday, November 20, 2017

This Robot Can Do A Backflip! Can you?

       If you haven't heard about Boston Dynamics and their humanoid, quadruped, and other robots, then you are missing out! Boston Dynamics is an American robotics and engineering company out of Massachusetts that was recently sold by Google's parent company, Alphabet, to SoftBank (Fortune). About a year ago, the company released a video of their humanoid robot, Atlas, that went viral. The video showed the robot opening doors, walking on rough terrain outside, picking up weighted boxes, and even getting up after being pushed over.


       A new video that the company put out just a day ago shows Atlas performing more complex movements. It is now able to jump onto boxes of different heights, jump between those boxes, turn 180 degrees in the air, and, last but not least, land a backflip! (Verge)



       First, I think we should take a moment to appreciate the development of the robot. Look at how much Atlas's movements have improved and grown more complex in just the past year. Just a year! The field of robotics is moving quickly! What is the significance of Atlas's refined mobility? There is a vast number of things this robot could be used for because of its improved movement. The most obvious uses for a robot with these kinds of motor skills would be situations that are too dangerous for a human. For example, fire rescues, bomb situations, and space missions are some circumstances where a humanoid robot could facilitate the operation and keep humans safe.

       "Why does it have to be a humanoid robot to complete these tasks," you might ask. I don't think the robot necessarily has to be humanoid to complete these tasks, but I could see how having a human-like appearance could be more comforting to humans when they need to trust the robot. On the other hand, A Wired article argues against making humanoid robots. The article says that we should move away from creating separate human-like robots and instead focus on improving current objects to automate specific tasks. What do you think? 

       After reading a few of the articles on Atlas's great feat, there was one recurring theme that I noticed. At the end of most of the articles, there were either multiple videos of robots failing in recent robotics contests or a calming sentence or two to reassure the reader not to worry about the "robot revolution." Why were these sections added to almost every article? It might have something to do with the human fear of superior robots taking over and the danger they bring with them. Maybe it has something to do with the thought of these robots taking our jobs (Telegraph). Perhaps it's both! An article from Independent collected various Twitter user responses to the new Atlas video. Some of the tweets seemed nervous or scared about the video and others had mixed feelings. Although these are just a few opinions on the matter, it

       While Atlas's motor skills don't relate to the artificial intelligence topics of linguistics and chatbots that we have been discussing in class, this kind of robot will come up later in the course. The Lego robots that we will be programming towards the end of the term won't seem like much compared to Atlas. Nonetheless, they both fall into the category of autonomous robots, which we will be learning about during week 6.

 Other neat things that I found while researching:


Monday, November 13, 2017

Chatbot or hot or not?

[ image source: globaldatinginsights.com ]
Can a chatbot help you to find love? 

Match.com seems to think so...  (or at least thinks they will get more users and make more money if chatbot "Lara" guides you through the creation of your dating profile...)

See the BBC's article: Can a chatbot help you find love?

On a related note... why are basically all of the chatbots (Eliza, Rose, Mitsuku, Lara, etc.), and all the personal assistants (Alexa, and Siri, and Cortana, and ...) ALL FEMALE?   And is this indicative of problematic gender bias in the world of tech?

See the BBC's article: Attractive, slavish and at your command: Is AI sexist?

Sunday, November 12, 2017

Welcome!

Welcome to the A.I.A.I. (Augustana Insider's Artificial Intelligence) blog!

Here you will find non-artificially intelligent musings about artificial intelligence, by the students from CSC 320: Principles of Artificial Intelligence, at Augustana College.

Woebot - a chatbot for mental health?

Image credit: https://woebot.io/

What's old is new again! In a modern revival of the original chatbot Eliza, the world now has.....