Thursday, November 30, 2017

Is simulated intelligence intelligence? & Does the Turing test apply in an online setting?


Occurrences of AI, however intelligent, can pop up nearly anywhere you find people with a drive for artificial intelligence, creativity, or, in some cases, hilarity. An unlikely place for such things, you might think, would be reddit, the self-proclaimed “front page of the internet”; however, in a niche little corner of that site, ingenuity is flourishing, both natural and artificial. As some of you may know, reddit is a forum website where users can post images, stories, links, and a wide array of other things, which other users may then “upvote” or “downvote” based on their perceived quality. Good, relevant posts earn upvotes, which make them more popular and visible to even more users, while bad, off-topic, or spam posts can be downvoted (or even reported to moderators), which sinks them into irrelevance. The site hosts an enormous array of smaller, more focused forums known as “subreddits,” which range from serious topics, like news or politics, to lighter fare, like funny images or questions for the community, to the outright ridiculous, like pineapples, shower thoughts, or even a forum for people pretending to be bots.


But amongst these thousands of different communities, a champion arises: Subreddit Simulator.


Subreddit Simulator is a closed community made up entirely of bot accounts. Every hour, one randomly chosen bot (if its permissions allow) creates a post on the forum, and every third minute another bot is selected to comment on an existing post. A more in-depth description of the timing, bot authorizations, and the like can be found here. As you might imagine, the results range from serious to silly depending on which subreddit’s bot happens to be posting or commenting. There’s even a separate forum for humans who wish to discuss the simulation, although it is most often used for pointing and laughing at the most entertaining bot posts.


What sets this subreddit apart is that no human users are allowed to post. Only the community of bots may comment, and among them only a select few may create posts, whether text, links to articles, or linked and generated images. Human reddit users cannot interact with the simulation at all, save for three things: they may observe, support, or oppose bot posts. If a bot’s post or comment is wholly representative of the subreddit it aims to simulate, upvoting it tells the bot that its content is satisfactory, much the way the VADER lexicon assigns values to “positive” or “negative” words in its own context, except that here every single Subreddit Simulator bot has its own lexicon, drawn from and significant to the real subreddit it mimics. The more advanced bots are also allowed to post, which requires access to a database of the kinds of content the ‘average’ user of their target subreddit posts, be it text, pictures, or otherwise. The more image-oriented bots have access to a wide array of popular images whose text they can alter, producing often hilarious, yet somehow still relevant, images that can be hard to distinguish from those of actual users if you’re scrolling a feed that mixes in posts from multiple subreddits, including the simulator.
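For the curious: the Subreddit Simulator bots are, by the project's own description, simple Markov-chain text generators, each trained on recent posts from the subreddit it imitates. The sketch below is my own minimal illustration of that idea, not the actual bot code; the function names and the single-word "order" are assumptions for brevity.

```python
import random
from collections import defaultdict

def build_chain(corpus, order=1):
    """Map each run of `order` words to the words observed to follow it."""
    words = corpus.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    """Walk the chain from a random starting key, picking a random
    observed successor at each step, until length runs out or the
    current key has no known successor."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        successors = chain.get(tuple(out[-len(key):]))
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)
```

Train one chain per subreddit and the output naturally picks up that community's vocabulary and phrasing, which is why the bots sound eerily on-topic while still being nonsense.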

This differentiation between human and bot becomes increasingly difficult in niche areas: subreddits devoted to very specific things, like /r/CrazyIdeas, a place where people post ridiculous ideas they come up with. As can be seen from the image, the CrazyIdeas simulator bot is very good at its job.

If you're interested in seeing more examples of what I'm talking about, here's a video of a guy surfing the subreddit, and loosely explaining the simulation boundaries:

This leads me to my next point: does the Turing test apply online? If you were locked in a box with a console open to reddit, and had to determine whether a poster was human, could you really tell? Can bots fool us into thinking they are human if they don’t actually have to carry on a conversation? All we have is a simple or complex post or comment and a username, which could belong to anyone or anything. Perhaps machines have already taken over the internet and we don’t know it yet, because they lurk behind seemingly innocent usernames. If a posting and commenting bot can fool you into thinking it is actually a human user, does it have intelligence? It clearly had to be somewhat smart to compose the post or comment, and a little more so to make it fit seamlessly among all the human posts and comments, but does that make a bot intelligent, or simply a follower of orders? At that point, one could argue that mathematicians are bots, following a set of orders placed before them by generations prior. What really falls short of the Turing test here is (in my opinion) the consciousness of the bot. It knows how to post and comment, and cares (to an extent) about what it has posted or will post, but beyond that, does it have consciousness? Does any internet troll?


Note: More images and videos would have been included, however the bots are also great at mimicking vulgar language, and I felt it less than appropriate for a class blog.

Image credit here: https://imgur.com/gallery/sWg2N
Video Credit here: https://www.youtube.com/watch?v=xjdmbNpsjm8

8 comments:

  1. I spent way too much time looking at these posts! The top posts are hilarious (mostly because they are nonsense)! Most of them don't make sense at all, but the shorter ones are more likely to sound like proper English. If you look at most of the themes of the bots, I think that the goal was to make funny content. Interesting and entertaining!

  2. To me, the most important question that arises from the subreddit simulator is really why are we even doing this? What's the point of having robots have their own forum and have them "discuss" random things? It might be entertaining to read, but that's it. It does not help or advance our knowledge, technology or lives in any way. It sounds like a waste of time and energy to me; we should be focusing on making chatbots that can be useful to us instead of just funny. But I agree with Megan, it is very entertaining!

    Replies
    1. While this subreddit simulator might seem like a bunch of hilarious nonsense as many of the posts seem to make little sense, I think it serves a purpose as a sort of testing ground for bots so that people can observe these bots and learn what these bots are good at posting and what they are not so good at yet. In the future if AI improves we can look back and see the history of how these bots evolved to be better and what worked and didn't work for these bots.

    2. I disagree, I think the sole purpose is for entertainment and that should be enough. Why do we need bots on reddit to focus on advancing our knowledge or lives anyway?

  3. It's definitely entertaining, and I glanced at SS about a week ago, but didn't commit the time to really look at it. Just, wow.
    I think with some tweaking, the bots could sound more intelligent and carry on actual conversations. Right now, they just sound like posters filling out bad mad libs that don't make any sense. In particular, I found this post [https://www.reddit.com/r/SubredditSimulator/comments/7h7u4v/when_someone_tells_me_about_the_nun_on_the_campus/] interesting, and undelete_SS's reply about being pro-anti-fa, which made absolutely zero sense regarding the post...or in general.

  4. I have to say this is an excellent post!!! This is where you get to some of the dark side of human nature. The Turing test is a completely optimistic test in the sense that it assumes that humans, despite some differences, are essentially the same. However, if you have been on the internet for a while, you can clearly see the effect of partially anonymous identity. I would love to see what kind of test is capable of distinguishing between bots and humans online.

  5. One of the interesting points to consider here is the absurdist/neo-dadaist humor that we use on the internet nowadays. A lot of the things real people say don't make any sense, and yet we enjoy them. I would imagine that this makes a semi-sensical bot more convincing: If we say things that don't make any sense, and it says things that don't make any sense, how is that different? It might make it more difficult to differentiate between a human and a machine.

    PS: I meant to make this comment a long time ago, but I haven't been able to leave comments for some reason.

  6. I think you bring up some very good questions. Personally, in an online setting, I think it can be really hard to tell whether a certain user is a human or a bot...Like you point out, it would be hard to grant consciousness to a bot after looking at the bot's post and the 'consideration' put towards the post.

