Sunday, February 18, 2018

Woebot - a chatbot for mental health?

Animation of Woebot standing with open arms
Image credit: https://woebot.io/
What's old is new again!  In a modern revival of the original chatbot Eliza, the world now has...  "Woebot"

What do you think?  A cheap gimmick, or an effective substitute for some people, in place of meds and expensive counseling sessions?

Quoting from their website:
In a recent study conducted at Stanford University, using Woebot led to significant reductions in anxiety and depression among people aged 18-28 years old, compared to an information-only control group. 85% of participants used Woebot on a daily or almost daily basis. You can read the entire peer-reviewed study.

If you're feeling depressed or anxious, do you think talking to Woebot could improve your frame of mind?

Saturday, February 10, 2018

There's Still Time to Find Love!

Because I stumbled upon some weird tweets tonight, and in honor of Valentine's Day a few days from now, I thought I would share these humorous but thought-provoking posts.

A research scientist named Janelle Shane trains neural networks, as stated in the "About" section of the links below. She has revealed a few of her projects, including a neural network that produces pickup lines and her most recent creation, "Candy Heart messages written by a neural network."

In her "About," she says she trains neural networks to "write unintentional humor as they struggle to imitate human datasets. Well, I intend the humor."
How romantic.
She also does other cool stuff with Pokémon and recipes, and her work has been written about before (this article, among others, discusses a neural network she trained to tell knock-knock jokes).


Friday, February 9, 2018

Is AI biased?

When it comes to AI technology there are many concerns that we all think of. One concern that we may not often think of, however, is bias. One familiar case of biased AI is Microsoft's chatbot, Tay. Microsoft released its Twitter chatbot, Tay, in March of 2016, and within 24 hours the bot had learned to use misogynistic, racist, and antisemitic language from other Twitter users. The bot's responses were sometimes mixed, however. For example, when the topic of "Bruce Jenner" was tweeted at Tay, the bot's replies ranged from praise for Caitlyn Jenner's beauty and bravery to entirely transphobic responses. But what if biased AI goes beyond the occasional chatbot?

The biases in AI often come from biased data. Since AI is designed to improve and become more accurate the more time and data it is given, it is not surprising that, when given biased or faulty data, it often produces results that inflate and amplify that bias. For instance, when trained on a set of pictures in which women were 33% more likely than men to be shown cooking, the AI predicted that women were 68% more likely to be shown cooking in the photos.
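
One way to make that kind of amplification concrete is to compare the bias rate in the training labels with the bias rate in the model's predictions. Here is a minimal sketch, with numbers and label names invented for illustration rather than taken from the study:

```python
# Toy sketch of measuring bias amplification: compare how often "cooking"
# images show a woman in the training labels vs. in the model's predictions.
# All numbers and label pairs below are illustrative assumptions.

def cooking_rate(labels):
    """Fraction of 'cooking' examples whose subject is labeled 'woman'."""
    cooking = [person for person, activity in labels if activity == "cooking"]
    return cooking.count("woman") / len(cooking)

# Hypothetical training labels: women appear in 66% of cooking images.
train = [("woman", "cooking")] * 66 + [("man", "cooking")] * 34

# Hypothetical model predictions: the model, having latched onto the
# correlation, tags women as the ones cooking even more often (84%).
predicted = [("woman", "cooking")] * 84 + [("man", "cooking")] * 16

train_bias = cooking_rate(train)
pred_bias = cooking_rate(predicted)
print(f"training bias: {train_bias:.2f}, predicted bias: {pred_bias:.2f}")
print(f"bias amplification: {pred_bias - train_bias:+.2f}")
```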

The answer seems simple: remove the bias and train the AI with good-quality data. The challenge, however, crops up in the "unknown unknowns," the missing areas in the data that are not easily recognizable. For instance, if an AI is trained only on pictures of black dogs and of white and brown cats, it will incorrectly identify a white dog in the test data as a cat. This mistake seems easy to catch in a cats-versus-dogs classifier, but it becomes far harder to catch in real-world situations, especially when the AI is convinced that it is right.
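
That failure mode is easy to reproduce in miniature. In the sketch below, the "image" is reduced to a single coat-color feature so the confound is obvious; the dataset and encoding are invented for illustration:

```python
# A minimal demonstration of the confounded cats-vs-dogs failure described
# above. The only feature is coat color: 0 = black, 1 = white, 2 = brown.
from sklearn.tree import DecisionTreeClassifier

# Training data: every dog is black; every cat is white or brown.
X_train = [[0]] * 50 + [[1]] * 25 + [[2]] * 25
y_train = ["dog"] * 50 + ["cat"] * 50

model = DecisionTreeClassifier().fit(X_train, y_train)

# The model has learned "color", not "dog-ness": a white dog comes back "cat".
print(model.predict([[1]]))  # -> ['cat']
```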

(image source: https://www.theregister.co.uk/2017/04/14/ai_racial_gender_biases_like_humans/)

AI learning biases against certain groups may at first seem harmless to some, especially in low-stakes situations such as identifying cats and dogs. But the stakes become extremely high very quickly when AI gets involved in heavier matters such as the court system. In May of 2016, a report showed that Compas, an AI program used for risk assessment of defendants, was biased against black prisoners: it wrongly labeled black defendants as likely to reoffend at nearly twice the rate of white defendants. Another example is the first AI to judge a beauty contest, which chose predominantly white faces as winners. Or consider the LinkedIn search engine that suggested a similar-looking masculine name when one searched for a contact with a feminine name.

So, what can we do? First off, we can be more mindful of the data we use to train our AI. Paying close attention to details such as where the data comes from, and what biases and other factors may have shaped it, can tell us many important things. We can also make an active effort to reduce biases within AI by joining or forming groups such as Microsoft's FATE (Fairness, Accountability, Transparency and Ethics in AI), and we can form and support initiatives that encourage members of marginalized groups to join the field of AI. Furthermore, we can put an emphasis on developing explainable AI. There is already plenty of room for error in the training data alone; adding the opacity of "black box" models only makes it more difficult to identify the source of the bias.
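
As one small, concrete example of "being mindful of the data," here is a sketch that audits group frequencies and derives inverse-frequency sample weights before training. Note the limitation, which echoes the point above: reweighting only helps with imbalances you know about, and does nothing for groups missing from the data entirely. All names and counts are illustrative:

```python
# Audit group counts in a dataset and compute inverse-frequency weights so
# underrepresented groups count for more during training.
from collections import Counter

samples = ["black dog"] * 50 + ["white cat"] * 25 + ["brown cat"] * 25
# Missing entirely: white dogs -- an "unknown unknown" no reweighting can fix.

counts = Counter(samples)
n_groups, total = len(counts), len(samples)

# Weight = total / (n_groups * count): 1.0 for perfectly balanced groups,
# > 1.0 for rare groups, < 1.0 for overrepresented ones.
weights = {group: total / (n_groups * count) for group, count in counts.items()}
print(weights)
```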

With AI being used more and more to make predictions across many facets of daily life, from loans to health care to the prison system, it is more important now than ever to understand how AI can inflate and exacerbate biases that are already present. It is not only our job but our responsibility to understand how these biases may affect people's lives and, hopefully, to devise ways to adjust for these biases and skewed results.

“The worry is if we don't get this right, we could be making wrong decisions that have critical consequences to someone's life, health or financial stability,” says Jeannette Wing, director of Columbia University's Data Sciences Institute.

The technology we develop can have a vast impact on many lives. It has the potential to become the way of the future. Shouldn't we take the time to ensure, to the best of our ability, that this technology doesn't further hinder, but rather facilitates, the success of those who need it most?

Feel free to share your thoughts below. Should we be concerned about biased AI? What ways should we focus on diminishing this bias? Is AI bias shaped more from data or the AI itself? Would AI sentience/awareness affect this issue and our response to it?

If you would like to read more about Microsoft's chatbot, Tay, you can do so here.
If you would like to read more about Compas, click here.
Here you can read about the LinkedIn search engine.
You can read about AI bias and possible responses more in depth here.

A.I.’s role in Space Exploration

The use of A.I. technology in space exploration isn't new. It dates back to 1998, when the comet probe Deep Space 1 flew with the Remote Agent A.I. software in control. And while autonomous driving systems are currently making waves on Earth, the technology has been in use on Mars since 2004: AutoNav, the autonomous driving system, has powered the Spirit, Opportunity and, more recently, Curiosity rovers. These rovers also carry an A.I. system called AEGIS that intelligently chooses photo targets, allowing scientists to explore areas of interest.


And now, with further advancements in A.I., NASA is seeking to expand its role. Plans are already in place for the Mars 2020 rover, as well as for the Europa Clipper, which will explore whether Jupiter's icy moon Europa harbors conditions for life. It will be interesting to see how long these missions last, especially on Europa, where a probe would seek to penetrate 10 km beneath the ice crust and where communication to and from Jupiter is even more limited.


These projects are definitely milestones for A.I. Especially now that we are working on the robot maze project, I can only appreciate how these algorithms navigate the rovers through obstacles like rocks, dents, and sand dunes while simultaneously sending information back to Earth. As a rover travels on Mars's surface, it keeps collecting information about its world, which looks like a larger, more complex version of our project.
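
For a flavor of the kind of search behind our maze robots (and, in vastly more sophisticated form, behind rover navigation), here is a toy A* pathfinder on a grid. It is my own sketch for illustration, not NASA's AutoNav or AEGIS code:

```python
# Toy A* search on a grid maze: '#' cells are obstacles (rocks), moves are
# up/down/left/right, and the heuristic is Manhattan distance to the goal.
import heapq

def astar(grid, start, goal):
    """Return the length of a shortest path from start to goal, or None."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start)]   # (estimated total, cost so far, cell)
    best = {start: 0}
    while frontier:
        _, cost, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return cost
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] != '#':
                new_cost = cost + 1
                if new_cost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = new_cost
                    heapq.heappush(frontier, (new_cost + h((nr, nc)), new_cost, (nr, nc)))
    return None

maze = ["..#.",
        "..#.",
        "....",
        ".#.."]
print(astar(maze, (0, 0), (0, 3)))  # -> 7 moves around the rocks
```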

While these missions are ambitious, I am also intrigued by how far scientists are willing to go. Will this eventually lead us to extraterrestrial life forms? If we do find another planet that supports life, who owns it? With superpowers like China and the USA competing for supremacy in A.I., will this be the start of another space race?


Related interesting articles



Wednesday, February 7, 2018

Can quantum computers really help AI?

In this post, I will discuss the potential effect of quantum computers on the future of AI, focusing in particular on weak AI (machine learning). By several estimates, we generate 2.5 exabytes of data per day. That's equivalent to 250,000 Libraries of Congress, or the contents of 5 million laptops. Every minute of every day, 3.2 billion global internet users feed the data banks with 9,722 pins on Pinterest, 347,222 tweets, and 4.2 million Facebook likes, plus all the other data we create by taking pictures and videos, saving documents, opening accounts, and more.
Image Credit: posteriori / Shutterstock.com

This amount of data is so big that even the most recent chips and processors fall behind the pace of processing and analyzing it all. And while Moore's Law, which predicts that the number of transistors on integrated circuits will double every two years, has proved remarkably resilient since it was formulated in 1965, those transistors are now about as small as we can make them with existing technology. That's why the biggest leaders in the industry are racing to be the first to launch a viable quantum computer, one that would be exponentially more powerful than today's computers at processing all the data we generate every single day and at solving increasingly complex problems.
I have tried to find good resources for getting started with quantum programming. QISKit (Quantum Information Software Kit) is a software development kit (SDK) for working with OpenQASM and the IBM Q experience (QX). You can use QISKit to create quantum computing programs, compile them, and execute them on one of several backends (online real quantum processors, and simulators). I think this is really cool, and I installed it from an ordinary Python command prompt. They have several tutorials you can follow to understand the principles behind the scenes.
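
As a taste of what those tutorials cover, here is a minimal Bell-state program in the style of the QISKit examples. Import paths and backend names have shifted between QISKit versions, so treat this as a sketch of the workflow rather than version-exact code:

```python
# Build a two-qubit entangled (Bell) state and sample it on the local
# simulator backend.
from qiskit import QuantumCircuit, Aer, execute

qc = QuantumCircuit(2, 2)   # two qubits, two classical bits
qc.h(0)                     # put qubit 0 into superposition
qc.cx(0, 1)                 # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])  # read both qubits out

backend = Aer.get_backend("qasm_simulator")
counts = execute(qc, backend, shots=1024).result().get_counts()
print(counts)  # roughly half '00' and half '11' -- the entangled pair
```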
I think quantum computing is attracting more and more attention, and funding, from the general public. Twenty years from now, I think quantum programming will be an important factor in the development of AI, as it could fundamentally change how we parallelize training and search.

Sunday, February 4, 2018

A.I. Sports Betting

With today being Super Bowl Sunday, I couldn't help but think about how A.I. might be involved with the betting. We have seen artificial intelligence as a player in games such as chess, Connect 4 and our penguin game from class. What about artificial intelligence as an observer that predicts the outcome of a game? This is all made possible through the availability of competitor statistics and what is called swarm intelligence.
image: http://moziru.com/images/drawn-ant-ant-colony-2.jpg
Some areas where A.I. is making an impact in the betting world are football, soccer, and horse racing. You can read more on betting and swarm intelligence here: https://www.digitaltrends.com/cool-tech/ai-swarm-intelligence-and-the-future-of-sports-betting/

One company involved in sports betting, Unanimous A.I., correctly predicted a superfecta in the 2016 Kentucky Derby. A superfecta is when you pick the top four finishers in a race in the correct sequence. Couldn't just be complete luck, right? Unanimous A.I. has also made predictions for today's Super Bowl LII, which can be viewed here: http://unanimous.ai/super-bowl-52/.
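
Unanimous A.I.'s actual algorithm is proprietary, but the basic swarm idea can be caricatured in a few lines: independent estimates get repeatedly nudged toward the group's position until a consensus emerges. Everything below, the estimates and the update rule alike, is my own toy assumption:

```python
# Toy "swarm" consensus: each agent starts with its own probability that
# Team A wins, then repeatedly moves partway toward the group average.
agents = [0.9, 0.7, 0.55, 0.4, 0.8]  # hypothetical individual estimates

for _ in range(50):
    group_mean = sum(agents) / len(agents)
    # Each agent moves 20% of the way toward the group mean per round.
    agents = [p + 0.2 * (group_mean - p) for p in agents]

print(f"swarm consensus: P(Team A wins) = {agents[0]:.2f}")
```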

Example of prediction interface of Jaguars vs Patriots on 1/21
gif: https://files.digitaltrends.com/images/KjCbtN8.gif


Another company, Stratagem, is focusing on soccer, basketball, and tennis. You can read more about Stratagem here: http://www.stratagem.co/about/. These companies still rely on human analysts to help gather data, with most of the difficult analysis being done by deep neural networks.
image: https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQKWhfcbquKC5iTD9e5vBUp9sTmORLo0HbICye0epL1jIHVZ-c4
This leads us to the question of how artificial intelligence will impact the future of betting and what role it will play in predicting the outcomes of events. I think we still have time before the betting world becomes fully automated; right now these technologies give an edge to those who use them. Will betting institutions prohibit the use of technology like this? How could they even know you are using A.I. to place your bets? Lots of people make a living off betting, and I wonder what sort of impact these predictive models will have on that world.

Thursday, February 1, 2018

Wearable Technology: Why Are We Not On Board?



How many people do you know who employ some sort of wearable technology in their daily lives? The Apple Watch, the Fitbit Ionic, Motiv Rings, and the like are truly a new and exciting step in the implementation of computer technology in fashion and all aspects of our reality. Or at least that was the initial forecast back in 2015, when eMarketer anticipated that smartwatch sales would increase by 60% the next year.

In a more recent article, though, eMarketer makes it clear that consumers have consistently shrugged off the significance of wearable technology. The 2015 forecast missed the mark by an enormous margin and continued to disappoint manufacturers over the next three years. The annual growth in smartwatch adoption could be down to single digits by 2019, and most of the buyers are people who already own similar technology. The American public is not buying wearable technology, either with its money or as a "revolutionary" concept.

The article speculates that cost is the main deterrent: the net benefit to one's daily life does not justify the price tag. At the end of the day, though, maybe society is simply not that interested in incorporating more technology into everyday life beyond the comfortingly external smartphone. Or perhaps, as the tech gets closer to the skin, we assume a more severe dependence might form. That possibility is haunting.

Thad Starner (2nd from right) and the MIT "Borg"
Though, according to those who have advocated for wearable tech and AR (Augmented Reality) tools since their nascent stages, it does not so much challenge our humanity as enhance it. Thad Starner (above), possibly the very first person to consistently explore and use wearable technology, has a remarkable story to tell in this regard.

Recently interviewed on an episode of the popular NPR podcast "Invisibilia," Starner explains his journey to creating LIZZY, his wearable AR system, in 1993. He wore the get-up, consisting of a lead-acid battery, a modem connected to a car phone, a Twiddler keyboard, and a tiny screen jury-rigged to some glasses, for 20 years. Every single day. With his friends (other MIT geeks called "the Borg"), he developed the Remembrance Agent, which is essentially augmented memory: it allowed Starner to query the information he had entered into LIZZY over the years, displaying it on his screen without distracting from a conversation or class. Aside from needing to charge the device, and once setting himself on fire while initially designing it, Starner says the device is indispensable. He corroborates this in an article for engadget.com, in which he goes into even more detail about his LIZZY system and its effects on his life.
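
The Remembrance Agent's core trick, surfacing the old note most relevant to whatever you are doing right now, can be sketched with off-the-shelf text similarity. This toy version uses TF-IDF over a few invented notes; it is not Starner's actual implementation:

```python
# Given the current "context" text, retrieve the most similar stored note.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

notes = [  # hypothetical notes accumulated while wearing the device
    "Professor's office hours Tuesday, discussed frames and memory",
    "Lead acid battery vendor on Mass Ave, ask about lighter cells",
    "Twiddler chording keyboard: map common words to two-key chords",
]

vectorizer = TfidfVectorizer()
note_vectors = vectorizer.fit_transform(notes)

context = "need a lighter battery for the wearable rig"
scores = cosine_similarity(vectorizer.transform([context]), note_vectors)[0]
print(notes[scores.argmax()])  # -> the battery vendor note
```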

"You're really trying to make interfaces that allow people to augment their eyes, ears and mind, but not get mired in the virtual world." - Thad Starner on developing AR technology, engadget.com

A healthy balance of reality and virtual reality. Thad seems to have it figured out. So what's wrong with everyone else? Why is no one leaping at this phenomenal chance for endless memory and a Google search of your own thoughts to share with the world? One possible reason is that the wearable tech we have right now is a diluted version of Starner's design. It lacks the intimacy he allowed his device, which he trusted to manage all of his information and learned to defer to automatically. Therefore, it lacks the utility he so very much enjoys.

Take, for example, Google Glass. Remember Google Glass? Me too, barely. It was a major flop. In this article describing five reasons the ill-conceived device was a disaster, the most damning piece of evidence is that it had no clear function. It could take pictures and give immediate access to the internet, but phones already accomplish that, and they don't feel like you are lugging a barbell around on the side of your head. A $1,500 face barbell. That isn't nearly as sexy as a bottomless memory bank. Notably, one of the heads of development for Google Glass was the original cyborg himself, Thad Starner. He couldn't replicate LIZZY on a large scale because of its slightly invasive nature.

Certainly no one can deny the utility is incredible. It transcends the difference between having access to something that has an ability and simply having the ability oneself. Yet that very transparency is what is truly unsettling. If we could implant a chip into someone's brain that could run an A* search, or run a minimax algorithm to make decisions in a game like Connect 4, would you prefer those operations run automatically? Like a sharp intake of breath resulting in immediately finding an optimal path? Otherwise, it is the equivalent of an Apple Watch or other wearable device that requires tedious manual input. Without surrendering some element of ourselves, we may never be able to unlock the full potential of these devices.
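
For reference, here is the sort of minimax routine that hypothetical chip would run, shown on a trivial take-1-or-2-stones game rather than Connect 4 to keep the sketch short:

```python
# Minimax for a toy game: players alternately take 1 or 2 stones, and
# whoever takes the last stone wins. +1 = win for the maximizing player.
def minimax(stones, maximizing):
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    return max((take for take in (1, 2) if take <= stones),
               key=lambda take: minimax(stones - take, False))

print(best_move(7))  # -> 1: leave a multiple of 3 for the opponent
```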

This form of Artificial Intelligence poses so many questions because it is the first step in contemplating our society's maturity level in dealing with human/technical relations. It is not a discussion of manufacturing new consciousness. It is a discussion of building off of our own current consciousness. Incorporating technology into our own psyches on an individual level now could reflect how we will react when we incorporate technology into our society on a grand scale. 

So perhaps the public's apprehension about adopting wearable tech, cyborg implants, or other forms of AR tech goes beyond the price tag. Current AR technology is representative of what we discussed during the debate regarding consciousness. If we ever reach a point where we can give AI consciousness, perhaps we should think better of it; it might simply stunt AI's utility. Similarly, AR companies have thought better of offering something like LIZZY, and even if they offered it, would the public be forward-thinking enough to open themselves up to such an intimate relationship with any device? If we did, would we have achieved Artificial Intelligence by using our own intelligence as the core and adding an artificial component? Would you be willing to take such a leap?

BONUS:

Pete Holmes comedy bit about the role of technology in our lives


Photo credit:

Smartwatch Photo: https://newsroom.cisco.com/feature-content?articleId=1840876&type=webcontent

Thad Starner: https://www.engadget.com/2013/05/22/thad-starner-on-google-glass/
