Sunday, December 10, 2017

The Growing Safety Problem in AI

With the exponential advances in technology, specifically in artificial intelligence (A.I.), safety is becoming a major debate. Several factors contribute to the lack of safety in A.I., and one of the main ones is competition. In an article by Ariel Conn, the lack of safety is described as a result of the race to build something new and amazing, specifically in the field of A.I. Basically, countries are competing against one another to make the next technological breakthrough. The first to succeed stands to benefit greatly economically and socially, and perhaps even to gain more power (militarily speaking). As a result, developers are not focusing on the safety and reliability of these new A.I. systems, which fuels the debate over how important safety is and whether safety guidelines should be put into place.

Why does safety matter all that much? If an A.I. performs its given task well, why should we be concerned whether or not it's 100% safe? These questions are answered in another article by Conn. Safety is a huge factor in the market value of new and growing technologies, like cars, security systems, and even microwaves. Therefore, safety should be a huge factor in A.I. as well, but there are obstacles to that (like the competition factor). A lack of cooperation between countries in setting safety guidelines for emerging A.I. contributes to the problem too. Finally, safety is broader than one might expect: it includes not only physical safety but also emotional safety.
(Source: https://futureoflife.org/2017/09/21/safety-principle/)
In an article by Dave Gershgorn, a great point is made: A.I. is becoming too complex and showing unexpected outcomes. Today, producers of A.I. implement machine learning (which will be talked about later in the term) and thousands of sensors in their technology. These algorithms and artificial neural networks are very complex. The problem is that an A.I. may be expected to make a certain decision in a given scenario but do something completely different. Scientists have a tough time understanding their own creations. Unexpected outcomes are currently a big issue in A.I. and also help explain the lack of safety. It's almost impossible to predict an A.I.'s decisions; therefore, every A.I. would require very extensive safety testing in almost every situation imaginable.
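
Gershgorn's point about unpredictability is worth making concrete. Below is a minimal sketch (my own illustration in Python with scikit-learn, not code from any of the cited articles): a tiny neural network learns a simple task well, yet still hands back a confident-looking answer for an input unlike anything it was trained on. Nothing in the training data justifies that confidence, which hints at why testing "every situation imaginable" is such a tall order.

# A toy example of unexpected model behavior (assumes numpy and
# scikit-learn are installed; an illustration, not production code).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Training data: two well-separated clusters near the origin.
X = np.vstack([rng.normal(-1, 0.3, (100, 2)), rng.normal(1, 0.3, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)

# An input far outside anything seen in training: the model still returns
# a label and a probability, even though nothing in its training justifies
# trusting either.
far_away = np.array([[50.0, -50.0]])
print(clf.predict(far_away), clf.predict_proba(far_away))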

Personally, I think safety in A.I. needs to be addressed as soon as possible. With the way technology is advancing, there need to be some sort of guidelines to keep things under control. Though it might be tough to come up with such guidelines, I think it's both possible and necessary. Even though A.I. has many great benefits, it's not worth the risk if the A.I. isn't considered "safe". To encourage safety among everyone, the dangers and possible risks of A.I. need to be publicized. People should be aware of these risks, which might spur a change toward making safety the number one priority.

7 comments:

  1. I do think that we are only at the beginning age of AI. From a history perspective, I think we will have to go through a couple of very dramatic and negative events before AI safety gains public concern. We have taken technology for granted for so long that we consider it something we simply inherit, much like knowledge from Google. However, I hope that more action and research will focus on this before mass production.

    1. This is a very good point. I agree that the general public won't really see AI as a safety concern until something very bad happens. And by that time, it might be too late, who knows? But yeah, safety definitely needs to be addressed in great detail, and soon.

  2. I think the unpredictability of AI is what makes it so advanced. If the goal is to mimic but be better than humans, then the less predictable it is when making decisions, perhaps the better? There would obviously be an issue if creators are determined to avoid making AI that wants to hurt people/animals or destroy belongings, but it does so anyway. However, if AI simply gives a sassy answer to a question, that's not as terrible, I guess.
    Regarding self-driving cars, I think we should either keep them off the streets until they are more refined or drop the idea completely. They shouldn't risk ending lives or damaging property for glory and money. It's going to be very difficult to make a self-driving car figure out its surroundings (beyond the slickness of roads; they already handle that) while also avoiding getting hit by others. Humans make mistakes, so unless all cars are self-driving, they can still get hit. What then? And what about when wildlife decides to cross the road unexpectedly? Does the car swerve, hit the animal, or immediately brake, risking the passengers' safety?

    1. Eric -- perhaps you will never buy a self-driving car (possibly because there may be a fleet of self-driving Ubers that you can summon on command, and you'll just use those as needed). But I bet that you'll find yourself riding in a self-driving car within the next 20 years.

    2. Human drivers make mistakes, and we as a society are fine with that, so why are we so afraid of machines making mistakes? Maybe because a machine cannot take responsibility itself. Therefore, I think the main question we need to address is how to manage uncertain events that cause damage. Maybe we need insurance; maybe self-driving car makers need to insure their cars; maybe it's the owner's burden. I think self-driving cars will become a reality, and we need to move fast on the responsibility problem. Hopefully, self-driving cars will become so prevalent that human drivers simply have to assume the risk when they decide to drive, and buy themselves some good life insurance.

    3. Something to consider is that the bar for whether or not self-driving cars add to society is not whether they never make a single mistake; that would be impossible. The bar is merely that they are better than humans. This is a low bar, since humans are bad drivers. If AI causes even 10 percent fewer accidents than humans, that is a huge move forward for our society.

      Looking at the moral decisions a car makes, whether to swerve, brake, etc., I think it is most likely that the AI will prioritize the lives of the people in the car. After all, if you were buying a car, would you buy one that didn't prioritize your own life? Even if you would, would most people? Unless of course governments intervene and decide themselves what is moral, but I find that unlikely since it is so unclear what the correct answer is.
      While I find the "which moral decisions should we program in" question interesting, I ultimately do not find it relevant to whether or not we should use these cars. It does not directly affect whether they are better at driving than humans, nor how many accidents they have. If these decisions indirectly affect those statistics, then they matter only in how they relate to the statistics.

    4. It is safe for me to say, after seeing how unreliably Team H's robot sensors acted, that I will not be stepping into a self-driving car any time soon. I feel the reliability of the technology A.I. is built on should be prioritized at the same level as A.I. safety.
