Sunday, December 10, 2017

The Growing Safety Problem in AI

With the exponential advances in technology, specifically in artificial intelligence (A.I.), safety is becoming a major debate. Several factors contribute to the lack of safety in A.I., and one of the main ones is competition. In an article by Ariel Conn, the lack of safety is described as a result of the race to build something new and amazing, specifically in the field of A.I. Basically, countries are competing against one another to make the next technological breakthrough. The first country to make such a breakthrough will benefit greatly economically and socially, and perhaps even gain more power (militarily speaking). As a result, developers are not focusing on the safety and reliability of these new A.I. systems, which fuels the debate over how important safety is and whether safety guidelines should be put into place.

Why does safety matter all that much? If the A.I. performs its given task well, why should we be concerned whether or not it's 100% safe? These questions are addressed in another article written by Conn. Safety is a huge factor in the market value of new and growing technologies, like cars, security systems, and even microwaves. Therefore, safety should be a huge factor in A.I. as well, but there are obstacles to that (like the competition factor). A lack of cooperation between countries in setting safety guidelines for emerging A.I. also contributes to the problem. Finally, safety is broader than one might expect: it includes not only physical safety but also emotional safety.
(Source: https://futureoflife.org/2017/09/21/safety-principle/)
In an article by Dave Gershgorn, a great point is raised: A.I. is becoming too complex and is producing unexpected outcomes. Today, producers of A.I. implement machine learning (which will be talked about later in the term) and thousands of sensors in their technology. These algorithms and artificial neural networks are very complex. The problem is that an A.I. may be expected to make a certain decision in a given scenario but will do something completely different. Scientists have a tough time understanding their own creations. Unexpected outcomes are currently a big issue in A.I. and also help explain the lack of safety. It's almost impossible to predict an A.I.'s decisions; therefore, every A.I. would require very extensive safety testing in almost every situation imaginable.

Personally, I think safety in A.I. needs to be addressed as soon as possible. With the way technology is advancing, there need to be some guidelines to keep things under control. Though it might be tough to come up with such guidelines, I think it's both possible and necessary. Even though A.I. has many great benefits, it's not worth the risk if the A.I. isn't considered "safe". In order to encourage safety among everyone, the dangers and possible risks of A.I. need to be publicized. People should be aware of these risks, which might spur a change towards making safety the number one priority.

AI Remixing Sounds using Neural Networks

Photo by Denisse Leon on Unsplash

With an increased desire to use unique sounds, music producers invest a lot of money in synthesizers, virtual instruments, samplers, and recording equipment. This new sound design trend has led to the rise of some popular synthesis virtual instruments such as Serum. Serum is "a wavetable synthesizer with a truly high-quality sound, visual and creative workflow-oriented interface to make creating and altering sounds fun instead of tedious, and the ability to 'go deep' when desired - to create / import / edit / morph wavetables, and manipulate these on playback in real-time." In a fairly recent Google Magenta project, NSynth, Google was able to accomplish similar audio synthesis using neural networks.

At its core, NSynth uses neural networks to encode and decode sounds, allowing artists to interpolate between multiple instruments and generate unique sounds. It differs from Serum's approach to synthesis in that NSynth relies entirely on neural networks instead of wavetables. The creativity of this instrument is limited only by your sample library. You can try a demonstration of this in practice on the web here. My favorite combination is the Cat + Vibraphone.
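To make the interpolation idea concrete, here is a minimal, hypothetical sketch. The encode and decode functions below are toy placeholders standing in for NSynth's trained neural networks (this is not Magenta's actual API, and the "cat" and "vibraphone" signals are just sine waves); the point is only to show how blending two embeddings produces a new sound somewhere between the source instruments.

```python
import numpy as np

# Placeholder stand-ins for NSynth's trained encoder/decoder networks.
# In the real project these are deep neural nets; here they are toys so
# the interpolation idea is runnable end to end.
def encode(audio: np.ndarray) -> np.ndarray:
    """Map raw audio to a compact 'embedding' (placeholder)."""
    return np.fft.rfft(audio)[:64].real

def decode(embedding: np.ndarray, length: int) -> np.ndarray:
    """Map an embedding back to audio (placeholder)."""
    spectrum = np.zeros(length // 2 + 1, dtype=complex)
    spectrum[:64] = embedding
    return np.fft.irfft(spectrum, n=length)

sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
cat = np.sin(2 * np.pi * 220 * t)          # pretend this is a cat sample
vibraphone = np.sin(2 * np.pi * 440 * t)   # pretend this is a vibraphone sample

# Blend the two embeddings 50/50, then decode the mixture into new audio.
z = 0.5 * encode(cat) + 0.5 * encode(vibraphone)
blend = decode(z, len(t))
print(blend.shape)  # one second of "in-between" audio
```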

While this is an incredibly cool use of neural networks to generate sounds, the project could be furthered by combining it with other sound synthesis algorithms to make far more interesting sounds. For example, if NSynth were paired with granular synthesis, AI could make some pretty unique cinematic pads. Paired with FM (frequency modulation) synthesis, AI could use basic sound waves to make very harmonic sounds that could be used for something like a dubstep or trap bass. The possibilities could be endless, and the result could be a virtual instrument far more powerful than Serum.

Thursday, December 7, 2017

DeepMind at it again



Image: Chess-king.JPG



A recent BBC article entitled "Google's 'superhuman' DeepMind AI claims chess crown" discusses DeepMind's recent achievements. While the title is somewhat over the top, the article contains interesting information about what DeepMind has been up to. Its program AlphaZero recently won a chess match against another AI, Stockfish: it won 25 games as white and 3 games as black, and the other 72 games were draws. This occurred after just four hours of training against itself. AlphaZero has also beaten other AI at the Japanese game of shogi, and a previous version of itself at Go, after similarly short training runs. It seems DeepMind is after all the board game titles. As another article points out, AlphaZero is more versatile than other programs, as it is not specialized to a single task. However, this versatility came at a cost: it used a massive amount of processing power, about 5,000 custom processors. The article really emphasizes how little help AlphaZero was given while learning the games, merely the rules, and then it trained for a remarkably small number of hours against itself. Google hopes this will help DeepMind get closer to a general AI that can tackle more complicated problems.

What most interests me about this story is that it is even a competition. Granted, almost three quarters of the chess games were draws, but that still seems too low to me. If both AIs are processing all the possible moves, then neither should ever lose, and thus they would always draw. This is most true in chess, a game at which AI has been better than humans for a long time. I would not think that having more processing power or a better algorithm would be that much of an advantage. I suppose the AIs in my mind were not as strong as the AIs in reality, but I think that eventually, perhaps even soon, there will be no more room to improve. The programs will be fast enough to explore the whole tree and expand all the states, and thus an "ideal" game will be played, with neither side making a mistake. I suppose there could even be multiple such games. I am curious to see what these games would look like, but the sport will be a little less exciting.
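As a tiny illustration of what "exploring the whole tree" means, here is a hypothetical sketch of exhaustive search on a toy game I made up for this post (take 1 to 3 stones per turn; whoever takes the last stone wins). Chess is far too large for this today, but the principle is the same: once both sides can search the full tree, the outcome from any given position is fixed, which is the sense in which an "ideal" game would always end the same way.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(stones: int) -> int:
    """Return +1 if the player to move can force a win, -1 otherwise."""
    if stones == 0:
        return -1  # the previous player took the last stone, so the mover has lost
    # Expand every legal move and keep the best result for the player to move.
    return max(-best_outcome(stones - take) for take in (1, 2, 3) if take <= stones)

# With the whole tree searched, every starting position has a known result.
for n in range(1, 10):
    print(n, "win" if best_outcome(n) == 1 else "loss")
```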

The other hugely interesting thing about this story is how quickly AlphaZero taught itself, and that it was not told to use cheap tricks by humans. I think this relates well to our debate yesterday, where we discussed how the way something is created may affect whether or not its creator is creative. I know DeepMind was mentioned, specifically some of the images that were referenced by the "yes" team, which was one of their more compelling points. Perhaps a situation like this, where the AI taught itself its strategies, is an even more compelling argument for the creativity of AI, or perhaps there is still something missing. It is difficult for me to land decisively on one side or the other, but results like AlphaZero's seriously push me toward the yes side.

image citation



Tuesday, December 5, 2017

AI: the Future of Sales?

It is the general consensus that automation in the workplace will increase productivity and result in higher profits.  Most current AI bots are used for customer service but lack the intellectual capabilities to convince humans that they need a product.  The goal of the new wave of bots is to contact as many people as possible who look potentially interested in a product, and then stay in contact with them until they are close to making a deal, at which point the bot hands them off to the actual sales reps (the actual sales reps are human, just to make that clear).  A few companies, such as Conversica and Growbot, are trying to develop machine learning bots that can attract potential customers.  Growbot is attempting to bring AI into the sales world and has laid out its two main challenges: "...the challenges with AI have 2 layers: the data you use for training your algorithms--and the algorithms themselves. Without the relevant training data, all of the algorithms are useless. AI researchers say that getting data is 90% effort and 10% building algorithms. Many people wonder why Tesla was the first company to introduce the autopilot feature to their cars, and the answer is quite simple: they've been collecting data from all of their cars on the road, while Google has only been collecting data from a few prototypes," (odd article title but some useful information).  For those of you not familiar with machine learning, these are the blueprints you must follow: create a data set large enough to get consistent results, and keep adding data to make the model better.  Growbot is dropping a subtle hint that it will be first into the industry because it has been collecting data first.  I guess we will see whether its algorithms keep improving its bots.
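To illustrate the "data is 90% of the effort" point, here is a minimal, hypothetical lead-scoring sketch. Everything in it (the feature names, the synthetic data, the numbers) is invented for illustration and has nothing to do with Growbot's or Conversica's actual systems; the takeaway is only that the same simple algorithm tends to score better as the training set grows.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_leads(n):
    # Invented features: emails opened, pages visited, days since last contact.
    X = rng.normal(size=(n, 3))
    # A lead "converts" if engagement outweighs staleness (plus noise).
    y = (X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Same algorithm, 100x more training data: accuracy usually improves.
for n in (100, 10_000):
    X, y = make_leads(n)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    print(n, "leads ->", round(model.score(X_te, y_te), 3), "test accuracy")
```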

A few side notes:

I don't know how many of you guys use Instagram, but in my experience, when I search for a product on my phone it's only a matter of hours before Instagram is advertising similar products on my timeline, and I am sure the same happens on other social media websites.  On the "About Us" tab of the Growbot website, they include this statement: "Growbot is an all in one solution for driving predictable revenue growth.  With our AI based platform, anyone who needs to generate leads, from sales to marketing, can reach out to hundreds of potential customers in minutes."  Talk about spam!


Also, apparently, for those of you on the positive side of the debate over "Will AI systems ever become conscious?": "Over time, these bots "learned" that English was not the most efficient means of communication, and slowly started developing their own language to achieve their pre-programmed goal."  Their goal was to be as efficient as possible, so the two bots stopped using English as their mode of communication because it seemed less efficient.



A more in-depth article about the two Facebook bots


Sunday, December 3, 2017

An A.I. that can build A.I.



In May 2017, Google Brain announced the creation of AutoML, an A.I. that can help them create new A.I.s. AutoML focuses on deep learning, and more specifically reinforcement learning, which involves passing data through a neural network (imitating how the human brain works). We will be talking about deep learning and neural networks later in the term. 

AutoML can essentially create a "child" network that can perform a particular task. “In our approach, a controller neural net can propose a ‘child’ model architecture, which can then be trained and evaluated for quality on a particular task,” Google says on their Research Blog. “That feedback is then used to inform the controller how to improve its proposals for the next round. We repeat this process thousands of times — generating new architectures, testing them, and giving that feedback to the controller to learn from.” You can read more about AutoML and the details of the algorithm here.
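Here is a highly simplified, hypothetical sketch of the loop Google describes. The real system uses a recurrent controller network trained with reinforcement learning; in this toy version the controller is just a preference table that gets biased toward whatever worked, and "training" a child is a random placeholder. The search space and numbers are made up for illustration.

```python
import random

# Toy search space of child-architecture choices (invented for illustration).
search_space = {"layers": [2, 4, 8], "units": [32, 64, 128], "activation": ["relu", "tanh"]}
preferences = {k: list(v) for k, v in search_space.items()}  # the controller's "memory"
best = (None, 0.0)

def propose():
    # Controller proposes a child architecture, biased by past feedback.
    return {k: random.choice(v) for k, v in preferences.items()}

def train_and_evaluate(arch):
    # Placeholder for training the child model and measuring validation accuracy.
    return random.random()

for step in range(1000):
    child = propose()                      # propose a child architecture
    accuracy = train_and_evaluate(child)   # train and evaluate it
    if accuracy > best[1]:
        best = (child, accuracy)
        # Feedback: make the components that just won more likely to be proposed again.
        for k, v in child.items():
            preferences[k].append(v)

print("best architecture found:", best)
```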

Last November, Google used AutoML to make NASNet, a computer vision system that has already broken the record for the most accurate image recognition, as seen in the graph below. They tested the program on the ImageNet image classification and COCO object detection data sets, which Google claims are “two of the most respected large-scale academic data sets in computer vision”, and it outperformed all other human-made systems.


Image Credit: Google Research Blog

There are a number of potential applications for NASNet; some of them include faster and safer self-driving cars, facial recognition to replace passports (as in Australia), and even helping visually impaired people regain sight. On the other hand, there are also some potentially detrimental applications, like robots "reproducing" faster than humans can keep up with. The question now becomes how we ensure that the parent does not pass down biases to the child system and, most importantly, how we make sure that the systems are used ethically. It is easy to imagine how people could take advantage of this technology for evil ends, such as recognizing faces on the street and following them. This concern seems to be so pervasive that Amazon, Facebook, Apple, and other big companies have created the "Partnership on Artificial Intelligence to Benefit People and Society". The Institute of Electrical and Electronics Engineers has also created ethical standards for AI.

In my opinion, using A.I. to help us create better A.I. is a great use of our resources. It would take us far more time and energy to do what AutoML does so "easily". I do agree that there are a number of ethical issues at stake here, and it does sound straight out of a science fiction movie. However, if researchers don't have to spend so much time on the grunt work, they will have time to perfect the systems and oversee them to make sure biases are not passed down. Moreover, if society recognizes that A.I. technology is developing exponentially, then it should create rules and regulations right now to make sure that nothing gets out of control (which is starting to happen). All in all, I think there are more benefits than harms here, and I think it is a genius idea from Google. 

You can read more about the topic in the following links:






Friday, December 1, 2017

Is Procedural Generation a Form of Creativity?

Procedural generation (or random generation) is growing more and more popular, especially in the video game world. The creators of these games (like Minecraft, Gauntlet, and Space Engineers) use procedural generation to create content algorithmically rather than having to create fixed characters and worlds by hand.  This method also cuts down on file size requirements. When creators want to have large numbers of items or huge, unpredictable maps in a game, doing so by hand would be tedious, time-consuming, and overwhelming.

Procedural generation clearly allows each user to have a completely different experience every time they play, increasing replayability. If you've heard anything about last year's infamous release of No Man's Sky, you know procedural generation can have its downfalls (mostly due to the hype that surrounded the game before its release). Because the worlds, objects, and characters can be random in some games, this can lead either to a large, exciting world to explore or to a dull string of similar maps in a row.

Since no one person is technically creating the worlds, items, or characters manually when using this method, is the program then being creative when it builds these things? It may (or may not) be "thinking" about where to place things "randomly," given certain ranges and positions programmed in by humans. For example, it shouldn't try to place a mountain top in a river, and it shouldn't spawn a pig deep in a cave in Minecraft.
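Here is a tiny, hypothetical sketch of what seeded procedural generation with placement rules can look like. Everything in it (the tile types, the elevation thresholds, the map size) is invented for illustration; real games use far richer noise functions and rule sets, but the basic pattern of "random values constrained by human-written rules" is the same.

```python
import random

WIDTH, HEIGHT, SEED = 16, 8, 42
rng = random.Random(SEED)  # the same seed always reproduces the same world

def pick_tile(elevation):
    # Placement rules constrain the "randomness".
    if elevation < 0.3:
        return "~"   # water only at low elevation (no mountain tops in rivers)
    if elevation > 0.8:
        return "^"   # mountain peaks only at high elevation
    return "."       # plains everywhere else

world = []
for y in range(HEIGHT):
    row = "".join(pick_tile(rng.random()) for x in range(WIDTH))
    world.append(row)

print("\n".join(world))
```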

On the other hand, Minecraft does other weird things regarding placement of blocks:



 Like Megan's recent post, I just wanted to throw this in for consideration, whether you're debating on Monday or not.

Further reading:
  • A blog post discussing procedural narrative to create diverse stories
  • Massive's website, showcasing the media in which it's been used already, like the Lord of the Rings film trilogy
  • Some photos of the game Civilization V, displaying how procedural generation has created different terrains for maps
****Note: this is just an additional post, not my required one****

Angelina is Creating New Video Games


 
       I stumbled upon this article and thought the students who are going to be in the creativity debate on Monday would like to look at it. It talks about AI and video game design. Apparently, Angelina is creating video games. Here is a link to one of the games it created: To That Sect

       Here is a paragraph I found interesting about how Angelina is using "creativity":

"Despite the challenges, it’s an area that could, Cook believes, reap major rewards in unlocking new game concepts and mechanics. Recently, he fed Angelina a game outline about exploring a dungeon as an adventurer. Instead of designing basic levels for an adventure game, Angelina designed levels in which a player controls multiple adventurers simultaneously and must get some of them killed in order to rescue the remainder. “It frequently does things like this—looking beyond the assumptions I have, and finding interesting things I would not think to look for,” says Cook."

       What do you guys think? Does this count as creativity?


NOTE:
       ACM TechNews sends out emails with a few articles pertaining to Computer Science three times a week. They summarize the articles and put a link to the website where you can read more about the topic. This time there were 3 or 4 articles about AI! This is where I saw the article. I encourage you to sign up for emails or check out the website if you want to know more about what is going on with CSC in the world. There is also an app, if that's the way you like to read your news.

Woebot - a chatbot for mental health?

Image credit: https://woebot.io/
What's old is new again! In a modern revival of the original chatbot Eliza, the world now has.....