Thursday, January 25, 2018

Automated Grocery Stores, Coming to a Neighborhood Near You!

On Monday, January 22nd, the first automated grocery store opened in Seattle. Amazon Go opened its doors to the world as an (almost) fully automated store. Aside from a handful of employees who prepare food in the kitchen and verify IDs for alcohol and tobacco purchases, the store operates without human workers.

The store was originally slated to open in December 2016. Amazon explains that the delay came from the time needed to train the machine-learning algorithms, coupled with computer vision, that run the store.


Now, at this point in reading about the store, I began to wonder... How does a fully automated grocery store even run? What is the check-out process like? What is in place to prevent shoplifting?

Well, upon further reading, I found that the store requires every shopper to log into their Amazon account and download an 'Amazon Go' app. Then, through the complex algorithms implemented in the store, whatever items you bag and take with you are billed to your Amazon account accordingly. Another thing that I found fascinating is that the algorithm is advanced enough to determine whether you are bagging an item to take with you, just picking up an item to read the label/nutrition facts, or setting the item back in another location in the store! Additionally, even if somebody tried to shoplift, the algorithms are precise enough to detect it, and that individual would be billed accordingly.
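Amazon hasn't published how the billing actually works, but conceptually the store reduces to a per-shopper stream of shelf events that the vision system emits. Here is a minimal sketch of that "virtual cart" idea (all names and the event format are my own invention, not Amazon's system):

```python
from collections import Counter

def settle_cart(events):
    """Toy 'just walk out' cart: replay a stream of (action, item) shelf
    events and return what the shopper would be billed for at exit."""
    cart = Counter()
    for action, item in events:
        if action == "take":
            cart[item] += 1
        elif action == "put_back" and cart[item] > 0:
            cart[item] -= 1  # picked it up to read the label, then changed their mind
    return {item: n for item, n in cart.items() if n > 0}

events = [("take", "soda"), ("take", "chips"),
          ("put_back", "chips"), ("take", "soda")]
print(settle_cart(events))  # → {'soda': 2}
```

The hard part in the real store is, of course, producing that event stream reliably from cameras and shelf sensors; the bookkeeping afterwards is the easy half.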

To me, this idea of automated grocery stores is very intriguing, especially with regard to Amazon's machine learning and computer vision algorithms implemented within the store. I think this concept will mark the start of a new chain of grocery stores (powered by Amazon too, no doubt). What do you all think about this automated store? Do you think that the Amazon Go store will become the next local super-store? How do you think the machine learning algorithms implemented in the store will affect A.I. development in general?


Sunday, January 21, 2018

Money making AI with Steem

Steem is a cryptocurrency that powers a social media platform. It is lightweight, fast to transact, and requires no fees to move funds. It is used by the developers of the website Steemit.com. The platform's biggest feature is that users create content and upvote others; the users are then given rewards that are generated out of thin air on the platform.



Now, that is obviously an oversimplification of the platform, but this blog post is not about a cryptocurrency. The point I would like to focus on is the rewards that are given to the users. There are two ways to make money from content.
1: If you create a comment or post that someone likes, they can upvote you. Upvotes all have different values depending on how much stake the voter holds on the network, but for every Steem-holding person who upvotes you, you make money.
2: If you upvote posts that later become popular, you get rewarded for finding good content for the network.
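As a toy model of those two reward paths, imagine the network splitting a post's reward pool between the author and its stake-weighted upvoters. The 75/25 split and the exact proportionality below are illustrative assumptions, not Steem's actual payout rules:

```python
def split_rewards(upvoter_stakes, reward_pool=100.0, author_share=0.75):
    """Toy payout: the author takes a fixed share of the pool; the rest
    is divided among curators in proportion to their stake."""
    author_reward = reward_pool * author_share
    curation_pool = reward_pool - author_reward
    total_stake = sum(upvoter_stakes.values())
    curator_rewards = {voter: curation_pool * stake / total_stake
                       for voter, stake in upvoter_stakes.items()}
    return author_reward, curator_rewards

author, curators = split_rewards({"alice": 3.0, "bob": 1.0})
# alice holds 3x bob's stake, so her curation cut is 3x larger
```

Even this toy version shows why stake matters: a vote from a large holder moves far more of the pool than a vote from a small one.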

Now, there is an API for Steem that allows you to automate anything on the network. You can automatically post anything using a bot, and you can also upvote using a bot. If you can create an algorithm that finds good content that gets upvoted, you can constantly collect rewards from the network. But it is really hard to read a picture, a link, and text to identify whether content is enjoyable.

This blog post looks at some of the most famous bots on the network. I will show a couple of examples from this blog post, but if you are interested there are some very cool stories and posts on this blog.

The algorithms can be very basic or very complex. Some will analyze which posts get upvoted, noting how long each post is and how many photos are included. They then scan the blockchain for similar posts with the same length and number of photos and upvote all of them!
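A naive version of that length-and-photo-count heuristic might look like the sketch below. The field names, the threshold, and the "winners" data are all hypothetical, purely to illustrate the matching step:

```python
def looks_promising(post, winners, max_len_diff=200):
    """Naive curation heuristic: a new post 'looks like' past winners if
    some previously upvoted post had a similar length (within a character
    tolerance) and the same number of photos."""
    return any(abs(post["length"] - w["length"]) <= max_len_diff
               and post["photos"] == w["photos"]
               for w in winners)

winners = [{"length": 900, "photos": 3}, {"length": 400, "photos": 1}]
print(looks_promising({"length": 1000, "photos": 3}, winners))  # → True
print(looks_promising({"length": 1000, "photos": 0}, winners))  # → False
```

This is exactly the kind of shallow signal that is easy to compute from the blockchain, and just as easy for content creators to game.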

Here is a bot that just monitors accounts and upvotes every article the user posts. Another bot tries to catch plagiarism in posts and report it.

Now, this might seem a little removed from AI, and I agree: creating a bot that just upvotes something every day, or that uses pure randomness, is not AI. But going back to our NLP assignment in class, we would analyze text, identify whether it is positive, negative, or neutral, and rank the statement. A good example of a bot doing this is this bot. The AI looks at posts and ranks them depending on several parameters mentioned in the post. I also believe that this space is very new, considering Steemit was introduced in 2016. Machine learning and other AI could be used in this space to try to understand what people enjoy. Since everything is open source, it is easy for developers to interact with the blockchain, and for users to see how information is received by the community. We could use weights to identify whether a certain article length keeps the user engaged without being too short or too long, and we could weigh the number of photos or the topics presented in the post.
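The class-assignment approach mentioned above can be sketched with a tiny word-list scorer. The word lists and the ranking rule are my own illustration, not any real Steem bot's logic:

```python
POSITIVE = {"great", "love", "excellent", "enjoyable"}
NEGATIVE = {"bad", "boring", "awful", "hate"}

def sentiment_score(text):
    """Count positive minus negative words: >0 positive, <0 negative, 0 neutral."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def rank_posts(posts):
    """Rank candidate posts by sentiment; a voting bot might upvote the top few."""
    return sorted(posts, key=sentiment_score, reverse=True)

posts = ["this was great and enjoyable",
         "boring and awful post",
         "a neutral post"]
print(rank_posts(posts)[0])  # → this was great and enjoyable
```

Real bots would obviously need much richer features than bare word counts, but the pipeline (score, then rank, then vote) is the same shape.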

Now, this sounds really interesting: thousands of posts a day, earning millions of dollars, upvoting small communities of great writers. But this creates a weird ecosystem. You start to have huge bots upvoting content. These bots start deciding what content looks good and what you should see on the top page. If you start creating content, you can target these bots to reward your content. If you understand the algorithm and exploit it, you can get hundreds of bots to upvote every single piece of your content. People will start writing blog posts to appeal to bots instead of humans, and people will try to upvote whatever the bots will upvote. This system can get to a point where bots are just fighting with each other over content and the enjoyment of humans is put on the back burner. YouTube uses similar practices, but it has an end goal of getting the most ads in front of you. These bots have an end goal of using the system to guess which post will be best. If they dominate the humans on this platform, they could become the audience that we perform for to make Steem.

But the Steem community calls itself the "proof-of-brain" crypto, so perhaps this will never be the case. Still, it is a very interesting space in which to try to understand the content that a bot community will produce and receive.

Disclaimer: I am not trying to encourage anyone to buy Steem, I just want to show some cool things that are happening in the space.

Sources:
Screenshot of steemit.com courtesy of Daniel Zwiener
Second photo is from bitcointalk.org or found at http://empowereddollar.com/wp-content/uploads/robot-accountant.jpg

Universal Basic... Jobs, anyone?

good job blue ribbon
[image license: public domain]



Last class period we briefly discussed the question of...

If advanced A.I. takes everyone's jobs (except perhaps a few super-wealthy CEOs and presidents of the robot companies & robotic factories that are self-contained wealth-production machines), then... what?

Someone mentioned UBI (Universal Basic Income) as one possible solution, and some people have been promoting this idea for several years (NY Times article from 2016).  However, there are also a variety of potential objections to it.

An A.I. mailing list that I read posted a link to this article that proposes "Universal Basic Jobs" as an alternative to UBI. 

I don't know if this idea would work, but it caught my interest.  What do you all think?  (Let's discuss in the comments below!)

Thursday, January 18, 2018

AI beat human in Stanford reading test, NLP breakthrough?


Credit: Jack Ma, CEO of the Alibaba Group (Koki Nagahama/Getty Images)
The ability to read is truly a huge evolutionary advantage for humanity. With written language we can retain and retrieve knowledge, passing it down for generations. The ability to write and read has been at the heart of human development over the last millennium. The AI industry has long sought to develop machines that possess this ability. According to a recent article, Alibaba's AI Outguns Humans in Reading Test, Alibaba's and Microsoft's AIs have made significant progress in machine reading, beating the average human score on the Stanford reading test. This test is based on more than 500 Wikipedia articles and is designed to figure out whether AIs can process a large quantity of data to answer the questions posed. Both AIs, for the first time in history, scored higher than the human average. This marks a big step in natural language processing development. According to the article, Microsoft acquired Maluuba, a company that uses deep learning to develop natural-language understanding, around this time last year, and has since placed its focus on making literate AIs. It's amazing that in an article written just last May, the co-founder of Maluuba still only hoped to build a literate machine, and now they have developed an AI that can surpass the average human score on a reading test. Alibaba, for its part, already applies this technology on Singles Day, the world's biggest shopping bonanza, by using computers to answer a large number of customer service questions, according to this article. It's clear that research into literate AIs is growing quickly and will soon provide us with many benefits.

Scoring higher than humans on a test does not prove that AIs have reached a human level of reading comprehension, however. We as humans can detect nuance, hidden messages, sarcasm, etc. in reading, but these AIs are still very impressive in their own right. They clearly open many possibilities, including processing a large amount of data and making sense of it. Customer service, healthcare, virtual assistants, search engines, and more will all benefit from this. If we keep up this rate of development, I think we could create AIs that hold all of humanity's digital knowledge and can answer any question with precise accuracy within the next 50 years. However, this technology also has a flip side. Now that AIs can potentially understand human communications, malicious AIs could spy on a large scale and make it easier to obtain valuable information without us noticing. Also, in developing new know-it-all AIs, if we are not careful with the data we feed them, their knowledge could be manipulated, much as Microsoft's Twitter AI Tay was eventually shut down due to her access to uncleaned data.

This breakthrough is clearly related to our class's natural language processing topic from the beginning of the term. In class, we learned how machines can try to understand and process written language by parsing it and using a sentiment score. We also talked about how the available technology still cannot let machines understand language completely because of the ambiguity of language. However, it seems like we are coming nearer and nearer to that goal. Of course, Alibaba's and Microsoft's AIs use deep learning, not the simple code we used in our politician analysis assignment, but we can hope that this technology will soon be accessible to the public. What a great time to be learning about AIs, when we can see the technology evolve day by day.

I'm excited to see new technology like this helping people in their lives. However, I am also concerned about how this could affect our job market, as AIs could now continue to replace humans in the workplace. I'm also wondering: after making AIs that can inherit our current knowledge, could we start making AIs that create new knowledge, and should we do such a thing?

Tuesday, January 16, 2018

Can AI Be Used to Read Minds?

(Source: http://www.hindustantimes.com/tech/scientists-develop-artificial-intelligence-that-can-read-your-mind/story-drSQCLm7CXUbXbqkoJ28SN.html)

How would you feel about a computer being able to read your thoughts? While mind-reading robots might seem like a fairytale of the future, they might become a reality, as four Japanese scientists have begun to reconstruct images from people's brain patterns (article). For their experiment, the scientists would show test subjects various images of animals, people, letters, and geometric shapes. They would then measure the subjects' brain activity while they were looking at the image, or in other cases they would ask the subjects to recall a previously shown image from memory. Afterwards, using machine learning algorithms, a computer would take the brain measurements and decode the information to recreate the image the subjects were seeing in their heads (see example images below). The technology isn't perfect, however: it only gives a rough outline of the object in the picture and appears to lack the finer details, making it hard to tell what image the subject was looking at. One problem is that it is harder to decode brain signals when a person is trying to recall an image than when they are looking at it, because memory doesn't retain all the details, making it harder for the AI to recreate the image. (For more details, check out their research paper.)
(Source: https://www.cnbc.com/2018/01/08/japanese-scientists-use-artificial-intelligence-to-decode-thoughts.html)
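The actual study used deep neural decoders, but the core idea of "decoding" is just learning a mapping from measured brain signals to image values. A heavily simplified sketch, fitting one least-squares line per pixel (the signal and pixel values here are synthetic, purely illustrative):

```python
def fit_pixel(signals, pixel_values):
    """Ordinary least squares for one (brain signal, pixel) pair:
    returns the slope and intercept of the best-fit line."""
    n = len(signals)
    mean_x = sum(signals) / n
    mean_y = sum(pixel_values) / n
    var_x = sum((x - mean_x) ** 2 for x in signals)
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(signals, pixel_values)) / var_x
    return slope, mean_y - slope * mean_x

def decode_pixel(model, signal):
    """Predict one pixel's brightness from a new brain reading."""
    slope, intercept = model
    return slope * signal + intercept

# Synthetic training data where pixel brightness happens to follow y = 2x + 1
model = fit_pixel([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

A real reconstruction repeats something far richer than this over thousands of pixels and signal channels at once, which is why the outputs in the article are still blurry outlines rather than photographs.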


Assuming we are able to create mind-reading robots, that opens up the possibility of doing lots of cool things, like controlling robots with just our minds or allowing people with speech problems to communicate by speaking telepathically through a computer. Consequently, there are also a lot of problems that could come from mind-reading AI, especially the issue of privacy. Many people would probably feel uncomfortable with the idea of a robot reading their thoughts and would feel more self-conscious about what they are thinking. The technology could be abused by advertising companies, who might try to read a person's thoughts just as they already track people's browser history to determine what kind of advertisements to send them. Mind-reading robots would also likely be used by intelligence agencies to help prevent violent crimes, but at what cost? Are people willing to give up their privacy to be safer? My guess is no, based on the backlash the NSA received for collecting internet and phone data. If mind-reading robots were created, there would have to be some sort of laws put in place to protect people's privacy and prevent others from taking advantage of the technology.

Relating to class, I think this article shows us the real-life possibility of what happened in Chapter 5 of I, Robot, where the robot Herbie was able to read minds and speak telepathically. While I do not think we are close to having robots like Herbie, who can read a person's every thought, this is a stepping stone towards that future and offers us a better idea of how the human brain works. I think this can help scientists create smarter AI, and maybe even conscious AI that can act like humans through the recreation of human brain activity. Of course, this depends on whether one believes AI can even become conscious, or whether consciousness is purely a human trait. Do you think it will be possible for us to completely understand the human brain and create a mind-reading robot like Herbie, or a conscious robot that can think similarly to a human?

Sunday, January 14, 2018

Artificial Intelligence in Medicine

One of the most important parts of a functioning society is access to quality healthcare, so that life expectancy can be longer and the overall health of a population can improve. Naturally, people also want to help each other live longer and enjoy their lives with the least amount of physical suffering possible. So, of course, in a field with such large ramifications for the well-being of a population, the most advanced technologies are often going to be developed with an eye toward advancing medicine in some form or another. A.I. is no exception to the rule, and there have recently been fairly large advances in technologies capable of improving healthcare in the future.

Perhaps one of the most prominent examples of this is IBM's Watson, and particularly its Oncology diagnostics. IBM Watson makes recommendations that have a 90% concordance rate with tumor board recommendations (see their full Oncology section here). This means that rather than waiting for another human doctor to spend time reading images and then getting back to the original doctor to corroborate ideas, the original doctor can rely on Watson to achieve the same goals much more quickly. This efficiency is extremely important in a field like oncology, because the earlier treatment starts, the better odds patients have at a full recovery. Watson uses many techniques similar to the ones we have been learning about, from advanced natural language processing to machine learning, to analyze different types of data and apply its A.I. to areas beyond oncology, growing to achieve whatever goals IBM gives it.

A second major example of A.I. influencing the world of medicine is the DeepMind research based in the UK. DeepMind focuses on A.I. that can learn to interpret test results based on previous similar results and recommend a treatment plan or diagnosis through this learning. Probably the most obvious benefit is again the speed at which an A.I. can return results to patients or the primary doctor versus waiting on multiple doctors to all agree on something. The speed at which treatment starts is arguably the most important factor in medicine, so using A.I. to greatly improve that speed would be a huge leap forward in the quality of care for most patients. See more about DeepMind's own goals here.

I wanted to talk a bit about medical advancements within A.I. because it is a topic quite similar to the transhumanism debate question (or at least moving in that direction). One could easily see this evolving away from even needing to go to a hospital to receive a diagnosis, toward having some kind of device on you, or attached to you, that would immediately alert you to any negative change in your bodily health. I think this is a pretty interesting concept that could really positively influence the world of medicine, so I'm curious what you all have to say about it.


Here is a quick video of the cancer center heavily involved in trying to utilize IBM's Watson:

Thursday, January 11, 2018

A.I. Is Now Somewhat Capable of Creating Fanfiction

Photo taken from Botnik's Harry Potter post

As A.I. advances, we see new forms of content being created.  One of my more recent favorites is fanfiction that is being created with A.I.  In particular, a company called Botnik Studios is working on a predictive writer that can analyze content and output new content in the same style.  I originally stumbled upon this creation on Tumblr, but as it turns out, there have been quite a few websites covering some of Botnik's works.  In particular, a website called The Verge wrote an article called This Harry Potter AI-generated fanfiction is remarkably good, and just as the title says, it is remarkably good for what A.I. is today.

Thus far, Botnik has created predictive keyboards to write new lyrics for specific bands, facts about animals (which of course are probably not true, but sound like they could be true), cooking recipes, TV episode scripts, romance related topics, advertisements, holiday related topics (tips, history, and songs), poetry, book narrations and dialogs, video game tips and titles, tech reviews and quizzes, among other miscellaneous topics.  Botnik has a page of all available predictive keyboards here if you would like to play around with any.

This technology has spawned quite an interesting chapter of Harry Potter fanfiction titled Harry Potter and the Portrait of What Looked Like A Large Pile Of Ash, which can be found here on Botnik's website. The new chapter opens on Harry, Ron, and Hermione, and it closes with strange instances such as long pumpkins falling out of McGonagall, mice exploding, and Dumbledore's hair scooting... Of course, because A.I. is not perfect in this day and age, the first chapter is not completely sound, but it is incredibly enjoyable to read, and it does seem to be somewhat stylistically close to how J.K. Rowling writes. At the bottom of each post by Botnik introducing a new writing, you can see that they specify the algorithm used. So, in the case of Harry Potter and the Portrait of What Looked Like A Large Pile Of Ash, Botnik utilized its predictive algorithm. Unfortunately, there is only one chapter available on Botnik's website at the moment, but hopefully in the future we will see a completed work.
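Botnik hasn't published its exact algorithm, but a predictive keyboard in this spirit can be sketched as a simple bigram model: learn which words tend to follow which in a source text, then always offer the most frequent continuations (a bare-bones illustration, far simpler than whatever Botnik actually runs):

```python
from collections import defaultdict, Counter

def build_model(corpus):
    """Bigram model: for each word in the source text, count which
    words follow it."""
    model = defaultdict(Counter)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def suggest(model, word, k=3):
    """Top-k next-word suggestions, most frequent first -- the row of
    buttons a predictive keyboard would show."""
    return [w for w, _ in model[word].most_common(k)]

model = build_model("the cat sat on the mat while the cat slept")
print(suggest(model, "the"))  # → ['cat', 'mat']
```

Train a model like this on Harry Potter chapters and repeatedly pick from its suggestions, and you get text that is locally plausible but globally absurd, which is a fair description of the Pile Of Ash chapter.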

So, knowing this, I find myself asking whether or not A.I. will ever be able to write completely new content that sounds exactly as if it were written by a person. Do you think it is possible that someday it will be common for books and book series written by A.I. to be published and readily available at places like Barnes & Noble? Is content written by A.I. something that you would be interested in reading regularly?


Wednesday, January 3, 2018

Nvidia Releases New GPU Intended for Deep Learning, but How Does It Help?

It's no secret that California-based American Tech company NVIDIA has an interest in artificial intelligence.

The "AI Computing Company", as they call themselves, has consistently made strides to further the GPU (Graphics Processing Unit) industry (more reading here); in fact, back in 2006 they unveiled their CUDA programming model and Tesla GPU platform, both of which revolutionized computing by opening up the parallel-processing capabilities of the GPU to everyday computing. It goes back even farther, though: NVIDIA claims to have invented the GPU itself back in 1999 with the first consumer-grade card, although similar technology has existed since the '70s. Their impact on the technology is clear, but how does that affect AI?

Let's lay out some of the finer details first.

Once upon a time, database throughput and application performance were proportional to available RAM and the number of CPUs. This, however, quickly changed with the rise of NVIDIA and the GPU. It's easy to think that a GPU is used simply for graphical work like video games and 3D modeling, and indeed the GPU industry is largely synonymous with the gaming industry today, but it's so much more than that. To make the distinction, it helps to understand how GPU acceleration works under the hood.

Source: Nvidia


GPU-accelerated computing refers to the use of a GPU together with a CPU to accelerate applications in fields such as deep learning, engineering, and analytics. It works by offloading compute-intensive portions of an application's code to the GPU while the remainder continues to run on the CPU. Furthermore, the architecture of a GPU is very different from that of a CPU. We've all heard of "quad-core" and "octa-core" CPUs, but why don't we hear of any "octa-core" GPUs? It's because they're already far beyond that: GPUs consist of thousands of smaller, more efficient cores designed to handle many tasks simultaneously, a massively parallel architecture (Nvidia). This architecture means that a GPU can handle copious amounts of data better than a CPU can. It's easy to see where this is headed.

Let's backtrack a bit and tackle our main question: how does Nvidia's impact on the GPU market affect AI?

The answer lies in GPU deep learning.

GPU deep learning is an advanced machine learning technique that has taken the AI and cognitive computing industries by storm. It uses neural networks (and more) to power computer vision, speech recognition ("OK Google..."), autonomous cars, and much more. The neural networks that drive these projects perform very complex statistical computations in an attempt to find patterns in what are often incredibly large sets of data. This is where the GPU comes in. A GPU can dramatically cut the time needed to perform these computations by increasing overall throughput. Thanks to this architecture, we are able to experiment with AI techniques that simply weren't possible (or practical) before (Forbes).
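Those "complex statistical computations" inside a neural network are overwhelmingly matrix multiplications, and every cell of a matrix product can be computed independently of every other cell. That independence is exactly what thousands of small GPU cores exploit. A toy sketch of the workload in plain Python (sequential here, but each output cell could run on its own core):

```python
def matmul(A, B):
    """Matrix product of A (m x n) and B (n x p). Each output cell is an
    independent dot product -- the kind of work a GPU spreads across
    thousands of cores at once, while a CPU grinds through it serially."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```

A framework like CUDA essentially assigns each `(i, j)` output cell (or a tile of them) to its own thread, which is why the same computation that takes hours on a CPU can take minutes on a GPU.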

Nvidia's newest card, the Titan V, is the most powerful consumer-grade GPU ever released (Titan V Webpage). 

Source: Nvidia


Inside of it lies their new Volta architecture, which they say is the world's most advanced architecture. With 640 Tensor Cores and over 100 TeraFLOPS of performance, that's no lie; it really is the best AI-oriented card and architecture on the market. Though, at $3,000 it's a bit pricey. At least Titan users get free access to GPU-optimized deep learning software on Nvidia's GPU Cloud. How nice of them.

If you're interested, you can read the whitepaper on their Volta architecture here.
(Posted originally to Google Group on 12/16/17) 

Woebot - a chatbot for mental health?

Image credit: https://woebot.io/

What's old is new again! In a modern revival of the original chatbot Eliza, the world now has.....