Wednesday, January 3, 2018

Nvidia Releases New GPU Intended for Deep Learning, but How Does It Help?

It's no secret that California-based American tech company NVIDIA has an interest in artificial intelligence.

The "AI Computing Company", as they call themselves, have consistently made strides to further the GPU (Graphical Processing Unit) (more reading here) industry; in fact, back in 2006 they unveiled their CUDA programming model and Tesla GPU platform, both of which revolutionized computing by opening up the parallel-processing capabilities of the GPU to everyday computing. It goes back even farther though: NVIDIA is responsible for inventing the consumer-grade GPU itself back in 1999 (they claim to have invented the GPU itself in 1999, but similar tech has existed since the 70's). Their impact on the technology is clear, but how does that affect AI? 

Let's lay out some of the finer details first.

Once upon a time, database throughput and application performance were roughly proportional to the amount of available RAM and the number of CPUs. That changed quickly with the rise of NVIDIA and the GPU. It's easy to think of a GPU as something used only for graphical workloads like video games and 3D modeling; the GPU industry is largely synonymous with the gaming industry today, but it's so much more than that. To make the distinction, it helps to understand how GPU acceleration works under the hood.

Source: Nvidia


GPU-accelerated computing refers to the use of a GPU together with a CPU to accelerate applications in fields such as deep learning, engineering, and analytics. It works by offloading the compute-intensive portions of an application's code to the GPU while the remainder continues to run on the CPU. Furthermore, the architecture of a GPU is very different from that of a CPU. We've all heard of "quad-core" and "octa-core" CPUs, but why don't we hear of any "octa-core" GPUs? It's because they're already far beyond that: GPUs consist of thousands of smaller, more efficient cores designed to handle many tasks simultaneously (read: a massively parallel architecture) (Nvidia). That architecture means a GPU can chew through copious amounts of data far faster than a CPU can, as long as the work can be split into many independent pieces. It's easy to see where this is headed.
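As a rough sketch of that CPU/GPU split (the array size, names, and launch configuration are assumptions for illustration, not taken from any NVIDIA sample), the host program below copies data to the GPU, hands the compute-heavy part to a kernel running across thousands of threads, and copies the result back, while everything else stays on the CPU:

```cuda
#include <cuda_runtime.h>
#include <vector>

// Kernel: each thread squares one element.
__global__ void square(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] = x[i] * x[i];
}

int main() {
    const int n = 1 << 20;                       // ~1 million elements
    std::vector<float> host(n, 2.0f);

    float *dev = nullptr;
    cudaMalloc((void **)&dev, n * sizeof(float));           // allocate GPU memory
    cudaMemcpy(dev, host.data(), n * sizeof(float),
               cudaMemcpyHostToDevice);                      // offload the data

    int blocks = (n + 255) / 256;                // enough 256-thread blocks to cover n
    square<<<blocks, 256>>>(dev, n);             // compute-heavy part runs on the GPU

    cudaMemcpy(host.data(), dev, n * sizeof(float),
               cudaMemcpyDeviceToHost);                      // bring results back to the CPU
    cudaFree(dev);
    return 0;
}
```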

Let's backtrack a bit and tackle our main question: how does Nvidia's impact on the GPU market affect AI?

The answer lies in GPU deep learning.

GPU deep learning is an advanced machine learning technique that has taken the AI and cognitive computing industries by storm. It uses neural networks (and more) to power computer vision, speech recognition (OK Google...), autonomous cars, and much more. The neural networks that drive these projects perform very complex statistical computations in an attempt to find patterns in what are often incredibly large sets of data. This is where the GPU comes in: by performing those computations across thousands of cores at once, a GPU dramatically increases overall throughput and cuts the time the computations take. Thanks to this architecture we are able to experiment with AI techniques that simply weren't possible (or practical) before (Forbes).
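To make those "complex statistical computations" concrete: most of the arithmetic in a neural network layer reduces to large matrix multiplications. The naive CUDA kernel below (purely illustrative; production libraries such as cuBLAS and cuDNN are far more sophisticated) assigns one output element to each GPU thread, which is exactly why thousands of cores translate into higher throughput:

```cuda
// Naive dense-layer style matrix multiply: C = A (m x k) * B (k x n).
// Each GPU thread computes one element of C, so an m*n-element output
// is produced by m*n threads working in parallel.
__global__ void matmul(const float *A, const float *B, float *C,
                       int m, int k, int n) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < m && col < n) {
        float sum = 0.0f;
        for (int i = 0; i < k; ++i)
            sum += A[row * k + i] * B[i * n + col];
        C[row * n + col] = sum;
    }
}
```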

Nvidia's newest card, the Titan V, is the most powerful consumer-grade GPU ever released (Titan V Webpage). 

Source: Nvidia


Inside it lies their new Volta architecture, which they bill as the world's most advanced GPU architecture. With 640 Tensor Cores and over 100 teraFLOPS of deep learning performance, that's no empty boast; it really is the most capable AI-oriented card and architecture on the market. Though at $3,000, it's a bit pricey. At least Titan users get free access to GPU-optimized deep learning software on Nvidia's GPU Cloud. How nice of them.
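Those Tensor Cores are exposed to CUDA programmers through the warp matrix multiply-accumulate (WMMA) API introduced alongside Volta. The kernel below is a hedged sketch of the basic building block, one warp multiplying a 16x16 half-precision tile and accumulating into a float tile; the matrix layouts and leading dimensions are assumptions chosen for illustration.

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp cooperatively multiplies a 16x16 half-precision tile of A and B
// and accumulates into a 16x16 float tile of C on a Volta Tensor Core.
__global__ void tensor_core_tile(const half *A, const half *B, float *C) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);               // start with C = 0
    wmma::load_matrix_sync(a_frag, A, 16);           // leading dimension = 16
    wmma::load_matrix_sync(b_frag, B, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // D = A*B + C in one operation
    wmma::store_matrix_sync(C, c_frag, 16, wmma::mem_row_major);
}
```

(WMMA requires compiling for compute capability 7.0 or later, e.g. nvcc -arch=sm_70.)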

If you're interested, you can read the whitepaper on their Volta architecture here.
(Posted originally to Google Group on 12/16/17) 

3 comments:

  1. Nvidia has been doing lots of innovative work lately. Aside from GPU applications in mining for crypto-coins and data analytics, I am curious to see how this company might expand the capabilities of virtual reality software.

  2. As a PC gamer, I have been using Nvidia graphics cards for optimal performance. I had no idea how GPUs really worked. I think it's very interesting how GPU acceleration works and how there are actually thousands of cores inside a GPU. It's also pretty cool how GPUs can be used for A.I.

  3. Yes, there are thousands of cores in a GPU, but note that they are much more limited in what they can do than a general-purpose CPU core. GPU cores are small and optimized for certain operations but can't handle others, so they're good for doing the same simple calculations on a huge volume of data. This is what makes them good for rendering graphics (processing each coordinate point in parallel) or training neural networks (multiplying/adding numbers and updating weights for a lot of nodes in parallel).

