I’m Danny Shapiro. I’m the senior director of automotive at NVIDIA. NVIDIA is a technology company that’s developed a lot of different computing platforms. Specifically, we’ve developed an AI supercomputer for self-driving vehicles, whether that’s cars, shuttles, trucks, or any other kind of vehicle that we’ll soon see on our roads and maybe even in the skies above us. So NVIDIA is not making cars, but NVIDIA has its own fleet of self-driving test vehicles, which we’ve dubbed BB8, an advanced robotic vehicle, and we’re testing those here on the streets in California, in New Jersey, and in Germany as well.

We’ve been working with the auto industry for almost two decades now. In fact, every car, truck, plane, or other mode of transportation, and even consumer goods, are designed on NVIDIA graphics hardware and software systems. About a decade ago, we started bringing our technology into the car. We started working on infotainment systems, digital cockpits, and head-up displays. More recently, though, because of the role we’re playing in artificial intelligence, our technology is being used for self-driving vehicles. So instead of just generating information going out, in the form of pixels on displays, we’re now also processing information coming in. Radar, lidar, and camera information all comes into the NVIDIA system, and we’re using artificial intelligence to make sense of it.

Now we have a specific product just for vehicles, and it’s called Drive PX. It’s an AI supercomputer that has the power of over 150 MacBook Pros, all in a very small form factor, about the size of a license plate. Essentially, all the inputs come into the Drive PX: cameras, and sensors like radar, lidar, and ultrasonic, all generating data. So our processor has to take all this information and make sense of it: basically identify what’s another vehicle, what’s a pedestrian, what’s a crosswalk, what’s a sign. And all of this happens within a fraction of a second.
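The perception step described above, taking in camera, radar, and lidar data and labeling each object within a fraction of a second, can be sketched as a simple loop. This is a minimal illustrative sketch only; none of the names here come from NVIDIA's actual Drive PX software, and the lookup-table "classifier" stands in for a trained deep neural network:

```python
from dataclasses import dataclass

# Illustrative sensor frame: in a real system this would hold raw
# camera pixels, radar returns, and lidar point clouds.
@dataclass
class SensorFrame:
    camera: list
    radar: list
    lidar: list

# Hypothetical stand-in for a trained neural-network classifier:
# maps a simplified detection signature to an object label.
def classify(detection):
    labels = {
        "boxy_large": "vehicle",
        "upright_small": "pedestrian",
        "flat_striped": "crosswalk",
        "flat_octagon": "sign",
    }
    return labels.get(detection, "unknown")

def perceive(frame: SensorFrame):
    # Fuse all sensor inputs into one list of candidate detections,
    # then label each one. In a real pipeline, this entire pass has
    # to finish within a small fraction of a second per frame.
    detections = frame.camera + frame.radar + frame.lidar
    return [classify(d) for d in detections]

frame = SensorFrame(camera=["boxy_large"],
                    radar=["upright_small"],
                    lidar=["flat_octagon"])
print(perceive(frame))  # ['vehicle', 'pedestrian', 'sign']
```

The point of the sketch is the shape of the problem, not the implementation: many heterogeneous inputs are fused and then classified on a hard real-time budget.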
We have cameras generating 30 frames every second, and we have to analyze those frames and understand the full 360-degree environment around the vehicle. The GPU, or graphics processing unit, was invented by NVIDIA in 1999. It’s a massively parallel processor, and it’s different from the CPU, or central processing unit, that you find in your PC. You’ve probably heard a CPU described as dual core or quad core; essentially, that means there are two lanes, or maybe four lanes, of information that can flow through that CPU at once. The GPU, on the other hand, now has over five thousand cores. So imagine if this were a highway: instead of a two-lane or four-lane highway, we now have 5,000 lanes. A lot of traffic, a lot of information, can go through that processor simultaneously. And then we’re able to bring all that computational horsepower down into the car, in the Drive PX, to process all the sensor data in real time.

Drive PX 2 is one of several products from NVIDIA that all use deep learning. So we have the system for the car, but developers can also develop on a PC, use deep learning in the cloud, or use an embedded device called Jetson if they’re a hobbyist or a maker. The benefit of this unified architecture is, again, that you develop on one platform and you can deploy on any platform. And so I think that’s the challenge today for students, and now the industry: to develop new algorithms and new deep neural networks, and to leverage AI to build a complete self-driving car system that is specific to the vehicle. Given the types of jobs out in the marketplace today and the lack of talent, I think there’s a lot of opportunity for anyone just getting started who can take courses to understand the fundamentals of computing today.
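The highway analogy above, and the 30-frames-per-second requirement, can be made concrete with a little arithmetic. This is a back-of-the-envelope sketch, not a performance model; the 100,000-item workload is an assumed, illustrative number:

```python
def passes_needed(items: int, lanes: int) -> int:
    # Ceiling division: with a fixed per-item cost, the number of
    # "lanes" (cores) sets how many sequential passes are needed
    # to push the whole workload through the processor.
    return -(-items // lanes)

workload = 100_000  # e.g. image regions to evaluate per camera frame (illustrative)
print(passes_needed(workload, 4))      # quad-core CPU: 25000 passes
print(passes_needed(workload, 5_000))  # 5,000-core GPU: 20 passes

# The real-time budget: a camera producing 30 frames per second
# leaves roughly 33 ms to fully analyze each frame.
budget_ms = 1000 / 30
print(round(budget_ms, 1))  # 33.3
```

The ratio is the whole story of the analogy: the same workload takes 1,250 times fewer passes through 5,000 lanes than through 4, which is what makes a 33-millisecond-per-frame budget reachable.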
And now there’s a whole number of things that are unique to the self-driving challenge, so understanding the complexity of what happens throughout the computing pipeline of developing a self-driving car will be essential to getting a job in this industry.