TECH | Feb 20, 2019

From dinosaurs to Artificial Intelligence

The first Jurassic Park showed what graphics processors could do. The many applications in the field of AI have made them indispensable. A history (and growth) of the GPU market.

Raise your hand if you have never seen Jurassic Park at least once. Raise your hand if you do not remember the now-legendary tyrannosaur which, advancing through the mud, haunted the dreams of a generation for years. Jurassic Park was one of the first films to use special effects generated by computer graphics (4 minutes of them). And perhaps not everyone knows that the story of Artificial Intelligence also passes through that film.

Rendering and Artificial Intelligence

Let’s start from the beginning. When an artist crystallizes an idea or a new character on paper with pencil and ink, his or her sketches are passed to digital graphics specialists, whose job is to give body, volume and color to the two-dimensional drawings. Sitting at their computers, these experts use various techniques to build virtual three-dimensional models of the initial sketch, made up of hundreds of thousands of tiny “polygons” that are rotated, deformed and then animated. The models are colored and lit to reproduce incredibly realistic shapes, colors, vapors and splashes. Roughly speaking, this is digital rendering, now the basis of all special effects in movies and video games.
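
To make the idea concrete, here is a minimal sketch (in Python with NumPy, assumed to be available) of the kind of geometry a renderer applies to every polygon vertex: a rotation followed by a simple perspective projection onto the 2D image plane. Real rendering pipelines add lighting, texturing and rasterization on top of this, but at their core they repeat this sort of matrix arithmetic millions of times per frame.

```python
import numpy as np

# Three vertices of a single triangle ("polygon") in 3D space
triangle = np.array([
    [0.0, 1.0, 3.0],
    [-1.0, -1.0, 3.0],
    [1.0, -1.0, 3.0],
])

def rotate_y(vertices, angle_rad):
    """Rotate vertices around the vertical (Y) axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rotation = np.array([
        [c, 0.0, s],
        [0.0, 1.0, 0.0],
        [-s, 0.0, c],
    ])
    return vertices @ rotation.T

def project(vertices, focal_length=1.0):
    """Simple pinhole perspective projection onto the 2D image plane."""
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    return np.stack([focal_length * x / z, focal_length * y / z], axis=1)

# One "animation step": rotate the triangle a little, then project it
rotated = rotate_y(triangle, np.deg2rad(30))
print(project(rotated))
```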

Generating a single “rendered” frame can require an impressive number of calculations, which are delegated to dedicated hardware components, the graphics cards, which transform polygons and vertices into video signals, i.e. images. At the heart of a graphics card is the GPU (Graphics Processing Unit), a processor designed to let spectators and gamers enjoy impressive special effects.

The role of GPUs

It was precisely the large-scale availability of programmable and low-cost GPUs that constituted one of the factors which allowed Artificial Intelligence systems to forcefully take center stage on the market.

The first large-scale commercial implementations of GPUs arrived in the late 1990s, when microchip manufacturers such as NVIDIA and ATI (later acquired by AMD) began to market the first programmable graphics cards, mainly to improve the user experience in popular video games.

While the heart of the machine, the CPU (Central Processing Unit), was designed to process a sequence of instructions as quickly as possible, GPUs target the execution of large volumes of calculations in parallel while keeping energy consumption low. To give an idea, according to some estimates, an exaflop-class supercomputer built on standard CPUs would consume as much energy as the cities of San Francisco and San José combined.
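
The difference is easy to see in practice. The sketch below, assuming PyTorch is installed and a CUDA-capable GPU is available, times the same large matrix multiplication on the CPU and on the GPU, the kind of data-parallel workload GPUs were built for.

```python
import time
import torch

def time_matmul(device, size=4096, repeats=10):
    """Average time for one large matrix multiplication on the given device."""
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    if device.type == "cuda":
        torch.cuda.synchronize()  # make sure setup work has finished
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    if device.type == "cuda":
        torch.cuda.synchronize()  # wait for the GPU to drain its queue
    return (time.perf_counter() - start) / repeats

cpu = torch.device("cpu")
print(f"CPU: {time_matmul(cpu):.3f} s per multiplication")

if torch.cuda.is_available():
    gpu = torch.device("cuda")
    print(f"GPU: {time_matmul(gpu):.3f} s per multiplication")
```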

These features made GPUs excellent candidates not only for rendering digital models, but also for executing computation-intensive algorithms that can be run in parallel, such as Data Mining or Deep Learning workloads. And that is how the new processors began to impose new programming paradigms on a large scale.
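
In modern Deep Learning libraries this parallelism is almost transparent to the programmer: the same code runs on a CPU or a GPU depending on where the model and the data are placed. A minimal sketch, again using PyTorch as an assumed example library, with a toy model and random stand-in data:

```python
import torch
import torch.nn as nn

# Use the GPU when one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tiny fully connected classifier, just to illustrate device placement
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
).to(device)                      # move the model's parameters to the chosen device

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step on a random batch (stand-in for real data)
inputs = torch.randn(32, 128, device=device)
targets = torch.randint(0, 10, (32,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()                   # gradients are computed on the same device
optimizer.step()
print(f"training step completed on {device}, loss = {loss.item():.4f}")
```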

Over the last twenty years the GPU market has experienced unstoppable growth, and the GPU has become the symbol of high-performance computing, now essential in many business sectors, such as finance, security, automotive, text analytics, image processing and autonomous driving systems. For example, millions of images are required to train a facial recognition system, and it would be unthinkable to do so without budgeting for the purchase or rental of a GPU, which can cut computing time for some operations by a factor of 50-100 compared with a standard CPU.

Some excellent teaching and tutorial portals aimed at those who want to approach the world of Artificial Intelligence, such as fast.ai, not only promote the use of GPU-backed platforms, but also suggest how to size them.

GPUs and Analytics

To understand the importance of GPUs in the Analytics market, we need to look at how deeply they have penetrated the world of in-cloud solution providers. Today it is very easy to find PaaS (Platform as a Service) or IaaS (Infrastructure as a Service) offerings based on GPUs with pay-per-use formulas, specifically designed for Machine Learning and Deep Learning. These include Microsoft Azure, AWS and Google Cloud, but also PaperSpace, Crestle and FloydHub. Their cost? Starting from roughly 0.5 euros per hour.
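
As a rough back-of-the-envelope example (the 0.5 euros/hour figure above is only an order of magnitude, and real prices vary by provider and GPU type), the cost of a pay-per-use training campaign can be estimated like this:

```python
def training_cost(hours_per_run, runs, rate_eur_per_hour=0.5):
    """Rough cost estimate for GPU training at a pay-per-use hourly rate."""
    return hours_per_run * runs * rate_eur_per_hour

# e.g. 20 experiments of 8 hours each at ~0.5 EUR/hour (illustrative numbers only)
print(f"Estimated cost: {training_cost(hours_per_run=8, runs=20):.2f} EUR")
```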

And while the Artificial Intelligence market feeds the GPU race, the major Artificial Intelligence service providers are announcing new microchips designed specifically for executing neural networks: from Google’s TPUs (Tensor Processing Units) to the MobilEye chipsets built specifically for autonomous driving.

This scenario is driving far-reaching transformations in the analytics market. Although many development libraries now try to hide the complexity of these microprocessors, the technical expertise needed to program in environments equipped with GPUs, multi-core CPUs or distributed Big Data systems (such as machine clusters) can differ significantly. These differences often translate into complementary professional skills among Data Scientists, Data Engineers and Big Data experts, who are called on the one hand to know the limits and potential of their own infrastructure perfectly, and on the other to explore new algorithms that meet business requirements such as volume, speed and scalability.

It is rather amusing to think that today, in the field of Artificial Intelligence, in the era of the cloud and virtual machines, it is still a perfect mastery of pairing hardware with algorithms that can translate into a formidable business advantage.

I remember how the first friends who introduced me to the world of computers brought up one of the oldest metaphors in history to explain the difference between software and hardware: “Software is the soul of the machine, hardware is its body”. Today, mind and body are once again one. But was it ever any different?

 

Michele Gabusi

 
