
How NVIDIA became the world's first $4trn company

Last week, Nvidia – the chip designer whose hardware lies at the heart of the artificial intelligence (AI) revolution – hit a momentous milestone, becoming the first company in the world to achieve a market capitalisation (market cap) of $4trn. 

It’s an incredible feat. Three years ago, far from being the world’s most valuable company, Nvidia had barely scraped into the top 10. Back then, its market cap was hovering around the $500bn mark – nothing to be sniffed at, but less than half the level required to join the prestigious $1trn club, and less than a quarter of what companies like Apple and Microsoft were worth at the time.

It’s strange to think today that this technology powerhouse, whose semiconductor technology is now effectively synonymous with the latest AI and machine learning systems, was for most of its history a relatively unheard-of business: one that enjoyed cult status in the gaming community, but was rarely mentioned in conversations about the world’s most innovative technology companies.

So to celebrate Nvidia’s meteoric rise to becoming the world’s first $4trn company, we’re going to take a look back over its history and see how this gaming chip designer came to underpin a global revolution. 

 

Origins

Nvidia was founded in April 1993 by Chris Malachowsky, Curtis Priem and the man who, nowadays, is as synonymous with Nvidia as Nvidia is with AI: CEO Jensen Huang. The three worked together at Sun Microsystems – Huang was contracted to the project through LSI Logic – where they developed one of the world’s first graphics cards in the early 1990s.

Sun moved away from the architecture that Malachowsky and Priem worked on over the following years, prompting both to quit their posts. They persuaded Huang to leave LSI and found a business with them, designing graphics cards for video games. Nvidia was born. 

It wasn’t the most prestigious of beginnings – the three decided to start the business at a nearby Denny’s diner. But there was a certain poetry about the location, as Huang had worked as a dishwasher and waiter at the chain as a teenager. 

“I consider [Denny’s] my first company,” Huang told Sequoia Capital’s Crucible Moments podcast in 2023. “I really liked Denny’s.”

The name they chose for the company is said to be derived from the trio’s fixation on the ‘next version’, or NV for short. Constant innovation was baked into Nvidia’s essence from the outset (presumably along with bucketloads of strong coffee).

 

Winning the gaming world

Graphics cards like those the three had developed at Sun Microsystems had enabled video games to move beyond what were essentially text terminals and incorporate increasingly complex graphics. 

Nvidia first made a name for itself in this emerging world in 1999 when it released the GeForce 256, which it marketed as the world’s first fully integrated graphics processing unit (GPU).

GPUs were transformative because they reduced the workload of the central processing unit (CPU). Until then, the CPU had to process everything involved in rendering game graphics as well as running the rest of the system, and it handled these tasks sequentially: one at a time.

But GPUs took this burden away from the CPU, allowing far more complex games and graphics to be rendered. They achieved this through parallel processing: running many computations at the same time. Parallel processing was a total game-changer for gaming, and as it happened, it would have far-reaching applications that Huang and co could scarcely have guessed at back in the late 90s.
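As a rough illustration of the difference (a toy Python sketch, not how a GPU actually works internally), compare transforming an array of pixels one at a time with expressing the same operation as a single data-parallel step:

```python
import numpy as np

# CPU-style serial processing: handle one pixel at a time, in order.
def brighten_serial(pixels, amount):
    result = []
    for p in pixels:
        result.append(min(p + amount, 255))
    return result

# GPU-style data-parallel processing: express the whole operation at once,
# so hardware with many cores can apply it to thousands of pixels
# simultaneously. (NumPy merely emulates that idea here on the CPU.)
def brighten_parallel(pixels, amount):
    return np.minimum(pixels + amount, 255)

pixels = np.array([10, 120, 250])
assert brighten_serial(pixels, 40) == [50, 160, 255]
assert brighten_parallel(pixels, 40).tolist() == [50, 160, 255]
```

The serial version visits each element in turn; the parallel version describes one uniform operation over the whole array, which is the shape of work a GPU accelerates.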

The innovation prompted Nvidia’s share price to double in the year after the GeForce 256’s release, and saw rivals like ATI (later acquired by AMD) scramble to develop competitors of their own.

 

An intelligent gamble

Nvidia, though, was one step ahead: focused on the next version while its competitors were only just catching up. 

Huang spotted early on that GPUs and parallel processing would be of interest outside the confines of the gaming world. In 2006, Nvidia released CUDA, which originally stood for Compute Unified Device Architecture. CUDA was a parallel computing platform and programming model that allowed third-party developers to tap the processing power of GPUs for their own applications. 
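CUDA programs are written in C/C++, but the heart of the model can be sketched in Python: you write a “kernel” describing what a single thread does to a single data element, then launch it across the whole dataset. The sketch below emulates that idea; on a real GPU the iterations would run concurrently across thousands of cores.

```python
import numpy as np

# "saxpy" (a*x plus y) is the classic introductory CUDA example.
# Each thread i computes just one element of the result.
def saxpy_kernel(i, a, x, y, out):
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    # On a GPU, these n kernel invocations execute in parallel;
    # here we simply loop to emulate the launch.
    for i in range(n):
        kernel(i, *args)

x = np.array([1.0, 2.0, 3.0])
y = np.array([10.0, 20.0, 30.0])
out = np.zeros_like(x)
launch(saxpy_kernel, len(x), 2.0, x, y, out)
assert out.tolist() == [12.0, 24.0, 36.0]
```

The key design idea is that the programmer describes per-element work and the hardware supplies the parallelism, which is what made GPU power accessible to developers outside graphics.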

CUDA was initially geared towards scientific researchers, for tasks such as data processing and rendering simulations. This application of GPUs to fields outside gaming became known as general-purpose computing. Just as GPUs had for gaming, general-purpose computing marked a step-change for scientists, researchers and engineers.

So during the mid-2000s, Nvidia was already spreading its wings beyond the world of gaming graphics, and this foray into academia would eventually pave the way for what is now its raison d’être: AI.

Huang soon became aware of several research groups that were looking into innovative new approaches to AI and machine learning. One of these was led by Andrew Ng, a deep learning researcher at Stanford University who would go on to co-found Google Brain and become one of the biggest names in AI (besides, of course, Huang himself). 

Ng’s group found that using GPUs via CUDA for their research into neural networks sped up their results by a factor of 10 or even 100, because it allowed them to perform thousands of operations simultaneously.
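The reason for that speed-up is that neural-network workloads are dominated by large matrix multiplications: huge numbers of independent multiply-adds, exactly the kind of uniform arithmetic a GPU spreads across thousands of cores. A minimal, illustrative NumPy sketch of one network layer (not Ng’s actual code):

```python
import numpy as np

# A single fully-connected layer is one big matrix multiplication plus an
# activation. For a batch of 256 examples with 1,024 features mapped to
# 512 outputs, that is 256 * 1024 * 512, about 134 million multiply-adds,
# all independent of one another and therefore highly parallelisable.
def dense_layer(inputs, weights, bias):
    return np.maximum(inputs @ weights + bias, 0.0)  # ReLU activation

rng = np.random.default_rng(0)
batch = rng.standard_normal((256, 1024))
weights = rng.standard_normal((1024, 512))
bias = np.zeros(512)

activations = dense_layer(batch, weights, bias)
assert activations.shape == (256, 512)
```

Training repeats operations like this millions of times, which is why moving them from a sequential CPU to a parallel GPU yielded such dramatic gains.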

Huang saw potential in neural networks and repositioned CUDA as a specialised platform for AI development. That was an enormous gamble: there was no market for AI chips in the late 2000s or early 2010s, and the company, though doing well, didn’t have unlimited resources. 

“At this time, we’re public, we’re a multi-billion dollar company, we’re actually successful now,” said Huang on Crucible Moments. “We’ve dodged several life-threatening challenges. Nobody wants to derail the company. They want to defend the company and protect the company.”

But Huang was so convinced by his vision that he went all-in on this nascent technology.

“This was a giant pivot for our company,” he added. “We’re adding cost, we’re adding people, we have to learn new skills. It took our attention away from our normal day-to-day competition in computer graphics and gaming. The company’s focus was steered away from its core business. 

“And it wasn’t just in one place, it’s all over the company. It was a wholesale pivot in this new direction.”

 

Nvidia’s rise to greatness

As with all innovators, Nvidia was way ahead of its time, and it’s a good thing too. AI research bubbled away largely in the background as far as the general public was concerned for much of the next decade. But exciting things were happening behind the scenes.

Nvidia’s GPUs were used to train AlexNet, the neural network whose victory in the 2012 ImageNet competition highlighted the feasibility of using neural networks for computer image recognition. In the middle of the decade, Nvidia’s GPUs helped power AI assistants like Siri, which were gaining prominence at the time. 

As the 2010s gave way to the 2020s, Nvidia chips underpinned several natural language processing models, including OpenAI’s GPT-3. In 2020, GPT-3 began to attract the attention of technologists by setting new standards in context awareness and natural text generation.

Nvidia was front and centre in the emergence of neural networks, largely because (thanks to Huang’s gamble) it had very little competition. Most other chip designers, especially Intel – the industry’s biggest player at the time – were fixated on the smartphone revolution which had been taking place. Nvidia, though, was looking one step ahead.

AI jumped into the mainstream in a big way in November 2022, when OpenAI launched ChatGPT to the public. 

You know the rest of the story from there. The world went mad for AI in all its forms, and Nvidia – effectively the only company in the world with the knowledge to design the best-performing AI chips – rocketed to success. 

Its revenue increased by more than 380% in three years, and its market cap has shot from around $500bn to $4trn.

 

The future

True to its name and founding principles, Nvidia won’t rest on its laurels, content with past success. It is already looking ahead to the next version of innovation and technological progress.

Huang is already putting Nvidia at the forefront of developing national AI infrastructure projects, such as recently announced plans to deploy 14,000 Blackwell GPUs across new data centres in the UK. Huang believes that this AI infrastructure industry alone will be worth trillions of dollars. 

He has also spoken of his vision of an era of “physical AI”, driven by robots that understand concepts like inertia and friction.

Whatever form the next phases of the AI revolution take, it’s safe to assume Nvidia, which played such a key role in its beginnings, will continue to innovate and drive the next version of the future.

 
