Case Study
NVIDIA
Jan 14, 2025
This case study aims to provide a concise snapshot of NVIDIA’s significance in AI and accelerated computing, shedding light on how the company’s pioneering work in GPUs has catalyzed many of today’s leading-edge AI breakthroughs. Additionally, we’re making an Excel valuation model available for download, so readers can delve deeper into NVIDIA’s financial prospects and arrive at their own valuation conclusions.
Key points
NVIDIA’s pioneering role: Over the past decade, NVIDIA has revolutionized AI by offering a holistic technology stack—spanning GPUs, networking solutions, and advanced software platforms like CUDA. This integrated approach has become a cornerstone for training and running massive AI models that power everything from cutting-edge research to real-world applications.
Enabling advanced AI: Thanks to NVIDIA’s accelerated computing solutions, trillion-parameter models have become a reality, ushering in an era where AI can tackle highly complex tasks in fields ranging from autonomous driving to scientific simulations.
Recent financial results: NVIDIA delivered another record-breaking quarter in Q3 FY 2025, with revenue reaching $35.1 billion, exceeding analyst expectations of $32.5 billion. This performance marked a 17% sequential increase and an impressive 94% year-on-year growth.
Valuation perspective: Our preliminary DCF analysis yields a fair value range of approximately $97–$107 per share. In contrast, the stock recently traded around $132 (as of January 14, 2025). This divergence could reflect the market’s optimism about NVIDIA’s central role in the AI revolution, the potential for further margin expansion and faster growth, or a premium for its leading position in accelerated computing.
Readers can download the Excel model here to explore our assumptions, run sensitivity analyses, and draw their own conclusions about NVIDIA’s fair value.
Disclaimer: This case study is provided for educational and informational purposes only and should not be construed as financial or investment advice. All opinions expressed herein are solely those of the author and do not constitute recommendations to buy, sell, or hold any security. Readers are encouraged to conduct their own due diligence and consult qualified professionals before making any investment decisions.
Introduction
Today, NVIDIA stands at the forefront of the accelerated computing revolution, with its GPUs powering an overwhelming majority of the top AI models and supercomputers worldwide. From cutting-edge language models to medical research and high-performance computing (HPC) tasks, NVIDIA’s hardware and software ecosystem has proven indispensable for organizations that require massive computational horsepower. But how did they get there?
Brief history & evolution of NVIDIA
NVIDIA was founded in 1993 by Jensen Huang, Chris Malachowsky, and Curtis Priem at a time when 3D graphics for gaming and professional visualization were still in their infancy. The company’s early focus on graphics processing units (GPUs) quickly set it apart from competitors in the emerging multimedia market. By consistently pushing the boundaries of 3D rendering and performance, NVIDIA established a dominant foothold in the gaming industry—powering everything from high-end gaming PCs to professional workstations.
A pivotal milestone arrived in 2006 with the introduction of CUDA, a parallel computing platform that allowed developers to harness the raw power of GPUs for much more than just graphics. CUDA revolutionized parallel computing by making it easier to write software for GPU acceleration, catalyzing breakthroughs in fields like scientific research, data analytics, and eventually, artificial intelligence.
My background is in Computational Fluid Dynamics (CFD), another field that demands massive computing resources. Through firsthand experience in CFD, I saw CUDA deliver superior computing efficiency compared to alternatives from AMD and Intel, long before it became widely known for its role in AI computing. When our team offloaded part of our compute-intensive CFD meshing calculations to GPUs using CUDA on NVIDIA hardware, we achieved speed-ups several times greater than with comparable alternative stacks. This real-world exposure shapes my appreciation for NVIDIA's holistic approach to hardware and software, and underscores why the company continues to outpace many rivals in performance and developer adoption.
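To make the idea of GPU offloading more concrete, below is a minimal sketch of what CUDA-style acceleration looks like from a developer's point of view. It uses Python with the numba library as one convenient route to NVIDIA's CUDA runtime (production CFD and AI codes typically use the native C/C++ toolkit); the kernel, array sizes, and launch configuration are purely illustrative, and running it requires a CUDA-capable GPU.

```python
# Minimal CUDA offload via numba: each GPU thread handles one array element.
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)              # absolute thread index across the whole grid
    if i < out.size:              # guard against threads beyond the array length
        out[i] = a * x[i] + y[i]

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](2.0, x, y, out)   # launch the kernel on the GPU

assert np.allclose(out, 2.0 * x + y)
```

Each GPU thread computes one element of the result in parallel, which is the same basic pattern that lets GPUs churn through the dense linear algebra behind neural networks.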
Initially, GPUs were best known for gaming—rendering the cutting-edge visuals that made PC gaming popular. Over time, however, researchers discovered that the same parallel processing capabilities powering video games could also handle the intensive math behind high-performance computing (HPC) and AI workloads. This marked a transformative shift: NVIDIA moved beyond gaming to become a critical enabler of machine learning and deep learning applications worldwide.
Today, trillion-parameter AI models are pushing the limits of computational demands, and NVIDIA’s GPUs, alongside its advanced software stack, have become the backbone for training and running these massive systems. From autonomous vehicles to next-generation language models, NVIDIA’s technology underpins much of the AI revolution, illustrating just how far the company has come from its early roots in 3D graphics.
Why NVIDIA leads the AI revolution today
Traditional CPU performance growth has slowed in tandem with the plateauing of Moore’s Law, driving the need for specialized architectures capable of handling massive parallel workloads. NVIDIA addresses this challenge with an all-in-one stack that integrates high-powered GPUs, advanced networking (Mellanox), and robust software libraries. By tightly coupling hardware, interconnect, and software, NVIDIA drastically reduces the time it takes to train and infer large AI models—an increasingly crucial advantage as the scope and complexity of AI applications soar.
A major part of NVIDIA’s competitive advantage stems from CUDA, its proprietary parallel computing platform that made GPU programming more accessible to developers and researchers. This platform not only provides a high switching cost—once teams standardize on CUDA, switching to another GPU vendor can be expensive and disruptive—but also attracts continuous developer innovation. Combined with NVIDIA’s ongoing R&D investments, CUDA ensures the company stays ahead in a race where hardware alone isn’t enough; the supporting ecosystem and software stack are equally vital.
NVIDIA’s leadership is further underscored by the fact that over 75% of the world’s top 500 supercomputers rely on its GPUs and networking solutions, according to the company’s own data. From medical research and scientific simulations (including CFD) to natural language processing, NVIDIA’s technology repeatedly sets new performance benchmarks. By continually pushing the boundaries of HPC and AI infrastructure, the company has proven itself indispensable to industries seeking exponential gains in computational capabilities.
While it’s clear that NVIDIA stands to be one of the greatest beneficiaries of the current AI revolution, this success is well-earned. After all, it was NVIDIA’s groundbreaking work in accelerated computing and its holistic approach to hardware and software that enabled much of today’s AI boom in the first place.
The core business segments
1. Data Center
Data centers represent NVIDIA’s largest and fastest-growing segment (78% of FY24 revenue), fueled by surging demand for AI/ML, high-performance computing (HPC), and cloud adoption. Traditional CPUs alone can’t keep pace with the parallel processing needs of massive datasets, so NVIDIA’s GPUs have become the go-to solution for acceleration in training and inference workloads. Beyond GPUs, NVIDIA’s Mellanox acquisition in 2020 added high-speed networking to its portfolio, enabling a holistic approach to data center infrastructure. With silicon design hitting physical and technological limits, seamless GPU + networking synergy is increasingly critical to sustaining performance gains. This strategic move bolstered NVIDIA’s ability to deliver an end-to-end computing stack—hardware, interconnect, and software—that efficiently scales for everything from AI supercomputers to cloud data centers.
Source: Company
NVIDIA’s Data Center segment saw explosive growth in fiscal year 2024, surging 217% to reach $47.5 billion—by far the largest revenue contributor to the company’s overall results. The primary driver behind this remarkable expansion was unprecedented demand for AI and high-performance computing solutions, which touched nearly every industry vertical. Enterprise software and consumer internet firms in particular raced to upgrade their server and cloud infrastructures to handle massive AI workloads, while sectors like automotive, financial services, and healthcare also accelerated adoption of advanced data processing and analytics.
Additionally, NVIDIA’s focus on end-to-end solutions—combining its GPUs with high-speed networking (Mellanox) and specialized AI frameworks—resonated strongly with enterprise clients seeking holistic, plug-and-play performance enhancements. This integrated approach allowed data centers to scale rapidly, enabling faster model training and inference, and drove the record-breaking revenue reported in the Data Center segment.
2. Gaming
NVIDIA’s roots in gaming (17% of FY24 revenue) remain a major revenue driver and a foundation of brand recognition. The company dominates the high-end PC gaming market, where its GeForce GPUs are the de facto choice for enthusiast gamers seeking top-tier performance and visual fidelity. NVIDIA also extends this ecosystem with services like GeForce NOW, a cloud-based gaming platform that brings AAA gaming to devices lacking powerful local hardware. The gaming-as-a-service model further strengthens NVIDIA’s reach, enabling gamers to tap into GeForce GPUs remotely while reinforcing the company’s overall ecosystem dominance.
Source: Company
NVIDIA’s Gaming segment surged in fiscal year 2022, growing by 61% year over year. This spike was driven by several factors:
New Product Launches: Ramp-up of RTX 30 Series GPUs boosted sales, thanks to higher performance and ray tracing capabilities.
COVID-19 Tailwinds: With remote work and stay-at-home measures, consumers upgraded PCs for both gaming and professional needs.
Growing Popularity of Gaming: Esports and content creation thrived, further increasing GPU demand.
Cryptocurrency Mining: Although not the primary target market, cryptocurrency miners partially contributed to heightened GPU demand.
Despite supply constraints, these combined forces propelled the Gaming segment to record heights in FY2022.
Correction in FY2023
In fiscal year 2023, Gaming revenue declined 27% compared to the previous year as market dynamics shifted:
Inventory Realignment: Partners scaled back orders to address excess inventory built up during the pandemic boom.
Economic Headwinds: Rapid changes in economic conditions and COVID-19 disruptions (especially in China) tempered consumer demand.
Competitive Pressures: New competitor releases slightly eroded NVIDIA’s market share, although overall brand loyalty remained strong.
Despite some sequential improvements later in the year, the overall fiscal performance was significantly lower than the peak in FY2022.
Renewed Growth in FY2024
By fiscal year 2024, NVIDIA’s Gaming segment regained momentum, with revenue hitting $10.4 billion—a 15% increase over FY2023. Several factors drove this resurgence:
Normalized Channel Inventory: As channel partners worked through their surplus, sell-in levels rebounded, reflecting a healthier supply-demand balance.
New Product Launches: Introductions of GeForce RTX 4060 and 4070 (Ada Lovelace architecture) alongside the RTX 40 Super Series offered upgraded performance and generative AI capabilities, appealing to both gamers and creators.
Technological Advancements: Features such as DLSS (Deep Learning Super Sampling) delivered significant performance boosts and enhanced image quality, aligning with growing consumer interest in AI-driven gaming.
Strong Holiday Demand: The fourth quarter saw gaming revenue of $2.87 billion, flat sequentially but up 56% year over year, indicating sustained consumer appetite for high-performance gaming solutions.
Overall, NVIDIA’s Gaming segment has demonstrated a dynamic trajectory over the last three fiscal years—rising sharply in FY2022, adjusting to market realities in FY2023, and rebounding in FY2024 with robust product offerings and heightened consumer interest in AI-powered gaming experiences.
3. Professional Visualization
NVIDIA’s Professional Visualization segment (3% of FY24 revenue) caters to enterprise customers who rely on Quadro (or RTX) GPUs for tasks ranging from architectural design and simulation to advanced 3D rendering and animation. To elevate collaboration in virtual environments, NVIDIA introduced Omniverse—a platform for real-time 3D design, simulation, and digital twin creation. By offering a unified workspace in which multiple users can iterate on complex 3D projects simultaneously, Omniverse has gained traction in industries like architecture, automotive design, and film production. This software-focused value-add helps NVIDIA stand out in a field often defined strictly by hardware.
Source: Company
4. Automotive
Although still a smaller segment (2% of FY24 revenue) compared to Data Center and Gaming, Automotive represents a promising growth avenue. NVIDIA’s AI Cockpit systems and autonomous driving solutions integrate GPUs, sensors, and AI algorithms to power everything from infotainment to self-driving capabilities. The company has formed strategic partnerships with automakers and tier-1 suppliers, offering a scalable platform that can handle driver assistance features today and pave the way for fully autonomous vehicles in the future. As the automotive industry accelerates toward electrification and autonomy, NVIDIA’s blend of hardware prowess and software development environments positions it to capitalize on emerging opportunities.
Source: Company
Competitive landscape
NVIDIA's competitive landscape consists of two different groups of competitors. On one side, AMD and Intel stand out as the traditional semiconductor competitors, each striving to narrow the gap through their respective CPU and GPU solutions. On the other, a different class of specialized players—including Marvell, Broadcom, and Micron—focuses on ASICs, high-speed networking, and memory solutions, all of which are critical to modern data center architecture. Together, these competitors form a complex ecosystem in which each company targets different facets of the computing stack, from graphics and AI acceleration to storage, connectivity, and custom chip design.
While all these are formidable players, NVIDIA’s holistic ecosystem, software-centric strategy, and tight developer partnerships give it a significant edge—one that won’t be easily eroded, despite ongoing competitive efforts.
Traditional competitors
NVIDIA competes primarily with AMD and Intel in the semiconductor industry, particularly in the graphics processing unit (GPU) and data center markets. AMD and Intel both offer CPU and GPU solutions aimed at data center, gaming, and professional workloads. AMD has gained traction with its Radeon lineup in gaming and Instinct accelerators in the data center, while Intel has recently launched initiatives like Intel Arc for consumer GPUs and Ponte Vecchio for HPC/AI. Both companies are investing heavily in R&D to improve performance, energy efficiency, and software support. AMD has notably acquired Xilinx to enhance its data center and AI capabilities, and Intel has poured billions into foundry expansion and next-gen chip designs. Despite these efforts, NVIDIA’s CUDA ecosystem and end-to-end hardware+software approach remain formidable barriers for these rivals to overcome.
Graphics Processing Units (GPUs)
NVIDIA:
Strengths: Known for high-performance GPUs with advanced features like real-time ray tracing and AI-enhanced graphics. Their GeForce RTX series is highly regarded in the gaming community.
Market Position: Holds a significant market share in the discrete GPU segment, especially in high-end gaming and professional visualization.
AMD:
Strengths: Offers competitive GPUs under the Radeon brand, which often provide good performance at lower price points compared to NVIDIA. The RDNA architecture has improved AMD's standing in gaming and professional markets.
Market Position: Gaining traction in the gaming and data center markets, particularly with its Radeon RX series and the recent launch of RDNA 3 architecture. However, it still trails NVIDIA in terms of high-end performance and features.
Intel:
Strengths: Recently entered the discrete GPU market with its Arc series. Intel leverages its extensive resources and existing relationships in the industry.
Market Position: Currently, Intel's GPU offerings are in the early stages and primarily target the mid-range segment. It faces challenges in competing with NVIDIA and AMD in gaming and high-performance computing.
Data Center and AI
NVIDIA:
Strengths: Dominates the AI and deep learning market with products like the A100 and H100 Tensor Core GPUs. Its CUDA platform allows developers to leverage GPU computing for AI workloads effectively.
Market Position: Leading provider of AI infrastructure solutions, with a strong ecosystem and partnerships across various sectors.
AMD:
Strengths: The EPYC server CPUs, paired with AMD's Instinct accelerators, offer a competitive alternative in some HPC and server environments, but AMD lacks NVIDIA's level of GPU and software integration for AI workloads.
Market Position: Making strides in the data center market, particularly with its focus on high core counts and energy efficiency, but still lags behind NVIDIA in AI-specific applications.
Intel:
Strengths: Strong presence in CPUs for data centers, but its GPUs are still relatively new to the market. Intel is investing heavily in AI and machine learning capabilities.
Market Position: Holds a dominant position in server CPUs, which may help it leverage its GPU offerings in the future. However, it must overcome significant competition from NVIDIA's established AI solutions.
ASIC & networking players
Application-Specific Integrated Circuits (ASICs) take a different approach from general-purpose GPUs: they can deliver exceptional performance and power efficiency for narrowly defined tasks, such as certain cryptographic operations or specialized AI inference.
Marvell focuses on data infrastructure, designing semiconductors for storage, networking, and security applications. Their offerings target specialized functions like Ethernet connectivity and data center switching.
Micron is primarily known for its memory and storage solutions (DRAM, NAND, etc.), which are essential to AI and HPC workloads. Although not a direct GPU competitor, robust memory technology is crucial for data throughput in high-performance systems.
Broadcom has a broad portfolio that includes networking and connectivity chips, custom ASIC solutions, and storage products. They’re especially strong in switching ASICs for enterprise and cloud data centers.
These ASIC and networking players serve niche but critical functions, delivering high-speed connectivity or specialized data processing. While they don’t rival NVIDIA’s GPU-driven AI acceleration directly, they do compete for a slice of the data center budget. Their custom ASICs can be very efficient at single-purpose tasks, but typically lack the versatility and widespread software support of a general-purpose GPU ecosystem. Once fabricated, they offer little flexibility, making them harder to adapt when algorithms evolve or new workloads emerge.
GPUs, by contrast, are programmable across a wide range of tasks, from deep learning to high-performance computing. NVIDIA’s CUDA libraries and development tools provide a rich framework for building, optimizing, and scaling AI applications—a stark contrast to the more fragmented environment for ASICs. This combination of flexibility and robust software makes GPUs preferable for many organizations that want to future-proof their AI infrastructure rather than commit to a single specialized solution.
NVIDIA's competitive moat
One of NVIDIA’s most significant moats is its platform approach. Rather than offering standalone GPUs, the company provides an integrated stack of hardware (GPUs, networking), software libraries (CUDA, cuDNN, TensorRT), developer tools, and a large community that continually refines and contributes to these libraries. This synergy enables rapid iteration and optimization that competitors struggle to match.
Once developers or enterprise clients invest in CUDA-based workflows, switching to another platform involves retooling code, retraining staff, and potentially rewriting applications. This creates a developer lock-in effect that is both powerful and self-reinforcing. Additionally, ecosystem partners—ranging from cloud providers to AI startups—often optimize their solutions exclusively for NVIDIA hardware, further entrenching NVIDIA’s lead.
AMD, Intel, and custom ASIC vendors can offer comparable or even higher raw performance in certain benchmarks. However, without a similarly comprehensive ecosystem and developer community, they often struggle to gain widespread adoption—particularly in AI, where model architectures and software frameworks evolve rapidly, favoring a flexible, well-supported platform.
NVIDIA vs peers
In this section, we compare NVIDIA with its primary competitors: Intel (INTC), AMD (AMD), Marvell (MRVL), Micron (MU), and Broadcom (AVGO). We offer a bird's-eye view of how each has performed over the past few years and highlight NVIDIA's relative positioning in the semiconductor landscape.
YoY revenue growth trends
Source: Booga App
NVIDIA’s Surge: NVIDIA exhibits a marked spike in year-over-year (YoY) revenue growth starting around Q2 2024, maintaining exceptionally high growth rates through Q3 2025. This contrasts sharply with earlier quarters where NVIDIA’s growth was more in line with the industry average. The jump reflects explosive demand for accelerated computing and AI-driven solutions—particularly in data centers and high-end GPUs.
Traditional Semiconductor Players (Intel, AMD):
Intel shows fluctuating YoY growth, dipping into negative territory in several quarters (e.g., around Q1/Q2 2023) as it restructures its product roadmap and navigates a highly competitive CPU/GPU market.
AMD, while faring better than Intel in most quarters, still lags NVIDIA’s pace. Its YoY growth has been positive but moderate—impacted by timing of product launches and shifts in consumer demand.
ASIC & Networking Specialists (Marvell, Broadcom) and Memory (Micron):
Marvell (MRVL) sees periods of elevated growth—especially around network infrastructure cycles—but overall doesn’t approach NVIDIA’s recent peaks.
Broadcom (AVGO) maintains steady positive growth, reflecting its strong position in networking, connectivity, and enterprise infrastructure, but lacks the dramatic spike driven by generative AI demand.
Micron (MU) has experienced negative YoY growth in several quarters (notably Q3 2023 and around Q1 2024) due to memory market downturns, although it occasionally bounces back with cyclical industry recoveries.
NVIDIA stands out with its triple-digit YoY growth in multiple recent quarters, highlighting its status as the primary AI beneficiary among these peers.
Gross profit margin trends over time
Source: Booga App
NVIDIA maintains industry-leading margins, hovering in the low- to mid-70% range in recent quarters. This premium is largely due to the high-value AI/data center product mix and the company’s pricing power in accelerated computing solutions. As AI adoption accelerates, NVIDIA’s end-to-end ecosystem (GPUs, networking, software) commands a significant margin advantage over more commoditized semiconductor offerings.
Broadcom (AVGO) consistently shows above-average gross margins (often in the mid-60s), reflecting its robust position in networking and infrastructure solutions. While it doesn’t match NVIDIA’s peaks, Broadcom’s core markets (switching ASICs, connectivity, enterprise storage) tend to be profitable and less cyclical than consumer-driven segments.
AMD and Intel’s Challenges
AMD: Its margins typically fall into the 40–50% range, influenced by a balancing act between CPUs/GPUs for both consumer and enterprise. Ramping new architectures (e.g., Ryzen, EPYC) can temporarily drag on margins, although the data center segment is gradually boosting overall profitability.
Intel: Historically known for high margins, Intel’s line has trended downward over the past few years. A combination of product delays, increased competition, and substantial foundry investments has compressed margins into the 30–40% zone more recently.
Marvell and Micron: Specialized Markets, Volatile Trends
Marvell (MRVL): Margins usually sit around 50%, reflecting its specialized focus on networking, storage, and custom ASICs. These end markets can be stable, but design cycles also create short-term volatility.
Micron (MU): Being primarily in memory, Micron’s margins are highly cyclical and can swing sharply, even going negative during market downturns. As DRAM/NAND prices fluctuate, Micron’s profitability mirrors those boom-and-bust cycles.
NVIDIA’s positioning at the top of the margin curve underscores the high value placed on its AI-driven hardware and software ecosystem. Despite the efforts of established players like Intel and AMD, and the strong niche roles of Marvell, Broadcom, and Micron, none have matched NVIDIA’s blend of premium product mix and ecosystem-based pricing power in the current AI-centric market environment.
Recent financial performance (Q3 FY 2025)
Headline numbers
Source: Booga App
Source: Booga App
NVIDIA delivered another record-breaking quarter in Q3 FY 2025, with revenue reaching $35.1 billion, exceeding analyst expectations of $32.5 billion. This performance marked a 17% sequential increase and an impressive 94% year-on-year growth.
Data Center: The biggest driver was the Data Center segment, posting $30.8 billion in revenue—up 17% sequentially and 112% YoY. Demand for accelerated computing and AI solutions remained strong, with cloud service providers such as AWS, CoreWeave, and Microsoft Azure accounting for about half of Data Center revenue. NVIDIA’s H200 product achieved the fastest sales ramp in company history, reinforcing its position as the largest AI inference platform globally.
Gaming: Gaming revenue reached $3.3 billion, up 14% sequentially and 15% YoY, reflecting ongoing interest in GeForce RTX 40 Series GPUs and better channel inventory management.
Professional Visualization: Revenue rose to $486 million, up 7% sequentially and 17% YoY. The increase was driven by the continued ramp of RTX GPU workstations based on NVIDIA's Ada architecture.
Automotive: Revenue from automotive hit a record $449 million, showing 30% sequential and 72% YoY growth, propelled by self-driving platforms.
Operating Expenses & Shareholder Returns: Operating expenses rose 9% quarter on quarter—largely due to higher development costs—but NVIDIA still returned $11.2 billion to shareholders through share repurchases and dividends. Gross margins hovered in the low- to mid-70% range and are expected to move to the mid-70s over coming quarters.
Source: Booga App
Source: Booga App
NVIDIA’s cash position remains strong, with $38.5 billion in cash, cash equivalents, and marketable securities on hand as of late October 2024. The company’s operating cash flow increased in the first nine months of FY2025 compared to the same period in FY2024, primarily reflecting robust revenue growth. Advanced supply-agreement payments partially offset these gains, but overall liquidity continues to comfortably support both operating needs and strategic initiatives.
On the investing side, higher expenditures were driven by net purchases of marketable securities as well as land, property, and equipment—indicating ongoing investment in capacity and infrastructure. Meanwhile, cash used in financing increased due to more share repurchases and tax payments related to restricted stock units (RSUs).
NVIDIA returned $11.1 billion in Q3 (and $26.2 billion in the first nine months) of FY2025 through share repurchases, aligning with its program to both offset employee-related dilution and seize market opportunities. Notably, the Board of Directors authorized an additional $50 billion for repurchases in August 2024, bringing the remaining authorization to about $46.4 billion. The company also paid $245 million in dividends during Q3, underlining its commitment to returning capital to shareholders while maintaining sufficient liquidity to support innovation and growth initiatives. Overall, NVIDIA’s cash-generating ability and substantial reserves position it well to navigate supply constraints and invest in future product development.
Key growth drivers & management commentary
NVIDIA’s management underscored unprecedented demand for AI compute, noting that supply constraints remain a key focus. The company has been working to scale production of its Blackwell architecture, which shipped 13,000 GPU samples in Q3 alone and delivers a 2.2× performance improvement over the Hopper-based H200 in MLPerf Training benchmarks.
Demand Outstripping Supply: Tight supplies of HBM (high-bandwidth memory) and other advanced components have resulted in backlogs for many of NVIDIA’s high-performance offerings. However, management emphasized plans to ramp up production capacity.
Pricing Power: With accelerating AI adoption, NVIDIA maintains strong pricing power, benefiting from advanced features like fivefold inference improvements in Hopper’s software stack.
Product Roadmap: The Blackwell systems are being integrated into data centers worldwide, aligning with the company’s broader strategy to tackle generative AI, large-language models, and enterprise AI deployments. Management expects AI Enterprise revenue to more than double year-on-year, mirroring increased AI adoption across multiple industries.
Outlook & guidance
Looking ahead to Q4 FY 2025, NVIDIA projects revenue of about $37.5 billion—suggesting continued strength as AI demand broadens. Management foresees:
Gross Margins: Remaining in the low 70% range initially, trending to the mid-70s in subsequent quarters as product mix shifts toward higher-complexity systems.
Supply Constraints: Ongoing efforts to secure additional HBM supplies and expand Blackwell production lines aim to match surging customer demand.
AI & Data Center Modernization: NVIDIA views generative AI as an early-stage revolution, prompting enterprises to modernize data centers at scale. CEO Jensen Huang has described the opportunity as “multi-trillion dollar,” driven by agentic AI and robotics over the coming years.
Risks & Considerations: Possible macroeconomic headwinds, trade restrictions, and export controls could moderate growth in certain regions. Even so, NVIDIA believes it’s well-positioned to capitalize on the ongoing transformation in AI and accelerated computing.
Forward outlook & AI-driven demand
The AI revolution shows no signs of slowing, with ever-larger models—some reaching trillions of parameters—being developed to tackle tasks from advanced natural language processing to image generation and robotics. As these models grow, so too does the computational burden, making accelerated computing solutions a foundational pillar of AI progress. While training these massive networks remains resource-intensive, inference (running models in production) is projected to outpace training over the next few years, driving continuous upgrades in data center infrastructure.
NVIDIA stands at the center of this shift, having built an ecosystem that merges high-performance GPUs, networking technology, and developer-friendly software platforms like CUDA. Over the next five years, enterprise data centers are expected to undergo a wave of modernization, incorporating more AI-optimized hardware to handle increasingly sophisticated workloads.
Source: Booga App
As illustrated by the average revenue estimates chart, Wall Street analysts project strong ongoing growth for NVIDIA, albeit at a moderating pace compared to the explosive gains seen in FY2024 and FY2025. While the company’s Data Center segment is anticipated to remain the primary driver, segments such as Gaming, Automotive, and Professional Visualization also stand to benefit from new AI-driven product launches. These revenue estimates feed into most financial models, indicating that while triple-digit year-over-year growth may taper off, NVIDIA’s top-line expansion is still expected to significantly outpace many of its semiconductor peers over the mid-term.
Opportunities & risks
Beyond data centers, NVIDIA continues to push into automotive (self-driving platforms), robotics, enterprise AI, and high-performance computing (HPC) applications. Each of these verticals demands increasingly powerful, specialized hardware—an area where NVIDIA has a proven track record of innovation.
As AI model sizes and enterprise deployments scale up, securing advanced components (like high-bandwidth memory) may prove challenging. Although NVIDIA is actively expanding production capacity, short-term constraints could limit shipment volumes or push up costs.
Traditional rivals (AMD, Intel) are sharpening their focus on AI acceleration, while ASIC specialists and Big Tech custom silicon projects (e.g., Google’s TPUs, Amazon’s Inferentia) pose an emerging threat. NVIDIA’s software moat remains a key defense, but it must continue heavy R&D investments to preserve its edge.
Regulatory shifts, such as export controls and trade restrictions, could impede sales in key markets. In addition, the rise of custom AI ASICs or open architectures (e.g., RISC-V) may eventually change the competitive dynamics in high-performance computing, although near-term impacts are likely limited.
NVIDIA’s forward outlook is underpinned by the industry’s insatiable appetite for accelerated AI solutions. If management executes on product roadmaps (such as the new Blackwell architecture) and successfully navigates supply constraints, the company appears well-positioned to capitalize on the multi-trillion-dollar opportunities in AI and robotics—though it must remain vigilant to both competitive and regulatory risks on the horizon.
Valuation & analysis
For this valuation exercise, we employed a Discounted Cash Flow (DCF) approach, leveraging the Booga for Excel add-in to rapidly generate the core model structure. In under a minute, we had a fully functioning DCF template—complete with quarterly mechanics, an annual summary, and default assumptions for revenue growth, operating margins, capital expenditures, and more.
Note: This demonstration aims to show how easily and efficiently one can start building a fundamental model using Booga for Excel. We are not asserting whether NVIDIA is over- or undervalued. Instead, we’re highlighting how market expectations can compare to one’s own DCF-based assumptions.
Key assumptions
Revenue Growth
We set our 2025E–2028E revenue projections to match the average Wall Street expectations shown in the “Forward Outlook & AI-Driven Demand” section.
This is achieved by choosing the corresponding revenue drivers (i.e., YoY growth rates) in the model’s “Annual” mode, which aligns the forecasts with consensus estimates.
Gross Margins
Projected at 75% throughout our forecast period, consistent with recent management commentary and market expectations.
This level mirrors the mid-70% margin guidance NVIDIA has been targeting, particularly as Data Center products and AI-focused offerings make up a greater revenue share.
Operating Expenses (OpEx)
Set at 18.6% of revenues, in line with NVIDIA’s FY2024 ratio.
While OpEx as a percentage of revenue has trended downward recently (thanks to operating leverage), we took a conservative stance, assuming the company maintains high R&D investments to stay competitive in the AI race.
The resulting operating margin stands at 56.4%, which is higher than historical averages but aligns with NVIDIA’s current AI-driven momentum and remains slightly below some bullish analyst forecasts—again underscoring our conservative tilt.
Capital Expenditures (CapEx)
We assume annual CapEx at 40% of beginning-of-period PP&E, a step-up from historical levels, reflecting the company’s commitment to expanding data center capacity and accelerating R&D-related infrastructure.
Depreciation & Amortization (D&A) is set at 85% of CapEx, further reinforcing the company’s capital-intensive strategy in AI and advanced computing.
Working Capital
Days receivables, days payables, and days inventory remain at their FY2024 levels to maintain a neutral stance on working-capital changes.
Under these assumptions, free cash flow (FCF) hovers around 45–48% of revenues.
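To see how these assumptions fit together, here is a small Python sketch of the free-cash-flow build. The revenue path, starting PP&E balance, and 15% effective tax rate are placeholder inputs of ours (the article does not state a tax rate), so the output illustrates the mechanics rather than reproducing the model's exact figures. Working-capital changes are ignored for simplicity, since the model holds the days ratios at FY2024 levels.

```python
# Illustrative free-cash-flow build under the assumptions above.
GROSS_MARGIN  = 0.75    # mid-70s gross margin, per guidance
OPEX_PCT      = 0.186   # OpEx as a share of revenue (FY2024 ratio)
CAPEX_PCT_PPE = 0.40    # CapEx as % of beginning-of-period PP&E
DA_PCT_CAPEX  = 0.85    # D&A as % of CapEx
TAX_RATE      = 0.15    # assumed effective tax rate (not stated in the article)

def project_fcf(revenues, ppe_begin):
    """Return (revenue, FCF, FCF margin) per year, ignoring working-capital changes."""
    rows, ppe = [], ppe_begin
    for rev in revenues:
        ebit    = rev * (GROSS_MARGIN - OPEX_PCT)   # 56.4% operating margin
        nopat   = ebit * (1 - TAX_RATE)
        capex   = ppe * CAPEX_PCT_PPE
        d_and_a = capex * DA_PCT_CAPEX
        fcf     = nopat + d_and_a - capex
        rows.append((rev, fcf, fcf / rev))
        ppe = ppe + capex - d_and_a                 # roll PP&E forward
    return rows

# Hypothetical revenue path in $bn (not consensus estimates)
for rev, fcf, margin in project_fcf([195, 230, 260, 285], ppe_begin=5.0):
    print(f"revenue {rev:>6.1f}  FCF {fcf:>6.1f}  FCF margin {margin:.1%}")
```

With a 56.4% operating margin and CapEx largely offset by D&A, the free-cash-flow margin lands in the mid-to-high 40s, consistent with the 45-48% range noted above.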
WACC Calculation
Using the Booga for Excel “WACC” template, we derived a 9.96% weighted average cost of capital.
This figure relies on an unlevered beta for NVIDIA’s peer group (including Intel, AMD, Marvell, Micron, and Broadcom) and aligns with market consensus estimates of risk premiums and cost of debt.
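For readers who want to see the arithmetic behind that figure, the sketch below mirrors the usual WACC structure: relever the peer-group beta, apply CAPM for the cost of equity, and weight it against an after-tax cost of debt. Every input here (unlevered beta, risk-free rate, equity risk premium, debt and equity values, tax rate) is an illustrative placeholder chosen to land near the quoted 9.96%, not data taken from the Booga template.

```python
# Back-of-the-envelope WACC with placeholder inputs.

def relever_beta(unlevered_beta, debt, equity, tax_rate):
    # Hamada relevering: beta_L = beta_U * (1 + (1 - t) * D / E)
    return unlevered_beta * (1 + (1 - tax_rate) * debt / equity)

def wacc(unlevered_beta, risk_free, erp, cost_of_debt, debt, equity, tax_rate):
    beta_levered   = relever_beta(unlevered_beta, debt, equity, tax_rate)
    cost_of_equity = risk_free + beta_levered * erp          # CAPM
    total = debt + equity
    return ((equity / total) * cost_of_equity
            + (debt / total) * cost_of_debt * (1 - tax_rate))

# Hypothetical inputs: peer-group unlevered beta, 10-year yield, equity risk
# premium, pre-tax cost of debt, debt and market cap in $bn, tax rate
print(f"WACC ~ {wacc(1.2, 0.045, 0.046, 0.04, 10, 3200, 0.15):.2%}")
```

Because NVIDIA carries very little debt relative to its market capitalization, the result is driven almost entirely by the cost of equity.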
Model P&L
Model free cash flows (FCF)
Preliminary fair value findings
After feeding the above assumptions into the DCF model (covering 7 projected years) and applying:
5% Terminal Growth Rate, or
20× Terminal EBITDA Multiple,
we arrive at a per-share price range of USD 97.4–106.8. By comparison, NVIDIA’s stock recently traded around USD 132 (as of January 14th, 2025).
This wide gap between the intrinsic value implied by our model and the current market price may reflect the market's optimism around AI-driven growth, the possibility of further margin expansion or faster growth, or simply a premium for NVIDIA's leading position in accelerated computing.
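For readers who prefer to see the mechanics in code rather than in Excel, here is a stripped-down version of the DCF math described above: discount the explicit-period free cash flows, add a terminal value (Gordon growth shown; a terminal EBITDA multiple plugs in the same way), add net cash, and divide by shares outstanding. The seven-year cash-flow path, the net-cash proxy (the $38.5 billion cash balance cited earlier, with debt ignored), and the share count are placeholders of ours, so the output only approximates the USD 97.4–106.8 range rather than reproducing it.

```python
# Skeleton of the DCF mechanics: explicit-period PV + terminal value + net cash.
WACC = 0.0996             # from the WACC template above
TERMINAL_GROWTH = 0.05    # 5% terminal growth assumption

def dcf_per_share(fcfs, net_cash, shares):
    # Present value of the explicit forecast period (end-of-year discounting)
    pv_explicit = sum(fcf / (1 + WACC) ** t for t, fcf in enumerate(fcfs, start=1))
    # Gordon-growth terminal value, discounted back from the final forecast year
    terminal    = fcfs[-1] * (1 + TERMINAL_GROWTH) / (WACC - TERMINAL_GROWTH)
    pv_terminal = terminal / (1 + WACC) ** len(fcfs)
    equity_value = pv_explicit + pv_terminal + net_cash
    return equity_value / shares

# Hypothetical 7-year FCF path in $bn; net cash in $bn; share count in bn
fcfs = [92, 108, 122, 134, 145, 155, 163]
print(f"implied value ~ ${dcf_per_share(fcfs, net_cash=38.5, shares=24.8):.0f} per share")
```

Small changes to the terminal growth rate or the WACC move the result materially, which is exactly why the downloadable model exposes those inputs for sensitivity analysis.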
For those interested in experimenting with these parameters—or adjusting them based on their own outlook—you can download the Excel model here. You are invited to tweak growth rates, margins, WACC, CapEx, or other assumptions to your liking and see how they affect the final valuation. Observe how your “fair value” range matches or deviates from current share prices, and form your own view on risk and reward in this high-growth sector.
Conclusion
NVIDIA’s holistic approach—combining cutting-edge GPUs, networking solutions, and a robust software ecosystem—continues to outpace most competitors in the fast-evolving AI landscape. With the AI revolution showing no signs of abating, the company’s deep R&D investments, developer community lock-in, and strong industry partnerships position it to capitalize on future waves of innovation. Whether it’s data center acceleration, gaming, or automotive AI, NVIDIA’s comprehensive product stack gives it a formidable edge in capturing and sustaining market leadership.
The semiconductor and AI markets move rapidly, and so does NVIDIA. We’ll keep this blog up to date with:
Quarterly Earnings: Summaries of future financial results to gauge how NVIDIA maintains its growth trajectory.
Product Launches: Coverage of significant announcements, from next-generation GPU architectures to new AI platform features.
Competitive Landscape Changes: Updates on how emerging players or evolving technologies might reshape the market dynamics around NVIDIA.
Disclaimer
This case study is provided for educational and informational purposes only and should not be construed as financial or investment advice. All opinions expressed herein are solely those of the author and do not constitute recommendations to buy, sell, or hold any security. Readers are encouraged to conduct their own due diligence and consult qualified professionals before making any investment decisions.
Appendix
NVIDIA's most notable products
GeForce GPUs
GeForce RTX Series: This includes high-performance gaming GPUs such as the RTX 30 and 40 series. These GPUs are known for their real-time ray tracing capabilities and AI-enhanced graphics, making them popular among gamers and content creators.
GeForce GTX Series: Aimed at entry-level to mid-range gamers, these GPUs provide solid performance for gaming without the advanced features found in the RTX series.
Quadro GPUs
Quadro RTX Series: Designed for professionals in design, visualization, and content creation, these GPUs are optimized for CAD applications, 3D rendering, and simulations. The Quadro line is widely used in industries such as architecture, engineering, and media.
Data Center Products
NVIDIA A100 Tensor Core GPU: This product is targeted at data centers, providing high performance for AI training and inference, as well as high-performance computing (HPC) tasks.
NVIDIA H100 Tensor Core GPU: A successor to the A100, this GPU is designed for next-generation AI and machine learning workloads.
AI and Deep Learning Solutions
NVIDIA DGX Systems: These are integrated systems designed for AI research and development, providing the necessary hardware and software for training AI models efficiently.
NVIDIA Jetson: A series of products for edge AI computing, Jetson modules are used in robotics, drones, and IoT applications.
Gaming and Content Creation Software
NVIDIA GeForce Experience: A companion application for GeForce graphics cards that optimizes game settings, provides game recording features, and allows for driver updates.
NVIDIA Studio: A platform that optimizes hardware and software for creative applications, enhancing the performance of content creation workflows.
Most recent AI innovations by NVIDIA (CES 2025)
Project DIGITS: A personal AI supercomputer designed to enhance computing capabilities for users, further solidifying NVIDIA's position in the AI landscape.
GeForce RTX 50 Series GPUs: These new graphics cards leverage the Blackwell architecture to deliver record-breaking performance, improved responsiveness in gameplay, and enhanced visuals, as well as new AI applications that enrich gaming experiences.
AI Foundation Models for RTX PCs: These models include NVIDIA NIM microservices that facilitate the creation of digital humans, podcasts, images, and videos, showcasing the versatility of NVIDIA's technology in consumer applications.
Cosmos Platform for Autonomous Vehicles: This platform integrates various AI technologies necessary for developing and testing autonomous vehicles, which includes utilizing NVIDIA DGX for training AI models, Omniverse for simulation, and DRIVE AGX as the in-vehicle supercomputer.
Generative AI Tools: NVIDIA has introduced new AI software and tools that are RTX-accelerated, aimed at both developers and consumers to enhance creative processes and productivity.
RTX AI Toolkit: This toolkit empowers creative professionals by integrating advanced AI capabilities into their workflows, enhancing productivity across various applications.
Sources: Technology Magazine, AI Magazine
NVIDIA's competitors
AMD
Advanced Micro Devices, Inc. operates as a semiconductor company worldwide. The company operates in two segments, Computing and Graphics; and Enterprise, Embedded and Semi-Custom. Its products include x86 microprocessors as an accelerated processing unit, chipsets, discrete and integrated graphics processing units (GPUs), data center and professional GPUs, and development services; and server and embedded processors, and semi-custom System-on-Chip (SoC) products, development services, and technology for game consoles. The company provides processors for desktop and notebook personal computers under the AMD Ryzen, AMD Ryzen PRO, Ryzen Threadripper, Ryzen Threadripper PRO, AMD Athlon, AMD Athlon PRO, AMD FX, AMD A-Series, and AMD PRO A-Series processors brands; discrete GPUs for desktop and notebook PCs under the AMD Radeon graphics, AMD Embedded Radeon graphics brands; and professional graphics products under the AMD Radeon Pro and AMD FirePro graphics brands. It also offers Radeon Instinct, Radeon PRO V-series, and AMD Instinct accelerators for servers; chipsets under the AMD trademark; microprocessors for servers under the AMD EPYC; embedded processor solutions under the AMD Athlon, AMD Geode, AMD Ryzen, AMD EPYC, AMD R-Series, and G-Series processors brands; and customer-specific solutions based on AMD CPU, GPU, and multi-media technologies, as well as semi-custom SoC products. It serves original equipment manufacturers, public cloud service providers, original design manufacturers, system integrators, independent distributors, online retailers, and add-in-board manufacturers through its direct sales force, independent distributors, and sales representatives. The company was incorporated in 1969 and is headquartered in Santa Clara, California.
Intel
Intel Corporation engages in the design, manufacture, and sale of computer products and technologies worldwide. The company operates through CCG, DCG, IOTG, Mobileye, NSG, PSG, and All Other segments. It offers platform products, such as central processing units and chipsets, and system-on-chip and multichip packages; and non-platform or adjacent products, including accelerators, boards and systems, connectivity products, graphics, and memory and storage products. The company also provides high-performance compute solutions for targeted verticals and embedded applications for retail, industrial, and healthcare markets; and solutions for assisted and autonomous driving comprising compute platforms, computer vision and machine learning-based sensing, mapping and localization, driving policy, and active sensors. In addition, it offers workload-optimized platforms and related products for cloud service providers, enterprise and government, and communications service providers. The company serves original equipment manufacturers, original design manufacturers, and cloud service providers. Intel Corporation has a strategic partnership with MILA to develop and apply advances in artificial intelligence methods for enhancing the search in the space of drugs. The company was incorporated in 1968 and is headquartered in Santa Clara, California.
Marvell
Marvell Technology, Inc., together with its subsidiaries, designs, develops, and sells analog, mixed-signal, digital signal processing, and embedded and standalone integrated circuits. It offers a portfolio of Ethernet solutions, including controllers, network adapters, physical transceivers, and switches; single or multiple core processors; ASIC; and printer System-on-a-Chip products and application processors. The company also provides a range of storage products comprising storage controllers for hard disk drives (HDD) and solid-state drives that support various host system interfaces consisting of serial attached SCSI (SAS), serial advanced technology attachment (SATA), peripheral component interconnect express, non-volatile memory express (NVMe), and NVMe over fabrics; and fiber channel products, including host bus adapters, and controllers for server and storage system connectivity. It has operations in the United States, China, Malaysia, the Philippines, Thailand, Singapore, India, Israel, Japan, South Korea, Taiwan, and Vietnam. Marvell Technology, Inc. was incorporated in 1995 and is headquartered in Wilmington, Delaware.
Micron
Micron Technology, Inc. designs, manufactures, and sells memory and storage products worldwide. The company operates through four segments: Compute and Networking Business Unit, Mobile Business Unit, Storage Business Unit, and Embedded Business Unit. Its memory and storage technologies comprise DRAM products, which are dynamic random access memory semiconductor devices with low latency that provide high-speed data retrieval; NAND products that are non-volatile and re-writeable semiconductor storage devices; and NOR memory products, which are non-volatile re-writable semiconductor memory devices that provide fast read speeds, sold under the Micron and Crucial brands, as well as through private labels. The company offers memory products for the cloud server, enterprise, client, graphics, and networking markets, as well as for smartphone and other mobile-device markets; SSDs and component-level solutions for the enterprise and cloud, client, and consumer storage markets; other discrete storage products in component and wafers; and memory and storage products for the automotive, industrial, and consumer markets. It markets its products through its direct sales force, independent sales representatives, distributors, and retailers; and web-based customer direct sales channel, as well as through channel and distribution partners. Micron Technology, Inc. was founded in 1978 and is headquartered in Boise, Idaho.
Broadcom
Broadcom Inc. designs, develops, and supplies various semiconductor devices with a focus on complex digital and mixed signal complementary metal oxide semiconductor based devices and analog III-V based products worldwide. The company operates in two segments, Semiconductor Solutions and Infrastructure Software. It provides set-top box system-on-chips (SoCs); cable, digital subscriber line, and passive optical networking central office/consumer premise equipment SoCs; wireless local area network access point SoCs; Ethernet switching and routing merchant silicon products; embedded processors and controllers; serializer/deserializer application specific integrated circuits; optical and copper, and physical layers; and fiber optic transmitter and receiver components. The company also offers RF front end modules, filters, and power amplifiers; Wi-Fi, Bluetooth, and global positioning system/global navigation satellite system SoCs; custom touch controllers; serial attached small computer system interface, and redundant array of independent disks controllers and adapters; peripheral component interconnect express switches; fiber channel host bus adapters; read channel based SoCs; custom flash controllers; preamplifiers; and optocouplers, industrial fiber optics, and motion control encoders and subsystems. Its products are used in various applications, including enterprise and data center networking, home connectivity, set-top boxes, broadband access, telecommunication equipment, smartphones and base stations, data center servers and storage systems, factory automation, power generation and alternative energy systems, and electronic displays. Broadcom Inc. was incorporated in 2018 and is headquartered in San Jose, California.