Nvidia shares have pulled back roughly 9.1% from their record high set on October 29, 2025, as Wall Street wrestles with a straightforward but enormous question: can the largest technology companies on the planet keep pouring hundreds of billions of dollars into AI infrastructure without a clearer path to consumer-side profits? The stock closed at $188.54 on February 10, 2026, well off its highs, after a turbulent stretch that saw shares drop to an intraday low of around $172 on February 5 before bouncing 8% the following day when CEO Jensen Huang mounted a public defense of AI capital spending levels. The tension is real and it cuts both ways.
On one hand, Nvidia just guided for $65 billion in fiscal Q4 2026 revenue, a staggering figure that reflects genuine demand for its chips. On the other, Alphabet recently revealed plans to nearly double its capital spending this year to as much as $185 billion, a number so large it prompted even bullish analysts to ask when returns would materialize. This article examines the forces behind Nvidia’s recent slide, Huang’s counterarguments, the circular financing concerns that have rattled investors, growing competitive threats, and what the upcoming February 25 earnings report could mean for the stock’s direction.
Table of Contents
- Why Are Investors Questioning the Sustainability of AI Spending?
- How Jensen Huang Defended the AI Boom and Where His Argument Has Limits
- The Circular Financing Problem That Spooked the Market
- What Nvidia’s Financial Results Actually Show
- The Competitive Threat From Nvidia’s Own Customers
- What the February 5 Selloff Revealed About Market Sentiment
- What Comes Next for Nvidia and the AI Spending Cycle
- Conclusion
- Frequently Asked Questions
Why Are Investors Questioning the Sustainability of AI Spending?
The numbers behind the AI infrastructure buildout have reached a scale that is difficult to process. Goldman Sachs projects that hyperscaler AI capital expenditures alone could hit $527 billion in 2026; a broader estimate that includes all forms of capital spending puts the figure at $660 billion. Meta, Amazon, Google, and Microsoft all announced significant increases in AI infrastructure spending during their most recent earnings reports, with Alphabet's potential $185 billion outlay standing as the most eye-catching commitment. For context, that single company's planned spending would exceed the GDP of most countries.
The concern is not that these companies lack the cash; they clearly have it. The worry is that the spending is running far ahead of proven revenue streams on the consumer side of AI. As several analysts have pointed out, there is surprisingly little profit being generated from consumer-facing AI products relative to the capital being deployed. The enterprise side looks healthier, but even there the return-on-investment timeline remains murky. Investors have seen this movie before: massive infrastructure buildouts justified by future demand that takes longer to materialize than anyone projected. The question is whether this time is genuinely different, or whether optimism is once again outrunning reality.

How Jensen Huang Defended the AI Boom and Where His Argument Has Limits
On February 6, 2026, Huang appeared on CNBC and delivered what amounted to a point-by-point rebuttal of the bear case. He called the current AI buildout “the largest infrastructure buildout in human history” and said it would continue for seven to eight years. He argued that demand is “just incredibly high” and offered a striking data point: every GPU Nvidia has ever sold — including six-year-old A100 chips — is currently being rented out. There is, he claimed, “no idle infrastructure today,” a deliberate contrast with the dot-com era when server racks sat unused in data centers that never filled up. Huang also pointed to companies like Anthropic and OpenAI generating profitable revenue as evidence that the AI ecosystem is not built on vapor. The market responded: Nvidia shares surged 8% that day, clawing back much of the week’s losses.
However, if investor confidence depends on one CEO’s media appearances to sustain a $4.64 trillion market cap, that itself is a vulnerability. Huang’s arguments are compelling but not unassailable. The dot-com comparison cuts in complicated directions — plenty of dot-com companies also had real revenue before the bubble burst. The question was never whether demand existed, but whether the supply of infrastructure was being built to match demand that would actually persist at projected levels. Nvidia bulls need the seven-to-eight-year buildout thesis to be correct. If the timeline compresses or if spending plateaus sooner, the stock’s premium valuation becomes much harder to justify.
The Circular Financing Problem That Spooked the Market
One of the more unsettling developments in early February was a report that Nvidia’s planned $100 billion investment in OpenAI had stalled. On its surface, this might seem like a minor corporate finance story. But it touched a nerve because it highlighted what critics call a “circular financing” dynamic in the AI sector. The pattern works like this: Nvidia sells chips to companies building AI infrastructure, then reinvests its profits into those same companies, which use the capital to buy more Nvidia chips. At each step, revenue and demand figures look healthy.
But strip away the circular flows and the picture becomes less clear. This is not a new concern in technology markets. During the telecom boom of the late 1990s, equipment makers financed their own customers, inflating demand figures that later proved unsustainable. The AI ecosystem is not a perfect parallel — the underlying technology is more transformative and adoption is broader — but the structural similarity is close enough to make sophisticated investors uneasy. When Nvidia is simultaneously a supplier, an investor, and a stakeholder in its own customers’ success, the usual signals that investors rely on to gauge organic demand become harder to read. The stalled OpenAI deal did not cause Nvidia’s slide on its own, but it gave concrete form to an abstract worry that had been circulating for months.

What Nvidia’s Financial Results Actually Show
For all the debate about sustainability, Nvidia’s financial performance remains extraordinary by any conventional measure. Fiscal Q3 2026 revenue came in at $57.0 billion, with the Data Center segment contributing $51.2 billion of that total. The company guided for fiscal Q4 2026 revenue of $65.0 billion, plus or minus 2%, which would represent continued acceleration. These are not the numbers of a company facing an imminent demand cliff. The tradeoff investors face is between present-tense fundamentals and forward-looking risk.
Nvidia’s current results justify a premium valuation, but the stock is priced not just for today’s revenue — it is priced for years of continued growth at a pace that has few historical precedents. If hyperscaler spending grows as projected, Nvidia is arguably still undervalued. If spending flattens or shifts toward in-house chip alternatives, the stock has significant downside from current levels. This is the core tension, and it will not be resolved by a single earnings report. That said, the February 25 earnings release is the next major catalyst. Investors will be parsing demand trends, supply chain updates, and any commentary on China sales prospects, where regulatory restrictions continue to cloud the outlook.
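For readers who want to sanity-check what the guidance actually implies, the arithmetic is simple. This is an illustrative sketch using only the figures cited in this article ($57.0 billion in fiscal Q3 revenue, a $65.0 billion Q4 guide with a ±2% band); the variable names are mine:

```python
# Back-of-the-envelope check on Nvidia's guided revenue band,
# using the figures cited in this article (all in billions of dollars).
q3_revenue = 57.0        # reported fiscal Q3 2026 revenue
q4_guide_mid = 65.0      # fiscal Q4 2026 guidance midpoint
band = 0.02              # plus or minus 2%

# The guided range works out to roughly $63.7B to $66.3B.
q4_low = q4_guide_mid * (1 - band)
q4_high = q4_guide_mid * (1 + band)

# Even the low end of the range implies double-digit sequential growth.
seq_growth_low = (q4_low - q3_revenue) / q3_revenue    # about 11.8%
seq_growth_mid = (q4_guide_mid - q3_revenue) / q3_revenue  # about 14.0%

print(f"Guided range: ${q4_low:.1f}B to ${q4_high:.1f}B")
print(f"Sequential growth at midpoint: {seq_growth_mid:.1%}")
```

In other words, even the bottom of the guided band would represent roughly 12% quarter-over-quarter growth, which is why the guidance reads as continued acceleration rather than a plateau.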
The Competitive Threat From Nvidia’s Own Customers
Perhaps the most underappreciated risk to Nvidia’s dominance is that its biggest customers are actively working to reduce their dependence on it. Alphabet, Amazon, and Microsoft are all developing custom AI chips designed to handle at least some of the workloads currently running on Nvidia hardware. Amazon’s Trainium chips, Google’s TPUs, and Microsoft’s Maia accelerators are all at various stages of deployment. AMD, meanwhile, continues to push its MI300 series as a viable alternative for data center AI workloads.
None of these efforts have meaningfully dented Nvidia’s market share yet, and there is a legitimate argument that the total addressable market is growing fast enough to accommodate multiple chip suppliers. But investors should be cautious about dismissing competitive threats entirely. Custom silicon does not need to match Nvidia’s performance across every benchmark to be attractive — it just needs to be good enough for specific workloads at a lower total cost of ownership. The history of the semiconductor industry is littered with dominant players who assumed their technology moat was wider than it turned out to be. Nvidia’s CUDA software ecosystem is a genuine competitive advantage, but it is not an impenetrable one, particularly as open-source alternatives mature.

What the February 5 Selloff Revealed About Market Sentiment
The sharp drop to around $172 on February 5 — Nvidia’s lowest intraday price of 2026 — was instructive. It showed how quickly sentiment can shift when multiple concerns converge. That day, the stalled OpenAI investment report was still fresh, Alphabet’s spending announcement had raised new questions, and broader market anxiety about AI valuations was building.
The speed and depth of the decline suggested that a meaningful cohort of institutional holders had moved to reduce positions, not just trim around the edges. The equally sharp 8% rebound the following day, triggered largely by Huang’s CNBC appearance, revealed the flip side: there is still enormous conviction among buyers that any dip represents an opportunity. This kind of volatility — violent moves in both directions on sentiment rather than fundamental changes — is characteristic of stocks where the bull and bear cases are both credible and where the ultimate outcome depends on variables that will not be resolved for years.
What Comes Next for Nvidia and the AI Spending Cycle
The next several months will be pivotal. The February 25 earnings report will provide the most immediate data point, but the bigger story will play out over quarters, not days. If hyperscaler capital spending continues to accelerate through the back half of 2026, the sustainability debate will quiet down — at least temporarily. If any major cloud provider signals a slowdown or a pivot toward internal chip solutions, the reaction in Nvidia’s stock could be severe.
Longer term, Huang’s seven-to-eight-year buildout timeline is either a remarkably prescient forecast or an overly optimistic projection from someone with every incentive to be bullish. The truth probably sits somewhere between those poles. AI infrastructure spending is unlikely to collapse, but it may not grow in a straight line either. Investors who can tolerate significant volatility and have a multi-year time horizon may find the current pullback from highs attractive. Those looking for stability should recognize that Nvidia, despite its financial strength, is a stock where the range of possible outcomes remains unusually wide.
Conclusion
Nvidia’s recent slide from record highs reflects a market grappling with legitimate questions, not panic selling. The sustainability of AI capital spending, circular financing concerns, competitive threats from custom silicon, and the gap between infrastructure investment and consumer-side profits are all real issues that deserve scrutiny. At the same time, Nvidia’s financial results are exceptional, demand for its chips remains strong by every available measure, and the AI buildout is clearly more than hype.
For investors, the path forward requires honest assessment of risk tolerance and time horizon. The February 25 earnings report will offer fresh data on demand trends and guidance, but it will not settle the larger debate. That will take years to play out. What is clear today is that Nvidia remains the most important single stock in the AI trade, and its price action will continue to serve as a barometer for how the market feels about the biggest capital spending cycle in technology history.
Frequently Asked Questions
Why did Nvidia stock drop in early February 2026?
Nvidia shares fell to an intraday low of around $172 on February 5, driven by concerns about AI spending sustainability, a report that Nvidia’s $100 billion OpenAI investment had stalled, and broader anxiety about when massive hyperscaler capital expenditures would translate into consumer-side AI profits.
What did Jensen Huang say to reassure investors?
On February 6, 2026, Huang told CNBC that the AI infrastructure buildout would continue for seven to eight years, that demand remained “incredibly high,” and that every GPU Nvidia has ever sold — even older A100 chips — is currently being rented out. He argued there is “no idle infrastructure today,” unlike during the dot-com era.
How much are hyperscalers expected to spend on AI in 2026?
Estimates vary, but Goldman Sachs projects AI capex from hyperscalers could reach $527 billion in 2026, while a broader estimate including all capital expenditures puts the figure at $660 billion. Alphabet alone may spend up to $185 billion.
When is Nvidia’s next earnings report?
Nvidia is scheduled to report fiscal Q4 2026 earnings on February 25, 2026, with revenue guidance of $65 billion plus or minus 2%. Investors are watching for demand trends, supply updates, and commentary on China sales.
What is the “circular financing” concern with Nvidia?
Critics worry that Nvidia invests in AI companies like OpenAI, which then use that capital to buy more Nvidia chips, creating a feedback loop that inflates demand metrics. The stalled $100 billion OpenAI investment brought this concern into sharper focus.
Who is competing with Nvidia in AI chips?
AMD offers its MI300 series as a data center alternative, while Nvidia’s own major customers — Alphabet, Amazon, and Microsoft — are developing custom AI chips including Google’s TPUs, Amazon’s Trainium, and Microsoft’s Maia accelerators.