AI Infrastructure Investment Risk Analysis: A Quantitative Assessment of Factors Threatening Exponential Growth

The Gemini Report - Investment Deep Dives

Executive Summary & Risk Dashboard

The prevailing market narrative posits a multi-year, secular supercycle in Artificial Intelligence (AI) infrastructure investment, driven by the transformative potential of generative AI. While the long-term trend of AI adoption is not in dispute, this report contends that the current exponential rate of infrastructure spending growth is fragile and exposed to a confluence of high-impact, underpriced risks. The analysis indicates a high probability of a significant deceleration—a “growth shock”—within the next 24-36 months. This deceleration will likely be triggered by a combination of tightening macroeconomic conditions, the physical and economic limits of technological scaling, and emergent regulatory frameworks.

The capital-intensive, long-duration nature of AI infrastructure projects renders them acutely sensitive to rising capital costs and economic downturns. Historical precedents from the 2001, 2008, and 2020 recessions demonstrate a clear pattern of deep and immediate cuts to hardware-centric IT capital expenditures. Simultaneously, the physical supply chain for AI—from semiconductor fabrication in Taiwan to grid-level power availability in key data center markets—exhibits critical points of failure that pose a systemic threat.

Technologically, the foundational “scaling laws” that have justified exponential compute investment are showing signs of diminishing returns. Algorithmic efficiency is advancing rapidly, which, while potentially expanding the market through lower costs (Jevons Paradox), also threatens to de-commoditize raw compute power, creating margin pressure for incumbent hardware providers.

This report provides a probability-weighted analysis of these risks, quantifying their potential impact on key sectors, including Semiconductors, Cloud Providers, and Data Center REITs. The primary conclusion is that current market valuations do not adequately reflect the probability or severity of a spending growth deceleration. Investors should prepare for a period of heightened volatility and potential valuation resets. Strategic portfolio actions should now focus on defensive positioning, identifying contrarian opportunities in adjacent sectors (e.g., energy infrastructure), and establishing clear triggers for capital reallocation based on the leading indicators identified herein.

Table 1: Risk Factor Probability and Impact Matrix

| Risk Factor | Probability of Occurrence (Next 3 Years) | Impact on AI Infra Sector Returns | Primary Affected Sectors | Key Leading Indicators to Monitor |
| --- | --- | --- | --- | --- |
| **Economic Disruption** | | | | |
| Sustained Fed Funds Rate > 6.5% | Medium (20-50%) | High (>30% decline) | Data Center REITs, Cloud Providers | 10-Year Treasury Yields, Corporate Bond Spreads, Fed Forward Guidance |
| Global Recession (Negative GDP Growth) | Medium (20-50%) | High (>30% decline) | Semiconductors, Cloud Providers | Corporate IT Spending Surveys, PMI, Global Trade Volumes |
| Taiwan/TSMC Supply Disruption | Low (<20%) | High (>30% decline) | Semiconductors, All AI Sectors | Geopolitical Tension Metrics, Military Exercises in Taiwan Strait, TSMC CapEx Plans |
| Energy Price Shock (>50% increase) | Medium (20-50%) | Medium (10-30% decline) | Data Center REITs, Cloud Providers | Natural Gas Futures, Utility PPA Rates, Grid Interconnection Queues |
| **Technology Plateau** | | | | |
| Scaling Law Breakdown (Sub-scaling) | High (>50%) | Medium (10-30% decline) | Semiconductors (Nvidia) | Performance-per-FLOP benchmarks, Frontier Model Training Costs |
| >100x Algorithmic Efficiency Gain | Medium (20-50%) | High (>30% decline) | Semiconductors (Nvidia) | Open-source model performance on benchmarks (e.g., MMLU), Inference cost trends |
| **Regulatory & Policy** | | | | |
| EU AI Act High-Risk Compliance Burden | High (>50%) | Low (<10% decline) | AI Software, Cloud Providers | EU Regulatory Body Rulings, SME Adoption Rates in EU |
| Big Tech Antitrust Breakup (US) | Low (<20%) | Medium (10-30% decline) | Cloud Providers | DOJ/FTC Lawsuit Filings, Congressional Legislation |
| Grid Power Allocation Mandates | Medium (20-50%) | Medium (10-30% decline) | Data Center REITs | State-level utility commission rulings, Data center construction moratoriums |
| **Market Saturation** | | | | |
| Widespread Enterprise “Pilot Purgatory” | High (>50%) | Medium (10-30% decline) | AI Software, Cloud Providers | Enterprise AI project failure rates, AI ROI reports, IT budget allocation shifts |
| Consumer AI Monetization Failure | High (>50%) | Low (<10% decline) | AI Software | MAU/retention for consumer AI apps, Subscription conversion rates |
| **Competitive & Capital Markets** | | | | |
| Data Center Overcapacity/Price War | Medium (20-50%) | High (>30% decline) | Data Center REITs | Data center vacancy rates, Price per kW/month, New construction pipeline |
| Public Market AI Valuation Correction (>40%) | High (>50%) | High (>30% decline) | All AI Sectors | P/E and P/S multiples of bellwether stocks (Nvidia), VC funding deal value |
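The qualitative bands in Table 1 can be turned into a rough expected-impact screen. The Python sketch below is illustrative only: the band midpoints, the subset of risks shown, and the sector mappings are simplifying assumptions, not the report's calibrated inputs.

```python
# Illustrative only: probability-weighted expected drawdown per sector,
# using assumed midpoints for the qualitative bands in Table 1.
# Assumptions: Low=10%, Medium=35%, High=60% probability;
#              Low=5%, Medium=20%, High=35% impact on sector returns.
PROB = {"Low": 0.10, "Medium": 0.35, "High": 0.60}
IMPACT = {"Low": 0.05, "Medium": 0.20, "High": 0.35}

# (risk factor, probability band, impact band, affected sectors) -- a subset
risks = [
    ("Fed Funds > 6.5%",        "Medium", "High",   ["Data Center REITs", "Cloud"]),
    ("Global Recession",        "Medium", "High",   ["Semiconductors", "Cloud"]),
    ("Taiwan/TSMC Disruption",  "Low",    "High",   ["Semiconductors", "Cloud", "Data Center REITs"]),
    ("Scaling Law Breakdown",   "High",   "Medium", ["Semiconductors"]),
    ("AI Valuation Correction", "High",   "High",   ["Semiconductors", "Cloud", "Data Center REITs"]),
]

exposure = {}
for name, p, i, sectors in risks:
    ev = PROB[p] * IMPACT[i]  # expected drawdown contribution of this risk
    for s in sectors:
        exposure[s] = exposure.get(s, 0.0) + ev

for sector, ev in sorted(exposure.items(), key=lambda kv: -kv[1]):
    print(f"{sector:20s} expected drawdown contribution: {ev:.1%}")
```

The point is not the specific numbers but the ranking discipline: multiplying a band midpoint by an impact midpoint makes explicit which sectors carry the most probability-weighted downside under a given set of assumptions.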

I. Economic Disruption Scenarios: The Cost of Capital and Supply Chain Fragility

This section quantifies the macroeconomic risks that could trigger capital flight from AI infrastructure. The sector’s unique vulnerability stems from its high capital intensity and long-duration investment profile, making it exceptionally sensitive to changes in the cost of capital and disruptions in its highly concentrated physical supply chain.

A. Interest Rate Sensitivity Analysis: The Gravity of Higher-for-Longer

The economic foundation of the AI infrastructure boom rests on the availability of cheap capital to fund massive, front-loaded investments in data centers and semiconductors. These projects are quintessential long-duration assets, with capital expenditures today expected to generate cash flows years or decades into the future. Consequently, their net present value (NPV) is acutely sensitive to the discount rate used in valuation models. The current macroeconomic environment, characterized by a shift away from the zero-interest-rate policy (ZIRP) era to a “higher-for-longer” paradigm, fundamentally alters this calculation, posing a direct threat to the economic viability of new projects.1

Rising interest rates directly increase the cost of borrowing for companies with high capital expenditure requirements, a defining feature of the technology sector.3 This can decelerate expansion plans, reduce future growth prospects, and compress valuations as future cash flows are discounted more heavily.4 Historical precedent confirms this relationship; time-series analyses show that Federal Reserve rate hikes have a significant negative impact on the stock prices of major tech companies like Microsoft, underscoring the profound effect of macroeconomic policy on the sector.5 The rate hike cycle that began in 2022 was particularly notable for its aggressive pace and global synchronization, representing a sharp break from the preceding decade of monetary easing.7

The impact is not merely theoretical. A Q3 2023 survey of Chief Financial Officers conducted by the Richmond Fed revealed that approximately 40% of firms had already curtailed their capital and non-capital spending plans due to the prevailing level of interest rates. This marked a sizeable increase from the roughly 30% of firms that reported similar pullbacks in Q4 2022.8 This data suggests a lagged effect, where the full impact of monetary tightening has yet to be realized and will likely continue to act as a drag on corporate investment.

Data center REITs, a cornerstone of AI infrastructure, are particularly exposed to interest rate fluctuations. Their business model relies on raising capital to build or acquire properties, with profitability determined by the spread between property yields and their cost of capital. Rising interest rates directly increase their cost of debt, squeezing this spread.9 Furthermore, as rates rise, lower-risk fixed-income assets like bonds become more attractive to income-seeking investors, potentially diverting capital away from REITs.10 While historical analysis shows that REITs have often delivered positive returns during periods of rising rates driven by strong economic growth, abrupt and significant rate hikes, such as those seen recently, can lead to underperformance.11
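A minimal sketch of the spread mechanics described above. All figures (cap rate, leverage, debt costs) are hypothetical assumptions chosen for illustration, not data from the report.

```python
# Hypothetical example: a data center REIT acquiring assets at a 6.5% cap rate
# (property NOI / purchase price), funded 50% with debt. The levered equity
# yield is (cap_rate - debt_cost * ltv) / (1 - ltv); rising debt costs squeeze it.
def levered_equity_yield(cap_rate, debt_cost, ltv):
    return (cap_rate - debt_cost * ltv) / (1 - ltv)

CAP_RATE, LTV = 0.065, 0.50  # illustrative assumptions
for debt_cost in (0.04, 0.06, 0.08):
    y = levered_equity_yield(CAP_RATE, debt_cost, LTV)
    print(f"debt cost {debt_cost:.0%}: levered equity yield {y:.1%}")
```

At 50% leverage, each 100 basis points of extra debt cost shaves 100 basis points off the equity yield; once debt costs exceed the cap rate (here, at 8% debt the equity yield falls to 5.0%, below the 6.5% unlevered yield), leverage actively destroys value.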

Break-Even Analysis: The Viability Threshold

To quantify this risk, a break-even analysis can determine the cost of capital at which new AI data center projects become uneconomical. Such a model incorporates the total cost of ownership (TCO), which includes both capital expenditures (CapEx) and operational expenditures (OpEx).13 Key CapEx components include land and site development (~15% of total), power and cooling infrastructure (~25%), and IT hardware like GPUs and servers (~60%).15 OpEx is driven primarily by the cost of power, which is a function of the facility’s Power Usage Effectiveness (PUE), a measure of energy efficiency.16

By modeling the internal rate of return (IRR) of a typical hyperscale data center project under various financing assumptions, we can identify a critical threshold for the weighted average cost of capital (WACC). Using baseline financial assumptions for energy infrastructure projects, which include after-tax equity returns in the 8-12% range and interest rates on debt around 7-8% 18, the analysis suggests that a sustained increase in the WACC of 200-300 basis points above current levels would render many new-build projects NPV-negative, absent a significant increase in leasing rates or a decrease in construction costs. This creates a significant headwind for the nearly $7 trillion in projected data center capital outlays needed by 2030.15
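A simplified version of this break-even model can be sketched in a few lines of Python. All project inputs (CapEx, net operating income, horizon) are hypothetical placeholders rather than the report's actual assumptions; the purpose is to show the mechanism by which a 200-300 basis point move in the discount rate flips a project's sign.

```python
# Sketch of the break-even analysis described above; all figures are
# hypothetical placeholders, not the report's model inputs.
def project_npv(wacc, capex=1_000.0, annual_noi=130.0, years=15):
    """NPV of a data center build: upfront CapEx, then level net operating
    income (lease revenue minus power and other OpEx) for `years` years."""
    return -capex + sum(annual_noi / (1 + wacc) ** t for t in range(1, years + 1))

def breakeven_wacc(lo=0.01, hi=0.30, tol=1e-6):
    """Bisect for the discount rate at which NPV crosses zero (the IRR)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if project_npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

irr = breakeven_wacc()
print(f"Break-even WACC: {irr:.2%}")       # above this rate, NPV turns negative
print(f"NPV at  8% WACC: {project_npv(0.08):,.0f}")
print(f"NPV at 12% WACC: {project_npv(0.12):,.0f}")
```

With these placeholder inputs the project clears an 8% WACC comfortably but is NPV-negative at 12%, illustrating the text's point: a few hundred basis points of sustained cost-of-capital increase is enough to stall marginal new builds absent higher leasing rates or cheaper construction.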

B. Recession Impact Modeling: Corporate IT Budget Contraction

Historical data from previous economic downturns provides a clear and consistent playbook for how corporations adjust their IT spending: capital-intensive hardware and new, speculative projects are the first and most deeply cut, while spending on mission-critical software and services is preserved. The current AI infrastructure boom, with its unprecedented reliance on hardware CapEx, is exceptionally vulnerable to this cyclical pattern.

A recessionary environment would deliver a demand-side shock to the AI infrastructure market, compounding the supply-side pressures of cost inflation. In past downturns, such as the dot-com bust in 2001 and the Global Financial Crisis in 2008, IT spending cuts were primarily driven by customers tightening their belts. The current situation is more precarious. An AI infrastructure downturn would face a “double whammy”: a demand-side shock from a recession combined with a supply-side shock from persistent inflation in its core inputs—GPUs, energy, and specialized labor. The AI boom is uniquely hardware-centric, with McKinsey projecting $5.2 trillion in data center CapEx by 2030, of which 60% is IT hardware.15 This boom is co-occurring with high inflation for key components like high-end GPUs, which are selling at premiums of 70-90% over MSRP 20, and soaring energy demand.21 A recession would therefore not only reduce the ability of companies to spend but also simultaneously increase the cost of what they need to buy. This dual pressure would likely lead to a much sharper and deeper contraction in AI infrastructure spending than historical precedents suggest.

Historical Precedent: The 2008 and 2020 Playbooks

During the 2008 financial crisis, the credit crunch forced businesses into a “cash-hoarding mode,” with IT markets absorbing a “disproportionate share of the capital investment collapse”.22 Both Gartner and Forrester revised their 2009 forecasts to predict declines in overall IT spending of 3-4%. The impact was not uniform; spending on computing hardware was projected to plummet by a staggering 14.9%, while software spending was expected to remain nearly flat with a marginal increase of 0.3%.22 This demonstrates a clear prioritization of OpEx-like software subscriptions over CapEx-heavy hardware purchases during a crisis.

The 2020 COVID-19 recession reinforced this pattern. Gartner projected an 8% contraction in global IT spending, as CIOs shifted to “emergency cost optimization” to support mission-critical operations.24 Once again, the segments experiencing the largest drops in spending were devices and data center systems.25 The notable exception was public cloud services, which grew 19% as it provided the essential infrastructure for the rapid shift to remote work.25 This highlights that even within infrastructure, spending shifts towards flexible, OpEx-based cloud consumption and away from large, upfront on-premises investments.

Table 2: Historical Tech Capex Response to Recessions (2001, 2009, 2020)

| IT Segment | 2001 (Actual % Change) | 2009 (Forecasted % Change) | 2020 (Forecasted % Change) |
| --- | --- | --- | --- |
| Overall IT Spending | -2.1% | -3.8% (Gartner) | -8.0% (Gartner) |
| Computing Hardware | Not specified, but part of overall decline | -14.9% (Gartner) | Part of largest drop with Data Centers |
| Enterprise Software | Not specified, but part of overall decline | +0.3% (Gartner) | Part of overall decline |
| IT Services | Not specified, but part of overall decline | -1.7% (Gartner) | Part of overall decline |
| Data Center Systems | Not specified, but part of overall decline | Not specified separately | Part of largest drop with Devices |

Source: 22

Enterprise AI Use Case Prioritization

In a recession, the criteria for approving AI projects will shift dramatically. “Blue sky” or speculative projects aimed at long-term innovation or strategic advantage will be indefinitely postponed.28 Resources will be reallocated to initiatives that can demonstrate clear, quantifiable, and short-term ROI, with a strong emphasis on cost savings and operational efficiency.29 Use cases such as automating back-office processes, optimizing supply chains, or improving customer service efficiency will survive budget cuts, as they have a direct line of sight to financial value.29 This shift in focus would favor spending on AI software and platforms that deliver these efficiencies over raw infrastructure build-outs for more experimental, large-scale model training.

Public-to-Private Valuation Contagion

A recession-induced correction in public markets would inevitably spill over into private market valuations for AI infrastructure companies. While private valuations typically adjust more slowly and modestly than their public counterparts, a sustained downturn forces a repricing.32 This is because valuation methodologies for private firms often rely on public comparables and comparable transaction multiples.33 When public multiples compress, the anchor for private valuations is lowered. This makes it significantly harder for venture-backed startups to raise subsequent funding rounds at higher valuations, leading to an increase in “down rounds” and potential failures. The current median enterprise value to revenue multiple for AI startups, at approximately 29.7x, is particularly vulnerable to compression in a risk-off environment where investors prioritize profitability over growth.34 This dynamic also increases the attractiveness of take-private (P2P) transactions, as the gap between public and private multiples narrows or inverts.35
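The arithmetic of that compression is straightforward. In the illustrative Python below, only the 29.7x median multiple comes from the text; the startup's revenue figure and the compression scenarios are assumptions.

```python
# Illustrative multiple-compression math. The 29.7x median EV/revenue figure
# is from the text; the revenue and compression scenarios are assumptions.
revenue = 50.0           # $M ARR for a hypothetical AI startup
current_multiple = 29.7  # median EV/revenue for AI startups (per text)

for compressed in (29.7, 20.0, 12.0):
    ev = revenue * compressed
    change = compressed / current_multiple - 1
    print(f"{compressed:>5.1f}x -> EV ${ev:,.0f}M ({change:+.0%} vs. current)")
```

Even a compression to 12x revenue, still generous by historical software standards, implies roughly a 60% markdown at flat revenue; absent revenue more than doubling between rounds, that is a down round.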

C. Inflation and Geopolitical Supply Chain Disruption

The physical supply chain underpinning the AI infrastructure boom is characterized by extreme geographic concentration, rendering it susceptible to both persistent cost inflation and catastrophic disruption from geopolitical events. The market appears to be mispricing the geopolitical risk premium, treating a potential Taiwan-related disruption as a low-probability tail risk rather than a core component of valuation.

The fabrication of leading-edge semiconductors, the heart of AI accelerators, is overwhelmingly concentrated in Taiwan, specifically at TSMC.36 A military conflict, blockade, or other major disruption involving Taiwan would effectively halt the production of virtually all high-end GPUs, paralyzing the entire global AI ecosystem. While government-led diversification efforts like the US CHIPS Act are underway, they will not yield meaningful alternative manufacturing capacity until the late 2020s or early 2030s.37 Despite this single point of failure, valuations for key semiconductor companies are predicated on massive, uninterrupted growth, implying that the market is assigning a very low probability to a disruptive event or is failing to price its magnitude correctly. A proper valuation model must incorporate a significant, non-zero probability of a supply shock, which would dramatically lower fair value estimates for the entire sector.

GPU Cost Inflation and Energy Volatility

Beyond the acute geopolitical risk, the supply chain is also subject to more conventional economic pressures.

  • GPU Cost Inflation: The price of GPUs has been historically volatile, subject to demand spikes from sectors like cryptocurrency mining and constrained by supply shortages.20 As of early 2025, high-end Nvidia GPUs such as the RTX 5090 and RTX 4090 were trading at premiums of 89% and 73% over their manufacturer’s suggested retail price (MSRP), respectively.20 While the long-term trend for cost-per-FLOP (a measure of computational performance) has been downward due to technological advancement 39, short-term inflation in the absolute cost of hardware directly and negatively impacts the ROI of new data center projects.
  • Energy Cost Volatility: Data centers are voracious consumers of energy. Global electricity demand from data centers is projected to more than double by 2030, reaching a level equivalent to the entire current consumption of Japan.21 In the U.S., data centers are expected to consume up to 12% of the nation’s total electricity by 2028.40 This massive and inelastic demand exposes data center operators to significant energy price volatility and the physical constraints of local power grids. Rising energy costs directly increase OpEx, compressing margins and potentially leading to higher electricity bills for all consumers in affected regions.21

Geopolitical Risk: The Taiwan/TSMC Nexus

The concentration of the semiconductor supply chain in the Indo-Pacific region, with Taiwan as its epicenter, represents the single most critical point of failure for the AI industry.42 Technology and market trends have concentrated the manufacturing of the most advanced chips (sub-10 nanometer) in just two companies: Taiwan’s TSMC and South Korea’s Samsung.36 This makes China, which lacks indigenous capability in leading-edge manufacturing, strategically vulnerable and reliant on Taiwanese foundries.36

This dynamic has placed Taiwan and TSMC at the center of the US-China geopolitical contest. US policy has sought to restrict China’s access to this technology while simultaneously encouraging TSMC to diversify its manufacturing footprint by building new fabs in the United States.36 A 2020 Eurasia Group report assesses that direct military action over Taiwan concerning this issue is unlikely, as it would trigger unacceptable escalation for all parties. However, it outlines several other avenues through which China could exert pressure and disrupt the supply chain, including the nationalization of TSMC’s facilities in mainland China, intellectual property theft, or retaliatory actions against Western technology firms.36 Any such disruption would be a catastrophic, near-extinction-level event for the current AI hardware ecosystem, halting progress for years.

Potential hedging strategies for this risk are limited but could include long positions in semiconductor manufacturing equipment (SME) companies that would benefit from the global build-out of new, geographically diversified fabs.45

II. Technology Plateau and Diminishing Returns

The investment thesis for exponential growth in AI infrastructure is predicated on the continuation of “scaling laws”—the empirical observation that more compute, more data, and larger models lead to better performance. This section identifies the technical limits and emerging trends that could challenge this core assumption, reducing the underlying demand for ever-increasing infrastructure.

A. Scaling Law Breakdown Analysis

For years, the AI development paradigm has been driven by a simple, powerful idea: scaling is all you need. This has justified the immense capital expenditures on GPU clusters and data centers. However, this foundational assumption is now facing significant challenges as evidence of diminishing returns and “sub-scaling” phenomena emerges.

Recent academic research, including several pre-print papers from 2025, indicates that the predictable performance gains from scaling are beginning to decelerate, particularly in large language models (LLMs).47 This phenomenon, termed “sub-scaling,” is attributed to several factors, most notably the declining quality of available training data and the increasing redundancy of information within massive datasets. As models are trained on the entirety of high-quality text and image data available on the public internet, simply adding more, lower-quality data yields marginal improvements. This suggests that the brute-force approach of adding more compute is hitting a wall, undermining the economic justification for exponential infrastructure growth.48

This potential plateau mirrors the historical S-curve trajectory of other technologies, most notably the semiconductor industry itself. Moore’s Law, the observation that the number of transistors on a chip doubles approximately every two years, drove exponential progress for decades.49 However, since around 2010, this rate of advancement has slowed as engineers confront fundamental physical limits.49 The current AI hardware boom could follow a similar path, transitioning from a period of exponential growth to one of incremental gains as it encounters its own physical and economic constraints.

A new theoretical framework, the “Race to Efficiency,” posits that sustained progress in AI is no longer just a function of raw compute. Instead, it depends on the rate of efficiency gains—improvements in algorithms, software, and hardware architecture—to offset the diminishing returns inherent in classical scaling.51 Without such ongoing innovation, achieving the next level of AI performance could demand “millennia of training or unrealistically large GPU fleets,” a scenario that is economically and physically infeasible.51 This shifts the value proposition from simply buying more compute to developing smarter ways of using it.
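The diminishing-returns argument can be made concrete with a stylized power-law loss curve of the kind used in the scaling-law literature, L(C) = L_inf + a * C^(-alpha). The coefficients below are invented for illustration, not fitted values from any paper.

```python
# Stylized scaling law: loss falls as a power of compute, so each fixed-size
# loss improvement requires a multiplicatively larger compute budget.
# Coefficients are illustrative, not fitted values.
L_INF, A, ALPHA = 1.7, 8.0, 0.15

def loss(compute):
    return L_INF + A * compute ** (-ALPHA)

def compute_for_loss(target):
    """Invert the power law: compute needed to reach a target loss."""
    return (A / (target - L_INF)) ** (1 / ALPHA)

prev = None
for target in (2.2, 2.1, 2.0, 1.9):
    c = compute_for_loss(target)
    note = "" if prev is None else f" ({c / prev:.1f}x more than previous step)"
    print(f"loss {target}: compute ~ {c:,.0f}{note}")
    prev = c
```

Under this (assumed) curve, each successive 0.1 drop in loss costs several times more compute than the last, and the multiplier itself grows as the asymptote is approached: the multiplicative blow-up behind the “millennia of training” scenario the efficiency literature warns about.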

Energy vs. Performance Trade-offs: A Hard Ceiling

A primary physical constraint on unlimited scaling is energy consumption. While the energy efficiency of specialized AI accelerators like GPUs is improving, the total demand for compute is growing at an even faster rate.53 GPUs are significantly more energy-efficient for AI workloads than general-purpose CPUs, but they still consume vast amounts of power, and the infrastructure required to cool them accounts for a substantial portion of a data center’s energy budget.53 This trade-off between performance and power creates a hard economic and environmental ceiling on scaling. At some point, the marginal cost of the energy required to train or run a slightly more powerful model will exceed the marginal benefit derived from its improved performance, making further scaling uneconomical.55

B. Algorithmic Efficiency Breakthroughs

A radical breakthrough in algorithmic efficiency presents a complex, double-edged risk to the AI infrastructure market. While such an advance could unlock new AI capabilities and expand the total addressable market, it could also drastically reduce the amount of compute required to achieve a given level of performance, thereby destroying demand for existing and planned hardware infrastructure.

This is not a hypothetical risk; significant efficiency gains are already being realized.

  • Architectural Innovations: Techniques like Mixture-of-Experts (MoE), where only a fraction of a model’s parameters are activated for any given task, are enabling the development of extremely large models at a fraction of the computational cost of traditional “dense” architectures. DeepSeek’s recent models are a prime example, achieving performance comparable to leading dense models while using significantly less compute for both training and inference.56
  • Model Compression and Quantization: These techniques reduce the size of AI models and the precision of their calculations, allowing them to run faster and with less memory. Post-training quantization can achieve around a 4x reduction in model size and a 2-4x speedup in performance with minimal loss of accuracy.57
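A toy version of post-training quantization, in pure Python for readability (production toolchains operate on tensors, often per-channel and with calibration data). The weight values are made-up numbers; the mechanism shown is standard symmetric int8 quantization.

```python
# Minimal sketch of post-training symmetric int8 quantization: float32
# weights (4 bytes each) become int8 codes (1 byte) plus one scale per
# tensor -- roughly 4x smaller, with a small, bounded rounding error.
def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.42, -1.31, 0.07, 0.88, -0.55]  # made-up example values
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(f"int8 codes: {q}")
print(f"max reconstruction error: {max_err:.4f} (bound: {scale / 2:.4f})")
```

The rounding error is bounded by half the scale, which is why moderate quantization typically costs little accuracy while cutting memory traffic, the usual bottleneck in inference.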

The economic impact of these efficiency gains is a subject of intense debate. The conventional view is that if less compute is needed to achieve a certain outcome, then total spending on compute will fall. However, a compelling counterargument is found in the Jevons Paradox, which states that as technological progress increases the efficiency with which a resource is used, the rate of consumption of that resource tends to increase rather than decrease.60 In the context of AI, greater algorithmic efficiency lowers the effective price of “intelligence.” This could dramatically accelerate the adoption of AI across the economy, leading to a surge in demand that more than offsets the efficiency gain per task, resulting in a net increase in total compute spending.
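Which side of the debate proves right reduces largely to a one-parameter question about demand elasticity, as the toy Python calculation below shows (all parameters are assumptions).

```python
# Toy model (assumed parameters): demand for AI "tasks" follows a constant-
# elasticity curve Q = k * p^(-eps). An efficiency gain that cuts the price
# per task 10x raises total compute spend only if eps > 1.
def total_spend(price, elasticity, k=100.0):
    quantity = k * price ** (-elasticity)  # tasks demanded at this price
    return price * quantity                # spend = price x quantity

for eps in (0.8, 1.0, 1.4):
    before = total_spend(1.0, eps)
    after = total_spend(0.1, eps)  # price per task falls 10x
    verdict = "Jevons: spend grows" if after > before else "spend flat or shrinks"
    print(f"elasticity {eps}: spend {after / before:.2f}x baseline ({verdict})")
```

With inelastic demand (eps < 1), efficiency gains shrink the aggregate compute bill; with elastic demand (eps > 1), they grow it. The bull case for infrastructure therefore rests on an empirical claim: that demand for machine intelligence is highly price-elastic.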

Even if total spending increases, such a technological shift would not be without risk for incumbent infrastructure providers. A paradigm where algorithmic efficiency is the primary driver of performance de-commoditizes raw compute power (i.e., FLOPs). This would erode the competitive moat of the current hardware leader, Nvidia, whose business model is predicated on selling the most powerful general-purpose accelerators. It would create an opening for competitors who could leverage superior algorithms on less powerful or more specialized hardware to deliver better performance-per-dollar. The risk, therefore, is not necessarily a market collapse, but a disruptive redistribution of the value chain, with margin compression for hardware incumbents and value accruing to those who control the most efficient algorithms.

The Rise of Edge Computing

The development of smaller, more efficient models is a key enabler of edge computing, a paradigm where AI processing occurs locally on devices (e.g., smartphones, cars, factory sensors) rather than in centralized data centers.61 This shift is driven by the need for lower latency, improved data privacy, and reduced bandwidth costs. Enterprise adoption of edge computing is already widespread and growing, with projections reaching $378 billion by 2028.63 While edge computing will not eliminate the need for large data centers (which will still be required for training large models and aggregating data), it will shift a significant portion of inference workloads away from the cloud. This could temper the growth rate of centralized cloud infrastructure, as a portion of the demand is met by a distributed network of edge devices.

C. Alternative Computing Paradigms

The current AI infrastructure landscape is overwhelmingly dominated by GPU-centric, von Neumann architecture. This paradigm is vulnerable to long-term disruption from fundamentally different computing technologies that are being developed specifically to overcome the limitations of current hardware. While their commercialization timelines are uncertain, they represent a significant long-term risk to investments in today’s infrastructure.

  • Neuromorphic Computing: Inspired by the architecture of the human brain, neuromorphic chips process information using “spiking neural networks” and are designed for extreme energy efficiency.54 After decades in the research phase, the first commercial neuromorphic chips are beginning to emerge in 2025, targeted at low-power edge applications like sensor processing and wearables.65 While they are not yet a threat to large-scale data center GPUs, their continued development could eventually lead to a disruptive alternative for a wide range of AI tasks. Widespread commercial adoption, however, is likely still a decade or more away.66
  • Quantum Computing: Quantum computers hold the theoretical promise of solving certain classes of problems—including some relevant to AI, such as optimization—that are intractable for classical computers. However, the technology is still in its infancy. Timelines for the development of commercially useful, fault-tolerant quantum computers vary wildly among experts, with optimistic forecasts from firms like Google suggesting five years, while others, like Nvidia’s CEO, predict at least two decades.67 The more mainstream scientific consensus for applications requiring the millions of high-quality qubits needed for significant problems is closer to 2035-2040.68
  • Analog Computing: This older computing paradigm is experiencing a resurgence due to its potential for high information density and energy efficiency in AI applications.70 Unlike digital computers that represent information in binary bits, analog computers use continuous physical quantities (like voltage) to represent data. This makes them well-suited for the matrix multiplications that are at the heart of neural networks. Startups are developing analog chips for edge AI devices that promise GPU-like performance at a fraction of the power consumption.71

III. Regulatory and Policy Risks

Government interventions represent a significant and growing source of risk for the AI infrastructure sector. These interventions can impose direct compliance costs, constrain the supply of critical components, and reshape market structures, acting as a powerful brake on previously unfettered growth. The divergence in regulatory approaches between the United States, the European Union, and China is creating a complex and fragmented global landscape, which may inadvertently favor large, incumbent players with the resources to navigate it.

A. AI Safety Regulation and Compliance Costs

The rapid advancement of AI capabilities has spurred a global race to regulate the technology. These impending regulations, particularly in the US and EU, will introduce new compliance burdens, development friction, and legal liabilities, effectively functioning as a tax on AI development and deployment.

  • Compute Caps and Licensing (US): In the United States, the regulatory approach is beginning to coalesce around controlling access to large-scale computing power. Proposed state-level legislation, such as the RAISE Act in New York and SB-1047 in California, targets developers who spend over $100 million on a single AI training run. These laws would require such developers to conduct safety testing, publish safety protocols, and implement safeguards against catastrophic risks.73 At the federal level, the Department of Commerce’s Bureau of Industry and Security (BIS) has already implemented a global export control framework. This framework establishes a licensing requirement and country-specific caps on the export of high-performance AI chips, aiming to restrict access for countries of concern while managing the global diffusion of the technology.74
  • EU AI Act and Compliance Costs: The European Union’s AI Act represents the world’s most comprehensive and prescriptive regulatory framework for AI. It takes a risk-based approach, imposing stringent requirements on AI systems deemed “high-risk.” The compliance costs associated with the Act are expected to be substantial. One analysis estimates that the Act could cost the European economy €31 billion over the next five years and reduce AI investment by nearly 20%.77 A small or medium-sized enterprise (SME) seeking to deploy a single high-risk AI system could face compliance costs of up to €400,000, which could reduce its profits by as much as 40%.77 For large corporations, the costs are expected to be on par with, or even exceed, the millions spent to comply with the General Data Protection Regulation (GDPR).77 While some analyses focused on developers of General Purpose AI (GPAI) models argue that compliance costs are a negligible fraction of the total investment in model training 79, this overlooks the much broader and more significant costs imposed on the thousands of companies across the economy that will deploy these models in high-risk applications.

This global regulatory fragmentation creates a “compliance moat.” The divergent approaches—prescriptive and risk-based in the EU, liability-focused and compute-centric in the US, and state-controlled in China—mean that a single AI model cannot be deployed globally without significant and costly localization and compliance efforts. This complexity inherently benefits large, incumbent players like Microsoft and Google, which possess the extensive legal, technical, and lobbying resources necessary to navigate this patchwork of rules. It raises the barrier to entry for startups and could stifle the open-source AI ecosystem, which lacks a central corporate entity to manage and fund global compliance obligations.

Table 3: Estimated Compliance Costs of Global AI Regulations

| Category | Regulation | Estimated Cost | Impact on Business | Key Requirements |
| --- | --- | --- | --- | --- |
| SME (High-Risk System) | EU AI Act | Up to €400,000 per system | 40% profit reduction for a €10M turnover business | Quality Management System (QMS), Conformity Assessment, Data Governance |
| Large Enterprise (High-Risk System) | EU AI Act | > €1 Million per system | Comparable to or exceeding GDPR compliance costs | QMS, Conformity Assessment, Risk Management, Post-Market Monitoring |
| GPAI Model Developer (>10^24 FLOP) | EU AI Act (EP Proposal) | €460k – €1M per model | 0.07% – 1.34% of total model development investment | Internal/External Evaluations, Technical Documentation, Risk Management |
| Large AI Developer (> $100M Training Run) | US (RAISE Act/SB-1047) | Not specified, but requires dedicated safety teams | Legal liability for catastrophic misuse | Published Safety & Security Protocols, Implementation of Safeguards |
Source: 73
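The arithmetic behind the SME row in Table 3 is worth making explicit. The sketch below assumes a 10% pre-compliance profit margin (an assumption; the source gives only the turnover and the resulting profit reduction), under which a €400,000 compliance bill consumes 40% of a €10M-turnover firm's annual profit.

```python
# Sketch: how a fixed compliance cost erodes SME profits.
# Illustrative assumptions: €10M turnover, 10% pre-compliance profit margin.
def profit_reduction(turnover, margin, compliance_cost):
    """Fraction of annual profit consumed by a one-off compliance cost."""
    profit = turnover * margin
    return compliance_cost / profit

# EU AI Act high-risk system, per the table above
reduction = profit_reduction(turnover=10_000_000, margin=0.10,
                             compliance_cost=400_000)
print(f"Profit reduction: {reduction:.0%}")  # 40%
```

The same function shows why the burden is regressive: a large enterprise with €1B turnover at the same margin loses under 1% of profit to an identical fixed cost, which is the mechanism behind the "compliance moat" discussed below.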

B. Antitrust and Market Structure

The AI industry is characterized by a high degree of market concentration, with a handful of large technology companies controlling the key inputs of AI development: compute, data, and talent. This dominance has attracted intense scrutiny from antitrust regulators globally, which could lead to actions that fundamentally reshape the market structure.

Antitrust enforcement agencies, including the US Federal Trade Commission (FTC) and Department of Justice (DOJ), are actively investigating potential anticompetitive practices in the AI sector. Concerns include the use of AI for algorithmic price-fixing, the potential for dominant firms to self-preference their own AI services, and the consolidation of the market through acquisitions.80 Over the past decade, Big Tech firms have been the most active acquirers of AI startups, absorbing promising new technologies and talent into their ecosystems.81

A significant antitrust action, such as a forced breakup of one or more of these tech giants, would have profound and unpredictable consequences for the AI infrastructure market.82 It could fragment the highly integrated cloud platforms that currently provide the primary access to AI compute. While potentially opening the market to more competition, it could also disrupt the massive, coordinated capital investment and long-term research initiatives that these large firms currently undertake, possibly slowing the overall pace of innovation.83

C. Energy and Environmental Constraints

Perhaps the most immediate and physical threat to the exponential growth of AI infrastructure is the collision with the real-world limits of energy grids and environmental regulations. The primary constraint on AI infrastructure growth in the medium term may not be silicon or capital, but watts.

  • Grid Capacity Limitations: The power demands of modern AI data centers are immense and growing at a pace that far outstrips the ability of utilities to expand the electrical grid. A single hyperscale data center can consume as much power as a small city.84 In key data center hubs like Northern Virginia, Arizona, and Texas, utilities are reporting that their grids are at or near capacity, with interconnection queues for new projects stretching out for multiple years.84 This “power choke point” is becoming the single greatest physical bottleneck to new data center construction, directly limiting the rate at which new AI infrastructure can be deployed.
  • Carbon Pricing and Environmental Regulations: The massive energy consumption of data centers comes with a significant carbon footprint. Data centers and data transmission networks are already responsible for approximately 1% of global energy-related greenhouse gas emissions, a figure that is set to grow rapidly with the proliferation of AI.86 This is attracting increasing scrutiny from regulators and the public. The imposition of carbon pricing, stricter emissions caps, or mandates for renewable energy usage will directly increase the operational costs (OpEx) of data centers.87 This makes access to cheap, clean, and reliable power a critical competitive differentiator and a key factor in site selection.

This power constraint is creating a strategic shift in the data center industry. The winners in the next phase of development will be those entities that can secure reliable power. This includes REITs with large land banks that have pre-existing, large-scale power agreements; companies pioneering on-site power generation, including options such as small modular reactors (SMRs);90 and those with the logistical capability to build in remote geographic locations that have surplus power capacity. This dynamic makes energy infrastructure, utility companies, and power generation technology providers potential contrarian or derivative investments on the AI boom.
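To see why watts dominate the economics, a back-of-envelope sketch helps. All inputs below are assumed for illustration (100 MW IT load, 1.2 PUE, $0.06/kWh power price, 0.4 kgCO2/kWh grid intensity, $50/tonne carbon price), not figures from the report.

```python
# Back-of-envelope annual OpEx for a hyperscale data center under carbon
# pricing. All inputs are illustrative assumptions.
def annual_power_opex(it_load_mw, pue, price_per_kwh,
                      grid_kg_co2_per_kwh, carbon_price_per_tonne):
    kwh = it_load_mw * 1000 * pue * 8760                 # facility kWh/year
    energy_cost = kwh * price_per_kwh
    carbon_cost = kwh * grid_kg_co2_per_kwh / 1000 * carbon_price_per_tonne
    return energy_cost, carbon_cost

energy, carbon = annual_power_opex(100, 1.2, 0.06, 0.4, 50.0)
print(f"Energy: ${energy / 1e6:.1f}M/yr, carbon levy: ${carbon / 1e6:.1f}M/yr")
```

Under these assumptions a carbon price adds roughly a third again to the power bill, which is why access to cheap, low-carbon power becomes a site-selection variable rather than a rounding error.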

IV. Market Saturation and Demand Destruction

While supply-side constraints pose significant risks, the demand side of the equation is not immune to challenges. The current infrastructure build-out is predicated on assumptions of widespread, rapid, and profitable adoption of AI across both enterprise and consumer markets. This section analyzes the risks of a potential mismatch between forecasted demand and realized adoption, which could lead to underutilization of deployed infrastructure.

A. Enterprise Adoption Curve and ROI Realization

The narrative of AI transforming every industry is compelling, but the reality of enterprise adoption is proving to be more complex and fraught with challenges than optimistic forecasts suggest. A significant risk is the emergence of “pilot purgatory,” where a large number of AI proof-of-concept projects fail to translate into full-scale production deployments that deliver tangible business value.

Recent data paints a sobering picture. While AI adoption is increasing, with Gartner estimating that the use of AI Agents in enterprise applications will grow from 1% to 33% by 2028 91, the failure rate of these projects is alarmingly high. Various studies report that 80-98% of enterprise AI initiatives fail to reach production or meet their business objectives.92 A 2025 survey from S&P Global Market Intelligence found that 42% of companies abandoned most of their AI initiatives, a dramatic increase from 17% in 2024, with the average organization scrapping 46% of its AI proofs-of-concept.94

The primary reasons for these failures are not technological, but organizational and strategic. They include a lack of clear business objectives, poor data quality, difficulties integrating AI into existing workflows, and a failure to secure cross-functional buy-in.92 This suggests that a significant portion of the current AI infrastructure spending is supporting a wave of corporate experimentation rather than proven, value-creating applications.

This creates the risk of a future reckoning. As the initial hype cycle wanes, CFOs and boards will inevitably demand a clear return on investment (ROI) from their substantial AI expenditures. A 2023 IBM report found that enterprise-wide AI initiatives achieved a meager ROI of just 5.9%.96 If a large cohort of companies concludes that their AI projects are not delivering the promised value, it could trigger a mass cancellation of these initiatives. Such a development would cause a sudden and sharp drop in enterprise demand for cloud compute resources and AI software, leading to underutilized infrastructure and a painful correction for providers.
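The gap between the IBM ROI figure and a typical corporate hurdle rate can be made concrete. The 9% hurdle rate below is an assumed cost of capital, not a figure from the report.

```python
# If enterprise AI initiatives return 5.9% (the IBM figure above) while the
# corporate hurdle rate sits near 9% (an assumed cost of capital), each
# dollar invested destroys economic value.
def economic_profit(invested, roi, hurdle_rate):
    """Value created (positive) or destroyed (negative) by the spend."""
    return invested * (roi - hurdle_rate)

ep = economic_profit(invested=100_000_000, roi=0.059, hurdle_rate=0.09)
print(f"Economic profit on $100M of AI spend: {ep / 1e6:.1f}M USD")  # -3.1M USD
```

A negative spread like this is sustainable only while boards treat the spend as R&D or optionality; once it is judged as ordinary capital allocation, it becomes a cancellation candidate.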

B. Consumer AI Market Maturation and Willingness-to-Pay

The consumer AI market has exploded in user adoption since the launch of ChatGPT. A 2025 survey indicates that 61% of American adults have used AI in the past six months; extrapolated globally, this implies a user base of 1.7–1.8 billion people.97 This rapid adoption has fueled investment in the infrastructure required to serve these users. However, broad usage has not yet translated into broad willingness-to-pay, creating a potential monetization gap.

The data reveals a significant enthusiasm gap when consumers are asked to pay for AI features. A March 2025 survey found that 71% of Americans would not pay extra for AI assistant features in the products they use.98 Only 8% said they would be willing to pay more. Another survey found that while 9% of Americans do pay for AI tools like ChatGPT Pro, this is a small fraction of the total user base.99 This suggests that while consumers are happy to use free, ad-supported, or bundled AI services, the market for premium, subscription-based consumer AI may be a relatively small niche.

There are exceptions, such as a survey suggesting 80% of Apple users would be willing to pay for Apple Intelligence, with over half willing to pay $10 or more per month.100 However, this may be an outlier driven by brand loyalty and the ecosystem effect, and it contrasts sharply with broader market sentiment.

The risk is that the market may be overestimating the size and profitability of the direct-to-consumer AI subscription market. If the primary business model for consumer AI turns out to be advertising- or bundle-based, the revenue per user will be significantly lower than in a high-margin subscription model. This would mean that the massive infrastructure built to support billions of consumer AI users may be under-monetized, leading to lower-than-expected returns for the companies that own that infrastructure.
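The monetization gap can be roughed out from the survey figures above. The $20/month subscription price and $0.50/month ad ARPU below are illustrative assumptions; the 1.75B user base and ~9% paying share come from the surveys cited.

```python
# Rough annual revenue split between subscription and ad-funded consumer AI.
# Price points are assumptions; user counts follow the surveys cited above.
def annual_revenue(users, paying_share, sub_price_mo, ad_arpu_mo):
    sub = users * paying_share * sub_price_mo * 12        # subscription revenue
    ads = users * (1 - paying_share) * ad_arpu_mo * 12    # ad-funded revenue
    return sub, ads

sub_rev, ad_rev = annual_revenue(users=1.75e9, paying_share=0.09,
                                 sub_price_mo=20.0, ad_arpu_mo=0.50)
print(f"Subscriptions: {sub_rev / 1e9:.1f}B/yr vs. ads: {ad_rev / 1e9:.2f}B/yr")
```

Under these assumptions the ~9% of payers generate several times the revenue of the remaining 91%, which is the sense in which infrastructure sized for billions of free users is under-monetized.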

C. Use Case Viability and Project Failure Rates

The ultimate driver of infrastructure demand is the success of the applications it supports. The high failure rate of enterprise AI projects, as detailed above, is a direct threat to sustained demand growth. The reasons cited for failure—cost overruns, data privacy concerns, security risks, and a lack of clear business value—are systemic and not easily solved.93

The paradox is that even as failure rates rise, investment continues to grow, driven by a fear of missing out (FOMO) among executives and investors.101 This dynamic creates the potential for a significant misallocation of capital. Infrastructure is being built to support a generation of AI applications, a large percentage of which may never achieve commercial viability or be abandoned in the “pilot purgatory” stage.

The industries with the highest rates of implementation failure or slowest adoption will be the first to cut back on AI-related infrastructure spending. While data is still emerging, sectors with complex regulatory requirements (e.g., healthcare, finance) or those struggling to define clear ROI (e.g., certain creative industries) may see a higher attrition rate for AI projects. A concentration of failures in one or two major industries could be a leading indicator of a broader slowdown in demand.

V. Competitive Dynamics and Margin Compression

The current structure of the AI infrastructure market is characterized by high growth and high margins, particularly for key hardware providers. However, this structure is vulnerable to disruptive competitive forces, including the commoditizing effect of open-source software and the classic risk of cyclical overcapacity, which could lead to significant margin compression and price wars.

A. Commoditization Risk from Open-Source Models

The proliferation of high-performance, open-source AI models represents a fundamental challenge to the existing value chain. These models, which are freely available for modification and deployment, are rapidly closing the performance gap with their proprietary, closed-source counterparts.102 This trend has profound implications for the competitive moats and profitability of cloud service providers (CSPs).

The availability of powerful open-source alternatives forces CSPs like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud into a more commoditized role. Instead of being able to sell high-margin, proprietary AI services, they are increasingly competing to offer the cheapest and most efficient platform on which to run these open-source models.103 This shifts the battleground to pure infrastructure-as-a-service (IaaS), a business characterized by intense price competition and lower margins.

This creates a difficult dilemma for the CSPs. On one hand, they must offer open-source models to attract developers and remain competitive. On the other hand, doing so undermines their ability to differentiate and capture value at the software layer. Microsoft’s strategy with Azure provides a case study: while it heavily promotes its partnership with the proprietary models from OpenAI (in which it holds a 49% stake), it also offers over 1,900 models from third-party and open-source providers to create a comprehensive ecosystem.104 This is a defensive necessity, but it inherently puts pressure on the margins of their own AI services.

The ultimate effect is a margin squeeze. CSPs are forced to invest billions in high-end, expensive hardware (primarily Nvidia GPUs) to run these models, while the software layer that runs on top of that hardware becomes increasingly open and commoditized.105 They are paying a premium for the key input (hardware) while facing price pressure on their output (cloud services), a classic recipe for margin compression.

B. Overcapacity Scenarios and Price Wars

The history of technology infrastructure is littered with boom-and-bust cycles driven by overbuilding, from the fiber optic glut of the early 2000s to cycles in the memory chip market. The data center market, the physical home of AI infrastructure, is now exhibiting signs that it may be entering the “boom” phase of such a cycle, setting the stage for a potential “bust.”

Current market data from Q1 2025 shows extremely tight conditions in major North American data center markets, with vacancy rates in many areas below 1%.106 This supply-demand imbalance has driven significant increases in rental rates, with prices per kilowatt (kW) per month rising by over 17% year-over-year in markets like Northern Virginia and Chicago.107

In response to this high demand and pricing power, a massive pipeline of new data center capacity is currently under development. JLL projects that 10 GW of new capacity will break ground globally in 2025 alone.108 While much of this new capacity is being pre-leased by hyperscale tenants, the sheer scale of the build-out introduces the risk of overcapacity.

The danger lies in the potential for a slight deceleration in demand growth to collide with this massive wave of new supply. If, for any of the reasons outlined in this report (recession, technological plateau, etc.), the growth in demand for data center space slows from its current exponential pace to a merely linear one, the market could quickly flip from being chronically undersupplied to acutely oversupplied.

This “empty hotel” risk would trigger a classic price war among data center operators and REITs. Faced with vacant capacity and high fixed costs, providers would be forced to slash rental rates to attract tenants, leading to a collapse in revenues and margins. This would be particularly damaging for developers who financed their projects based on the assumption of continued high rental rates and rapid lease-up, potentially leading to defaults and a wave of distressed assets.
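The flip from undersupply to oversupply can be illustrated with a toy supply/demand model. The installed base, build rate, and growth rates below are assumptions chosen for illustration, not market data.

```python
# Toy model of the "empty hotel" flip: a fixed construction pipeline colliding
# with demand that downshifts from exponential to linear growth.
# Assumed inputs: 60 GW installed base, 15 GW/yr of new builds, 20% baseline
# demand growth, near-full starting occupancy.
def vacancy_after(years, base_gw=60.0, new_supply_gw_per_yr=15.0,
                  demand_growth=0.20, linear=False):
    supply = base_gw
    demand = base_gw * 0.99                  # start near full occupancy
    step = demand * demand_growth            # first-year absolute increment
    for _ in range(years):
        supply += new_supply_gw_per_yr
        demand = demand + step if linear else demand * (1 + demand_growth)
    return 1.0 - min(demand, supply) / supply    # resulting vacancy rate

print(f"Exponential demand, 3-yr vacancy: {vacancy_after(3):.1%}")
print(f"Linear demand,      3-yr vacancy: {vacancy_after(3, linear=True):.1%}")
```

With these inputs, compounding demand keeps the market effectively full, while the same pipeline against linear demand opens up high single-digit vacancy within three years: the same supply decision produces opposite pricing regimes depending only on the shape of the demand curve.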

VI. Capital Market and Funding Constraints

The AI infrastructure boom has been fueled by an unprecedented influx of capital from both private and public markets. The sustainability of this funding is a critical variable. A shift in investor sentiment, driven by concerns over valuations or a broader risk-off move in the markets, could quickly choke off the flow of capital, stalling the build-out.

A. Private Market Dynamics: The VC Funding Overhang

The venture capital market has become the epicenter of AI investment. In Q1 2025, a staggering 57.9% of all global VC dollars, and 70.2% in North America, were invested in AI and machine learning startups.109 This extreme concentration of capital in a single sector creates systemic risk. It makes the entire startup ecosystem highly correlated and vulnerable to a sector-specific downturn. A negative catalyst unique to AI—such as a major safety incident, a regulatory crackdown, or a perceived technology plateau—could freeze funding across the board, leading to a mass extinction event for the thousands of startups that are not yet profitable and rely on continuous funding to survive.

This investment frenzy has driven valuations to extreme levels. Early-stage, AI-enabled fintech companies, for example, are commanding a median valuation 242% higher than their non-AI counterparts, a phenomenon dubbed the “AI premium”.110 In Q1 2025, the average revenue multiple for LLM vendor startups was 44.1x, with Series A rounds averaging a 39.0x multiple.111

This creates a significant valuation overhang. Many of these companies will need to grow into these valuations, a task made more difficult if economic or technological headwinds emerge. The large gap between the massive amounts of capital flowing into AI startups ($104.3 billion raised by US AI startups in H1 2025) and the relatively small amount coming out via exits ($36 billion in the same period) indicates that much of this invested capital has not yet been realized or validated by the public markets or acquirers.112 A downturn could leave a generation of VCs with large paper markups but few actual cash returns.
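The "grow into the valuation" problem can be quantified: if the entry multiple compresses, revenue must expand by the same factor just to keep enterprise value flat. The 10x exit multiple below is an assumption; the 44.1x entry multiple is the Q1 2025 LLM-vendor figure cited above.

```python
# Revenue growth required to hold enterprise value flat while the valuation
# multiple compresses from entry to exit (exit multiple is an assumption).
def required_revenue_growth(entry_multiple, exit_multiple, years):
    total = entry_multiple / exit_multiple    # total revenue expansion needed
    return total ** (1 / years) - 1           # implied annual growth rate

cagr = required_revenue_growth(44.1, 10.0, years=5)
print(f"Required revenue CAGR over 5 years: {cagr:.0%}")  # ~35%
```

A ~35% compound growth requirement for five straight years, merely to deliver a flat outcome, is the quantitative form of the valuation overhang: any shortfall converts paper markups into down-rounds.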

B. Public Market Sentiment: The Gap Between Hype and Earnings

Public market sentiment towards AI has been overwhelmingly positive, driving the valuations of key infrastructure players to historic highs. Companies like Nvidia, AMD, ARM, and Super Micro Computer have become bellwethers for the entire AI ecosystem.113 Their stock prices and, crucially, their valuation multiples, act as an “anchor” for the entire private market.

As of mid-2025, these valuations remain elevated. Nvidia trades at 37 times forward earnings, and while this is below its recent peaks, it still reflects enormous growth expectations.117 ARM Holdings trades at a price-to-sales ratio of 39 and a forward P/E of 79, levels that suggest its stock price has outpaced its near-term fundamentals.118 Data Center REITs are also trading at premium valuations, with implied capitalization rates of just 4.4%, the lowest across all commercial real estate asset classes, and EV/EBITDA multiples exceeding 27x.120
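The cap-rate figure above lends itself to a quick sensitivity check. Since asset value is approximately NOI divided by the cap rate, a 200bps expansion from the 4.4% implied rate (the 6.4% exit rate is an assumption) marks values down by roughly a third even with cash flows unchanged.

```python
# Sensitivity of data center REIT asset values to cap-rate expansion,
# holding NOI constant (value = NOI / cap rate). The 6.4% exit rate is
# an assumed stress, not a forecast.
def value_change(cap_rate_now, cap_rate_then):
    """Percentage change in asset value from a cap-rate move, NOI constant."""
    return cap_rate_now / cap_rate_then - 1

chg = value_change(0.044, 0.064)
print(f"Asset value change if cap rates rise 200bps: {chg:.1%}")
```

This is why the 4.4% implied cap rate, the lowest of any commercial real estate class, is better read as a measure of embedded interest-rate risk than as a sign of strength.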

The risk is that these valuations are pricing in a perfect, uninterrupted continuation of the current exponential growth trajectory. They leave little room for error or for any of the risks outlined in this report to materialize. A significant correction in these public bellwether stocks, whether triggered by an earnings miss, reduced guidance, or a broader market downturn, would have an immediate and severe cascading effect. It would be transmitted directly to the private markets, triggering down-rounds, making IPOs impossible for late-stage companies, and freezing the M&A market. Such a correction would represent a fundamental reset of the capital market assumptions that have underpinned the entire AI infrastructure build-out.

Table 4: Comparative Valuation Multiples: Public vs. Private AI Infrastructure (Q1 2025)

| Sector | Stage | Median EV/Revenue Multiple | Data Source(s) |
| --- | --- | --- | --- |
| AI Foundational Model (LLM) | Private (Series A) | 39.0x | 111 |
| | Private (Series C+) | 21.2x | 111 |
| | Public | N/A (Few pure-play public comps) | |
| AI Application (SaaS) | Private (Median) | 10.1x | 122 |
| | Public (Median) | 7.0x – 9.0x (Implied from various reports) | 105 |
| AI Infrastructure (Data Center REITs) | Private (M&A) | N/A | |
| | Public | ~10.8x (EV/Sales) | 123 |
| AI Infrastructure (Semiconductors) | Public (Nvidia) | ~28.4x (P/S) | 124 |
| | Public (AMD) | ~8.0x (P/S, implied from financials) | 125 |
Note: Multiples are illustrative and based on data from Q1-Q2 2025. Direct comparisons are challenging due to differences in growth rates and business models.

VII. Quantitative Risk Modeling and Scenario Analysis

To translate the qualitative risks identified in this report into actionable investment intelligence, a quantitative scenario analysis framework is essential. This involves modeling the potential impact of different risk scenarios on the financial performance and investment returns of key AI infrastructure sectors. The framework is structured around three primary scenarios: a Base Case, a Bear Case, and a Crisis Case.

A. Scenario Analysis Framework: Base, Bear, and Crisis Cases

  • Base Case: Current Trajectory Continuation. This scenario assumes a continuation of the current market environment. Key assumptions include: global GDP growth remains positive (2-3%), the Federal Reserve begins a slow cycle of rate cuts in late 2025, AI infrastructure spending growth continues at a high but gradually decelerating rate (e.g., 30-40% YoY), and no major geopolitical or technological disruptions occur. This serves as the baseline for evaluating the potential downside of the other scenarios.
  • Bear Case: Moderate Deceleration. This scenario models the impact of a mild global recession and a “higher-for-longer” interest rate environment. Key assumptions include: flat to slightly negative GDP growth for 2-3 quarters, the Fed Funds rate remaining above 5% through 2026, and a 50% reduction in the growth rate of AI infrastructure spending. This case would likely be triggered by persistent inflation forcing central banks to maintain tight monetary policy, leading to a contraction in corporate IT budgets.
  • Crisis Case: Systemic Shock. This scenario models the impact of a high-severity, low-probability “black swan” event. The primary trigger modeled here is a significant geopolitical disruption in the Taiwan Strait that halts or severely curtails semiconductor production at TSMC. This would represent a catastrophic supply-side shock. Assumptions include an absolute decline in AI infrastructure spending, a deep global recession, and a fundamental repricing of all technology-related assets.

B. Portfolio Impact Assessment: Sector and Geographic Exposure

The impact of these scenarios will not be uniform across the AI infrastructure ecosystem.

  • Sector Exposure:
  • Semiconductors: This sector is most acutely exposed to the Crisis Case (Taiwan disruption) and to the risk of a technological plateau or algorithmic efficiency breakthrough that de-commoditizes their hardware.
  • Cloud Providers: Highly sensitive to the Bear Case (recession), as enterprise customers would cut back on cloud consumption. They also face margin pressure from the rise of open-source models.
  • Data Center REITs: Most vulnerable to the Bear Case (higher interest rates increasing cost of capital) and the risk of overcapacity leading to a price war. They are also directly impacted by physical constraints like grid capacity.
  • Geographic Exposure:
  • United States: Benefits from being the center of AI innovation and from policies like the CHIPS Act, but is exposed to domestic political uncertainty and grid limitations in key markets.
  • China: Faces significant headwinds from US export controls on advanced semiconductors, which could slow its AI development, but is investing heavily to build a self-reliant domestic supply chain.
  • Europe: Faces the highest regulatory burden from the EU AI Act, which could slow adoption, but is also investing to build up its domestic chip manufacturing and AI capabilities.

C. Trigger Event Probability and Correlation Modeling

Probabilities are assigned to each risk factor based on current market data, geopolitical analysis, and historical precedents (as summarized in Table 1). It is also critical to analyze the correlation between these risks. For example, a global recession (Bear Case) would significantly increase the probability of a sharp public market valuation correction. A geopolitical crisis in Taiwan (Crisis Case) would trigger both a recession and a supply chain collapse simultaneously, creating a highly correlated, systemic shock. Near-term risks (6-18 months) are primarily macroeconomic (interest rates, recession), while medium-term risks (2-5 years) include technological plateaus, regulatory implementation, and potential overcapacity.

Table 5: Scenario Analysis Summary: Projected 2-Year Total Returns

| Sector | Base Case | Bear Case (Mild Recession) | Crisis Case (Taiwan Conflict) | Key Assumptions (Bear/Crisis) |
| --- | --- | --- | --- | --- |
| Semiconductors (e.g., Nvidia, AMD) | +25% to +40% | -20% to -35% | -60% to -80% | Bear: Enterprise CapEx freeze. Crisis: Total halt of leading-edge chip supply. |
| Cloud Providers (e.g., Azure, AWS) | +20% to +35% | -15% to -30% | -40% to -55% | Bear: 50% cut in enterprise cloud spending growth. Crisis: Global recession, supply chain collapse for servers. |
| Data Center REITs (e.g., DLR, EQIX) | +15% to +25% | -25% to -40% | -50% to -65% | Bear: WACC increases 200bps, vacancy rates double. Crisis: Tenant defaults, development pipeline halts. |
| AI Software (e.g., Palantir, SaaS) | +30% to +50% | -10% to -25% | -45% to -60% | Bear: Focus shifts to cost-cutting ROI, new projects canceled. Crisis: Widespread enterprise failure. |
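One way to use Table 5 is to probability-weight the scenario ranges. The sketch below takes the midpoint of each range from the table; the 60/30/10 scenario probabilities are assumptions for illustration, not the report's assigned probabilities.

```python
# Probability-weighted 2-year expected returns from the Table 5 ranges.
# Scenario probabilities (60/30/10) are illustrative assumptions; the return
# figures are midpoints of the ranges in the table above.
SCENARIOS = {"base": 0.60, "bear": 0.30, "crisis": 0.10}

RETURNS = {
    "Semiconductors":    {"base": 0.325, "bear": -0.275, "crisis": -0.700},
    "Cloud Providers":   {"base": 0.275, "bear": -0.225, "crisis": -0.475},
    "Data Center REITs": {"base": 0.200, "bear": -0.325, "crisis": -0.575},
    "AI Software":       {"base": 0.400, "bear": -0.175, "crisis": -0.525},
}

def expected_return(sector):
    """Probability-weighted expected 2-year total return for a sector."""
    return sum(p * RETURNS[sector][s] for s, p in SCENARIOS.items())

for sector in RETURNS:
    print(f"{sector:18s} E[2yr return] = {expected_return(sector):+.1%}")
```

Even under these weights, which give the base case a clear majority, the downside skew pulls expected returns toward single digits or below zero for the most rate- and supply-sensitive sectors, which is the quantitative core of the report's thesis that current prices leave little room for error.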

VIII. Actionable Investment Intelligence: Strategies for Risk Mitigation and Opportunity Capture

This final section translates the preceding analysis into a set of actionable strategies for portfolio management. The objective is to provide a framework for mitigating the identified risks while identifying potential opportunities that may arise from a market dislocation.

A. Risk Dashboard: Leading Indicators and Reallocation Triggers

A proactive risk management approach requires continuous monitoring of key leading indicators. The following provides specific, data-driven triggers that should prompt a portfolio review and potential reallocation.

  • Economic Disruption Indicators:
  • Interest Rates: A sustained move in the 10-Year US Treasury yield above 6.0% should trigger a reduction in exposure to Data Center REITs and other capital-intensive infrastructure assets.
  • Recession: Two consecutive quarters of negative prints in the Gartner or Forrester corporate IT spending surveys would serve as a strong confirmation of a downturn, triggering a defensive shift away from hardware-centric semiconductor stocks.
  • Geopolitics: An increase in the Eurasia Group’s political risk index for the Taiwan Strait above a pre-defined threshold should trigger the immediate implementation of tail-risk hedges.
  • Technology Plateau Indicators:
  • Scaling Laws: A demonstrated failure of two consecutive frontier model releases from major labs (e.g., OpenAI, Google) to achieve significant state-of-the-art improvements on key benchmarks (e.g., MMLU) would indicate a scaling plateau.
  • Algorithmic Efficiency: The release of an open-source model that achieves >95% of the performance of the leading proprietary model at less than 25% of the training compute cost would signal a disruptive efficiency gain.
  • Market and Competitive Indicators:
  • Data Center Vacancy: An increase in the average vacancy rate in the top five North American data center markets by 200 basis points in a single quarter would be a leading indicator of oversupply.
  • Public Valuations: A 25% drawdown from peak in a bellwether stock like Nvidia, if not accompanied by a broader market crash, should be considered a sector-specific sentiment shift and trigger a review of all private AI valuations.
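The triggers above can be encoded as a simple rule table checked against the latest readings. Indicator names, thresholds, and the sample readings below are illustrative placeholders for whatever data feeds a real dashboard.

```python
# Sketch of the risk dashboard as a rule table: each trigger is a named
# threshold checked against the latest reading. Names and sample readings
# are hypothetical placeholders.
TRIGGERS = {
    "ust_10y_yield":       lambda x: x > 0.06,   # 10Y UST sustained above 6.0%
    "it_spend_neg_qtrs":   lambda x: x >= 2,     # consecutive negative prints
    "dc_vacancy_rise_bps": lambda x: x >= 200,   # vacancy jump in one quarter
    "nvda_drawdown":       lambda x: x >= 0.25,  # bellwether drawdown from peak
}

def fired(readings):
    """Return the list of triggers breached by the latest readings."""
    return [name for name, rule in TRIGGERS.items()
            if name in readings and rule(readings[name])]

sample = {"ust_10y_yield": 0.045, "it_spend_neg_qtrs": 2,
          "dc_vacancy_rise_bps": 120, "nvda_drawdown": 0.31}
print(fired(sample))  # ['it_spend_neg_qtrs', 'nvda_drawdown']
```

Encoding the triggers this way forces each one to be stated as a testable threshold, and makes the reallocation decision a function of data rather than sentiment.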

B. Hedging and Mitigation Strategies

Several strategies can be employed to hedge against the identified risks:

  • Pairs Trades: To hedge against the risk of algorithmic efficiency gains disrupting the hardware market, a pairs trade of long semiconductor manufacturing equipment stocks (which benefit from diversified customer bases and end-markets) versus short incumbent chip designers could be effective.
  • Commodity Futures: To hedge against rising energy costs for data centers, long positions in natural gas futures or investments in renewable energy ETFs can provide an offset.
  • Options Strategies: Purchasing long-dated put options on high-multiple AI stocks or on sector-specific ETFs can provide direct downside protection against a valuation correction.
  • Diversification: Increasing allocation to less-correlated or contrarian sectors that may benefit from an AI infrastructure slowdown.
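The put-option overlay above can be sketched as a payoff-at-expiry calculation. The entry price, strike, and premium below are illustrative assumptions, not market quotes.

```python
# P&L at expiry of a protective put overlay on a high-multiple AI stock:
# long stock plus a long put caps the downside at (entry - strike + premium).
# All prices are illustrative assumptions.
def hedged_pnl(entry, final, strike, premium):
    """Per-share P&L of long stock plus a long put held to expiry."""
    put_payoff = max(strike - final, 0.0)
    return (final - entry) + put_payoff - premium

# Stock bought at $100, hedged with a $90-strike put costing $6:
for final in (140.0, 100.0, 60.0):
    print(f"final={final:.0f}  P&L={hedged_pnl(100.0, final, 90.0, 6.0):+.0f}")
```

The worked cases show the trade-off directly: upside is reduced by the premium in every state, while the loss in a 40% drawdown is capped at $16 per share instead of $40.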

C. Opportunity Identification: Contrarian, Defensive, and Recovery Plays

A market dislocation in AI infrastructure would create significant opportunities for well-positioned investors.

  • Contrarian Plays:
  • Energy Infrastructure: As power becomes the primary bottleneck for AI, companies involved in power generation, transmission, and grid modernization will become critical enablers. This includes utilities with favorable regulatory environments, manufacturers of transformers and grid components, and developers of next-generation power sources like SMRs and geothermal energy.126
  • Legacy IT: A slowdown in AI spending could lead to a re-allocation of IT budgets back towards more traditional enterprise software (e.g., ERP, CRM) and cybersecurity, benefiting established, profitable companies in those sectors.
  • Defensive Positioning:
  • High-ROI AI Software: Within the AI sector, the most defensive assets are software companies that provide solutions with a clear, immediate, and high ROI focused on cost-cutting and efficiency. These are the tools that will survive budget cuts in a recession.
  • Private Credit: As VC equity funding dries up, private credit funds that can provide structured debt solutions to well-run but capital-starved AI companies will be in a strong position.
  • Recovery Plays:
  • Best-of-Breed Hardware: Following a significant correction, the market leaders in hardware with the strongest balance sheets and technological moats (e.g., Nvidia, TSMC) would likely be the best-positioned assets for a recovery, as the long-term demand for compute will remain.
  • Consolidators: A downturn would lead to a wave of failures among less-capitalized startups. Well-funded companies and private equity firms would have an opportunity to acquire valuable technology and talent at distressed prices.

Works cited

  1. Impact of Federal Reserve Interest Rate Changes – Investopedia, accessed July 25, 2025, https://www.investopedia.com/articles/investing/010616/impact-fed-interest-rate-hike.asp
  2. What higher rates mean for fixed income alongside AI – Vanguard, accessed July 25, 2025, https://institutional.vanguard.com/insights-and-research/perspective/what-higher-rates-mean-for-fixed-income-alongside-ai.html
  3. The Impact of Interest Rate Cycles on Technology Sector Market Pricing – CMS Prime, accessed July 25, 2025, https://cmsprime.com/blog/the-impact-of-interest-rate-cycles-on-technology-sector-market-pricing/
  4. How Rising Rates Could Influence Tech Earnings – OpenMarkets – CME Group, accessed July 25, 2025, https://www.cmegroup.com/openmarkets/equity-index/2022/How-Rising-Rates-Could-Influence-Tech-Earnings.html
  5. (PDF) The Impact of Fed Rate Hikes on Tech Company Stock Prices …, accessed July 25, 2025, https://www.researchgate.net/publication/382573271_The_Impact_of_Fed_Rate_Hikes_on_Tech_Company_Stock_Prices_Microsoft_as_an_Example
  6. The Impact of Fed Rate Hikes on Tech Company Stock Prices: Microsoft as an Example, accessed July 25, 2025, https://drpress.org/ojs/index.php/HBEM/article/view/23862
  7. Rate Cycles – The World Bank, accessed July 25, 2025, https://thedocs.worldbank.org/en/doc/86b28b03d7dbcd81c76cfe1ce5003e64-0050012024/original/rate-cycles.pdf
  8. How Are Interest Rates Impacting Spending? Evidence From The CFO Survey, accessed July 25, 2025, https://www.richmondfed.org/research/national_economy/cfo_survey/research_and_commentary/2023/20230927_research_commentary
  9. The Effects of Interest Rate Changes on Real Estate Investment Trusts (REITs), accessed July 25, 2025, https://www.crystalfunds.com/insights/effects-of-interst-rates-on-real-estate-investment-trusts
  10. REITs and Interest Rates | Real Estate Investing – Nareit, accessed July 25, 2025, https://www.reit.com/investing/reits-and-interest-rates
  11. (PDF) Interest Rate Sensitivities of REIT Returns – ResearchGate, accessed July 25, 2025, https://www.researchgate.net/publication/5129600_Interest_Rate_Sensitivities_of_REIT_Returns
  12. The Impact of Rising Interest Rates on REITs – S&P Global, accessed July 25, 2025, https://www.spglobal.com/spdji/en/documents/research/the-impact-of-rising-interest-rates-on-reits.pdf
  13. www.techtarget.com, accessed July 25, 2025, https://www.techtarget.com/searchdatacenter/definition/TCO#:~:text=TCO%20factors%20in%20the%20costs,software%20licenses%20and%20employee%20training.
  14. Total Cost of Ownership (TCO) Model for Storage – SNIA.org, accessed July 25, 2025, https://www.snia.org/sites/default/files/SSSI/SNIA_TCO_Whitepaper_03-2021.pdf
  15. The cost of compute power: A $7 trillion race | McKinsey, accessed July 25, 2025, https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers
  16. The cost impact of AI data center design, build and operations – Vertiv, accessed July 25, 2025, https://www.vertiv.com/en-us/about/news-and-insights/articles/educational-articles/the-cost-impact-of-ai-data-center-design-build-and-operations/
  17. How to Calculate Data Center TCO – DCSMI, accessed July 25, 2025, https://www.dcsmi.com/data-center-sales-and-marketing-blog/how-to-calculate-data-center-tco
  18. Financial Cases & Methods | Electricity | 2023 – ATB | NREL, accessed July 25, 2025, https://atb.nrel.gov/electricity/2023/financial_cases_&_methods
  19. Chapter 10 – Financial Assumptions – Environmental Protection Agency (EPA), accessed July 25, 2025, https://www.epa.gov/system/files/documents/2021-09/chapter-10-financial-assumptions.pdf
  20. GPU prices in 2025: best prices on the latest graphics cards …, accessed July 25, 2025, https://www.techradar.com/computing/gpu/gpu-prices
  21. Trump’s AI Plan Calls For Massive Data Centers. Here’s How It May …, accessed July 25, 2025, https://www.barchart.com/story/news/33629477/trump-s-ai-plan-calls-for-massive-data-centers-here-s-how-it-may-affect-energy-in-the-us
  22. Gartner and Forrester Now Forecast 2009 Decline in IT Spending …, accessed July 25, 2025, https://www.channelinsider.com/news-and-trends/gartner-and-forrester-now-forecast-2009-decline-in-it-spending/
  23. IT Spending Forecasts Slashed by Gartner, Forrester – IT Jungle, accessed July 25, 2025, https://www.itjungle.com/2009/04/06/tfh040609-story03/
  24. Gartner: IT spend to shrink 8% in 2020, expect a slow recovery | CIO …, accessed July 25, 2025, https://www.ciodive.com/news/gartner-tech-spend-forecast-2020-coronavirus/577792/
  25. Gartner Says Global IT Spend To Decline By 8% In 2020 Due To Impact Of Pandemic, accessed July 25, 2025, https://www.capacitymedia.com/article/29ot42ikril15nn4lz6pm/news/gartner-says-global-it-spend-to-decline-by-8-in-2020-due-to-impact-of-pandemic
  26. Gartner revises IT spending prediction for the year, expects 8% decline now – Techcircle, accessed July 25, 2025, https://www.techcircle.in/2020/05/13/gartner-revises-it-spending-prediction-for-the-year-expects-8-decline-now/
  27. Global IT spending to drop 8% in 2020 amid COVID-19: Gartner, Telecom News, ETTelecom, accessed July 25, 2025, https://telecom.economictimes.indiatimes.com/news/global-it-spending-to-drop-8-in-2020-amid-covid-19-gartner/75718441
  28. Recession and AI investments – 2021.AI, accessed July 25, 2025, https://2021.ai/news/recession-and-ai-investments
  29. Identifying and Prioritizing Artificial Intelligence Use Cases for …, accessed July 25, 2025, https://medium.com/@adnanmasood/identifying-and-prioritizing-artificial-intelligence-use-cases-for-business-value-creation-1042af6c4f93
  30. How to leverage enterprise AI facing a recession – Dataiku | Business Chief North America, accessed July 25, 2025, https://businesschief.com/technology-and-ai/how-to-leverage-enterprise-ai-facing-a-recession-dataiku
  31. Three things to avoid doing with AI in a recession, and what to do instead, accessed July 25, 2025, https://www.computerweekly.com/blog/Data-Matters/Three-things-to-avoid-doing-with-AI-in-a-recession-and-what-to-do-instead
  32. The impact of public market dislocations on private markets – Financier Worldwide, accessed July 25, 2025, https://www.financierworldwide.com/the-impact-of-public-market-dislocations-on-private-markets
  33. (PDF) Private Firm Valuation Using Multiples: Can Artificial Intelligence Algorithms Learn Better Peer Groups? – ResearchGate, accessed July 25, 2025, https://www.researchgate.net/publication/380867314_Private_Firm_Valuation_Using_Multiples_Can_Artificial_Intelligence_Algorithms_Learn_Better_Peer_Groups
  34. AI Valuation: The Overlooked Drivers of Value – Aventis Advisors, accessed July 25, 2025, https://aventis-advisors.com/ai-valuation-the-overlooked-drivers-of-value/
  35. Public Vs. Private Assets: The Big Switch | Bain & Company, accessed July 25, 2025, https://www.bain.com/insights/private-multiples-global-private-equity-report-2019/
  36. The Geopolitics of Semiconductors – Eurasia Group, accessed July 25, 2025, https://www.eurasiagroup.net/files/upload/Geopolitics-Semiconductors.pdf
  37. EMERGING RESILIENCE IN THE SEMICONDUCTOR SUPPLY …, accessed July 25, 2025, https://www.semiconductors.org/wp-content/uploads/2024/05/Report_Emerging-Resilience-in-the-Semiconductor-Supply-Chain.pdf
  38. Read It and Weep: Here’s How Bad Nvidia GPU Prices Got in a Single Year – PCMag UK, accessed July 25, 2025, https://uk.pcmag.com/graphics-cards/137173/read-it-and-weep-heres-how-bad-nvidia-gpu-prices-got-in-a-single-year
  39. The Evolution of GPU Pricing: A Deep Dive into Cost per FP32 FLOP for Hyperscalers, accessed July 25, 2025, https://medium.com/@cli_87015/the-evolution-of-gpu-pricing-a-deep-dive-into-cost-per-fp32-flop-for-hyperscalers-cbf072b85bb5
  40. Data center power crunch: Meeting the power demands of the AI era – ERM, accessed July 25, 2025, https://www.erm.com/insights/data-center-power-crunch-meeting-the-power-demands-of-the-ai-era/
  41. DOE Releases New Report Evaluating Increase in Electricity Demand from Data Centers, accessed July 25, 2025, https://www.energy.gov/articles/doe-releases-new-report-evaluating-increase-electricity-demand-data-centers
  42. Mapping the Semiconductor Supply Chain: The Critical Role of the …, accessed July 25, 2025, https://www.csis.org/analysis/mapping-semiconductor-supply-chain-critical-role-indo-pacific-region
  43. Mapping the Semiconductor Supply Chain – AWS, accessed July 25, 2025, https://csis-website-prod.s3.amazonaws.com/s3fs-public/2023-05/230530_Thadani_MappingSemiconductor_SupplyChain.pdf?VersionId=SK1wKUNf_.qSF3kzMF.aG8dwd.fFTURH
  44. Risks in the Semiconductor Manufacturing and Advanced … – CSIS, accessed July 25, 2025, https://www.csis.org/analysis/risks-semiconductor-manufacturing-and-advanced-packaging-supply-chain
  45. Chip Challenges: Semiconductors and Supply Chain Risks – Exiger, accessed July 25, 2025, https://www.exiger.com/perspectives/chip-challenges-semiconductors-and-supply-chain-risks/
  46. Securing Semiconductor Supply Chains in the Indo-Pacific Economic Framework for Prosperity – CSIS, accessed July 25, 2025, https://www.csis.org/analysis/securing-semiconductor-supply-chains-indo-pacific-economic-framework-prosperity
  47. [2507.10613] Sub-Scaling Laws: On the Role of Data Density and …, accessed July 25, 2025, https://www.arxiv.org/abs/2507.10613
  48. Position: AI Scaling: From Up to Down and Out – arXiv, accessed July 25, 2025, https://arxiv.org/pdf/2502.01677
  49. Moore’s law – Wikipedia, accessed July 25, 2025, https://en.wikipedia.org/wiki/Moore%27s_law
  50. Moore’s law and the Technology S-Curve – Gwern.net, accessed July 25, 2025, https://gwern.net/doc/technology/2004-bowden.pdf
  51. The Race to Efficiency: A New Perspective on AI Scaling Laws arXiv …, accessed July 25, 2025, https://arxiv.org/abs/2501.02156
  52. The Race to Efficiency: A New Perspective on AI Scaling Laws arXiv …, accessed July 25, 2025, https://arxiv.org/pdf/2501.02156
  53. Busting the top myths about AI and energy efficiency – Atlantic Council, accessed July 25, 2025, https://www.atlanticcouncil.org/content-series/global-energy-agenda/busting-the-top-myths-about-ai-and-energy-efficiency/
  54. Exploring AI Accelerator Options for Optimized AI Performance – PuppyAgent, accessed July 25, 2025, https://www.puppyagent.com/blog/Exploring-AI-Accelerator-Options-for-Optimized-AI-Performance
  55. Power Estimation and Energy Efficiency of AI Accelerators on … – MDPI, accessed July 25, 2025, https://www.mdpi.com/1996-1073/18/14/3840
  56. The DeepSeek Effect: Rewriting AI Economics Through Algorithmic …, accessed July 25, 2025, https://medium.com/@aiml_58187/the-deepseek-effect-rewriting-ai-economics-through-algorithmic-efficiency-part-1-46cf9b2e9930
  57. Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark | OpenReview, accessed July 25, 2025, https://openreview.net/forum?id=sMwYn2lZjO
  58. Primers • Model Compression – aman.ai, accessed July 25, 2025, https://aman.ai/primers/ai/model-compression/
  59. Mixture Compressor for Mixture-of-Experts LLMs Gains More | OpenReview, accessed July 25, 2025, https://openreview.net/forum?id=hheFYjOsWO
  60. Algorithmic progress likely spurs more spending on compute, not …, accessed July 25, 2025, https://epoch.ai/gradient-updates/algorithmic-progress-likely-spurs-more-spending-on-compute-not-less
  61. Is your organization’s infrastructure ready for the new hybrid cloud? – Deloitte, accessed July 25, 2025, https://www.deloitte.com/us/en/insights/topics/digital-transformation/future-ready-ai-infrastructure.html
  62. Leading with Edge Computing: How to reinvent with data and AI – Accenture, accessed July 25, 2025, https://www.accenture.com/content/dam/accenture/final/accenture-com/document-2/Accenture-Leading-With-Edge-Computing.pdf
  63. Where do Next Generation Edge Computing and AI Intersect? | Blog – OpenInfra Foundation, accessed July 25, 2025, https://openinfra.org/blog/where-do-next-generation-edge-and-ai-intersect/
  64. A Brief History of Neuromorphic Computing – Knowm.org, accessed July 25, 2025, https://knowm.org/a-brief-history-of-neuromorphic-computing/
  65. (PDF) The road to commercial success for neuromorphic technologies, accessed July 25, 2025, https://www.researchgate.net/publication/390804245_The_road_to_commercial_success_for_neuromorphic_technologies
  66. Neuromorphic Computing: From Materials to Systems Architecture – DOE Office of Science, accessed July 25, 2025, https://science.osti.gov/-/media/bes/pdf/reports/2016/Neuromorphic_Computing_rpt.pdf
  67. Google Quantum AI Head Sees Commercial Quantum Within Five …, accessed July 25, 2025, https://thequantuminsider.com/2025/02/05/google-quantum-ai-head-sees-commercial-quantum-within-five-years/
  68. The timelines: when can we expect useful quantum computers?, accessed July 25, 2025, https://introtoquantum.org/essentials/timelines/
  69. Understanding the timeline of quantum computing: when will it become reality? – Sectigo, accessed July 25, 2025, https://www.sectigo.com/resource-library/quantum-computing-timeline-things-to-know
  70. What’s Old Is New Again – Communications of the ACM, accessed July 25, 2025, https://cacm.acm.org/news/whats-old-is-new-again/
  71. Analog computer – Wikipedia, accessed July 25, 2025, https://en.wikipedia.org/wiki/Analog_computer
  72. The History of Artificial Intelligence – IBM, accessed July 25, 2025, https://www.ibm.com/think/topics/history-of-artificial-intelligence
  73. For AI Safety Regulation, a Bird in the Hand Is Worth Many in the …, accessed July 25, 2025, https://www.lawfaremedia.org/article/for-ai-safety-regulation–a-bird-in-the-hand-is-worth-many-in-the-bush
  74. Understanding the Artificial Intelligence Diffusion Framework – RAND Corporation, accessed July 25, 2025, https://www.rand.org/pubs/perspectives/PEA3776-1.html
  75. Commerce Department Imposes Sweeping Global Restrictions on AI Technologies | Advisories | Arnold & Porter, accessed July 25, 2025, https://www.arnoldporter.com/en/perspectives/advisories/2025/01/commerce-dept-global-restrictions-on-ai-technologies
  76. Export Controls on AI Chips: Biden’s Overreach Risks U.S. Leadership in Tech | ITIF, accessed July 25, 2025, https://itif.org/publications/2025/01/07/export-controls-on-ai-chips-bidens-overreach-risks-us-leadership-in-tech/
  77. How Much Will the Artificial Intelligence Act Cost Europe? – Center for Data Innovation, accessed July 25, 2025, https://www2.datainnovation.org/2021-aia-costs.pdf
  78. It’s Too Hard for Small and Medium-Sized Businesses to Comply With the EU AI Act, accessed July 25, 2025, https://www.aipolicybulletin.org/articles/its-too-hard-for-small-and-medium-sized-businesses-to-comply-with-eu-ai-act-heres-what-to-do
  79. EU AI Act Compliance Analysis: General-Purpose AI Models in Focus – The Future Society, accessed July 25, 2025, https://thefuturesociety.org/wp-content/uploads/2023/12/EU-AI-Act-Compliance-Analysis.pdf
  80. Antitrust in the AI Era – Federation of American Scientists, accessed July 25, 2025, https://fas.org/publication/antitrust-in-the-ai-era/
  81. Big Tech’s Decade of A.I. Shopping – updated 7.18.25 | Mogin Law LLP – JDSupra, accessed July 25, 2025, https://www.jdsupra.com/legalnews/big-tech-s-decade-of-a-i-shopping-4584846/
  82. Antitrust and Artificial Intelligence: How Breaking Up Big Tech Could Affect the Pentagon’s Access to AI | Center for Security and Emerging Technology – CSET, accessed July 25, 2025, https://cset.georgetown.edu/publication/antitrust-and-artificial-intelligence-how-breaking-up-big-tech-could-affect-pentagons-access-to-ai/
  83. 5Qs: Crane Discusses the Overwhelming Impact of Artificial Intelligence on Antitrust Law, accessed July 25, 2025, https://michigan.law.umich.edu/news/5qs-crane-discusses-overwhelming-impact-artificial-intelligence-antitrust-law
  84. White Paper: The Power Strain: Can the Grid Manage the Data Center Boom?, accessed July 25, 2025, https://www.wmeng.com/news-events/the-power-strain-can-the-u-s-grid-handle-the-ai-and-data-center-boom/
  85. Data centers are overwhelming the grid. Could they… | Canary Media, accessed July 25, 2025, https://www.canarymedia.com/articles/utilities/data-centers-are-overwhelming-the-grid-could-they-help-it-instead
  86. 6 ways data centres can cut their emissions: A case study | World Economic Forum, accessed July 25, 2025, https://www.weforum.org/stories/2025/01/6-ways-data-centres-can-cut-emissions/
  87. What Effects are Carbon Pressures Having on Data Centres …, accessed July 25, 2025, https://technologymagazine.com/cloud-computing/data-centres-face-reckoning-as-carbon-pressures-rise
  88. The Moon Landing, the Rise of Data Centers, and a Cloud of Carbon – CSC Blog, accessed July 25, 2025, https://blog.cscglobal.com/the-moon-landing-the-rise-of-data-centers-and-a-cloud-of-carbon/
  89. Data Center Carbon Footprint: Concepts and Metrics – Device42, accessed July 25, 2025, https://www.device42.com/data-center-infrastructure-management-guide/data-center-carbon-footprint/
  90. Trump’s AI plan calls for massive data centers. Here’s how it may affect energy in the US, accessed July 25, 2025, https://apnews.com/article/trump-artificial-intelligence-energy-data-centers-f216660b80f992ae303b348dac0b2f87
  91. Gartner estimates AI adoption to increase from 1% to 33% by 2028 – TI Inside, accessed July 25, 2025, https://tiinside.com.br/en/10/02/2025/gartner-estima-aumento-de-1-a-33-na-adocao-da-ia-ate-2028/
  92. Why AI Projects Fail – And What Successful Companies Do …, accessed July 25, 2025, https://addepto.com/blog/why-ai-projects-fail-and-what-successful-companies-do-differently/
  93. The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed – RAND Corporation, accessed July 25, 2025, https://www.rand.org/pubs/research_reports/RRA2680-1.html
  94. Why Most Enterprise AI Projects Fail — and the Patterns That Actually Work – WorkOS, accessed July 25, 2025, https://workos.com/blog/why-most-enterprise-ai-projects-fail-patterns-that-work
  95. The AI Implementation Paradox: Why 42% of Enterprise Projects Fail Despite Record Adoption | by Alexander Stahl | Jun, 2025 | Medium, accessed July 25, 2025, https://medium.com/@stahl950/the-ai-implementation-paradox-why-42-of-enterprise-projects-fail-despite-record-adoption-107a62c6784a
  96. How to maximize ROI on AI in 2025 – IBM, accessed July 25, 2025, https://www.ibm.com/think/insights/ai-roi
  97. 2025: The State of Consumer AI | Menlo Ventures, accessed July 25, 2025, https://menlovc.com/perspective/2025-the-state-of-consumer-ai/
  98. Only 8% of Americans would pay extra for AI, according to ZDNET-Aberdeen research, accessed July 25, 2025, https://www.zdnet.com/article/only-8-of-americans-would-pay-extra-for-ai-according-to-zdnet-aberdeen-research/
  99. The rise of AI subscriptions: 1 in 10 Americans now pay for premium AI tools – ContentGrip, accessed July 25, 2025, https://www.contentgrip.com/the-rise-of-ai-subscriptions/
  100. A surprising 80% of people would pay for Apple Intelligence, according to a new survey – here’s why | TechRadar, accessed July 25, 2025, https://www.techradar.com/computing/artificial-intelligence/a-surprising-80-percent-of-people-would-pay-for-apple-intelligence-according-to-a-new-survey-heres-why
  101. Global gen AI spending to surge 76% in 2025, topping $640 billion: Gartner, accessed July 25, 2025, https://www.rcrwireless.com/20250401/business-investing/gen-ai-gartner
  102. Artificial Intelligence Index Report 2025 – AWS, accessed July 25, 2025, https://hai-production.s3.amazonaws.com/files/hai_ai_index_report_2025.pdf
  103. Open-Source AI: How it Impacts Your Cloud Strategy – vexxhost, accessed July 25, 2025, https://vexxhost.com/blog/open-source-AI-how-it-impacts-your-cloud-strategy/
  104. Microsoft’s AI-Driven Cloud Growth and Margin Potential – AInvest, accessed July 25, 2025, https://www.ainvest.com/news/microsoft-ai-driven-cloud-growth-margin-potential-2507/
  105. Efficient Growth for Software Companies: AI as a Catalyst for Deeper Efficiencies – West Monroe, accessed July 25, 2025, https://www.westmonroe.com/insights/margin-expansion-for-software-companies
  106. 1Q 2025 Data Center Market Recap – datacenterHawk, accessed July 25, 2025, https://datacenterhawk.com/resources/market-insights/1q-2025-data-center-market-recap
  107. Global Data Center Trends 2025 | CBRE, accessed July 25, 2025, https://www.cbre.com/insights/reports/global-data-center-trends-2025
  108. 2025 Global Data Center Outlook – JLL, accessed July 25, 2025, https://www.jll.com/en-au/insights/data-center-outlook
  109. AI startups eat up 57.9% of global venture dollars as fear of missing …, accessed July 25, 2025, https://pitchbook.com/news/articles/ai-startups-57-9-percent-global-venture-dollars-fear-of-missing-out-drives-up-dealmaking-q1-2025
  110. Q3 2025 PitchBook Analyst Note: Fintech’s AI Premium | PitchBook, accessed July 25, 2025, https://pitchbook.com/news/reports/q3-2025-pitchbook-analyst-note-fintechs-ai-premium
  111. AI Startup Valuations in 2025: Benchmarks Across 400+ Companies …, accessed July 25, 2025, https://www.finrofca.com/news/ai-startup-valuations-q1-2025-edition
  112. Citigroup Ups Focus on Privately-Held AI Startups – PYMNTS.com, accessed July 25, 2025, https://www.pymnts.com/startups/2025/citigroup-ups-focus-privately-held-ai-startups/
  113. www.nasdaq.com, accessed July 25, 2025, https://www.nasdaq.com/articles/will-nvidia-reach-5-trillion-market-cap-2025#:~:text=Valuation%20would%20be%20higher%20than,pushing%20valuation%20to%20extreme%20levels.
  114. Should You Buy Advanced Micro Devices (AMD) Stock Before Aug. 5? – Mitrade, accessed July 25, 2025, https://www.mitrade.com/insights/news/live-news/article-8-983528-20250724
  115. Could Arm Holdings Stock Help You Become a Millionaire? | The Motley Fool, accessed July 25, 2025, https://www.fool.com/investing/2025/07/19/could-arm-holdings-stock-become-millionaire/
  116. www.forbes.com, accessed July 25, 2025, https://www.forbes.com/sites/investor-hub/article/super-micro-computer-smci-stock-2025-forecast/#:~:text=Analysts%20expect%2070%25%20revenue%20growth,25%25%20for%20revenue%20and%20EPS.
  117. Prediction: Nvidia Will Be a Top Stock to Own for the Back Half of …, accessed July 25, 2025, https://www.nasdaq.com/articles/prediction-nvidia-will-be-top-stock-own-back-half-2025
  118. Why Arm Holdings Gained 31% in the First Half of 2025 | The Motley …, accessed July 25, 2025, https://www.fool.com/investing/2025/07/10/why-arm-holdings-gained-31-in-the-first-half-of-20/
  119. Down 16%, Should You Buy the Dip on Arm Holdings? – Nasdaq, accessed July 25, 2025, https://www.nasdaq.com/articles/down-16-should-you-buy-dip-arm-holdings
  120. Data Centers Lead REIT Investment Surge With Low Cap Rates – CRE Daily, accessed July 25, 2025, https://www.credaily.com/briefs/data-centers-lead-reit-investment-surge-with-low-cap-rates/
  121. EBITDA Multiples Across Industries (2025) | Eqvista, accessed July 25, 2025, https://eqvista.com/ebitda-multiples-by-industry/
  122. The SaaS VC Report 2025 – SaasRise, accessed July 25, 2025, https://www.saasrise.com/blog/the-saas-vc-report-2025
  123. Revenue Multiples by Sector (US) – NYU Stern, accessed July 25, 2025, https://pages.stern.nyu.edu/~adamodar/New_Home_Page/datafile/psdata.html
  124. NVIDIA vs. Super Micro Computer: With Return Forecast Of 85%, Super Micro Computer Is A Better Bet | Trefis, accessed July 25, 2025, https://www.trefis.com/data/companies/NVDA/no-login-required/WPeDVbJZ/NVIDIA-vs-Super-Micro-Computer-With-Return-Forecast-Of-85-Super-Micro-Computer-Is-A-Better-Bet
  125. AMD Reports First Quarter 2025 Financial Results :: Advanced Micro …, accessed July 25, 2025, https://ir.amd.com/news-events/press-releases/detail/1247/amd-reports-first-quarter-2025-financial-results
  126. Gates Industrial: A Hidden Gem in the AI Energy Infrastructure Boom – AInvest, accessed July 25, 2025, https://www.ainvest.com/news/gates-industrial-a-hidden-gem-in-ai-energy-infrastructure-boom-250710109d6c27a4103f5dab/