This is private exploration and general reflection, not financial, investment, tax, or legal advice.
A lot of AI investing discussion still sounds like software-era pattern matching. People want the next obvious winner: the model company, the app layer, the fastest-growing interface, the brand with the most cultural gravity. I understand the instinct, but I think that framing misses where a lot of the real constraint still lives.
If demand for AI compute keeps compounding, the more useful investing question is often: where does the system still get stuck? Which suppliers remain hard to replace? Which components become more important as scale rises instead of less? That is a different exercise. It is less about charisma and more about industrial dependency.
The first bottleneck is memory, not branding
The most obvious place to start is high-bandwidth memory (HBM). Frontier AI accelerators are not just "fast chips." They are tightly integrated systems whose usefulness depends on moving enormous amounts of data without stalling the compute. That makes HBM a structural part of the AI stack, not a peripheral accessory.
As of April 30, 2026, the public picture still looks highly concentrated. SK hynix said in September 2025 that it had completed HBM4 development and was ready for mass production. Samsung said on February 12, 2026 that it had begun mass production of HBM4 and shipped commercial products. Micron said on March 16, 2026 that it had begun volume shipment of HBM4 for NVIDIA Vera Rubin. That is not a crowded field. It is closer to an oligopoly with brutal technical barriers.
That matters because it shifts the AI investing conversation away from generic "semis will benefit" language. Some semiconductor exposure is ordinary cyclical exposure. HBM is different. It is a narrower, more strategic choke point tied directly to the economics of training and inference at the frontier.
The AI winner still needs somebody else to package the system
Even if you correctly identify the critical memory suppliers, that still is not the whole thesis. Accelerators do not ship as isolated memory stacks. They have to be integrated into packages that can actually deliver bandwidth, thermals, and yield at scale.
TSMC describes CoWoS as its proprietary advanced packaging service for high-performance computing and explicitly says it is expanding the CoWoS portfolio to accommodate more advanced nodes and high-bandwidth memories. In its most recent annual report, TSMC also said strong AI demand and advanced packaging drove growth and that it continued expanding CoWoS capacity. That is an important combination. It means the bottleneck is not only the memory die. It is also the ability to assemble the whole compute package around that memory in volume.
This is why I am skeptical of AI investing takes that stop at "buy the GPU leader" or "buy anything exposed to datacenter demand." That is too coarse. There are layers underneath the obvious product winner that can exert real control over how much revenue the system can physically convert into shipped hardware.
Tooling matters because somebody has to build the capacity
The next level down is manufacturing equipment. If the world decides it wants much more HBM and advanced logic, someone has to sell the tools that make that expansion possible. That is where the picks-and-shovels framing becomes more interesting to me than the headline model race.
ASML's 2025 annual report explicitly says memory momentum is being fueled by investment in HBM and DDR5 to support AI-related applications. Applied Materials reinforced the same point in March 2026 when it announced a long-term R&D partnership with SK hynix focused on next-generation DRAM, HBM, and advanced packaging. That does not mean every tool vendor is equally attractive, and it definitely does not mean valuation no longer matters. It does mean the AI buildout is not just a story about chip designers. It is also a story about who gets paid when the memory and foundry ecosystem has to add capability.
I would treat this as a useful filtering question: is the company merely adjacent to AI enthusiasm, or does it sell something that the constrained parts of the supply chain cannot expand without? Those are not the same thing.
As clusters get larger, networking and cooling stop being side notes
One reason I like the bottlenecks framing is that it keeps widening in the right direction. Once the accelerators are dense enough, the next constraints move outward into the rest of the physical system.
Broadcom said in June 2025 that Tomahawk 6 was shipping with 102.4 Tbps of switching capacity for scale-up and scale-out AI networks. That is not just a nice-to-have incremental feature. It is a reminder that when AI clusters move toward hundreds of thousands or millions of XPUs, interconnect stops being background infrastructure and becomes part of the thesis.
The same thing is happening with power and thermal infrastructure. Vertiv said in February 2025 that 30 kW racks were becoming the standard and that some AI deployments were already reaching 120 kW or higher. That is a different world from ordinary datacenter build assumptions. Once you accept that, the AI story is no longer only about compute demand. It is also about the industrial ability to feed, connect, and cool that compute.
What I would actually do with a bottlenecks sleeve
If I were building a watchlist from this thesis, I would not rank companies by how often they are mentioned in AI discourse. I would rank them by substitution difficulty, time-to-capacity, and whether demand forces customers to keep coming back.
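Those three filters can be made mechanical, which is useful if only to expose how much judgment is hiding in the scores. Here is a minimal Python sketch of that ranking. Everything in it is illustrative: the category names, the 0-10 scores, and the 40/30/30 criterion weights are all placeholder assumptions, not research outputs.

```python
# Hypothetical bottleneck-ranking sketch. Scores and weights are
# placeholder judgment calls, not data-derived values.

candidates = {
    # name: (substitution_difficulty, time_to_capacity, demand_recurrence),
    # each scored 0-10 where higher means a tighter bottleneck
    "Litho tooling": (10, 9, 7),
    "HBM supplier": (9, 8, 9),
    "Advanced packaging": (9, 7, 8),
    "Generic semis": (3, 4, 5),
}

def bottleneck_score(sub, ttc, rec, weights=(0.4, 0.3, 0.3)):
    """Weighted average of the three filters; the weights are arbitrary."""
    return weights[0] * sub + weights[1] * ttc + weights[2] * rec

# Rank candidates from tightest bottleneck to loosest.
ranked = sorted(candidates.items(),
                key=lambda kv: bottleneck_score(*kv[1]),
                reverse=True)

for name, scores in ranked:
    print(f"{name}: {bottleneck_score(*scores):.1f}")
```

The point of the exercise is not the numbers. It is that "mentioned often in AI discourse" never appears as an input, so hype alone cannot move a name up the list.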
But a taxonomy is still a dodge. The earlier version of this draft stopped one step too early. If this is a real idea, it should survive contact with actual tickers. So if I had to turn the bottlenecks thesis into a concrete illustrative portfolio today, this is the six-name version I would start with.
It adds to 100%, it uses real public tickers, and every name is there for a specific bottleneck reason rather than because it sounds adjacent to AI on a quarterly call.
| Ticker | Weight | Bottleneck role | Why it makes the cut |
|---|---|---|---|
| TSM | 22% | Advanced packaging and leading-edge foundry | This is the center of the whole thesis. If CoWoS capacity, foundry execution, and packaging yield stay tight, TSMC keeps getting paid before a lot of the glamour layer does. |
| AVGO | 20% | AI networking silicon | Once clusters get huge, the network is part of the computer. Broadcom is one of the cleanest public ways to express that constraint. |
| MU | 18% | HBM memory | Micron is the cleanest U.S.-listed way to own the memory-bandwidth bottleneck directly, even if the global HBM field is broader than one ticker. |
| ASML | 15% | Core lithography tooling | I want one name with a genuinely deep tool moat that sits under both logic and memory capacity expansion. ASML is still the cleanest answer there. |
| VRT | 15% | Power and cooling infrastructure | Dense AI racks are useless if the facility cannot feed and cool them. Vertiv gives the portfolio real thermal and uptime exposure instead of stopping at silicon. |
| AMAT | 10% | Broader semicap and advanced packaging tooling | Applied gives me more direct exposure to materials engineering and packaging throughput instead of pretending one tooling name covers the whole manufacturing stack. |
That is a concentrated sleeve, not a life plan. But it is at least a real portfolio instead of a smart-sounding taxonomy.
The weight logic is not optimizer magic. It is dependency order. I put TSM at the top because packaging plus foundry execution is where a lot of the theoretical AI demand gets converted into shipped systems. I keep AVGO and MU close behind because the accelerator still needs memory bandwidth and the cluster still needs a network that does not collapse into an expensive traffic jam.
ASML and AMAT together make up a quarter of the sleeve because I do not want to pretend the tool chain is a footnote. If memory, logic, and advanced packaging all need more capacity, the ecosystem has to buy more process capability somewhere. VRT stays large enough to matter because AI infrastructure is increasingly constrained by physical deployment, not just chip design. In other words, I do not think cooling belongs in the appendix anymore.
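For the avoidance of spreadsheet errors, the sleeve above is simple enough to check in a few lines. This is a minimal Python sketch that verifies the weights cover exactly 100% and maps them to target dollar amounts; the $10,000 figure is an arbitrary example, and there is deliberately no lot sizing, rebalancing, or tax logic here.

```python
# The six sleeve weights from the table above, as fractions of the whole.
sleeve = {"TSM": 0.22, "AVGO": 0.20, "MU": 0.18,
          "ASML": 0.15, "VRT": 0.15, "AMAT": 0.10}

# Sanity check: the sleeve should account for exactly 100% of the capital.
assert abs(sum(sleeve.values()) - 1.0) < 1e-9

def allocate(total_dollars, weights):
    """Map sleeve weights to target dollar amounts (illustrative only)."""
    return {ticker: round(total_dollars * w, 2)
            for ticker, w in weights.items()}

targets = allocate(10_000, sleeve)
print(targets)  # TSM at the top with $2,200, AMAT at the bottom with $1,000
```

Nothing about this makes the weights right. It only makes the dependency-order argument explicit enough to disagree with.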
I would also leave the app layer and most of the celebrity layer out of this specific sleeve on purpose. That is not because NVIDIA, Microsoft, Amazon, or the application layer do not matter. It is because "AI bottlenecks" and "AI product winners" are different bets, and I would rather not pretend one basket is doing both jobs.
If I wanted to make one refinement, it would probably be at the memory slot. If I were comfortable owning non-U.S. listings directly, I would seriously consider whether part of the MU allocation belongs in SK hynix instead because it is such a central HBM name. I am not doing that in the table because I wanted a simple portfolio with normal public-market tickers, not a global custody debate. But it is the first place I would push if I wanted the sleeve to be even purer.
Why these six and not the obvious AI heroes
The simplest way to say it is this: I am trying to own dependency, not popularity.
NVIDIA can still be a phenomenal business. Microsoft can still monetize AI faster than most people expect. Some application-layer company can still become the cleanest software winner of the cycle. None of that invalidates the bottlenecks frame. It just answers a different question.
This sleeve is built for the narrower question: what parts of the stack stay hard to route around even if the branded winners change? If the answer is memory bandwidth, packaging throughput, tool capacity, cluster networking, and thermal infrastructure, then the portfolio should look like that answer instead of sneaking back toward the obvious mega-cap trade.
The risk is that bottlenecks are great businesses until they are not
The weakness of this whole line of thinking is obvious. A bottleneck can look invincible right before capacity catches up, pricing normalizes, or demand shifts. Semiconductor history is full of periods where a real constraint produced extraordinary economics and then trained the entire ecosystem to remove that constraint as fast as possible.
So I would not treat "critical to AI" as a sufficient thesis. I would still want to know how quickly supply can expand, whether customers are trying to vertically integrate around the choke point, how much geopolitical concentration risk exists, and whether the current scarcity is technological, cyclical, or simply temporary.
That is why I like this framework more as a starting map than as a complete answer. It disciplines the search. It does not replace valuation work, cycle timing, or the possibility that the market already knows all of this and has priced it aggressively.
The useful shift is from narrative to dependency
The real reason this thread interested me is that it pushes the AI conversation away from theater and toward dependency mapping. Instead of asking who sounds most central to the future, ask who the future still depends on when demand becomes physical.
That question produces a less glamorous list. It also tends to produce a more honest one.