April 20, 2026
Amazon + Anthropic
If this is a capacity commitment, it shows up in capex, utilization, and the networking layer – not in one evening’s price action.
The headline says “Amazon expands partnership with Anthropic.”
The important part is what that forces Amazon to do next.
Because once a model builder gets treated like a long-duration workload, the cloud provider stops talking about “AI excitement” and starts talking – quietly – about megawatts, delivery calendars, and how fast new capacity can become billable. That’s the dividing line between AI as marketing and AI as a line item.
So I’m going to mostly ignore the price jump. The pop is a symptom. The partnership mechanics are the cause.
Start with the one number investors can’t avoid anymore: Amazon guided to about $200 billion of capital expenditures for 2026. ([ir.aboutamazon.com](https://ir.aboutamazon.com/news-release/news-release-details/2026/Amazon-com-Announces-Fourth-Quarter-Results/default.aspx?utm_source=openai))
That figure is not subtle. It’s not a “we’ll see.” It’s Amazon signaling that the next phase of cloud competition will be fought on physical buildout and proprietary silicon, and that waiting for perfect visibility is the same as forfeiting capacity.
AWS, for context, posted $35.6B in Q4 revenue (+24% year over year) and $12.5B in operating income. That’s roughly a 35% operating margin for the quarter. ([ir.aboutamazon.com](https://ir.aboutamazon.com/news-release/news-release-details/2026/Amazon-com-Announces-Fourth-Quarter-Results/default.aspx?utm_source=openai))
Those are “cash engine” numbers. They also create an uncomfortable question: how do you keep a ~35% margin business healthy while you pour $200B into capacity that must be utilized, not merely installed?
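As a quick sanity check on that margin figure, here is the arithmetic using the quarter's reported numbers (a sketch, not a model):

```python
# AWS Q4 figures cited above: $35.6B revenue, $12.5B operating income.
revenue_b = 35.6
operating_income_b = 12.5

operating_margin = operating_income_b / revenue_b
print(f"{operating_margin:.1%}")  # 35.1%
```

That's the "roughly 35%" in the text: a margin most businesses would envy, and exactly the kind of margin a $200B buildout puts at risk if utilization lags.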
This is where Anthropic matters. Not as a logo. As an anchor tenant.
What the partnership could mean, in plain economics
Cloud economics don’t reward “best model.” They reward “persistent usage.”
If Anthropic is deepening its commitment to AWS for mission-critical training and future foundation model development, that’s a utilization story. It’s Amazon trying to secure the kind of demand that keeps new data centers from being empty shells in their first 6–12 months. ([aboutamazon.com](https://www.aboutamazon.com/news/company-news/amazon-anthropic-ai-investment?utm_source=openai))
And utilization is the whole tension in 2026. Amazon’s CEO framed it pretty directly on the Q4 call: as fast as Amazon installs AI capacity, it’s being monetized. ([geekwire.com](https://www.geekwire.com/2026/amazon-stock-sinks-10-after-q4-profit-miss-as-jassy-signals-200b-in-capital-spending/?utm_source=openai))
That line is doing more work than it looks like it’s doing. It suggests two things at once:
- Demand is pulling capacity forward (good for revenue growth, good for supplier ordering).
- Amazon believes the payback window is manageable (or at least manageable enough that they’d rather risk overbuilding than risk losing share).
What’s interesting is how that changes the bar for “good news.” It’s no longer “AI demand exists.” It’s “AI demand exists at a price that doesn’t poison long-run margins.” Same demand. Different filter.
Now layer in the capital allocation side. Back in February 2025, Amazon said it expected about $100B of capex in 2025, largely driven by AI investments. ([cnbc.com](https://www.cnbc.com/2025/02/06/amazon-expects-to-spend-100-billion-on-capital-expenditures-in-2025.html?utm_source=openai))
By early February 2026, Amazon was talking about roughly $200B for 2026. ([ir.aboutamazon.com](https://ir.aboutamazon.com/news-release/news-release-details/2026/Amazon-com-Announces-Fourth-Quarter-Results/default.aspx?utm_source=openai))
Even if you allow for category differences in what gets counted as capex, the direction is clear: the company is migrating into a heavier infrastructure cycle. That makes customer “stickiness” less optional. It becomes a core risk control.
Slight tangent, but it matters. People keep treating AI spending as a single bucket called “GPUs.” In practice it’s compute, networking, storage, power, cooling, and the painful operational layer that keeps uptime high. The workloads don’t care about your press release. They care about throughput and latency.
So when a partnership expands, it often doesn’t mean “more chips” in a generic sense. It means “more constraints” in a very specific sense: more interconnect, more optics, more retimers, more switching capacity, more custom silicon design cycles. And it means procurement teams get louder.
Why Marvell belongs in the conversation (even if you’re focused on AWS)
Marvell is one of the names that tends to show up when hyperscalers shift from experimentation to scaling. Not because it’s flashy. Because the scaling phase is a bandwidth problem disguised as an AI problem.
Marvell and AWS have already described an expanded collaboration that spans a wide range of data center semiconductor components – including custom AI products, optical DSPs, active electrical cable DSPs, PCIe retimers, optical modules, and Ethernet switching silicon. ([marvell.com](https://www.marvell.com/company/newsroom/marvell-expands-strategic-collaboration-aws-enable-accelerated-infrastructure-ai-cloud.html?utm_source=openai))
That list reads like plumbing. It is plumbing. It’s also the stuff that determines whether AI clusters actually scale efficiently across racks and across buildings.
If you believe Amazon is serious about $200B of 2026 capex and Anthropic is deepening its AWS reliance, then the “plumbing spend” has two tailwinds:
- More capacity built means more physical deployment of interconnect and networking silicon.
- Higher utilization targets tend to push architectures toward higher efficiency, lower bottlenecks, and fewer weak links in the data path.
There’s a catch, and it’s not small. Hyperscalers are tough customers. They buy at scale, they negotiate hard, and they increasingly want tailored silicon. That can be excellent volume. It can also be a margin and concentration puzzle for suppliers.
So the right question is not “does AI help MRVL?” It’s “does the mix of custom and connectivity products improve earnings power without giving away pricing?” That’s where the next 2–4 quarters matter.
The market’s expectation vs. the next measurable proof
Here’s where I’m at. Investors are gradually shifting from a belief statement (“AI is the future”) to a measurement problem (“what does this do to cash flow over the next 24 months”). It’s the same optimism, but with a stopwatch.
For Amazon, the near-term proof doesn’t require guessing model adoption curves. It’s more mechanical:
- AWS growth rate stability – Q4 was 24% year over year. If that holds or accelerates while capex ramps, the spend looks justified. ([ir.aboutamazon.com](https://ir.aboutamazon.com/news-release/news-release-details/2026/Amazon-com-Announces-Fourth-Quarter-Results/default.aspx?utm_source=openai))
- AWS operating margin behavior – a 35% quarter is strong; the question is whether incremental AI capacity comes with margin dilution or maintains scale economics. ([globaldatacenterhub.com](https://www.globaldatacenterhub.com/p/amazon-q4-2025-earnings-the-200b?utm_source=openai))
- Capex cadence and commentary – whether $200B is front-loaded, and what portion is clearly AWS-related, matters for investor tolerance. ([geekwire.com](https://www.geekwire.com/2026/amazon-stock-sinks-10-after-q4-profit-miss-as-jassy-signals-200b-in-capital-spending/?utm_source=openai))
For the supplier layer (Marvell as an example), proof often comes through a different door: design win durability, production ramps, and whether “AI-related” demand is steady enough to model rather than lumpy enough to trade.
And this is where it gets interesting. When cloud providers commit to this magnitude of spend, they inevitably prioritize standardization and cost control. That can compress unit economics for suppliers even while volumes rise. Volume without pricing power isn’t a victory; it’s just busyness.
Options market: keep it simple and keep it defined-risk
I’ll keep this practical and light on jargon.
When a headline like this hits, option premiums often get more expensive in the front end. Sometimes that’s justified by real event risk. Sometimes it’s just demand for short-dated exposure.
If you’re evaluating AMZN or MRVL options around this theme, the two numbers that matter most are:
- How expensive implied volatility is versus its own recent history (percentile/rank). High relative IV changes what structures make sense.
- The expected move into the next known catalyst (earnings, major events). If the market is already pricing a large move, you’re paying for it up front.
For traders expecting upside continuation, defined-risk call spreads often express that view without needing a straight-line surge. For traders expecting consolidation, defined-risk premium-selling spreads can align better with the “volatility is expensive” condition. The point isn’t the structure name. The point is staying explicit about max loss and time horizon.
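The “explicit about max loss” point is just arithmetic for a debit call spread. A minimal sketch with hypothetical strikes and pricing (none of these numbers are real quotes):

```python
def call_spread_risk(long_strike, short_strike, net_debit, contracts=1, multiplier=100):
    """Defined-risk debit call spread: max loss is the debit paid;
    max gain is the strike width minus the debit."""
    width = short_strike - long_strike
    max_loss = net_debit * multiplier * contracts
    max_gain = (width - net_debit) * multiplier * contracts
    return max_loss, max_gain

# Hypothetical: buy the 230 call, sell the 240 call, pay a 3.50 net debit.
max_loss, max_gain = call_spread_risk(230, 240, 3.50)
print(max_loss, max_gain)  # 350.0 650.0
```

The structure caps both sides up front, which is the whole appeal: you know the worst case the moment you enter, regardless of how the headline trade resolves.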
What could go wrong (and it’s not the obvious thing)
The bear case isn’t “AI demand disappears.” That’s too clean. The real risk is that demand is real but the economics get competitive fast.
- Cloud pricing pressure – large buyers push pricing down, and AI workloads are already cost-sensitive.
- Underutilization – building capacity is easy; keeping it busy at healthy pricing is harder.
- Power and deployment friction – power availability, grid constraints, and build approvals can slow revenue recognition even if demand is waiting.
- Supplier margin squeeze – hyperscaler leverage can shift economics away from component vendors, even in a strong volume cycle.
None of those risks are dramatic. That’s what makes them real. They show up gradually, quarter by quarter.
So if you want the cleanest way to think about Amazon’s expanded Anthropic partnership, try this reframing: it’s less about “another AI announcement” and more about Amazon underwriting utilization for a giant 2026 buildout.
The market will debate the story all day. The proof will be less poetic: AWS growth, AWS margins, capex cadence, and whether the connectivity layer keeps getting pulled forward.
I’m not closing this with a neat summary because the next useful signal isn’t going to be a quote. It’s going to be the next earnings cycle and the next capex update. If $200B is the plan, the only remaining question is how quickly the dollars convert into billable usage.
