There's a photograph from February 2026 that pretty much sums up the state of AI right now. At the India AI Impact Summit in New Delhi, Indian Prime Minister Narendra Modi invited the world's tech leaders onstage for a group photo. Everyone held hands. Well, almost everyone. Sam Altman of OpenAI and Dario Amodei of Anthropic, standing right next to each other, refused to clasp hands and instead raised their fists separately. The internet, predictably, lost its mind.
The awkward moment captured the increasingly icy relations between two rival tech leaders who started off as colleagues. That's not just petty drama. It's a window into what may be the most consequential corporate rivalry in technology right now, one playing out in boardrooms, courtrooms, Super Bowl ads, and billion-dollar compute deals all at once.
Let's start with the money, because the money is genuinely staggering. As of mid-March, OpenAI said it was on pace to generate $25 billion in revenue this year, versus Anthropic's $19 billion. Then, just weeks later, the story flipped: by April 2026 Anthropic had hit $30 billion in annualized revenue, passing OpenAI's $25 billion while reportedly spending roughly a quarter as much on training.
Anthropic crossed $19 billion in annualized revenue by March 2026, then $30 billion by April. For context, that figure was $9 billion at year-end 2025, $1 billion in December 2024, and essentially zero in 2022. Read that again: from $1 billion to $30 billion in well under two years. SaaStr noted that no enterprise technology company in recorded history has compounded at this rate at this scale: not Slack, not Zoom, not Snowflake.
The engine behind all of this? A coding tool. The primary driver was Claude Code, Anthropic's coding assistant, which has been adopted at a pace that outran even the company's internal projections. Claude Code alone generates $2.5 billion in annualized revenue. Engineers and software teams fell in love with it, got their companies to pay for it, and that enterprise momentum snowballed faster than anyone predicted.
Here's what makes this rivalry so fascinating to watch. OpenAI and Anthropic aren't just competing for the same customers. They've made fundamentally different bets about what kind of company wins in AI.
OpenAI's growth is driven largely by ChatGPT, a mass-market consumer product. Anthropic, by contrast, has focused from the start on enterprise customers and API usage: companies that integrate Claude into their own products and processes. That strategic fork now shows up in the revenue mix. Approximately 80% of Anthropic's revenue comes from enterprises, at gross margins of around 40%.
Anthropic is now capturing over 73% of all spending among companies buying AI tools for the first time, according to customer data from Ramp. The number of enterprise customers spending over $1 million annually on Anthropic doubled in just two months. Eight of the Fortune 10 are its clients. That's a seismic shift. When the biggest companies on the planet are quietly choosing your product over your competitor's, that tells you something real about where enterprise confidence is landing.
OpenAI isn't taking any of this lying down. With Anthropic gaining momentum in the booming AI market, OpenAI sent a memo to investors this week slamming its chief rival for "operating on a meaningfully smaller curve," and characterizing the company as compute constrained. OpenAI said it's planning to have 30 gigawatts of compute by 2030, while it expects Anthropic to have roughly 7 to 8 gigawatts by the end of 2027. It's the kind of thing you say when a smaller competitor is eating your lunch and you want investors to keep the faith.
If the revenue race is the economic story of this rivalry, the cybersecurity battle is where things get genuinely strange and a little scary. Anthropic released a preview of its new frontier model, Mythos, which it says will be used by a small group of partner organizations for cybersecurity work. In a previously leaked memo, the AI startup called the model one of its "most powerful" yet. The model's limited debut is part of a new security initiative, dubbed Project Glasswing, in which 12 partner organizations will deploy the model for the purposes of "defensive security work" and to secure critical software.
The scope of what Mythos found is alarming. Anthropic claims that, over the past few weeks, Mythos identified "thousands of zero-day vulnerabilities, many of them critical." Many of the vulnerabilities are one to two decades old. Among the findings: a 27-year-old bug in OpenBSD, an operating system known for its strong security posture. In another case, the model autonomously identified and exploited a 17-year-old remote code execution vulnerability in FreeBSD that lets an unauthenticated attacker anywhere on the internet take complete control of a server running NFS. No human involved. Just the model, a prompt, and a decades-old flaw that had been hiding in plain sight.
The launch partners include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks. Beyond that core group, Anthropic has extended access to over 40 additional organizations that build or maintain critical software infrastructure. Anthropic is committing up to US$100 million in usage credits for Mythos Preview across the effort, along with US$4 million in direct donations to open-source security organizations.
OpenAI moved fast to respond. According to a new report, it is developing a cybersecurity-focused AI model for a restricted program called "Trusted Access for Cyber," putting it in direct competition with Anthropic's Claude Mythos Preview. OpenAI has committed $10 million in API credits to businesses participating in the pilot. Same playbook, smaller check. The pattern of OpenAI following Anthropic's lead in enterprise strategy is becoming hard to ignore.
AI's leading players are now treating their software less like consumer products and more like digital weaponry.
Both companies are hurtling toward public markets, and the stakes could not be higher. The company behind Claude is now in active discussions with Goldman Sachs and JPMorgan Chase about what could be the second-largest AI IPO in history: a $60 billion+ raise targeting October 2026, at a valuation that bankers are privately estimating between $400 billion and $500 billion.
But there's a tension hiding behind those eye-watering revenue numbers. Anthropic is burning cash at an extraordinary rate, with $12 billion earmarked for model training and another $7 billion for inference infrastructure in 2026 alone. OpenAI isn't faring much better: it projects $14 billion in losses for 2026. Anthropic projects positive free cash flow by 2027, while OpenAI has pushed its breakeven target to 2030. These are companies generating more revenue than most Fortune 500 names while still burning through capital at a pace that would make most CFOs sweat through their shirts.
What makes this rivalry genuinely personal is the history. Amodei worked under Altman at OpenAI from 2016 to 2020. His last role was vice president of research, where he focused on safety. During his time at OpenAI, he played an instrumental role in launching GPT-2 and GPT-3. He didn't leave quietly, either. Amodei, along with a small group of employees, became concerned about OpenAI's direction and what he perceived as insufficient attention to safety as the company scaled up its models. In 2021, Amodei, along with his sister and cofounder Daniela Amodei and a dozen other ex-OpenAI employees, started Anthropic with a mission to put safety first.
The jabs have only gotten sharper in 2026. The most recent conflict between Altman and Amodei came after Anthropic, earlier this month, released a four-ad Super Bowl campaign titled "A Time and a Place," indirectly poking fun at OpenAI's recent move to include ads in ChatGPT. Each commercial opened with a single, loaded word splashed across the screen: "betrayal," "deception," "treachery," and "violation." Altman fired back on X, calling it doublespeak and labeling Anthropic "authoritarian." Not exactly the tone you'd expect from two companies that share investors and roots.
There are a few things worth stepping back and appreciating here. First, the speed of this entire story is almost disorienting. Anthropic's annualized revenue has been growing at 10x per year since reaching $1B, outpacing OpenAI's growth of 3.4x per year. That kind of trajectory forces the whole industry to move faster, invest more, and make bigger bets on uncertain infrastructure.
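Back-of-envelope, those growth multiples compound into wildly different timelines. Here's a quick sketch using only the article's rough, self-reported figures (10x and 3.4x year-over-year, both treated as constant, which is of course a simplification):

```python
import math

def years_to_grow(start_b: float, end_b: float, annual_multiple: float) -> float:
    """Years to grow revenue from start_b to end_b (in $B), assuming a
    constant year-over-year growth multiple -- a big simplification."""
    return math.log(end_b / start_b) / math.log(annual_multiple)

# Illustrative figures from the article: Anthropic compounding ~10x/yr
# since $1B, OpenAI ~3.4x/yr. Rough, unaudited numbers.
anthropic_years = years_to_grow(1, 30, 10.0)   # $1B -> $30B at 10x/yr
openai_years = years_to_grow(1, 30, 3.4)       # same climb at 3.4x/yr

print(f"At 10x/yr:  ~{anthropic_years:.1f} years to go $1B -> $30B")
print(f"At 3.4x/yr: ~{openai_years:.1f} years for the same climb")
```

At 10x a year, $1 billion becomes $30 billion in roughly a year and a half, which lines up with the December 2024 to April 2026 timeline above; at 3.4x a year, the same climb takes nearly twice as long.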
Second, the enterprise vs. consumer divide matters more than people realize. The AI race is shifting from who has the best model to who can monetize the fastest, and Anthropic is pulling ahead with the customers that matter most: enterprises. OpenAI has the brand recognition and the consumer base, but ChatGPT loses money on many of those users because OpenAI subsidizes the cost of their token usage.
Third, the cybersecurity moves signal something bigger. The frontier labs have been taking a harder line on distillation this year, with Anthropic publicly revealing what it says are attempts by Chinese firms to copy its models, and three leading labs, Anthropic, Google, and OpenAI, teaming up to identify distillers and block them. The era of open and freely accessible frontier AI is quietly closing, replaced by controlled releases to trusted partners, enterprise-only pilots, and strategic moats built around who gets access to what.
We're watching two of the most capable and well-funded technology companies in history fight over the infrastructure of the future. One raised fist at a time.
OpenAI telling investors Anthropic is compute constrained in the same week Anthropic announced a 3.5 gigawatt compute deal with Google and Broadcom requires some serious confidence or some serious selective reading of the news.
The detail that Claude is the only frontier AI available on all three major clouds simultaneously is a distribution advantage that barely gets mentioned. AWS Bedrock, Google Vertex, and Microsoft Azure Foundry. That's where enterprise developers already live.
A 27-year-old bug in OpenBSD found autonomously by an AI. A 17-year-old remote code execution vulnerability in FreeBSD exploited with no human in the loop. If those were offensive capabilities instead of defensive ones we'd be calling this a national security crisis.
The IPO targeting October 2026 at 400 to 500 billion valuation means Anthropic would be going public at a multiple that makes most tech valuations look conservative. The market's willingness to price growth over profitability has limits and we might be approaching them.
The Super Bowl ads with the words betrayal and deception flashing across the screen were genuinely shocking. Anthropic is not playing nice anymore and honestly good for them.
On the model quality question, the benchmarks are all over the place and both companies commission their own evals. What I trust is production usage data. At my company we run A/B tests on complex tasks monthly. Claude wins on multi-step reasoning and long context consistently. Codex wins on raw code generation speed.
What strikes me is that both companies are building toward AGI but with completely different theories of the customer. Anthropic thinks the path to AGI funding runs through enterprise trust. OpenAI thinks it runs through consumer scale. We'll know in about five years which thesis was right.
Project Glasswing is either a genuine attempt to secure critical infrastructure or the most sophisticated enterprise sales move in tech history. Probably both.
The part about Anthropic never really having a consumer phase is wild when you think about it. They went from research lab to 30 billion ARR enterprise company without ever really needing the consumer flywheel that most assumed was the only path to scale in AI.
Both of these companies have investors who are also investors in each other. Both use Google Cloud infrastructure. Both have Microsoft as a stakeholder in various capacities. The idea that this is a clean binary rivalry is a narrative convenience.
Both companies are burning through cash at a pace that would bankrupt most Fortune 500 firms and we're all just nodding along like this is fine.
Dario left OpenAI over safety concerns and then built a company that just convinced Apple, Microsoft, Google, Nvidia, Amazon, Cisco, and JPMorgan to all join a cybersecurity partnership together. Whatever you think of the rivalry drama, that is a remarkable outcome for someone who walked away from a VP of Research job.
Genuinely cannot tell if this is the most important corporate rivalry of our lifetimes or the most expensive public argument in history. Possibly both at the same time.
Both companies losing billions while generating tens of billions in revenue is the defining financial paradox of the AI era. Every legacy tech company would kill for those growth rates and every rational CFO would be horrified by those burn rates simultaneously.
For what it's worth, I tried Codex after Altman bragged about 3 million weekly users and then went straight back to Claude Code within two days. The gap in output quality for complex multi-file projects is still meaningful.
I genuinely appreciate that this post tries to give a balanced view, but calling the Super Bowl ads indirect pokes is underselling it. Those ads were a direct and personal attack designed to damage OpenAI's consumer brand trust. Calling it a poke is too gentle.
Hot take, Altman's 421-word X post accusing Anthropic of doublespeak after the Super Bowl ads was the least strategic thing any CEO has done this year. Responding at length to your competitor's ad proves the ad landed.
The cloud credit ARR question is the right one. If a significant chunk of that 30 billion is Amazon compute credits flowing back as revenue it's not the same as cash paying enterprise customers. The SEC is going to look at this very carefully.
As someone who works in finance and is watching both IPO preparations closely, the question nobody is asking enough is what percentage of Anthropic's ARR reflects cloud credits from Amazon and Google versus actual cash revenue. That distinction will matter enormously in the S-1.
What nobody has answered satisfactorily yet is what the long term model looks like when compute gets cheap enough that model quality converges across all the major labs. Does the revenue advantage of being enterprise-first persist when the product advantage narrows?
The law of financial gravity comment above deserves more attention. The assumption behind all of this is that compute costs keep falling faster than revenue growth so margins eventually normalize. If that assumption breaks, these burn rates become something much scarier.
the fist bump photo felt very staged to me. Like both CEOs knew exactly what they were doing for the cameras. Good rivalry content but I don't fully buy the spontaneous drama narrative.
The Super Bowl ads were genuinely clever marketing. Using words like betrayal in a commercial aimed implicitly at your competitor while never naming them directly is a textbook competitive ad move. Altman taking the bait and responding publicly was the mistake.
The open source labs like Meta with LLaMA are the real wildcard here. If a capable enough open source model closes the gap with Claude and GPT-5, the entire enterprise pricing premium evaporates and both companies face a different kind of existential question.
The raises their fists separately photo is genuinely going to be in business school case studies about competitive dynamics within ten years.
Three million Codex weekly users versus Claude Code driving 2.5 billion in annualized revenue. User counts and revenue are measuring completely different things and AI company communications are very careful about which metric they cite depending on what story they want to tell.
As someone who works in enterprise software sales, the shift I've seen in 2026 toward Claude is real and it happened faster than anything I've experienced in 15 years. Procurement teams that wouldn't even take an Anthropic meeting in 2024 are now signing multi-year contracts without much negotiation. The brand trust flipped almost overnight.
Not gonna lie, the line about AI leading players treating their software less like consumer products and more like digital weaponry is the most important sentence in this whole piece and it got buried near the bottom.
Speaking from experience building products on top of these APIs, the reliability and uptime differences between Claude and GPT-4 class models matter enormously for production systems. Anthropic's infrastructure reliability improved dramatically in late 2025 and that's part of why enterprise adoption accelerated.
The GitHub commit stat is the one that should concern software educators. We're training computer science students for a job market that is automating the entry level work faster than the curriculum is adapting.
The article says no enterprise technology company in recorded history has compounded at this rate. That's either the most important sentence written about tech in 2026 or a sign we are in the most spectacular bubble in tech history. Possibly both.
To the question above, I think the moat is embedment in enterprise workflows. Once Claude Code is generating 20% of your company's GitHub commits, you do not rip it out. Switching costs in agentic AI are going to be enormous and Anthropic is winning that stickiness race.
The article mentions Chinese firms trying to copy Anthropic's models and the three labs teaming up to block them. That's a significant development that barely got covered in the mainstream press. The era of freely accessible frontier AI is ending faster than most people realize.
Honestly the thing that nobody talks about enough is how Sequoia is backing both OpenAI and Anthropic simultaneously. That breaks every traditional VC rule about funding direct competitors. They've clearly decided this market is big enough that picking sides is a mistake.
I want to push back on the Dario won framing. Revenue leadership in April 2026 is not the end of the race. OpenAI has 900 million users, more compute committed long term, and the most recognized AI brand on earth. This is the middle of the game not the end.
Honestly the most underreported part of all this is the talent competition. Both companies are offering compensation packages that most public companies can't match. Whoever retains the best researchers over the next three years probably wins the model quality race.
The $100 per month ChatGPT Pro tier is interesting pricing because it exactly matches Claude Max 5x. OpenAI isn't trying to undercut, they're trying to match on features and price simultaneously. That tells you they're studying Anthropic's pricing very carefully.
The Mythos cybersecurity findings are what really got me. A model autonomously finding and exploiting a 17-year-old vulnerability in FreeBSD with no human involved is not a demo, that is a preview of something genuinely new and a little terrifying.
Fastest growth in enterprise tech history and the CEO is still fighting about Super Bowl commercials on social media. Silicon Valley is a genuinely strange place.
The article says we're watching two of the most capable and well-funded technology companies in history fight over the infrastructure of the future. I'd add, we're also watching them both lose money at historic speed to do it.
OpenAI's enterprise revenue is now 40% of total and growing toward parity with consumer by end of 2026, while Anthropic is already at 80% enterprise. OpenAI is essentially trying to become more like Anthropic in revenue mix while Anthropic tries to become more like OpenAI in scale. They're converging.
The IPO race is interesting because whoever goes public second has a huge advantage in terms of seeing how the market values the first one. Neither company wants to go first and get priced wrong.
Whatever you think about the personal drama, the actual strategic question being answered in real time is whether safety as a brand attribute translates into durable commercial advantage. Anthropic is providing strong evidence that it does.
From zero to 30 billion ARR in roughly two years. I've worked in enterprise software for over a decade and this genuinely does not have a historical comparison. Nothing in traditional SaaS scaled this way.
My company is one of those 500 enterprises spending over a million a year on Anthropic. The ROI is there but so is the dependency risk. Nobody is talking about what happens if Anthropic raises prices significantly post-IPO.
My entire engineering team switched to Claude Code in January and the productivity difference was noticeable within the first week. I stopped fighting the internal change requests because the output quality made the case for me.
Genuinely, who do you think wins this? Not in terms of revenue right now but in five years when compute gets cheaper and model quality converges across everyone. What's the actual moat?
Eight of the Fortune 10 are Anthropic customers. I keep rereading that because it doesn't fully compute. A company that was essentially pre-revenue in 2024 is now embedded in the largest corporations on earth.
The thing about former colleagues turned rivals is the fighting is always more personal than it looks. Amodei sat in those OpenAI meetings and knows exactly where the bodies are buried. Altman knows exactly what Dario thinks of him. That history makes every public jab land differently.
I switched from ChatGPT Plus to Claude Max in February and the difference in quality for anything involving long documents or complex reasoning is not subtle. I get why enterprises are paying for this.
Hot take, Anthropic winning the enterprise market was inevitable the moment they decided not to chase consumer virality. ChatGPT became a brand associated with hallucinating homework help. Claude became associated with serious work. That positioning difference is worth billions.
Okay but can we talk about how OpenAI sending a memo to investors literally complaining about a competitor is such a weird move? That memo reads like a company that's scared, not confident.
The real story buried in all this is what happens to traditional enterprise software. When Anthropic launched Cowork and SaaS stocks lost two trillion in market cap in a day, that was the market finally pricing in what agentic AI actually means for Salesforce and ServiceNow.
Hot take, the handshake refusal was the most honest moment in the entire history of AI corporate PR. At least they stopped pretending.
The compute gap argument in OpenAI's investor memo is actually quite interesting when you flip it. Anthropic achieving more revenue while spending 4x less on training isn't a weakness, it's a sign their architecture is more efficient. OpenAI spent more and got less. That's the story.
The part about enterprises being the customers that matter most is true but also worth questioning. Consumer AI creates culture, shapes perception, and builds the next generation of professionals who become the enterprise buyers in five years. You can't fully ignore that feedback loop.
Honestly I think both companies are going to be fine long term and all this rivalry drama is great for the industry because it forces faster iteration. The real winners are developers who get better tools every few weeks.
The real battle isn't OpenAI versus Anthropic. It's both of them versus Google, which has the infrastructure, the data, the distribution, and the cloud and is playing a much quieter long game.
The pricing risk post-IPO is real. When you're a private company burning cash you keep prices low to gain share. When you have public market shareholders demanding margins, the pressure to raise prices becomes very different. Enterprise customers locked into multi-year contracts now might face very different terms on renewal.
This is the most important business rivalry since Google versus Microsoft in search and mobile. The difference is this one is moving ten times faster.
Honestly the story I keep waiting for is what the actual model quality difference looks like at the frontier right now. Revenue and drama are interesting but which model is actually smarter for complex reasoning tasks in April 2026 and by how much?
Salesforce took 20 years to reach 30 billion in annual revenue. Anthropic did it in under three years. That sentence alone should end every debate about whether this growth is meaningful.
Dario and Daniela leaving OpenAI with a dozen researchers to start a safety-focused lab and then outcompeting their former employer on revenue is one of the most remarkable stories in recent tech history regardless of how the rivalry ends.
The article frames the Super Bowl ads as Anthropic being aggressive but honestly using the words betrayal and deception about ads in ChatGPT is a bit much. OpenAI putting ads in a free tier is not a moral failing, it's a normal business decision.
To answer the question above, from what I've seen in procurement conversations, the safety branding matters more to regulated industries, finance, healthcare, legal, but for pure engineering teams it really does come down to Claude Code just being better. Both things are true depending on who's buying.
The Mythos name is such a choice. Very much not a subtle signal about what they think they've built.
Every few weeks there's a new biggest private funding round ever and it's always one of these two companies. At some point the private markets for AI capital become their own systemic risk.
Unpopular opinion, the handholding thing at the India summit is being way overanalyzed. Two competitive CEOs who don't particularly like each other didn't want to do a forced photo moment. This happens between executives all the time. The internet turned a slightly awkward photo into a geopolitical event.
79% of OpenAI enterprise customers also pay for Anthropic. So much for this being a zero sum battle. Enterprises are just buying both and letting their teams pick.
Speaking from experience running a software team, the thing about Claude Code generating 90% of its own codebase is both impressive and slightly concerning from a quality control perspective. At some point we need real data on defect rates in AI written production code at scale.
The article is great but I'd push back on the framing that Anthropic is clearly winning. OpenAI has 900 million weekly active ChatGPT users. Consumer AI becomes the operating system for how people interact with information. That's not a small thing to concede.
OpenAI launching a 100 dollar ChatGPT Pro tier specifically to compete with Claude Code right after Anthropic's revenue crossover is such a reactive move. The company that used to set the pace is now responding to the competitor.
Genuinely curious, does anyone know if Anthropic's safety focus actually influences which enterprise customers choose them, or is it mostly just Claude Code being better at coding tasks? Because those are very different stories about why they're winning.
The article mentions Anthropic capturing 73% of first-time enterprise AI spending. That is the number I keep coming back to. If you're a business buying AI tools for the first time, you're overwhelmingly choosing Claude. That flywheel only gets stronger from here.
The supply chain risk designation from the Pentagon is huge and kind of undercuts the Project Glasswing story. You're launching a cybersecurity model with Apple and Microsoft and Amazon as partners but the US Department of Defense just called you a risk. Those two things happening at the same time are wild.
Wait, what about the Pentagon situation? The article glosses over Anthropic being designated a supply chain risk by the Defense Secretary after they pushed back on autonomous weapons use. That is a massive deal that deserves more than a passing mention in the context of cybersecurity AI.
reading that OpenAI projects a 14 billion dollar loss in 2026 while Anthropic aims for positive free cash flow by next year makes me think the IPO timing matters enormously. Going public while still burning that much cash is a very different story to tell Wall Street.
Both companies are burning billions and everyone's acting like the laws of financial gravity don't apply because the technology is impressive. I've seen this movie before and it doesn't always end with the most impressive tech winning.
Anthropic monetizes at roughly 211 dollars per monthly user versus OpenAI at about 25 per weekly user. That eight times difference in monetization efficiency is the entire story of this rivalry in one stat.
To the point above, 900 million users who are each generating about 25 dollars in average revenue per year versus Anthropic's enterprise contracts averaging 1 million plus per customer annually. Scale of users does not equal scale of sustainable revenue.
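Quick napkin math on the two comments above, using only the figures quoted in this thread and the article (rough, unaudited numbers, and note the mismatched windows of monthly vs weekly users):

```python
# Per-user figures quoted in the thread. Different user windows
# (monthly vs weekly), so this is a rough comparison at best.
anthropic_per_user = 211.0   # ~$211 per monthly user
openai_per_user = 25.0       # ~$25 per weekly user

ratio = anthropic_per_user / openai_per_user
print(f"Monetization gap: ~{ratio:.1f}x")  # the "eight times" being cited

# Cross-check OpenAI's figure against the article's toplines:
# ~$25B annualized revenue spread across ~900M weekly users.
implied_openai = 25e9 / 900e6
print(f"Implied OpenAI revenue per weekly user: ~${implied_openai:.0f}/yr")
```

The ratio comes out around 8.4x, and the topline cross-check lands in the high $20s per weekly user, so the numbers in the thread at least hang together internally.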
The cybersecurity program finding thousands of zero days in weeks makes me simultaneously grateful Anthropic exists and terrified about what happens when a less careful organization builds something similar.
Can we acknowledge that both companies spending over 19 billion dollars combined on infrastructure in a single year means the compute buildout is now an infrastructure crisis as much as a business story? The power grid implications alone are enormous.
Altman calling Anthropic authoritarian while Anthropic is out here building the most commercially successful safety-first AI company in history is such a specific flavor of irony.
The safety first mission branding is doing real work for Anthropic in regulated industries. When you're selling AI to a hospital system or a bank, the company that built its entire identity around not cutting corners on safety has a meaningful credibility edge.
As an independent security researcher, the claim that a model autonomously identified and exploited a 17-year-old FreeBSD vulnerability with no human involved is extraordinary if true. I'd want to see the technical writeup before fully accepting it but the implication for offensive security research is profound either way.
Does anyone else find it slightly ironic that a company founded on AI safety concerns is now building some of the most powerful autonomous hacking tools ever created, even if the intent is defensive? Where exactly is the safety-first line drawn?
The burn rate numbers for both companies are genuinely staggering. 19 billion dollars total in infrastructure spending for Anthropic in 2026 alone. That's not a startup. That's a small country's GDP.
The answer to the safety question above is that Glasswing is restricted to 12 trusted partners with vetted use cases. The whole point of the safety framing is that you build powerful tools with careful controls rather than releasing them widely. Whether those controls hold is a different question.
The article mentioned that 4% of all public GitHub commits are now authored by Claude Code with projections of 20% by year end. If that 20% projection is accurate, the implications for junior developer hiring are going to be severe.
the compute math here is brutal for OpenAI. Spending 4x more on training and generating less revenue is not a gap you close by writing memos to investors.
OpenAI and Anthropic both teaming up with Google to block Chinese model distillation is one of the most significant geopolitical tech stories of the year and it's being treated as a footnote. The era of AI as neutral technology is definitively over.
The point about the enterprise vs consumer divide being about who has the customers that matter most feels right but I'd add a nuance. Consumer AI shapes the expectations that enterprise buyers bring to the table. If 900 million people use ChatGPT daily, IT departments get pressure to support it regardless of what procurement prefers.
As someone who works in cybersecurity professionally, Project Glasswing finding thousands of zero day vulnerabilities in weeks changes the entire threat landscape. The vulnerabilities were already there but now we have to assume adversaries can find them just as fast. Defense has to move at AI speed now.
The IPO at a 400 to 500 billion valuation is where I get nervous as a potential investor. The revenue growth is real but the gross margins at 40% after inference costs are not what public market investors expect from a software company. S-1 is going to be a very interesting document.
The fist thing at the India summit is going to live rent free in my head forever. Two grown men running trillion dollar companies and they couldn't just hold hands for one photo.
The pattern of OpenAI following Anthropic's lead is new and worth noting. A year ago nobody was saying that. OpenAI launched the new pricing tier to match Claude Code, launched a cybersecurity program to match Project Glasswing, and sent a memo to investors only after Anthropic's revenue numbers dropped. That's a defensive posture.
Claude has become the default for serious technical work in a way that ChatGPT never really was. ChatGPT won the curious consumer market. Claude won the people who actually build things. Those are different markets with very different economics.