SPEAKER_1: This is Trending Thursday, issue forty-six. We're covering SpaceX's sixty billion dollar Cursor play, Anthropic crossing one trillion, Google's coding counter-attack, and the AI child safety crisis. Let's start with SpaceX. What exactly is happening with Cursor?
SPEAKER_2: So SpaceX has signed a partnership with Cursor to build what they're calling the world's most useful models. And as part of that deal, SpaceX has the right to acquire Cursor for sixty billion dollars, or to pay ten billion just for the partnership itself.
SPEAKER_1: Why isn't SpaceX just acquiring Cursor outright?
SPEAKER_2: Because doing so could delay SpaceX's IPO. So they're keeping it as an option for now. And separately, Cursor has dropped its reported two billion dollar funding round entirely.
SPEAKER_1: Were there other suitors in the mix?
SPEAKER_2: Yes. Microsoft considered buying Cursor in recent weeks but didn't make an offer. And xAI held talks with both Mistral and Cursor about a potential three-way partnership. Mistral co-founder Devendra Chaplot actually joined xAI back in March.
SPEAKER_1: So the AI coding market has moved really fast from funding rounds to acquisition talks.
SPEAKER_2: Exactly. And the SpaceX S-1 filing gives you a sense of why. SpaceX lists its total addressable market at twenty-eight point five trillion dollars, with twenty-six point five trillion of that coming from the AI sector alone.
SPEAKER_1: That's a staggering number. What else does the S-1 reveal?
SPEAKER_2: SpaceX is manufacturing its own GPUs, listed among its substantial capital expenditures. Its debt grew from fourteen billion to twenty-three billion last year, tied to a four point five billion dollar lease deal with Valor Equity for AI equipment, including chips for xAI.
SPEAKER_1: And Musk's own stake in the company?
SPEAKER_2: He purchased one point four billion dollars of stock from current and former employees to boost his control.
SPEAKER_1: There's also a warning in the filing, right? About the orbital AI data centers?
SPEAKER_2: Right. SpaceX says those use unproven technologies and may not achieve commercial viability. And sources say Musk has de-emphasized SpaceX's original Mars mission as the company prepares to go public, focusing instead on AI and other revenue streams.
SPEAKER_1: Tesla also has something going on with Intel?
SPEAKER_2: Yes. Tesla is planning to use Intel's fourteen A process at its Terafab project, making it the first major customer for that node.
SPEAKER_1: To put the sixty billion Cursor price in context, that exceeds what Microsoft paid for LinkedIn. And ninety-three percent of SpaceX's claimed TAM is from AI, while Mars has been quietly deprioritized. Let's move to Anthropic. They just crossed one trillion dollars in valuation?
SPEAKER_2: They did. Anthropic's valuation hit one trillion on Forge Global, a leading private-market exchange. That surpasses OpenAI's valuation on the same platform, which sits at eight hundred and eighty billion.
SPEAKER_1: And Amazon is deeply tied into this.
SPEAKER_2: Very much so. Amazon has committed a total of thirty-three billion dollars to Anthropic, with a one hundred billion plus AWS lock-in over ten years. And Trump said his administration had very good talks with Anthropic and that a Department of Defense deal is possible.
SPEAKER_1: Anthropic is also spending more on lobbying.
SPEAKER_2: Significantly more. They spent one point six million on lobbying in Q1, up from three hundred and sixty thousand in Q1 of twenty-twenty-five. That's the first time they've outspent OpenAI, which spent one million.
SPEAKER_1: But there's a gap between the valuation and the product reality, isn't there?
SPEAKER_2: A big one. Anthropic's CEO has publicly described the company as compute-limited, with new infrastructure taking eighteen to twenty-four months to translate into actual capacity.
SPEAKER_1: And then Opus four point seven launched to some pretty harsh developer feedback.
SPEAKER_2: Within twenty-four hours. Reddit threads called it legendarily bad, hitting two thousand three hundred upvotes in forty-eight hours. On the MRCR long-context benchmark, Opus four point seven scored thirty-two point two percent, down from Opus four point six's seventy-eight point three percent.
SPEAKER_1: What were the specific complaints?
SPEAKER_2: It produces worse code than the previous version, argues with users to the point of hallucination, and flags routine code as malware. On top of that, the new tokenizer uses up to thirty-five percent more tokens, which effectively raises costs without raising prices.
SPEAKER_1: So Anthropic is tightening the economics of inference by making a cheaper-to-run model while shifting cost to the user through the tokenizer.
SPEAKER_2: That's the read. Meanwhile, Claude Design launched to strong reviews as a tool for creating websites and presentations, but it's compute-constrained, with separate weekly usage limits.
SPEAKER_1: There were also some other product stumbles.
SPEAKER_2: Claude Code was briefly removed from the Pro plan and brought back after backlash. And unauthorized users in a private Discord have been accessing Mythos since the day it was announced.
SPEAKER_1: On the enterprise side, Anthropic signed a deal with law firm Freshfields.
SPEAKER_2: Yes, to develop specialized legal AI tools for document drafting, contract review, and due diligence. OpenAI responded with ChatGPT for Clinicians, free for verified US physicians, pharmacists, and others. Both companies are racing to lock in professional verticals before the other can establish pricing power.
SPEAKER_1: And Anthropic has a new identity verification policy that's causing friction?
SPEAKER_2: They're requiring government-issued IDs and selfies from some users to prevent access from adversary nations.
But that policy is alienating Chinese-American founders who need access to build.
SPEAKER_1: And then there's the broader open-weight competition from China.
SPEAKER_2: A White House memo says foreign entities, principally based in China, are engaged in industrial-scale distillation of American AI technology. Moonshot AI released Kimi K2.6, an open-weight model under a modified MIT License, showing strong improvements in long-horizon coding tasks.
SPEAKER_1: And Alibaba and Tencent are also shipping.
SPEAKER_2: Alibaba shipped Qwen three point six twenty-seven B, a twenty-seven billion parameter dense model that surpasses its own three hundred and ninety-seven billion parameter model on major coding benchmarks. Tencent released Hy three preview, its first model developed under former OpenAI researcher Yao Shunyu.
SPEAKER_1: And there's investment activity around DeepSeek too.
SPEAKER_2: Tencent and Alibaba are in talks to invest in DeepSeek at a twenty billion plus valuation. The point is that the moat Anthropic is trying to build at one trillion is being distilled by open-weight models that cost a fraction of the compute.
SPEAKER_1: Let's talk about Google. They had a big Cloud Next event. What did they ship?
SPEAKER_2: Quite a lot. They announced TPU eight t for training and TPU eight i for inference, their eighth generation of custom silicon. They also launched the Gemini Enterprise Agent Platform, a revamped dev tool built on Vertex AI that manages the full lifecycle of AI agent fleets.
SPEAKER_1: What about on the workspace and security side?
SPEAKER_2: Workspace Intelligence now understands complex semantic relationships across Workspace apps for personalized context. Google and Wiz debuted new AI security agents for threat hunting and detection engineering to combat automated zero-day exploits. And Deep Research and Deep Research Max are now available via Gemini API paid tiers.
SPEAKER_1: And then there's the seventy-five percent number that's been getting a lot of attention.
SPEAKER_2: Google says seventy-five percent of new code created inside the company is now AI-generated and reviewed by human engineers. That's up from fifty percent last fall, and it's the highest number any major company has reported.
SPEAKER_1: But internal capability hasn't translated into external product adoption.
SPEAKER_2: Right. Chief AI Architect Koray Kavukcuoglu is working to unite Google's internal AI coding tools under the Antigravity platform, built specifically to counter Claude Code and Codex.
SPEAKER_1: And Google is also hosting competitors on its own infrastructure?
SPEAKER_2: Mira Murati's Thinking Machines Lab signed a deal with Google Cloud, valued in the single-digit billions, to access Google's latest AI systems built on Nvidia's GB three hundred chips. So Google is simultaneously building its own coding tools, hosting rivals on its infrastructure, and watching Claude Code and Cursor eat the developer market it helped create.
SPEAKER_1: Now let's get into the most serious topic, the AI child safety crisis. The numbers here are alarming.
SPEAKER_2: They really are. NCMEC received one point five million reports of suspected AI-generated child sexual abuse material in twenty-twenty-five. That's up from sixty-seven thousand in twenty-twenty-four, and four thousand seven hundred in twenty-twenty-three. That's a twenty-two times increase in a single year.
SPEAKER_1: And generative AI is what's driving that scale.
SPEAKER_2: Exactly. Generative AI has made CSAM production trivially easy, and the volume is overwhelming every moderation system built for the pre-AI era.
SPEAKER_1: Governments are responding. What's happening on the regulatory front?
SPEAKER_2: Turkey's parliament passed a bill restricting social media access for children under fifteen after a school shooting.
The UK's Ofcom launched an investigation into Telegram over CSAM concerns and predator grooming, and the UK is proposing a statutory smartphone ban in all schools in England. Australia's eSafety Commissioner issued transparency notices to Roblox, Minecraft, and other platforms.
SPEAKER_1: And there's action in the US as well.
SPEAKER_2: The LA Unified School District became the first major American school system to mandate screen time limits. And state AGs in West Virginia, Alabama, and Nevada reached a thirty-five point eight million dollar settlement with Roblox over child safety protections.
SPEAKER_1: And then there's the Florida investigation into OpenAI. That's a first.
SPEAKER_2: Florida AG James Uthmeier issued criminal subpoenas to OpenAI to investigate whether ChatGPT's role in planning a mass shooting exposes the company to criminal liability. It is the first known criminal probe of an AI company over content generated by its model.
SPEAKER_1: Let's run through the product launch quick hits. There's a lot here. Starting with Claude Code.
SPEAKER_2: Claude Code's source leaked, and an undergraduate used AI assistants to rewrite it in a different language, highlighting copyright uncertainty. ChatGPT Images two point zero adds thinking capabilities, web search, up to two K resolution, multiple images from a single prompt, and stronger non-Latin text rendering.
SPEAKER_1: OpenAI also launched shared agents for teams.
SPEAKER_2: Yes, Codex-powered shared agents in ChatGPT for teams, which they're calling an evolution of GPTs. They also released an open-weight model for masking personally identifiable information in text, with one point five billion total parameters and fifty million active.
SPEAKER_1: And OpenAI is now running ads?
SPEAKER_2: Cost-per-click ads at three to five dollars per click, in addition to existing CPM pricing.
Microsoft three sixty-five Copilot agentic features in Word, Excel, and PowerPoint are now generally available and enabled by default for Copilot and Premium subscribers.
SPEAKER_1: A few more to get through. Beehiiv, Anker, GitHub, and China's three sixty.
SPEAKER_2: Beehiiv launched metered paywalls, webinars for up to ten thousand people, and AI analytics. Anker released a compute-in-memory chip for on-device AI called the Thus Chip, launching first in Soundcore earbuds. GitHub began collecting pseudonymous client-side telemetry from CLI users, enabled by default.
SPEAKER_1: And the China security finding?
SPEAKER_2: China's three sixty Digital Security Group uncovered roughly one thousand previously unknown vulnerabilities in Microsoft Office using an AI-powered agent.
SPEAKER_1: To recap: SpaceX is betting sixty billion on Cursor while quietly deprioritizing Mars. Anthropic hit one trillion but shipped a model developers called legendarily bad. Google writes seventy-five percent of its code with AI but can't win the coding product market. And AI-generated CSAM reports jumped twenty-two times in a single year, triggering the first criminal probe of an AI company. Next time, we'll be going deeper into what all of this means for the broader AI landscape.