Running 70B models on Apple Silicon vs cloud APIs and GPU rentals: a cost analysis for March 2026.
The companies building frontier AI models are telling you who they are. In its most recent IRS filing, OpenAI quietly removed the word "safely" from its mission statement[^7] — capping two years of dissolving its own safety teams, from the Superalignment team in May 2024 to the Mission Alignment team in February 2026, while researchers who departed warned publicly that "safety culture and processes have taken a backseat to shiny products."[^8]

That same month, Anthropic — widely considered the most safety-focused of the frontier labs — published a statement from CEO Dario Amodei clarifying that the company does not oppose autonomous weapons in principle: "Even fully autonomous weapons," he wrote, "may prove critical for our national defense." The objection is not moral; it is technical. Frontier AI, in Anthropic's view, is simply not reliable enough yet.[^9]