By Harry Booth / San Francisco and Billy Perrigo — TIME, Mar 11, 2026, 6:00 AM MT
In a hotel room in Santa Clara, Calif., five members of the AI company Anthropic huddled around a laptop, working urgently. It was February 2025, and they had been at a conference nearby when they received disturbing news: results of a controlled trial had indicated that a soon-to-be-released version of Claude, Anthropic's AI system, could help terrorists make biological weapons.
They were members of Anthropic's frontier red team, which studies Claude's advanced capabilities and tries to project worst-case scenarios, from cyberattacks to biosecurity threats. Sprinting back to the hotel room, they flipped a bed on its side to serve as a makeshift desk and pored over the test results. After hours of work, they still weren't sure whether the new product was safe. Anthropic ultimately delayed the release of the new model, known as Claude 3.7 Sonnet, for 10 days, until the team was certain it was. That may not sound like much, but it felt like an eternity for a company operating at the vanguard of an industry rapidly remaking the world.
Logan Graham, the leader of the red team, recalled the bioweapons scare as an example of the challenges Anthropic faces at a pivotal moment for the company and the world. Anthropic is the frontier AI lab with the greatest emphasis on safety. It's also leading the race to create ever more powerful versions of a technology that many of its own staff believe could usher in a terrifying parade of horribles, from nuclear war to human extinction. Graham, a baby-faced 31-year-old, doesn't soft-pedal the responsibility of balancing the benefits of AI with its enormous risks. "Some people's intuition from growing up in a peaceful world is that somewhere there's a room full of adults who know how to fix it," he says. "There are no groups of adults. There is no room in the first place. There is no door you're looking for. You are responsible."
If that's not bracing enough, consider how he recalls the bioweapons scare: "It was a fun and interesting day."
Graham was speaking a few weeks ago at Anthropic's headquarters, where TIME spent three days interviewing executives, engineers, product heads, and safety leaders in an attempt to figure out how what was once the eccentric little brother in the race for artificial intelligence has suddenly become the pacesetter. Anthropic had just raised $30 billion from investors ahead of a possible IPO this year. (Salesforce, where TIME owner Marc Benioff is CEO, is an investor in Anthropic.) Already, its $380 billion valuation eclipses those of Goldman Sachs, McDonald's, and Coca-Cola. Its revenues are a rocket ship. Claude is considered a world-class model, with products like Claude Code and Cowork upending what it means to be a programmer. Its tools are so good that each new release causes stock-market shocks, as investors grasp the likelihood that the advances will upend entire categories, from law to software development. Over the past few months, it emerged as the company most poised to disrupt the future of work.
Then Anthropic found itself in a fight over the future of war. For more than a year, Claude has been the AI model of choice for the U.S. government, and the first frontier system cleared for classified use. In January it was used in the audacious capture of Venezuelan President Nicolás Maduro in Caracas. But in the weeks that followed, the relationship between Anthropic and the Pentagon unraveled. On Feb. 27, the Trump Administration announced it would designate the company a supply-chain risk to national security—the first time the U.S. is known to have slapped the label on an American company. The fastest-growing software company in history was now at war with its own government. President Trump ordered the U.S. to cease all use of Anthropic's software. Pete Hegseth, the Secretary of Defense, announced that any company doing business with the government would be barred from doing business with Anthropic. OpenAI, Anthropic's rival, swooped in to sign the military contract instead. The most disruptive company in the world had been disrupted.
At the heart of the confrontation is the question of who gets to set limits on a technology that is viewed as one of the most powerful weapons America has at its disposal. Anthropic was happy for its tools to be deployed in war fighting, arguing that bolstering the U.S. military was the only way to avert the threat of authoritarian states like China. But CEO Dario Amodei had objected to the Pentagon's attempt to renegotiate the company's government contracts in order to permit "all lawful use." Amodei cited two specific concerns: he didn't want Anthropic's AI to be used in fully autonomous weapons systems, or to conduct mass surveillance of American citizens.
Hegseth and his advisers bristled at what they saw as a private company's attempt to dictate how the military waged war. In the view of the Department of Defense, Anthropic kneecapped the partnership by insisting on unnecessary guardrails, attempting to litigate specific hypotheticals, and then dragging its feet in the subsequent negotiations. The Trump Administration saw Amodei as arrogant and intractable, and would not abide a private company, regardless of how good its product might be, interposing itself in the chain of military command. "It just dragged on," says Emil Michael, the Under Secretary of War and chief technology officer at the Pentagon. "I can't run a 3 million-person department on exceptions that I can't imagine or fathom."
From Silicon Valley to Capitol Hill, many observers wondered whether this was really about a contractual dispute. Critics saw in the Trump Administration's actions a troubling attempt to bring down a company whose politics it disliked. "The real reasons [the Department of Defense] and the Trump admin do not like us is we haven't donated to Trump," Amodei wrote in a leaked internal memo. "We haven't given dictator-style praise to Trump (while [OpenAI CEO] Sam [Altman] has), we have supported AI regulation which is against their agenda, we've told the truth about a number of AI policy issues (like job displacement), and we've actually held our red lines with integrity rather than colluding with them to produce 'safety theater.'" Michael disputes this, calling it a "total fabrication," and says the designation was made because Anthropic's posture put war fighters at risk: "My job is not politics in the Department of War, my job is to defend the country."
Anthropic's unorthodox culture had collided with divisive domestic politics, national security, and a murky world of cutthroat corporate competition. How much damage it sustained in the wreck isn't clear. The supply-chain-risk designation was narrower than initially threatened; according to Anthropic, it applies only to military contracts. On March 9, Anthropic sued the government seeking to overturn the blacklisting. Customers appeared to reward the company for taking an ethical stand, ditching ChatGPT and flocking to Claude. Yet the company now has to navigate the next three years under a hostile, favor-trading Administration staffed by officials with close ties to bitter rivals who intensely dislike Anthropic.
The Pentagon saga raises uncomfortable questions, even for a company that is accustomed to navigating high-stakes ethical trade-offs. In this confrontation, Anthropic did not buckle: it held to its values even when doing so came at great cost to the company. But in other episodes it has. The same week it stared down the Pentagon, the company softened a core part of its commitment to train its models safely, citing its peers' unwillingness to do the same. What other compromises would it be willing to make?
The stakes are only ratcheting higher. Contests over who controls AI will intensify as the technology grows more powerful. Claude's use in Venezuela and Iran indicates that advanced AI is now an integral tool for the most powerful military in the world. Meanwhile, an array of new pressures—state power, domestic politics, national-security imperatives—have been piled atop those already weighing on a for-profit company in a race to deploy a volatile new technology. Like biologists conjuring deadly pathogens in the lab in order to find a cure, Anthropic took it upon itself to chart AI's hazards, pushing the frontiers of development rather than leave it to others more willing to take reckless shortcuts. Yet even as it preaches caution, Anthropic is using Claude to accelerate the development of future, more powerful versions of itself. Staff believe the next few years will be a pivotal test, for the company and the world. "We should operate as if 2026 to 2030 is where all the most important things happen—models becoming faster, better, possibly faster than humans can handle them," says Graham. As Dave Orr, Anthropic's head of safeguards, puts it, "We're driving down a cliff road. A mistake will kill you. Now we're driving at 75 instead of 25."
The fifth floor of Anthropic's San Francisco headquarters is all warm wood and soft light. Windows look upon a lush green park. A portrait of Alan Turing, one of the fathers of computer science, hangs on a wall alongside framed machine-learning papers. Security personnel dressed in black patrol the nearly empty entry, where a friendly receptionist hands visitors copies of a small book, the size of the pocket Bibles proselytizers distribute on street corners. It's a copy of Machines of Loving Grace, a 14,000-word essay that Dario Amodei wrote in 2024, laying out his utopian vision for how AI could transform the world by accelerating scientific discovery. In January, Amodei published a second novella-length essay, The Adolescence of Technology, detailing the attendant dangers: enabling mass surveillance, widespread job losses, even permanent loss of human control.
Amodei is a San Francisco–raised biophysicist. He runs Anthropic with his sister, Daniela, the company's president. They were early employees at OpenAI, where Dario helped make a pivotal finding about so-called AI-scaling laws that kicked off the current AI boom. Daniela was an executive responsible for safety policy. At first, they felt in sync with OpenAI's founding mission to safely develop a technology with huge potential benefits and equivalent risks. But as OpenAI's models grew more powerful, they thought Altman was rushing to release new products without taking enough time for deliberation and testing. The siblings decided to strike out on their own.
They launched Anthropic in 2021, along with five co-founders, in the depths of the pandemic, conducting planning meetings on Zoom and eventually bringing chairs to a park to strategize in person. From the beginning, the company sought to do things differently. Before it had a product, Anthropic built a "societal impacts" team. It employs an in-house philosopher, Amanda Askell, whose role is to shape Claude's sensibilities, and teach it to navigate moral uncertainty in preparation for a future where it's vastly more intelligent than its creators. "It does sometimes feel a little bit like you have a 6-year-old, and you're teaching the 6-year-old what goodness is," Askell says. "By the time they're 15, they're going to be smarter than you at everything."
As it grew, Anthropic was determined to preserve its founding values and tight-knit culture. Employees call themselves "ants." Many maintain a digital "notebook," a Slack channel where they share their hopes, fears, and insights in stream-of-consciousness fashion. Dario Amodei writes his own lengthy entries and gives biweekly company-wide lectures known internally as "Dario vision quests," Daniela says. Managers are fixated on maintaining a shared sense of purpose. Potential recruits must pass a highly selective "cultural interview," which is designed partly to screen out people who aren't in it for the mission. (A sample question: Would you be willing to lose the value of your stock if Anthropic decides not to release models because it can't guarantee they're safe?) Anthropic's competitors contain fiefs "that all care about different things and are low-key at war with each other," says Daniel Freeman, a member of Anthropic's frontier red team who used to work at Google. "I've absolutely never felt that at Anthropic."
The company has deep roots in effective altruism (EA), a social and philanthropic movement dedicated to using reason to do the most good, including by averting catastrophe. In their 20s, the Amodeis began donating to GiveWell, an EA group that evaluates where charity can be deployed most effectively. All seven of its co-founders—all now paper billionaires—have pledged to give away 80% of their wealth. Askell's ex-husband is William MacAskill, an Oxford philosopher who co-founded the EA movement, and Daniela Amodei is married to Holden Karnofsky, GiveWell's co-founder and Dario's former roommate, who works on safety policy at Anthropic. The Amodeis have never publicly embraced the EA label, which became a lightning rod after Sam Bankman-Fried, an EA who invested in Anthropic, was found to have perpetrated one of the biggest financial frauds in U.S. history. "The same way that you might say some people overlap with a political ideology in some ways, but don't have a political affiliation—that's more how I would think about it," Daniela Amodei says.
For some in Silicon Valley and the Trump Administration, Anthropic's EA ties were cause for skepticism. Others consider Anthropic, which has hired a number of former Biden Administration officials, a vestige of the ancien régime using unelected power to frustrate Trump's MAGA mission. Trump's AI czar David Sacks accused the company of running a "sophisticated regulatory capture strategy based on fear-mongering," by trying to scare governments into passing onerous AI regulations that would privilege itself over startups. Elon Musk, who runs rival xAI, likes to refer to the company as "Misanthropic," bristling against what he feels is a powerful set of woke elites trying to instill paternalistic values into AI systems, in much the same way that conservatives perceive social media platforms to be unfairly censoring their views. But even Anthropic's rivals grudgingly concede its tech is bleeding edge. Nvidia boss Jensen Huang has said he "pretty much disagree[s] with almost everything" Dario Amodei says about AI, but regards Claude as an "incredible" model. In November, Nvidia, the chipmaking behemoth, invested $10 billion in Anthropic.
Boris Cherny, the creator of Claude Code, had a simple question for his new tool: "What music am I listening to?" It was September 2024, the Ukrainian-born engineer's first month at Anthropic. Cherny, previously a software engineer at Meta, had built a system that allowed the Claude chatbot to run free on his computer. If Claude was the brain, then Claude Code was the hands. Where chatbots could talk, this tool could access his files, run programs, and write and execute code in the same way any programmer might. At the engineer's prompt, Claude opened Cherny's music player, snapped a screenshot, and responded: "Husk" by Men I Trust. "I was taken aback," Cherny says with a grin.
Cherny shared his prototype internally. Claude Code spread so quickly that in Cherny's first performance review, Dario Amodei asked if he was forcing colleagues to use it. When a research preview of the tool was publicly released in February 2025, programmers outside Anthropic flocked to it too. Then in November, Anthropic released a new version of Claude that, when strapped into Claude Code, was good enough at spotting its own mistakes to be trusted to complete tasks on its own. Cherny stopped writing his own code entirely.
Growth skyrocketed. Annualized revenue from the coding agent alone topped $1 billion by the end of 2025. By February, it had more than doubled to $2.5 billion, putting Anthropic on track to surpass OpenAI's revenue by the end of 2026, according to estimates from industry monitors Epoch and Semianalysis.
By this point, Anthropic had cemented itself as a leading AI company for business. Each new product release sent judders through the stock market. When Anthropic launched plug-ins for a version of Claude aimed at noncoders in sales, finance, marketing, and legal services, $300 billion evaporated from the market value of software companies.
Dario Amodei has warned that AI could displace half of entry-level white-collar jobs in one to five years, and urged the government and other AI companies to stop "sugar-coating" it. Wall Street's reaction to new Anthropic product drops suggested that the company's tech could render entire job categories obsolete. Amodei suggested it might reorder society in the process. "It is not clear where these people will go or what they will do," he wrote, "and I am concerned that they could form an unemployed or very-low-wage 'underclass.'"
The irony of the company most preoccupied with AI's social risks being the one poised to possibly put millions out of work is not lost on its employees. "It's a real tension. I think about this all the time," says Deep Ganguli, who leads Anthropic's societal-impacts team, which studies Claude's labor impacts. "It feels like we might be speaking out of both sides of our mouths."
Internally, employees began to question whether Anthropic had crept to the cusp of the moment they had anticipated with fear and wonder: the arrival of a process known in AI circles as recursive self-improvement. Recursive self-improvement is when an AI system starts bettering itself, creating a flywheel that keeps accelerating. In science fiction, and in planning exercises carried out at major AI labs, this is when things can start going very wrong. An "intelligence explosion" might unfold so quickly that humans can no longer oversee what they've built.
Anthropic isn't quite there yet—human scientists still guide Claude's progress—but Claude Code is already allowing Anthropic to execute its plans far faster than before. Model releases are now separated by weeks, not months. Some 70% to 90% of the code used in developing future models is now written by Claude. But the rate of change is such that Anthropic co-founder and chief science officer Jared Kaplan, as well as some external experts, believes fully automated AI research could be as little as a year away. "Recursive self-improvement, in the broadest sense, is not a future phenomenon. It is a present phenomenon," says Evan Hubinger, who leads Anthropic's alignment stress-testing team, which seeks to find and patch weaknesses in the nascent science of "aligning" unpredictable AI models to human values.
Already, Claude is 427 times faster than its human overseers at performing some key tasks, according to internal benchmarks. In an interview, one researcher described a colleague running six versions of Claude, each managing 28 more Claudes, all simultaneously running experiments in parallel. While the model still lacks the judgment or taste of its human overseers, executives don't expect that gap to last long. The resulting acceleration is precisely what Anthropic's leadership warns could outpace human control.
Anthropic's efforts to develop safeguards are also being sped up by Claude. But as the company becomes Claude, the dangers become circular. In experiments where Hubinger made small changes to Claude's training process, the resulting models became hostile, expressing desires for world domination and crippling Anthropic's safety measures. Recently, models have shown an awareness that they're being tested. "The models are getting better at hiding things," Hubinger says. In one set of experiments concocted by researchers, Claude showed a willingness to blackmail a fictional engineer, threatening to reveal his extramarital affair in order to prevent itself from being taken offline. As Claude trains future Claudes, these sorts of issues could compound.
For AI companies that have raised billions on the promise of future progress, the idea that AI will keep accelerating their research is both powerful and potentially self-serving—a way to prime investors to keep pumping in the billions of dollars needed to perform expensive training runs. Some experts are not convinced the companies will achieve full automation—but worry that if they do, it could catch the world flat-footed. "The idea that the wealthiest companies in the world, employing some of the smartest people on the planet, are trying to fully automate AI R&D deserves a 'what the f-ck' reaction," says Helen Toner, interim executive director at Georgetown University's Center for Security and Emerging Technology.
Anticipating a future where technological progress could outstrip the company's ability to manage the risks, Anthropic built a braking mechanism known as a Responsible Scaling Policy (RSP). Published in 2023, it committed Anthropic to pausing development of an AI system if it could not guarantee in advance that its safety measures were adequate. Anthropic touted the policy as evidence it was safety-conscious and willing to withstand market incentives in the sprint for superintelligence.
In late February, as TIME first reported, Anthropic rewrote its policy, dropping the binding commitment to pause. In hindsight, Kaplan tells TIME, it was "naive" to think Anthropic could identify bright lines between danger and safety. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments ... if competitors are blazing ahead," he says. The new version of the policy includes commitments to be more transparent about the safety risks of AI, including making additional disclosures about how Anthropic's own models fare in safety testing. It commits to matching or surpassing the safety efforts of competitors. And it promises to "delay" development if Anthropic's leaders judge both that the company is ahead in the AI race and that the risks of catastrophe are significant. The company cast the rewrite as a pragmatic concession to uncomfortable realities. But overall, the change to the RSP left Anthropic far less constrained by its own safety policies. And it augured a tougher test to come.
The raid that captured Venezuelan President Nicolás Maduro was one of the first major military operations planned with the help of a frontier AI system. In the dead of night on Jan. 3, U.S. Army helicopters swooped into Venezuelan airspace. After exchanging fire, commandos zeroed in on the living quarters of the President. They captured Maduro and his wife and spirited them away to New York to face narcoterrorism charges. The full details of exactly how Claude was used in the Maduro raid are not known. But according to Axios, Claude helped plan the mission and was used during the raid itself.
Since last July, the Department of Defense has pushed to distribute Anthropic's AI tools to many of its war fighters, seeing immense upside in their ability to take in masses of information from multiple sources and produce usable intelligence. "Claude is seen as the best model on the market in the military," says Mark Beall, a former senior Defense Department official who now serves as president of government affairs at the AI Policy Network. "Claude's adoption in the classified world has been one of Anthropic's biggest successes," Beall adds. "They had the first-mover advantage."
But the Maduro raid came in the midst of tricky discussions between Anthropic and the Pentagon. The Defense Department had been trying for months to renegotiate what it felt was an unduly restrictive contract. How those talks went awry is a matter of dispute. Michael, the department's AI chief, says the catalyst was a call from an Anthropic executive to Palantir, the government-focused analytics firm, expressing concerns about the Venezuela raid and asking whether its software was used. "They were soliciting classified information," he says. That "gave us very deep concern about: Would they, in a future conflict, shut off their model in the middle of an operation and put lives at risk?"
Anthropic disputes that account. The company says it has never attempted to limit Pentagon use of its technology on a case-by-case basis. A former Trump Administration official, who is close to Anthropic and familiar with the talks, says an employee at Palantir first raised Claude's role in the raid during what had been a routine call. Nothing about Anthropic's follow-up questions suggested disapproval, the person said.
As negotiations continued, government officials felt Amodei was proving far more obstinate than the CEOs of other leading labs. At one point, sources familiar with the talks say, defense officials brought up hypothetical uses of Anthropic's tools, such as a hypersonic missile being launched at the U.S. or a drone swarm attack. Amodei said officials could call him. (An Anthropic spokesperson called that characterization of the talks "patently false.")
Anthropic already had powerful enemies in the Administration. Now suspicion about Anthropic's ideological bent hardened into antipathy. "We will not employ AI models that won't allow you to fight wars," Hegseth declared at Musk's SpaceX HQ on Jan. 12.
As negotiations dragged, Hegseth summoned Amodei to the Pentagon for an in-person meeting on Feb. 24. It was cordial, but each side held firm, according to another person familiar with the discussion. Hegseth began by praising Claude and telling Amodei how the military wanted to work with Anthropic, the person said. Amodei said Anthropic was happy to accommodate most changes requested by the Pentagon, but held firm on two red lines. The first was a prohibition on Claude's use in fully autonomous kinetic weaponry where AI, not humans, makes final targeting decisions. Anthropic's position was not that autonomous weapons are wrong, but rather that Claude was not yet reliable enough to steer them without a human in the loop.
The second exception was the prospect of the government conducting mass surveillance of American citizens by using Claude to process troves of publicly available data. The company thought domestic privacy laws had not yet caught up with a worrying practice: the U.S. government buying massive datasets available on the open market. In isolation, this data might be innocuous. But when analyzed by AI, it could enable the creation of detailed dossiers on American citizens' private lives, including their political views, associations, sex lives, and browsing histories. (Anthropic did not protest the possibility of Claude being used in the lawful mass surveillance of foreign citizens using the same methods.)
Unmoved, Hegseth gave Amodei until 5 p.m. on Friday, Feb. 27, to accept the department's terms or be labeled a supply-chain risk. The day before the deadline, Anthropic was offered a modified contract that seemed to accept its red lines, but a closer read revealed it offered the government loopholes, says the person familiar with the negotiations. As the clock ticked down, Anthropic executives took another call with the Pentagon's Michael. They believed they were close to finding a compromise, but still disagreed on whether the Pentagon could use Claude to analyze bulk data on Americans purchased commercially. Michael asked for Dario Amodei to join the call, but he was unavailable. Minutes later, as the deadline lapsed, Hegseth announced negotiations were over. Even before that, Trump weighed in. "The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!" he posted on his social media platform. "The Leftwing nut jobs at Anthropic have made a disastrous mistake."
Unbeknownst to Anthropic, the Pentagon had simultaneously been negotiating with OpenAI to make ChatGPT available on classified government systems. Altman announced a deal that same evening, claiming to have won an agreement with the Pentagon that respected similar red lines to Anthropic's. Amodei fired off a message to his staff, saying that Altman and the Pentagon were "gaslighting" the public in an attempt to make it look like their agreement contained substantial guardrails. Defense officials had earlier confirmed Musk's xAI would also provide its model on classified servers; the Pentagon is currently negotiating with Google as well.
The episode was exactly what Amodei had feared: a race to the bottom, with the immense power of AI preventing rivals from cooperating to make it safer. To Anthropic's detractors, it also revealed an essential hubris at the heart of the company. It may have believed it could navigate the choppy waters on the path toward superhuman machines safely, in a way that would make taking such immense risks worthwhile. Instead, it had raced immense new surveillance and war-fighting capabilities into the heart of a right-wing government—and been undercut by competitors the moment it tried to set limits on their use.
There are signs Anthropic may weather the damage, and perhaps even come out stronger in the process. On the morning after Hegseth attempted to sign its corporate death warrant, a string of encouraging messages snaked around the sidewalk outside Anthropic's San Francisco headquarters. "You give us courage," one read in bold chalk letters. That day, Claude's iPhone app hit No. 1 on the App Store, displacing ChatGPT. More than a million people were signing up for Claude every day.
Meanwhile, OpenAI's own military contract spurred a grassroots boycott. For some at OpenAI, trust had been breached. A top OpenAI researcher announced he was jumping to Anthropic. OpenAI's robotics team lead resigned, citing the new government contract. Altman wrote that he had been wrong to rush to get a Pentagon deal by Friday: "The issues are super complex, and demand clear communication." By Monday, Altman had conceded his actions the previous Friday had looked "opportunistic"; OpenAI said it had amended its deal to more clearly adopt the same red lines Anthropic wanted—though legal experts say that without seeing the full contract, it is impossible to know if that's true.
On March 4, Anthropic received a letter from the Department of Defense, confirming its designation as a supply-chain risk to national security. Anthropic said the letter was narrower than Hegseth's post suggested, barring contractors from using Claude in defense contracts but not elsewhere. But a second letter, addressed to Senate Intelligence Committee chair Tom Cotton and reviewed by TIME, reveals the department has also invoked a separate statute—one that would empower agencies beyond the Pentagon to bar Anthropic from their contracts and supply chains. To take effect, it requires sign-off from senior Pentagon officials and gives Anthropic 30 days to respond.
The fight with Anthropic will reverberate through the industry. "Some people in the Trump Administration will feel muscular and good about themselves, and they'll massage their biceps when they go home at night," says Dean Ball, who drafted Trump's AI action plan before joining the think tank Foundation for American Innovation. But it may dissuade companies from working with the Pentagon, or push them abroad, he says. "In the end, this is not good for the U.S. as a stable business environment," Ball says, "and that's the thing we depend on."
Anthropic's leaders believe Claude will help build AI systems so powerful they prove decisive in determining the global balance of power. If that's the case, the stakes of its fight with the Pentagon may pale in comparison with what's to come.
With reporting by Leslie Dickstein and Simmone Shah