
Karen Hao on the Empire of AI, AGI evangelists, and the cost of belief


At the center of every empire is an ideology, a belief system that propels the system forward and justifies expansion, even when the cost of that expansion directly defies the ideology’s stated mission.

For European colonial powers, it was Christianity and the promise of saving souls while extracting resources. For today’s AI empire, it’s artificial general intelligence to “benefit all of humanity.” And OpenAI is its chief evangelist, spreading zeal across the industry in a way that has reframed how AI is built.

“I was interviewing people whose voices were shaking from the fervor of their belief in AGI,” Karen Hao, journalist and bestselling author of “Empire of AI,” told TechCrunch on a recent episode of Equity.

In her book, Hao likens the AI industry in general, and OpenAI in particular, to an empire.

“The only way to really understand the scope and scale of OpenAI’s behavior … is actually to recognize that they’ve already grown more powerful than pretty much any nation state in the world, and they’ve consolidated an extraordinary amount of not just economic power, but also political power,” Hao said. “They’re terraforming the Earth. They’re rewiring our geopolitics, all of our lives. And so you can only describe it as an empire.”

OpenAI has described AGI as “a highly autonomous system that outperforms humans at most economically valuable work,” one that will somehow “elevate humanity by increasing abundance, turbocharging the economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.”

These nebulous promises have fueled the industry’s exponential growth: its massive resource demands, oceans of scraped data, strained energy grids, and willingness to release untested systems into the world. All in service of a future that many experts say may never arrive.


Hao says this path wasn’t inevitable, and that scaling isn’t the only way to get more advances in AI.

“You can also develop new techniques in algorithms,” she said. “You can improve the existing algorithms to reduce the amount of data and compute that they need to use.”

But that tactic would have meant sacrificing speed.

“When you define the quest to build beneficial AGI as one where the winner takes all, which is what OpenAI did, then the most important thing is speed over anything else,” Hao said. “Speed over efficiency, speed over safety, speed over exploratory research.”

OpenAI Chief Executive Officer Sam Altman speaks during the Kakao media day in Seoul.
Image Credits: Kim Jae-Hwan/SOPA Images/LightRocket / Getty Images

For OpenAI, she said, the best way to guarantee speed was to take existing techniques and “just do the intellectually cheap thing, which is to pump more data, more supercomputers, into those existing techniques.”

OpenAI set the stage, and rather than fall behind, other tech companies decided to fall in line.

“And because the AI industry has successfully captured most of the top AI researchers in the world, and those researchers no longer exist in academia, you have an entire discipline now being shaped by the agenda of these companies, rather than by real scientific exploration,” Hao said.

The spend has been, and will be, astronomical. Last week, OpenAI said it expects to burn through $115 billion in cash by 2029. Meta said in July that it could spend as much as $72 billion on building AI infrastructure this year. Google expects to hit as much as $85 billion in capital expenditures for 2025, most of which will be spent on expanding AI and cloud infrastructure.

Meanwhile, the goalposts keep shifting, and the loftiest “benefits to humanity” haven’t yet materialized, even as the harms mount: job loss, concentration of wealth, and AI chatbots that fuel delusions and psychosis. In her book, Hao also documents workers in developing countries like Kenya and Venezuela who were exposed to disturbing content, including child sexual abuse material, and were paid very low wages (around $1 to $2 an hour) in roles like content moderation and data labeling.

Hao said it’s a false tradeoff to pit AI progress against present harms, especially when other forms of AI offer real benefits.

She pointed to Google DeepMind’s Nobel Prize-winning AlphaFold, which is trained on amino acid sequence data and complex protein folding structures, and can now accurately predict the 3D structure of proteins from their amino acids, a capability that is profoundly useful for drug discovery and understanding disease.

“These are the types of AI systems that we need,” Hao said. “AlphaFold doesn’t create mental health crises in people. AlphaFold doesn’t lead to colossal environmental harms … because it’s trained on significantly less infrastructure. It doesn’t create content moderation harms because [the datasets don’t have] all of the toxic crap that you hoovered up when you were scraping the internet.”

Alongside the quasi-religious commitment to AGI has been a narrative about the importance of racing to beat China in the AI race, so that Silicon Valley can have a liberalizing effect on the world.

“Really, the opposite has happened,” Hao said. “The gap has continued to close between the U.S. and China, and Silicon Valley has had an illiberalizing effect on the world … and the only actor that has come out of it unscathed, you could argue, is Silicon Valley itself.”

Of course, many will argue that OpenAI and other AI companies have benefited humanity by releasing ChatGPT and other large language models, which promise huge gains in productivity by automating tasks like coding, writing, research, customer support, and other knowledge work.

But the way OpenAI is structured, part nonprofit and part for-profit, complicates how it defines and measures its impact on humanity. And that’s further complicated by the news this week that OpenAI reached an agreement with Microsoft that brings it closer to eventually going public.

Two former OpenAI safety researchers told TechCrunch that they fear the AI lab has begun to conflate its for-profit and nonprofit missions: that because people enjoy using ChatGPT and other products built on LLMs, this ticks the box of benefiting humanity.

Hao echoed these concerns, describing the dangers of being so consumed by the mission that reality is ignored.

“Even as the evidence accumulates that what they’re building is actually harming significant amounts of people, the mission continues to paper all of that over,” Hao said. “There’s something really dangerous and dark about that, of [being] so wrapped up in a belief system you built that you lose touch with reality.”
