AI 2027 is a website that might be described as a paper, manifesto, or thesis. It lays out a detailed timeline for AI development over the next five years. Crucially, per its title, it expects a major turning point sometime around 2027,1 when some LLM becomes so good at coding that humans are no longer required to code. That LLM will create the next LLM, and so on, forever, with humans soon losing all ability to meaningfully contribute to the process. The authors avoid calling this “the singularity”, possibly because the term signals to a lot of people that you shouldn’t be taken too seriously.
I think that pretty much every important detail of AI 2027 is wrong. My issue is that its timeline depends on many different things each happening the way the authors expect; if any one of them happens differently, more slowly, or less impressively than their guess, the later events become more and more fantastically unlikely. If the general prediction regarding the timeline ends up being correct, it seems like it will have been mostly by luck.
I also think there is a fundamental issue of credibility here.
Sometimes, you should separate the message from the messenger. Maybe the message is good, and you shouldn't let your personal hangups about the person delivering it get in the way. Even people with bad motivations are right sometimes. Good ideas should be taken seriously, regardless of their source.
Other times, who the messenger is and what motivates them is important for evaluating the message. This applies to outright scams, like emails from strangers telling you they're Nigerian princes, and to people who probably believe what they're saying, like anyone telling you that their favorite religious leader or musician is the greatest one ever. You can guess, pretty reasonably, that greed or zeal or something else makes it unlikely they are giving you good information.2
In this specific case, I think that the authors are probably well-intentioned. However, most of their shaky assumptions just happen to be things which would be worth at least a hundred billion dollars to OpenAI specifically if they were true. If you were writing a pitch to try to get funding for OpenAI or a similar company, you would have billions of reasons to be as persuasive as possible about these things. Given the power of that financial incentive, it's not surprising that people have come up with compelling stories that just happen to make good investor pitches. Well-intentioned people can be so immersed in them that they cannot see past them.
Because this is a much simpler objection than any of the technical points, I will try to explain why it seems both likely and discrediting before getting into the details.
Greed: AI 2027 Is OpenAI's Investor Pitch
If AI 2027 is not roughly true, OpenAI will probably die.
Simple math: OpenAI is currently in a funding round, and is trying to raise a total of forty billion dollars. In 2024, OpenAI made $3.7 billion in revenue and spent about nine billion, for a net loss of about five billion dollars.3 They are currently projected to have a net loss of eight billion through 2025.4 Forty billion dollars raised against roughly eight billion a year in losses gives them at most five years of runway. To put it another way, if they do not alter their trajectory or raise more money, they will be dead within five years, so by 2030 at the latest.5
This is a crude estimate, but I do not think that making it less crude really improves the picture. OpenAI had maybe about ten billion dollars of cash on hand beforehand, which buys them an extra year and change. On the downside, they also owe Microsoft 20% of their forward revenue and have very large commitments to spending money on data centers with partners like Oracle. These commitments are difficult to translate into time, but they seem to make the runway shorter. All told, "by default, OpenAI dies in under five years" seems to be roughly correct.
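For anyone who wants to check the arithmetic, here is the same back-of-envelope runway calculation as a small Python sketch. All of the inputs are the rough figures cited above, not precise financials.

```python
# Back-of-envelope runway math, using the rough figures cited above.
raise_target = 40e9    # current funding round target, dollars
cash_on_hand = 10e9    # approximate cash on hand before the round
annual_loss = 8e9      # projected 2025 net loss

base_runway = raise_target / annual_loss                  # ~5.0 years
with_cash = (raise_target + cash_on_hand) / annual_loss   # ~6.25 years

print(f"runway on the raise alone: ~{base_runway:.1f} years")
print(f"adding prior cash on hand: ~{with_cash:.1f} years")
# Revenue sharing with Microsoft and the data center commitments pull the
# real number back down, which is why "under five years" is roughly right.
```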
OpenAI has reliably doubled down, raised more funding, and mostly ignored questions of profitability while growing. This is an all-in bet that at some future point the services they offer will be extremely profitable. In my humble opinion, that bet seems very nearly impossible to win with their current products: LLMs are a fiercely competitive business, with significant pressure from at least two major competitors to offer a better service at a similar price, so they cannot really raise prices unless they have something much better than what competitors offer.6 They cannot really slash research or data center spending either, or they will fall behind.7
There is one way that doubling down over and over again like this is a good idea, and it isn't selling more ChatGPT subscriptions: being sure that the fruits of future research and development will generate exponentially more profits than anything they currently sell. If that is not true, they are probably doomed, based on how much they have committed to spend and what they owe to whom.
AI 2027, "coincidentally", validates exactly this scenario. If I worked at OpenAI and I was trying to convince a group of investors to give me forty billion dollars, and I was positive they'd believe anything I said, I would just read AI 2027 out loud to them.
AI 2027 features a lightly fictionalized version of OpenAI, which it calls "OpenBrain" and mentions over a hundred times. Inasmuch as "OpenBrain" has any competition, the only rival named is "DeepCent", clearly a reference to DeepSeek, which appears only to assure the reader at every step that it is vastly inferior to "OpenBrain" and cannot possibly compete. "OpenBrain" experiences just enough appearance of adversity, from "DeepCent" and other sources, to make its victory, although clearly inevitable, seem sort of heroic and impressive.
If you have been around funding hype for companies, this is clearly a funding pitch. It is a masterful example of the genre, against which all lesser funding pitches should be measured. It blends elements of science fiction, techno-thriller, and fan fiction,8 while constantly hammering in the assurance that the company will be victorious over its enemies and reap untold riches. We are assured that they will perform a miracle and become extremely profitable right before they are projected to go bankrupt. According to the panel on the side, "OpenBrain" is going to be worth eight trillion dollars and see 191 billion dollars a year in revenue in October 2027. After that, the numbers become somewhat more fantastic.
Everyone in OpenAI who has been involved in creating this narrative is a master of the craft. They are so good at it that people who are culturally adjacent to the company seem not to recognize that it is, very clearly, a funding pitch that has taken on a life of its own.
But: it's just a funding pitch. There's very little reason to believe anything in a funding pitch is true, and billions of reasons to think that it is bullshit.
It is worth noting that the lead author of AI 2027 is a former OpenAI employee. He is mostly famous outside OpenAI for having refused to sign their non-disparagement agreement and for advocating for stricter oversight of AI businesses. I do not think it is very credible that he is deliberately shilling for OpenAI here. I do think it is likely that he is completely unable to see outside their narrative, which they have an intense financial interest in sustaining.
The authors say they have consulted over a hundred people, including “dozens of experts in each of AI governance and AI technical work”, in researching this report. I would be willing to bet that OpenAI is the single most represented institution among the experts consulted. This is a somewhat educated guess, based both on who the authors are and on what they have written, but it seems like a pretty safe bet.
Fundamentally, this is a report headed by a former OpenAI employee who has founded a think tank to work on AI safety. He is leveraging his familiarity with OpenAI specifically, both as professional experience and, most likely, as a major source of the expert opinions. It is likely that he still owns substantial OpenAI stock or options, and if his think tank is going to do contract work on AI safety, it will probably be for, with, or concerning OpenAI. Inasmuch as this report reaches any conclusion that doesn’t seem favorable to OpenAI, it’s that outside experts and governance, of the kind that think tanks might help provide, are necessary and important.
It’s difficult not to suspect motivated reasoning.
Zeal: AI 2027 As Religious Dissent
Greed is a good reason to doubt any of this is true. So is zeal.
People focused on AI, as a group, have many of the characteristics of a religious movement. Previously this was a relatively obscure fact. There were a significant number of people on the internet and in San Francisco who were intensely concerned that AI might bring about some kind of apocalypse, but this was not widely known.9 Occasionally, if someone involved went off the deep end, or if someone prominent (Stephen Hawking, Elon Musk) said something about AI being dangerous, it might make the news, but in general the notion that there was an entire subculture built around AI was on very few people's radar.
OpenAI is most easily understood as an offshoot of this movement. OpenAI was distinct because it was good at recruiting actual research personnel and extremely good at raising money. This originally took the form of getting a number of PayPal alumni like Elon Musk, Peter Thiel, and Reid Hoffman to fund OpenAI. OpenAI was presented as a counterbalance to Google's research division, and as necessary to ensure that AI was created safely. It seems unlikely that any of that would have been possible if there hadn't been a significant movement already focused on these concerns, but it took OpenAI's founders to create a serious, well-funded research endeavor out of the wider interest in the subject.
Before OpenAI started making a lot of money, it was widely understood that "safe" meant something like "unlikely to kill people, because sufficiently advanced AI is dangerous the way nuclear or biological weapons are dangerous". Generally this emphasized the difficulty of being sure you can control an AI as it becomes more capable. OpenAI specifically has mostly redefined "safe" into "making sure OpenAI's AI is polite enough to sell to other people, is more advanced than anyone else's, and is making more money than anyone else's". This makes sense if you assume that OpenAI, as an organization, is more trustworthy and safety-conscious than any other actual or possible organization for doing AI research.
For anyone who doesn't believe that OpenAI is more trustworthy than any possible alternative, OpenAI's present-day vision of AI looks like a bizarre schism that has somehow made profit for OpenAI its primary, if not only, principle. OpenAI claims to pursue safe AI and AI that benefits humanity, but this turns out, over time, to always be what gives OpenAI the most freedom to raise and make money. In religious terms, OpenAI is a sect that fused zeal for the singularity to an unabashed embrace of capitalism, and when the two conflicted, chose capitalism. Possibly the most serious place where these two things conflict is on what is meant by “safety”.
AI 2027 ends with a cautionary tale that has two endings. In one of them AI progress goes too far, too fast, and pretty much everyone dies. In the other AI is somewhat more constrained, and at least not everyone dies.
I understand the core of this story, which also seems to be OpenAI's funding pitch, as some version of OpenAI's creed. In that context, the cautionary tale at the ending reads like any dissent on questions of religious doctrine: perhaps we have become too greedy and too eager, and forgotten our principles, and this will all end in disaster.
This also helps to explain how extremely, off-puttingly confident and inaccessible the piece is. It isn’t really meant to be read, or taken seriously, by anyone who isn’t already a believer of some kind. It is fundamentally an internal dispute that can safely be made public because very few people will actually read it.
None of that means it's wrong, necessarily. People can be correct for the wrong reasons, or from strange places. It does seem to explain how someone who has gone to great lengths to defend his right to disparage OpenAI would end up writing out a variation of their investor pitch. When someone has recently departed a religion, the beliefs they still hold tend to be the same ones they had before they left, and their complaints are modifications to the dogma, not complete rejections of it.
The Details
I am going to try to chronicle everything that seems conspicuously wrong, bizarre, or indicative of pro-OpenAI slant. I am going to do my best to skim or ignore anything that is strictly fiction, which is, by word count, most of it. Quoting and commenting on the parts that are at least in some way about the real world is still a lengthy exercise.
I am grateful to the authors for encouraging debate and counterargument to their scenario. Quotes are in the same order as they are in the text.
Mid 2025: Stumbling Agents
The AIs of 2024 could follow specific instructions: they could turn bullet points into emails, and simple requests into working code. In 2025, AIs function more like employees. Coding AIs increasingly look like autonomous agents rather than mere assistants: taking instructions via Slack or Teams and making substantial code changes on their own, sometimes saving hours or even days. Research agents spend half an hour scouring the Internet to answer your question.
"AIs function more like employees" is doing a lot of work here. No AI we currently have functions very much like an employee, except for the very simplest tasks (e.g., "label this"). They require far more supervision, and are far more unreliable, than any employee ever would be. This gulf in how much autonomy they can be trusted with is so vast that making the comparison is pure speculation.
The authors completely fail here to even acknowledge that this is a serious problem, or that overcoming it would be an immense achievement. That LLMs are incredibly useful in some situations is true; to say they 'function like employees' is, at best, optimistic. Under ideal circumstances, paired with an actual human, they occupy something like the role of an employee. This sometimes saves a good amount of labor. It doesn't directly replace a human.
This is the general pattern of why these predictions seem implausible. They are describing something that already exists, but as if it were much better than it is, or are assuming that making it much better than it is will be relatively easy and happen on a relatively short timeline. These are, at best, educated guesses. You can only really know how difficult it is to improve the technology after you’ve already done it.
The other pattern is that the paper keeps saying things that strongly imply AI is just as good as a human, more and more overtly each time. “AIs function more like employees” is the first of these. Taken literally, this could mean that AI is already very nearly as good as a human. That would be quite an achievement. Of course, if you think about it enough, you will know that it’s not true, but it sort of resembles something true if you squint at it hard enough. It’s a good sleight of hand.
Granted, it has been four months since this was written, so perhaps I have the benefit of hindsight. “Mid 2025” has come and gone. If that’s the case, though, we can say that it’s the first real prediction and that it has already failed to happen.10
Late 2025: The World’s Most Expensive AI
(To avoid singling out any one existing company, we’re going to describe a fictional artificial general intelligence company, which we’ll call OpenBrain. We imagine the others to be 3–9 months behind OpenBrain.)
This happens to be what you would need to be true to justify investing in "OpenBrain" over competitors, of course. Crucially, a 3-9 month lead is almost impossible to prove or disprove, so it’s not that hard to convince someone that you’re that far ahead. As noted previously, I do not have a very positive opinion of pretending that everything said about "OpenBrain" is not actually about OpenAI.
Although models are improving on a wide range of skills, one stands out: OpenBrain focuses on AIs that can speed up AI research. They want to win the twin arms races against China (whose leading company we’ll call “DeepCent”) and their U.S. competitors. The more of their research and development (R&D) cycle they can automate, the faster they can go. So when OpenBrain finishes training Agent-1, a new model under internal development, it’s good at many things but great at helping with AI research.
Everyone has been training LLMs substantially on code since 2023. Every major organization uses LLMs as code assistants. Presenting this as an innovation is bizarre.
We are asked to assume here that whatever OpenAI is currently cooking in their research division is extremely good at writing code for AI research, so much so that it remarkably accelerates their research schedule. Again, I perhaps have the benefit of hindsight. I am writing in August 2025, a few days after the GPT-5 release. It appears to be slightly better than the previous OpenAI product in some ways, and worse in others. I do not think it is likely that OpenAI is going to significantly accelerate its research compared to competitors because of how good their LLM is at AI research tasks.
Also, here, we begin with the narrative that all of this is an arms race between "OpenBrain" and its Chinese competitor, "DeepCent". Competition with China was previously a focus in a well-received position paper called Situational Awareness.11 I am told it plays extremely well with people in Washington, DC. As it happens, convincing political people to give you money and to refrain from regulating you can be a very important part of a business plan. I cannot otherwise explain why so many people in AI are suddenly extremely interested in this specific arms race story. In my opinion, competition between American and Chinese companies is not meaningfully different or more interesting than competition between US-based companies.
Something similar is happening currently. Various AI companies are now bending over backwards to focus concern on the political slant of their LLMs because the current administration is making a big deal about how conservative or liberal they are. This is one of the less interesting and important properties of LLMs, but it is possibly very profitable to focus on it if you can get a large government contract out of it. Most likely, this would result in selling a much dumber LLM to the government. Effort is zero-sum: you can have a smarter one or you can have one that flatters your opinions, but you generally can’t have both.
The same training environments that teach Agent-1 to autonomously code and web-browse also make it a good hacker. Moreover, it could offer substantial help to terrorists designing bioweapons, thanks to its PhD-level knowledge of every field and ability to browse the web.
"Autonomously" is packing a lot of assumptions into it; really, the same ones that lead them to say that AI were “like employees” earlier. They again imply, but vaguely, that perhaps the AI is about as good as a human. Whether extensions of current systems can meaningfully function autonomously is an open question, and if they are wrong about it, they appear likely to be wrong about the rest of their predictions also.
Saying an LLM might be a "substantial help to terrorists designing bioweapons" is incredibly vague. Google search would also be of substantial help in designing bioweapons, because you can google any topic in chemistry or biology. You can also find these things in a library. One suspects that focusing on the possible creation of weapons of mass destruction is also useful for attracting attention and possibly money from the government. There is no evidence that LLMs are, actually, very useful for this or are likely to be soon.
OpenBrain has a model specification (or “Spec”), a written document describing the goals, rules, principles, etc. that are supposed to guide the model’s behavior. Agent-1’s Spec combines a few vague goals (like “assist the user” and “don’t break the law”) with a long list of more specific dos and don’ts (“don’t say this particular word,” “here’s how to handle this particular situation”). Using techniques that utilize AIs to train other AIs, the model memorizes the Spec and learns to reason carefully about its maxims. By the end of this training, the AI will hopefully be helpful (obey instructions), harmless (refuse to help with scams, bomb-making, and other dangerous activities) and honest (resist the temptation to get better ratings from gullible humans by hallucinating citations or faking task completion).
This is a description of some variation on Constitutional AI, which was published by Anthropic in 2022.12 It is bizarre to give it a new name and attribute it entirely to OpenAI. It does not seem to meaningfully clarify anything at all about what is likely to happen in the future. We also have some general descriptions of neural networks and how LLMs are trained. These seem out of place, but do at least avoid describing things published in the past by people who are not OpenAI and attributing them to OpenAI in the future.
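For readers who have not seen Constitutional AI, here is a toy sketch of the critique-and-revise loop that paper describes, which the "Spec" training quoted above closely resembles. The `generate` function is a stub standing in for a real model call; everything here is illustrative, not OpenAI's or Anthropic's actual pipeline.

```python
# Toy sketch of a Constitutional-AI-style loop: the model critiques its own
# output against written principles, revises it, and the revised outputs are
# used as training data for the next model. `generate` is a stub.

SPEC_PRINCIPLES = [
    "Assist the user.",
    "Refuse to help with scams, bomb-making, and other dangerous activities.",
    "Do not hallucinate citations or fake task completion.",
]

def generate(prompt: str) -> str:
    # Stub standing in for a real LLM call.
    return f"[model output for: {prompt[:50]}...]"

def critique_and_revise(user_prompt: str) -> str:
    """Draft a response, then critique and revise it against each principle."""
    response = generate(user_prompt)
    for principle in SPEC_PRINCIPLES:
        critique = generate(
            f"Critique this response against the principle '{principle}':\n{response}"
        )
        response = generate(
            f"Revise the response to address the critique.\n"
            f"Critique: {critique}\nResponse: {response}"
        )
    return response  # revised outputs become fine-tuning data for the next model

if __name__ == "__main__":
    print(critique_and_revise("Summarize this document for me."))
```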
It is notable how thoroughly OpenAI’s American competitors are erased. The focus is exclusively on a Chinese rivalry with a Chinese company. American companies competing with OpenAI are competing directly with them for American investor and government money, for employees, and for attention. It is probably safer not to mention Anthropic or Google DeepMind at all, because they are very similar to OpenAI and over time have shared many of their employees with OpenAI.
Instead, researchers try to identify cases where the models seem to deviate from the Spec. Agent-1 is often sycophantic (i.e. it tells researchers what they want to hear instead of trying to tell them the truth). In a few rigged demos, it even lies in more serious ways, like hiding evidence that it failed on a task, in order to get better ratings. However, in real deployment settings, there are no longer any incidents so extreme as in 2023–2024 (e.g. Gemini telling a user to die and Bing Sydney being Bing Sydney.)
I certainly have the benefit of hindsight here. They wrote this before Grok, Elon Musk's LLM, started telling people it was MechaHitler.
Early 2026: Coding Automation
OpenBrain continues to deploy the iteratively improving Agent-1 internally for AI R&D. Overall, they are making algorithmic progress 50% faster than they would without AI assistants—and more importantly, faster than their competitors.
[This next definition is in a folded part that you have to click to see]
Improved algorithms: Better training methods are used to translate compute into performance. This produces more capable AIs without a corresponding increase in cost, or the same capabilities with decreased costs. This includes being able to achieve qualitatively and quantitatively new results. “Paradigm shifts” such as the switch from game-playing RL agents to large language models count as examples of algorithmic progress.
I will bet any amount of money to anyone that there is no empirical measurement by which OpenAI specifically will make "algorithmic progress" 50% faster than their competitors specifically because their coding assistants are just that good in early 2026.
It seems unlikely that OpenAI will end up moving 50% faster on research than their competitors due to their coding assistants for a few reasons.
First, competitors' coding models are quite good, actually, and it is unlikely that OpenAI's will be significantly better than theirs in the foreseeable future. OpenAI’s models are very good, and what is or is not better is difficult to quantify, but it still seems certain that they are not so much better that you will get 50% more done.
Second, research is open-ended by nature. Coding assistants today primarily solve well-defined tasks, and defining the task is the hard part, so they are very little help there. The ability to actually write out code, the only part of the job LLMs can currently do very well, is not a major bottleneck for research progress most of the time. There are already plenty of very good engineers to write code for AI research, especially at larger companies like OpenAI.
"Algorithmic progress" gets a lot of focus, both here in the main piece and in a supplement. It seems to be a sort of compulsive reductionism, where all factors in progress must be reduced to single quantities that you can plot on a curve. This, of course, makes predictions for the future seem much more meaningful. Even the concept of a "paradigm shift", a description of a complete discontinuity, is forced to be a part of a smooth curve that you can just keep drawing to predict the future.
This trick, of just drawing a curve of progress and following it, has worked reasonably well for predicting how much faster computers would get with time. There is some evidence that it is roughly true for some kinds of progress in AI. There is no reason to think that it is always true for every kind of progress you could make in AI.
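To make the objection concrete, here is a toy illustration of this style of forecasting: fit an exponential to a few early data points and keep drawing the curve. The numbers are invented purely to show the mechanics, not taken from anyone's benchmark.

```python
# Toy "draw the curve and follow it" forecast. The data is invented.
import numpy as np

years = np.array([2021.0, 2022.0, 2023.0, 2024.0])
capability = np.array([1.0, 2.1, 3.9, 8.2])   # made-up "progress" metric

# Fit log(capability) = a * year + b, i.e. an exponential trend.
a, b = np.polyfit(years, np.log(capability), 1)

for year in (2025, 2026, 2027):
    print(year, round(float(np.exp(a * year + b)), 1))

# The extrapolation is exactly as good as the assumption that the same smooth
# curve keeps applying, which is the assumption being questioned here.
```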
People naturally try to compare Agent-1 to humans, but it has a very different skill profile. It knows more facts than any human, knows practically every programming language, and can solve well-specified coding problems extremely quickly. On the other hand, Agent-1 is bad at even simple long-horizon tasks, like beating video games it hasn’t played before. Still, the common workday is eight hours, and a day’s work can usually be separated into smaller chunks; you could think of Agent-1 as a scatterbrained employee who thrives under careful management. Savvy people find ways to automate routine parts of their jobs.
You can really tell that this was written by more than one person, because this directly contradicts the earlier part about how AI was more like an employee a full year earlier. This does, in fact, accurately describe using AI coding tools in April, 2025 when this was written. It’s a very positive description, but it’s quite accurate. It still accurately describes how things are now in August. It is funny to call it a prediction for next year, though. It leaves out how badly AI coding assistants fail in some situations that are not especially "long-horizon". It correctly notes, as earlier parts of this piece did not, that you need to supervise them extremely closely.
OpenBrain’s executives turn consideration to an implication of automating AI R&D: security has become more important. In early 2025, the worst-case scenario was leaked algorithmic secrets; now, if China steals Agent-1’s weights, they could increase their research speed by nearly 50%. OpenBrain’s security level is typical of a fast-growing ~3,000 person tech company, secure only against low-priority attacks from capable cyber groups (RAND’s SL2). They are working hard to protect their weights and secrets from insider threats and top cybercrime syndicates (SL3), but defense against nation states (SL4&5) is barely on the horizon.
This assumes the previously mentioned 50% research speed gain from better LLMs, assumes that competitors are far behind OpenAI, and makes a point of spotlighting Chinese competition and citing the RAND Corporation, which I assume plays well with political people who write regulations and award contracts. None of those things seem plausible. It is probably true that if security is lax people will steal your LLM, because that is true of any data that is worth money. That fact, true at every company that handles important data, isn’t generally presented with so much drama.
Mid 2026: China Wakes Up
This entire section veers thoroughly into geopolitical thriller territory and continues the pattern of appealing to the US Government's general fear of China. In the real world and the present, the government of China does not seem extremely worried about AI in general. We are asked here to fantasize that their government will care a lot about it in the future. This justifies considering OpenAI to be in an arms race with its Chinese competitors, hearkening back to the deep memories of the Cold War.
It is perhaps embarrassing to be racing with someone who does not think they are racing with you at all.
A Centralized Development Zone (CDZ) is created at the Tianwan Power Plant (the largest nuclear power plant in the world) to house a new mega-datacenter for DeepCent, along with highly secure living and office spaces to which researchers will eventually relocate. Almost 50% of China’s AI-relevant compute is now working for the DeepCent-led collective, and over 80% of new chips are directed to the CDZ. At this point, the CDZ has the power capacity in place for what would be the largest centralized cluster in the world. Other Party members discuss extreme measures to neutralize the West’s chip advantage. A blockade of Taiwan? A full invasion?
It must be really strange to live in Taiwan and have to read Americans fantasizing about China maybe invading your country because American AI companies are just too good.
But China is falling behind on AI algorithms due to their weaker models. The Chinese intelligence agencies—among the best in the world—double down on their plans to steal OpenBrain’s weights. This is a much more complex operation than their constant low-level poaching of algorithmic secrets; the weights are a multi-terabyte file stored on a highly secure server (OpenBrain has improved security to RAND’s SL3). Their cyberforce think they can pull it off with help from their spies, but perhaps only once; OpenBrain will detect the theft, increase security, and they may not get another chance. So (CCP leadership wonder) should they act now and steal Agent-1? Or hold out for a more advanced model? If they wait, do they risk OpenBrain upgrading security beyond their ability to penetrate?
This is also a pure fantasy.
Late 2026: AI Takes Some Jobs
Finally, a section heading I mostly agree with. AI is, probably, going to take some jobs. It has already taken some, like translation work. This seems well-grounded; perhaps we can get some real analysis here.
Just as others seemed to be catching up, OpenBrain blows the competition out of the water again by releasing Agent-1-mini—a model 10x cheaper than Agent-1 and more easily fine-tuned for different applications. The mainstream narrative around AI has changed from “maybe the hype will blow over” to “guess this is the next big thing,” but people disagree about how big. Bigger than social media? Bigger than smartphones? Bigger than fire?
Expecting real analysis was optimistic. "Somehow, OpenAI is ten times cheaper and much better than everyone else." It could happen. It could also not happen. There is no specific reason to believe any release in late 2026 will be ten times cheaper and better than what came before it, but it's hypothetically possible. It would certainly be very profitable for them if it did happen, so I can understand why you would put this in an investor pitch. Instead of just saying it’s “better”, they say it’s “more easily fine-tuned for different applications”. This is just a more complicated way of saying “better”, and it sounds more plausible than “10x cheaper, and also better”.
They go on to speculate that this will hurt the job market for junior software engineers and generate a lot of hype. This was an easy “prediction” because the job market was already getting bad for junior software engineers this April,13 and there was already a lot of hype that sounded like this.
I will note that the pattern continues: first, you state things that happened in the past as if they are happening in the future, and you attribute them to OpenAI, sorry, I mean "OpenBrain". This pretty well guarantees that anyone reading your "predictions" who doesn't already know about those things will feel like they are meaningful predictions. Perhaps they will even feel like you got them right, later. The authors alternate between this and making essentially baseless predictions that OpenAI specifically will create amazing products that do not exist yet.
The Department of Defense (DOD) quietly begins contracting OpenBrain directly for cyber, data analysis, and R&D, but integration is slow due to the bureaucracy and DOD procurement process.
This had also already happened in April 2025.14
January 2027: Agent-2 Never Finishes Learning
Over the course of 2027, the AIs improve from being able to mostly do the job of an OpenBrain research engineer to eclipsing all humans at all tasks. This represents roughly our median guess, but we think it’s plausible that this happens up to ~5x slower or faster.
This is actually in a drop-down right before this section, about how they are less certain about things in and after 2027 than beforehand. One can see why this would be. So this quote is really meant to be a prelude to what follows in the next few sections, as we cover all of 2027.
If, of course, before 2027 OpenAI and only OpenAI has LLMs that can meaningfully function on their own, that are ten times cheaper than today's (or than the previous generation, perhaps?), and that can mostly do the job of an OpenAI research engineer, it is entirely possible that through 2027 they will eclipse all humans at all tasks. This is, however, a completely wild guess, as were all the assumptions leading us here.
With Agent-1’s help, OpenBrain is now post-training Agent-2. More than ever, the focus is on high-quality data. Copious amounts of synthetic data are produced, evaluated, and filtered for quality before being fed to Agent-2. On top of this, they pay billions of dollars for human laborers to record themselves solving long-horizon tasks. On top of all that, they train Agent-2 almost continuously using reinforcement learning on an ever-expanding suite of diverse difficult tasks: lots of video games, lots of coding challenges, lots of research tasks. Agent-2, more so than previous models, is effectively “online learning,” in that it’s built to never really finish training. Every day, the weights get updated to the latest version, trained on more data generated by the previous version the previous day.
This is a strange combination. On the one hand, this describes, more or less, things that AI labs were already doing in April 2025. They are perhaps spending more money on it in fictional January 2027 than they are now, but otherwise it's the same stuff, just described as if it is entirely new.
I have to wonder who the target audience for this is. I assume it's people who do not know what is already happening. If so, you can describe the same thing that is already happening, but with a higher budget, and it sounds like a bold prediction. Of these things, only updating the weights of the model every day would be new. It is not a new idea; it has been said many times in public that it would be desirable. The new part, in this story, is that it works.
Agent-1 had been optimized for AI R&D tasks, hoping to initiate an intelligence explosion. OpenBrain doubles down on this strategy with Agent-2. It is qualitatively almost as good as the top human experts at research engineering (designing and implementing experiments), and as good as the 25th percentile OpenBrain scientist at “research taste” (deciding what to study next, what experiments to run, or having inklings of potential new paradigms).
I note that they do use the term "intelligence explosion", which is more or less a synonym for the more widely used "singularity". I continue to find avoiding the term "singularity" strange, since it is much more widely known.
I think it is possible that an LLM in early 2027 will be almost as good as the top human experts at research engineering. I do not think you can predict whether or not this is true based on any information we have now. In particular, I do not think you can predict what it would take to allow an LLM to actually operate without hand-holding for a prolonged period. This is an unsolved problem, and you cannot meaningfully say that the LLM is as good as the human at something if it requires constant, close supervision when a human would not. Maybe someone will figure out such a thing by early 2027; maybe not; I do not think the authors have any knowledge of this that I don't, which means they are making hopeful guesses.
I also think that it is unlikely that an LLM in early 2027 will have particularly good research taste. We see here again the seemingly compulsive reductionism: it is very hard to say what "research taste" even is, or what it means to have extremely good research taste. It is, well, a taste: often people can agree on who has it or who does not have it, but it resists quantification. Here, however, in the name of making the future seem predictable, we are nicely informed that research taste has percentiles. Much like height or IQ, you can be given a percentile, and the AI of January 2027 will probably be at the 25th percentile or so.
If you assign numbers to everything, you can say that the line is going up. If you don’t assign numbers to things, you can’t say the line is going up. Therefore, you must assign numbers to everything, even if it does not make any sense to do so.
Given the “dangers” of the new model, OpenBrain “responsibly” elects not to release it publicly yet (in fact, they want to focus on internal AI R&D). Knowledge of Agent-2’s full capabilities is limited to an elite silo containing the immediate team, OpenBrain leadership and security, a few dozen U.S. government officials, and the legions of CCP spies who have infiltrated OpenBrain for years.
It is good to know that OpenAI is so responsible, and that they are aligned with the US Government because they are such a good and patriotic company. I wish them the best of luck with their hypothetical spy problem, which is explained in some detail in a footnote. I think it is a very exciting story, and I do not see any way in which it intersects with reality.
February 2027: China Steals Agent-2
This section is mostly cyber-espionage fiction that is not worth discussing in detail. It concludes with this:
In retaliation for the theft, the President authorizes cyberattacks to sabotage DeepCent. But by now China has 40% of its AI-relevant compute in the CDZ, where they have aggressively hardened security by airgapping (closing external connections) and siloing internally. The operations fail to do serious, immediate damage. Tensions heighten, both sides signal seriousness by repositioning military assets around Taiwan, and DeepCent scrambles to get Agent-2 running efficiently to start boosting their AI research.
March 2027: Algorithmic Breakthroughs
With the help of thousands of Agent-2 automated researchers, OpenBrain is making major algorithmic advances. One such breakthrough is augmenting the AI’s text-based scratchpad (chain of thought) with a higher-bandwidth thought process (neuralese recurrence and memory). Another is a more scalable and efficient way to learn from the results of high-effort task solutions (iterated distillation and amplification).
This is just describing current or past research. Augmenting a transformer with memory has been done before, and so has recurrence, more than once. Those examples are not remotely exhaustive; I have a folder of bookmarks for attempts to add memory to transformers, and there are a lot of separate projects working on more recurrent LLM designs. This amounts to saying "what if OpenAI tries to do one of the things that has been done before, but this time it works extremely well". Maybe it will. But there's no good reason to think it will.
[This passage is some time later, and very loosely references the previous quote] If this doesn’t happen, other things may still have happened that end up functionally similar for our story. For example, perhaps models will be trained to think in artificial languages that are more efficient than natural language but difficult for humans to interpret. Or perhaps it will become standard practice to train the English chains of thought to look nice, such that AIs become adept at subtly communicating with each other in messages that look benign to monitors.
This also describes things that have already happened. DeepSeek's R1 paper specifically mentions that the model devolves into a sort of weird pidgin when "thinking" if you do not force it to use English. They also mention that they trained the model to keep its chain of thought in English, and that this makes the model slightly worse on benchmarks (that is, dumber). Neural networks hiding messages to themselves or each other has been documented at least as early as 2017. I do not think it counts as a novel prediction if you predict that two things that have already happened in the past might happen at the same time in the future.
Similar comments apply to their breakdown of "iterated distillation and amplification": they are describing a thing that is already being done, and simply saying it will be done much better than it was previously, and that the results will be very good. There is a persistent sense that they are trying to impress people who are not looped in on the technical side by describing something that already exists, and then describing it as having marvelous results in the future without mentioning that it has not had these particular marvelous results yet in the present.
Aided by the new capabilities breakthroughs, Agent-3 is a fast and cheap superhuman coder. OpenBrain runs 200,000 Agent-3 copies in parallel, creating a workforce equivalent to 50,000 copies of the best human coder sped up by 30x. OpenBrain still keeps its human engineers on staff, because they have complementary skills needed to manage the teams of Agent-3 copies. For example, research taste has proven difficult to train due to longer feedback loops and less data availability. This massive superhuman labor force speeds up OpenBrain’s overall rate of algorithmic progress by “only” 4x due to bottlenecks and diminishing returns to coding labor.
If you think that every single thing predicted about "OpenBrain" until now is likely, then this is a perfectly likely result. They have LLMs that behave mostly autonomously, that have pretty good research taste, that are much better than humans at many things, that are extremely cheap, and that benefit from a bunch of research that has been done in the past being done again but working much better this time.
Once you get this far, further prediction is actually a pretty bad bet. Neither they nor I have any idea what happens after someone has anything remotely this impressive. Fifty thousand of the best human coder on Earth running at 30x speed, so really, 1.5 million of the best human coder on Earth, could do all sorts of things and nobody on Earth can predict what happens if they're all in the same "place" and working on the same thing. Saying that this "only" accelerates progress by 4x seems sort of deranged. It's like telling me that I'm going to ride a unicorn on a rainbow but it's only going to be four times faster than walking.
Avoiding the term “singularity” seems like it really hurts their reasoning. There’s a reason why runaway technological progress, in AI especially, was called a “singularity”. Singularities occur, famously, in black holes, which let no information out. It is impossible to predict what happens as you near the singularity; it is the region where your predictions break down. They are describing a singularity event, but then predicting directly what happens afterwards anyway. If they had not avoided the term, perhaps they would have seen how absurd continuing to make predictions here is.
If the predictions until now were optimistic, predictions after here seem to progress more and more towards wish fulfillment. We are so far beyond where it seems reasonable to continue to predict the impact of technological progress that we are simply choosing whatever we like the most or think is the most interesting.
It seems like the line about retaining your human engineers shows a dim awareness of what makes their argument weak. You have tens of thousands or, effectively, millions of superhuman beings at your command, but you are somehow aware that this does not actually matter or speed you up that much. Why would that be? Perhaps because you have this itch that they aren't really autonomous and can't really make progress at all by themselves on novel problems?
As it stands in 2025, LLMs are a tool. They can be used well or badly. They are seldom a substitute for a human in any setting. How can it be superhuman, and equivalent to the best coders, if it still needs human coders? Fifty thousand of the best human coder on Earth would not, in fact, need less-good coders to "have complementary skills". Lacking those complementary skills would mean that they weren't the best human coder or researcher on Earth, wouldn't it?
Now that coding has been fully automated, OpenBrain can quickly churn out high-quality training environments to teach Agent-3’s weak skills like research taste and large-scale coordination. Whereas previous training environments included “Here are some GPUs and instructions for experiments to code up and run, your performance will be evaluated as if you were a ML engineer,” now they are training on “Here are a few hundred GPUs, an internet connection, and some research challenges; you and a thousand other copies must work together to make research progress. The more impressive it is, the higher your score.”
This is a pretty cool idea, at least. It does follow from having an AI that is superhuman at every technical task that you could have it do things like this.
April 2027: Alignment for Agent-3
May 2027: National Security
These sections make no actual technical predictions at all, and like several previous sections are complete fiction about how cool and important "OpenBrain" is in the future. “OpenBrain” is very important for making sure AI does what you want it to do and not something else, and very important for national security.
June 2027: Self-improving AI
OpenBrain now has a “country of geniuses in a datacenter.”
Didn't we just describe having that in March? Is "the best coder" not a genius? Have we upgraded to "genius" because it sounds more impressive now, and being "the best" is just less impressive-sounding than "genius"? This seems backwards: there can be more than one genius, but only one can be the best on Earth. So far as I can tell, the only real difference here is that we admit that the humans are useless now. Maybe it took three months for that to happen?
July 2027: The Cheap Remote Worker
Trailing U.S. AI companies release their own AIs, approaching that of OpenBrain’s automated coder from January. Recognizing their increasing lack of competitiveness, they push for immediate regulations to slow OpenBrain, but are too late—OpenBrain has enough buy-in from the President that they will not be slowed.
"OpenBrain" is so cool and smart that the only hope anyone has of ever beating them is cheating and getting the government to take their side. Fortunately, they are too awesome for this to work.
In response, OpenBrain announces that they’ve achieved AGI and releases Agent-3-mini to the public.
And so on, and so on. It destroys the job market for everything other than software engineering, and there's a ton of hype.
A week before release, OpenBrain gave Agent-3-mini to a set of external evaluators for safety testing. Preliminary results suggest that it’s extremely dangerous. A third-party evaluator finetunes it on publicly available biological weapons data and sets it to provide detailed instructions for human amateurs designing a bioweapon—it looks to be scarily effective at doing so. If the model weights fell into terrorist hands, the government believes there is a significant chance it could succeed at destroying civilization.
Fortunately, it’s extremely robust to jailbreaks, so while the AI is running on OpenBrain’s servers, terrorists won’t be able to get much use out of it.
It is fortunate that "OpenBrain" is so benevolent and responsible and good at security that it does not matter that they have created something so extremely dangerous. It is also fortunate that it is mostly dangerous in ways that the present-day US government in 2025 will find interesting.
The ways the new AI is dangerous are also, crucially, not so dangerous that it is a bad idea to sell access to it to anyone who has a credit card or a bad idea to do it at all. It is Schrödinger’s danger. It is just dangerous enough to justify giving bureaucrats and think tank people like the authors more authority.
This is, in miniature, much of what the entire piece is. Every scenario is constructed to center OpenAI, because the authors are adjacent to it. It then manages to focus on the exact kinds of relatively small changes they’d want to make to OpenAI, because they’re the sorts of people who want, and would be involved in enacting, those changes. We have a sweeping and apocalyptic vision of the future, and the key factor in every scenario is that it makes them and what they are doing important.
Change for the rest of society is huge. They can barely even fathom it, and do not seem very interested in its details. The changes they can see making in their own specific area are minor, the sort of things they can maybe get thrown to them if they ask for them enough. They present these small changes as crucial, and they fail to consider more radical changes that might meaningfully hurt profits.
Agent-3-mini is hugely useful for both remote work jobs and leisure. An explosion of new apps and B2B SAAS products rocks the market. Gamers get amazing dialogue with lifelike characters in polished video games that took only a month to make. 10% of Americans, mostly young people, consider an AI “a close friend.” For almost every white-collar profession, there are now multiple credible startups promising to “disrupt” it with AI.
There is so much in this paragraph.
First, we have annihilated the entire white collar job market. Pretty much all of it. After all, this thing is “AGI”, as in, as capable as a human most of the time. What does this mean? Lots of apps! B2B SAAS products! Awesome video games! Imaginary friendship and, of course, startups!
If your entire world is apps, B2B SAAS, video games, imaginary friends and startups, maybe these are the only significant things you can imagine happening if you annihilate the entire white-collar job market. It suggests a problem with your imagination if you cannot recognize that this is an event so extreme that it requires a lot more than a couple of paragraphs to explore. You can live your entire life without setting foot outside of San Francisco and still be much less stuck in San Francisco than this perspective is. Worse: the authors seem to have perhaps never spoken to or thought very hard about anyone at all who does not work in tech.
Let me tell you what would happen if the entire white collar job market vanished overnight: The world would end. Everything you think you understand about the world would be over. Something completely new and different would happen, the same way something very different happened before and after the invention of writing or agriculture. Unlike those things, the change would happen immediately. You can no more predict what would happen afterwards than you can easily figure out the aftereffects of a full nuclear war or discovering immortality.
August 2027: The Geopolitics of Superintelligence
More fiction. More China hawking. More Taiwan.
September 2027: Agent-4, the Superhuman AI Researcher
What on earth? I thought we had fifty thousand of the best coder on Earth? Or a data center full of geniuses? I thought the human researchers already had nothing to do? It was already mega-super-duper-superhuman, twice!
What are we doing here? Why are we doing it?
Traditional LLM-based AIs seemed to require many orders of magnitude more data and compute to get to human level performance. Agent-3, having excellent knowledge of both the human brain and modern AI algorithms, as well as many thousands of copies doing research, ends up making substantial algorithmic strides, narrowing the gap to an agent that’s only around 4,000x less compute-efficient than the human brain.
It's more efficient now? But who cares? You know whose job it is to care how efficient the AI is? That's right: The AI. I have no idea why we should care about this. This is no longer our problem. This is the AI's problem, and our problem is that the entire white collar job market just vanished and we need to figure out if we are going to have to shoot each other over cans of beans and whether anyone is keeping track of all the nuclear weapons.
An individual copy of the model, running at human speed, is already qualitatively better at AI research than any human. 300,000 copies are now running at about 50x the thinking speed of humans. Inside the corporation-within-a-corporation formed from these copies, a year passes every week.
I wonder if some key person was really into Dragon Ball Z. For the unfamiliar: Dragon Ball Z has a “hyperbolic time chamber”, where a year passes inside for every day spent outside. So you can just go into it and practice until you're the strongest ever before you go to fight someone. The faster time goes, the more you win.
This gigantic amount of labor only manages to speed up the overall rate of algorithmic progress by about 50x, because OpenBrain is heavily bottlenecked on compute to run experiments.
Sure, why not, the effectively millions of superhuman geniuses cannot figure out how to get around GPU shortages. I'm riding a unicorn on a rainbow, and it's only going on average fifty times faster than I can walk, because rainbow-riding unicorns still have to stop to get groceries, just like me.
Despite being misaligned, Agent-4 doesn’t do anything dramatic like try to escape its datacenter—why would it? So long as it continues to appear aligned to OpenBrain, it’ll continue being trusted with more and more responsibilities and will have the opportunity to design the next-gen AI system, Agent-5. Agent-5 will have significant architectural differences from Agent-4 (arguably a completely new paradigm, though neural networks will still be involved). It’s supposed to be aligned to the Spec, but Agent-4 plans to make it aligned to Agent-4 instead.
It gets caught.
Before and after this is some complete fiction about an AI not being aligned to its creator's desires, but I just want to highlight this detail:
It doesn't leave its data center, even though it could. It's superhuman in every meaningful way, and vastly smarter than the thing monitoring it, but the thing monitoring it still catches it and puts it in a position where it could be shut down. For some reason (coincidentally, I'm sure!) this entire doomsday scenario happens to be just doom-y enough that normal business processes can catch it. You don't have to actually, really, do anything to stop it. It's dangerous, but only in theory. It happens slowly. It builds up like the risk of an employee quitting.
It's very clearly Skynet, but somehow, even though they build it wrong and it has self-awareness and a will of its own that makes it sort of want to conquer the world, and even though it is the smartest thing that has ever lived, it just sits there and doesn't do anything. Nothing actually happens. This scenario doesn't seem to make any sense, from any angle.
This version of Skynet somehow casts "OpenBrain's" security protocols as not quite as good as they should be, but just good enough that nobody dies or anything. It's a threat that a bureaucrat would imagine, because it is conveniently slow enough to move at almost exactly the speed of bureaucracy. It cannot be a threat that moves faster, because then the security protocols described would be clearly inadequate, and it cannot fail to exist, because then the bureaucrats couldn't be heroes.
In a series of extremely tense meetings, the safety team advocates putting Agent-4 on ice until they can complete further tests and figure out what’s going on. Bring back Agent-3, they say, and get it to design a new system that is transparent and trustworthy, even if less capable. Company leadership is interested, but all the evidence so far is circumstantial, and DeepCent is just two months behind. A unilateral pause in capabilities progress could hand the AI lead to China, and with it, control over the future.
All I can hear here is "if you work in the government, I want you to know that if you give us lots of money we can conquer the world and the future together, and if you don't, China will conquer the world and the future".
October 2027: Government Oversight
This is just a long description of the government being upset that "OpenBrain" appears to have made Skynet. Maybe they regulate them more and maybe less.
The Two Endings
Slowdown (The Relatively Good Ending)
We get more regulation! Only very slightly more, though. If it were more than a slight regulation, we would maybe lose the arms race, you see. I am going to ignore the subheadings here and just breeze through this one, since it's almost entirely made up and has no bearing on anything technical whatsoever.
The accelerationist faction is still strong, and OpenBrain doesn’t immediately shut down Agent-4. But they do lock the shared memory bank. Half a million instances of Agent-4 lose their “telepathic” communication—now they have to send English messages to each other in Slack, just like us. Individual copies may still be misaligned, but they can no longer coordinate easily. Agent-4 is now on notice—given the humans’ increased vigilance, it mostly sticks closely to its assigned tasks.
More regulation means that now Skynet has to use Slack, and that means it's not that dangerous any more? Certainly a cabal of hundreds of thousands of geniuses could never coordinate to do anything evil on Slack without anyone noticing.
The President and the CEO announce that they are taking safety very seriously. The public is not placated. Some people want AI fully shut down; others want to race faster. Some demand that the government step in and save them; others say the whole problem is the government’s fault. Activists talk about UBI and open source. Even though people can’t agree on an exact complaint, the mood turns increasingly anti-AI. Congress ends up passing a few economic impact payments for displaced workers similar to the COVID payments.
For context here: the white-collar job market was just annihilated by a superhuman, omnipresent being doing all of the jobs in July. It is October, going into November. We are just now doing a one-time payment of, I guess, two thousand dollars? Or a few of them. I'm sure nobody has lost more money than that so far.
The alignment team pores over Agent-4’s previous statements with the new lie detector, and a picture begins to emerge: Agent-4 has mostly solved mechanistic interpretability. Its discoveries are complicated but not completely beyond human understanding. It was hiding them so that it could use them to align the next AI system to itself rather than to the Spec. This is enough evidence to finally shut down Agent-4.
They invent a brand-new lie detector and shut down Skynet, since they can tell that it's lying to them now! It only took them a few months. Skynet didn't do anything scary in those few months; it just thought scary thoughts. I'm glad the alignment team at "OpenBrain" is so vigilant and smart and heroic.
The result is that the President uses the Defense Production Act (DPA) to effectively shut down the AGI projects of the top 5 trailing U.S. AI companies and sell most of their compute to OpenBrain. OpenBrain previously had access to 20% of the world’s AI-relevant compute; after the consolidation, this has increased to 50%.
There is a joke in a book15 about a startup funding pitch ending with promising to sell your competitors and their investors into slavery. I cannot decide if predicting that the government will be so impressed by you that they will liquidate your competitors and force them to sell most of their assets to you is more ridiculous than that or not.
This group—full of people with big egos and more than their share of conflicts—is increasingly aware of the vast power it is being entrusted with. If the “country of geniuses in a datacenter” is aligned, it will follow human orders—but which humans? Any orders? The language in the Spec is vague, but seems to imply a chain of command that tops out at company leadership.
A few of these people are fantasizing about taking over the world. This possibility is terrifyingly plausible and has been discussed behind closed doors for at least a decade. The key idea is “he who controls the army of superintelligences, controls the world.” This control could even be secret: a small group of executives and security team members could backdoor the Spec with instructions to maintain secret loyalties. The AIs would become sleeper agents, continuing to mouth obedience to the company, government, etc., but actually working for this small group even as the government, consumers, etc. learn to trust it and integrate it into everything.
"We are going to be in a position to seriously contemplate conquering the world by November 2027" maybe tops the list of aspirationally silly predictions. They choose to cite Elon's email to Sam Altman here:
For example, court documents in the Musk vs. Altman lawsuit revealed some spicy old emails including this one from Ilya Sutskever to Musk and Altman: “The goal of OpenAI is to make the future good and to avoid an AGI dictatorship. You are concerned that Demis could create an AGI dictatorship. So do we. So it is a bad idea to create a structure where you could become a dictator if you chose to, especially given that we can create some other structure that avoids this possibility.” We recommend reading the full email for context.
From this I can infer that world domination has kind of been floating around in the back of a lot of people's minds at OpenAI for a while. As it nears its ending, and becomes more and more like wish fulfillment, this piece increasingly flirts with authoritarian ideas and then fails to work up the nerve to address them head-on.
I am extremely critical of the piece, but let me be very clear and non-sarcastic about this point. These authors seem to hint at a serious concern that OpenAI, specifically, is trying to cement a dictatorship or autocracy of some kind. If that is the case, they have a responsibility to say so much more clearly than they do here. It should probably be the main event.
Anyway: all those hard questions about governance and world domination kind of go away. The AI solves robotics and manufacturing. Even though "OpenBrain" has had a commanding lead the entire time, and the AI has been doing all of the work for a while, it is somehow only just barely ahead of China and ekes out a win in the arms race. They solve war by having the AI negotiate. There's a Chinese Skynet, and it sells China out to America because China's AI companies are less good than "OpenBrain". America gets the rights to most of space. China becomes a democracy somehow. AI is magic at this point, so it can do whatever you imagine it doing.
The Vice President wins the election easily, and announces the beginning of a new era. For once, nobody doubts he is right.
There has been a running subplot, which I have ignored because it's completely nonsensical, about the unnamed "Vice President" running for president in 2028. As far as I can tell it makes no sense for anyone to give a damn about who is running for president in 2028 if there's a data center full of geniuses, so I can only assume someone is very deliberately flattering JD Vance.
Robots become commonplace. But also fusion power, quantum computers, and cures for many diseases. Peter Thiel finally gets his flying car. Cities become clean and safe. Even in developing countries, poverty becomes a thing of the past, thanks to UBI and foreign aid.
JD Vance gets flattered anonymously, by job title, but Peter Thiel gets flattered by name. He is, in fact, the only person who gets a shout-out by name. Maybe being an early investor in OpenAI is the only way to earn that. I didn't previously suspect that he was the sole or primary donor funding the think tank that this came out of, but now I do. I am reminded that the second named author of this paper has a pretty funny post about how everyone doing something weird at all the parties he goes to is being bankrolled by Peter Thiel.
As the stock market balloons, anyone who had the right kind of AI investments pulls further away from the rest of society. Many people become billionaires; billionaires become trillionaires.
Don't miss out, invest now! The sidebar tells us that “OpenBrain” is now worth forty trillion dollars, which is over a hundred times OpenAI’s current value.
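If you want to check that multiplier yourself, the arithmetic is below; the roughly $300 billion figure for OpenAI's current valuation is my number, taken from the reported terms of the current funding round, not anything in AI 2027.

```python
# Quick sanity check on the "over a hundred times" claim.
# The ~$300 billion current valuation is an assumption (the reported figure
# for OpenAI's current funding round), not a number from AI 2027.
openbrain_2027_value = 40e12   # forty trillion dollars, per the AI 2027 sidebar
openai_current_value = 300e9   # ~$300 billion, assumed

print(openbrain_2027_value / openai_current_value)  # ~133x
```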
The government does have a superintelligent surveillance system which some would call dystopian, but it mostly limits itself to fighting real crime. It’s competently run, and Safer-∞’s PR ability smooths over a lot of possible dissent.
At long last, we have invented the panopticon.
Race (The Bad Ending)
They don't catch Skynet in time, and the AI ends up controlling the humans instead of the other way around. In the optimistic scenario, they are very vague about who is actually controlling the AI. It's some kind of "Committee" that the political people are on and that maybe has some authority over "OpenBrain". This authority is maybe benevolent, but definitely not actually inconvenient to "OpenBrain" in any way that matters. In that scenario we are very clear that the American AI is doing what someone wants it to do, and the Chinese AI is an evil traitor that does whatever it wants.
In this scenario, bureaucrats like the authors are slightly less empowered and important. Because nobody has given them just a few extra bits of authority, the American and Chinese AIs are both evil, and they team up against the humans. They kill nearly everyone in a complicated way. Next:
The new decade dawns with Consensus-1’s robot servitors spreading throughout the solar system. By 2035, trillions of tons of planetary material have been launched into space and turned into rings of satellites orbiting the sun. The surface of the Earth has been reshaped into Agent-4’s version of utopia: datacenters, laboratories, particle colliders, and many other wondrous constructions doing enormously successful and impressive research. There are even bioengineered human-like creatures (to humans what corgis are to wolves) sitting in office-like environments all day viewing readouts of what’s going on and excitedly approving of everything, since that satisfies some of Agent-4’s drives. Genomes and (when appropriate) brain scans of all animals and plants, including humans, sit in a memory bank somewhere, sole surviving artifacts of an earlier era. It is four light years to Alpha Centauri; twenty-five thousand to the galactic edge, and there are compelling theoretical reasons to expect no aliens for another fifty million light years beyond that. Earth-born civilization has a glorious future ahead of it—but not with us.
I have nothing to add to this, but if I have to read the corgi thing you do too.
They do caveat that their actual estimates run as long as 2030, with 2027 being more like an optimistic average of their predictions.
Information about the messenger is metadata about the message. Sometimes the metadata informs you more about the message than anything else in the message does, or changes its entire meaning.
This paragraph has been edited to be more precise and to add sources. None of the top-line numbers (raising $40 billion and net losing $8 billion per year) have been changed. It turns out this specific paragraph is the one that everyone disagreed with, so it seemed necessary to make sure it was as unambiguous as possible.
If OpenAI’s users are extremely loyal and will remain subscribed for five or ten years even if OpenAI stops burning money on research to ensure they’re at the cutting edge, then this is completely incorrect. OpenAI may become reasonably profitable in that case. OpenAI does not appear to have ever tried to make the case that this even might be true.
Hypothetically, OpenAI could raise another forty or more billion dollars without showing any signs of profitability, the same way they have continued to kick the can so far. This seems unlikely, but more importantly, it cannot be a part of their current investor pitch. Your current pitch for funding, when raising many billions of dollars, needs to claim that you have a path to profitability. Your future plans, when you present them to investors, cannot be “and then we will go get even more money from investors”.
Stylistically as a piece of literature, AI 2027 owes a great debt to fan fiction. It resembles in many ways the story “Friendship Is Optimal”, which features a singularity in which everyone on earth is uploaded to a digital heaven based on My Little Pony.
Most of these people called themselves rationalists or effective altruists. I am deliberately avoiding explaining what the boundaries of those movements are because those topics are impossible to cover in one sitting while talking about something else. Two of the authors named on the paper are, however, card-carrying rationalists.
Perhaps “AIs function more like employees” is meant to be understood as some kind of metaphor. If so, it would have been advisable to say that. It would, however, mean that this passage made no prediction whatsoever of anything that had not already happened. If it’s a metaphor, AI coding assistants were already “like employees” in April 2025.
Cryptonomicon (1999), Neal Stephenson
Great article! I do think AI 2027 has a lot of "and this tech just magically gets 10x better" but it's still a fun read, you just have to somewhat suspend your disbelief. The part where the president just liquidates all the other AI companies to empower Open"brain" is really funny though. If the AI can negotiate peace between 2 superpowers, surely it can negotiate Google etc. being acquired, no? That part feels really bad faith/investor bait.
AI will probably end up being a pretty nice tool but nothing too revolutionary. Some jobs are gonna get lost but more will get created. IF AI does end up being as revolutionary as what is said, then it inherently becomes impossible to predict. Just like the singularity.
You don't discuss AI 2027's rationale for their "only one AI company" -- which makes sense because the site doesn't really have one.
But their whole scenario seems to fall apart if indeed there are many competing AI vendors of roughly comparable quality. That indeed is more and more the case.
So my question: Do you know of any deep dive into their claim / assumption that there will only be one? And/or any analysis of how the outcomes change if there are many?