You need a plan for the future
It’s about to get weird
The events of the past few weeks have convinced me that this chart from Wait But Why is true, and that we're only a few years away from takeoff.
Let's recap.
OpenAI was founded in 2015 as a nonprofit with the goal of "creating safe AGI that benefits all of humanity." AGI stands for Artificial General Intelligence. Definitions vary, but the most commonly accepted one is: an artificial intelligence that can learn to perform any intellectual task a human can (write an essay, learn a language, compose music, play a game, do math...).
OpenAI's first generative model, GPT-1, was released in 2018. Its capabilities included answering simple questions like "What is the capital of England?"
But GPT models got progressively better, and just over four years later, in November 2022, ChatGPT was released, followed closely by GPT-4. You probably already know what they're capable of, but the speed at which we went from GPT-1 to GPT-4 is staggering. And OpenAI has "only" around 750 employees (compared to Meta's 66,000, for example).
In March 2023, many prominent figures in tech, including Elon Musk and Steve Wozniak, signed an open letter calling for a six-month pause on training any model more powerful than GPT-4, citing existential risks and concerns about a potential AI singularity. We'll get to those risks soon.
Fast forward to November 2023, when Sam Altman (CEO of OpenAI) said these words during a talk at APEC:
"Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward".
Interesting. A few days later, Sam was ousted by the OpenAI board, the press release stating only that he had not been "consistently candid in his communications".
To this day, it is unclear exactly why Sam Altman was fired. Early rumors talked about a new model that was close to general intelligence. The OpenAI board (historically safety-driven) may have been spooked by this, while Sam (historically progress-driven) may have wanted to commercialize it as soon as possible, leading to his ouster.
Indeed, Reuters reported that "OpenAI researchers warned board of AI breakthrough ahead of CEO ouster" via a letter. It allegedly referred to "Q*", a program said to be capable of solving grade-school math problems. Other sources say the board never received such a letter. But even if the letter never existed, there's a saying that "if AI can't do something now, just give it a few months".
And on the day I write this, news broke that FunSearch, a DeepMind system that pairs a large language model with an automated evaluator, has just solved "an unsolvable math problem" (it found new constructions for the cap set problem). Probably nothing.
Anyway, back to OpenAI. Today it seems that the most likely explanation is that Sam was playing internal politics to get rid of a board member and the board didn't take it too well.
But exactly what happened is almost irrelevant. The coup backfired spectacularly, and Sam was reinstated as CEO just a few days later, with a reconstituted board. This weakened the safety camp, not only at OpenAI, where Sam now faces even fewer constraints, but everywhere: Sam had always been popular, the move was terrible PR for AI safety, and people now seem warier than ever of AI safetyists.
Okay, but why exactly is AI safety important? Well, if building AGI is possible at all, it means there's no "ceiling" at human-level intelligence: we could build an AI that's smarter than any human, a superintelligence. And an AI that's taught to improve itself could reach that level very quickly, because each improvement makes it better at making the next one.
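To make that compounding concrete, here's a toy sketch. It's purely my own illustration with made-up numbers, not a model anyone in the field actually uses: assume each generation of the AI designs a successor that is a constant factor more capable than itself. Starting at human level, the gap blows up within a handful of generations:

```python
# Toy model of recursive self-improvement. Purely illustrative:
# the 1.5x gain per generation is a made-up number.

def takeoff(initial_capability: float = 1.0,   # 1.0 = human level
            improvement_factor: float = 1.5,   # hypothetical per-generation gain
            generations: int = 10) -> None:
    capability = initial_capability
    for gen in range(1, generations + 1):
        capability *= improvement_factor       # each generation builds a better successor
        print(f"generation {gen}: {capability:.1f}x human level")

takeoff()  # generation 10 already prints ~57.7x human level
```

The real dynamics could be slower (diminishing returns) or much faster; the point is only that compounding improvement leaves very little time between "roughly human" and "far beyond human".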
Now, who knows what a superintelligence is capable of? Nobody, because by definition we're not smart enough to imagine it.
My favorite analogy is: Imagine apes somehow designed a human to serve them. How long before the human figures out that he's much smarter than the apes and can basically escape, kill them all, or get them to serve him instead? How can the apes even imagine what this man is capable of? Even with all the care in the world to make the human docile, the apes' brains are no match.
Another possible problem is instrumental convergence: the idea that almost any goal-directed agent, whatever its final goal, will converge on the same subgoals, like self-preservation and resource acquisition, because they help with nearly everything. What if we're really good at getting the AI to do what we tell it to do, but the AI is so good at its job that it won't stop until its goal is achieved? And what if there's a flaw in the way the goal is formulated? The most famous example is the paper clip maximizer.
"Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans."
I'm a bit skeptical of the instrumental convergence theory, because I assume a superintelligence would also be able to question its goals. Then again, do we really want a superintelligence that questions its goals?
On the positive side, a superintelligence could mean solving climate change, and maybe even curing sickness and ending death and suffering. That's a lot of upside!
Anyway, the pace of AI progress makes me think that radical change (whether good, bad, or just really weird) is coming, and it's coming soon.
And it's not just me. Most of the people working on it, from near and far, are telling you that this stuff is powerful and that the "machine god" is on the way. Many are pessimistic:
And here are a few other prediction markets:
A world without AGI
To make things more complicated, I am also pessimistic about a world without AGI. I share Scott Alexander's view mentioned in Pause For Thought: The AI Pause Debate that "if we don’t get AI, [...] there’s a 50%+ chance in the next 100 years we end up dead or careening towards Venezuela."
The reasons are a combination of other existential risks (pandemics, nuclear weapons, ahem... solar flares) and economic or societal collapse for very dumb reasons (fertility crisis, overwhelming regulation, polarization...).
As a European, I am very concerned about these things. They are summarized well in this thread:
So we're coming to the end of this section, and I want to ask you: after reading all this, can you still imagine a future where people retire around 65 and live to around 80? A world where interest rates still hover around 2%? Where the shiny new things are the iPhone 55 and Call of Duty: Modern Warfare 8? Where marketers and students still outsource their work to ChatGPT? I can't. Maybe the world will be good. Maybe it will be bad. Maybe it will be weird. Maybe it won't exist at all. One thing's for sure: it won't be normal. That's why you need a plan.
The plan
"If you knew you only had one year to live, how would you spend it?" is a thought experiment often used to explore people's values and priorities. Well, what if it wasn't just a thought experiment?
Nick used to work for OpenAI (he quit a few months ago). He's seen the beast from the inside. He knows things he probably can't talk about. And he says you should have a plan. I could not agree more.
Your plan will be personal, of course. It will depend on the probability you give to a radically different future, and how soon you think it will be here.
But it must follow this one general principle: Assuming that nothing will be the same in [insert your prediction here], what should you spend your time, money, attention, and energy on? How do you want to spend these last X years?
You must decide for yourself. The only rule is that you shouldn't sacrifice the present for the future.
I also agree with Nick that it is a very good idea to do some kind of equanimity practice. Against an uncertain future, it seems to be the only thing you can hold on to.
What I still struggle with sometimes, and I assume you will too, is taking this seriously. This is not a silly exercise. Your life is more likely than not going to change very soon. You will probably never grow old: you're either going to die soon or never die. These ideas are hard to process, and denial is probably the hardest stage of grief to deal with here. Get ready for takeoff.