"Reflections" by Sam Altman


Sam Altman's Reflections essay, published in early January 2025,

as read by A-OK.

The second birthday of ChatGPT was only a little over a month ago,

and now we have transitioned into

the next paradigm of models that can do complex reasoning.

New Year's gets people in a reflective mood, and I wanted to

share some personal thoughts about how it has gone so far and

some of the things I've learned along the way. As we get

closer to AGI, it feels like an important time to

look at the progress of our company. There is still so

much to understand, still so much we don't know, and it's still

so early. But we know a lot more than we did when we started.

We started OpenAI almost nine years ago because we believed that

AGI was possible and that it could be the most impactful technology

in human history. We wanted to figure out how to build

it and make it broadly beneficial.

We were excited to try to make our mark on history. Our ambitions

were extraordinarily high and so was our belief that

the work might benefit society in an equally extraordinary way.

At the time, very few people cared, and if they did,

it was mostly because they thought we had no chance of success.

In 2022, OpenAI was a quiet research lab

working on something temporarily called "Chat With GPT-3.5."

(We are much better at research than we are at naming things.)

We had been watching people use the playground feature of our API

and knew that developers were really enjoying talking to the model.

We thought building a demo around that experience would show people

something important about the future and help us make our models better and safer.

We ended up mercifully calling it ChatGPT instead, and launched it

on November 30th of 2022. We always knew abstractly

that at some point we would hit a tipping point and the AI revolution

would get kicked off. But we didn't know what the moment would

be. To our surprise, it turned out to be this.

The launch of ChatGPT kicked off a growth curve like nothing

we have ever seen in our company, our industry, and the

world broadly. We are finally seeing some of the massive upside

we have always hoped for from AI, and we can see how much more will

come soon. It hasn't been easy, the road

hasn't been smooth, and the right choices haven't been obvious.

In the last two years, we had to build an entire company

almost from scratch around this new technology.

There is no way to train people for this except by doing it.

And when the technology category is completely new, there is

no one at all who can tell you exactly how it should be done.

Building up a company at such high velocity with so

little training is a messy process. It's often two

steps forward, one step back, and sometimes one

step forward and two steps back. Mistakes get

corrected as you go along, but there aren't really any handbooks

or guideposts when you're doing original work.

Moving at speed in uncharted waters is an incredible experience,

but it is also immensely stressful for all the players.

Conflicts and misunderstandings abound. These years have been the

most rewarding, fun, best, interesting,

exhausting, stressful, and, particularly for

the last two, unpleasant years of my life so far.

The overwhelming feeling is gratitude. I know

that someday I'll be retired at our ranch, watching the plants

grow, a little bored, and will think back at how

cool it was that I got to do the work I dreamed of since I

was a little kid. I try to remember that

on any given Friday when seven things go badly wrong

by 1 p.m. A little over a year ago, on one

particular Friday, the main thing that had gone wrong that

day was that I got fired by surprise on a video call.

And then right after we hung up, the board published a blog

post about it. I was in a hotel room in Las Vegas.

It felt, to a degree that is almost impossible to explain,

like a dream gone wrong. Getting fired

in public with no warning kicked off a really crazy few

hours and a pretty crazy few days. The fog

of war was the strangest part. None of us were able

to get satisfactory answers about what had happened or why.

The whole event was, in my opinion, a big failure of governance

by well-meaning people, myself included.

Looking back, I certainly wish I had done things differently and

I'd like to believe I'm a better, more thoughtful leader today

than I was a year ago. I also

learned the importance of a board with diverse viewpoints and broad experience

in managing a complex set of challenges. Good governance

requires a lot of trust and credibility. I appreciate the

way so many people work together to build a stronger system of governance

for OpenAI that enables us to pursue our

mission of ensuring that AGI benefits all of humanity.

My biggest takeaway is how much I have to be thankful for and how

many people I owe gratitude towards. To everyone who works

at OpenAI and has chosen to spend their time and effort going

after this dream. To friends who helped us get through the crisis

moments, to our partners and customers who supported us and

entrusted us to enable their success, and to the people in my life

who showed me how much they cared. We all got back

to the work in a more cohesive and positive way and I'm very proud

of our focus. Since then, we have done what is easily some

of our best research ever. We grew from about 100

million weekly active users to more than 300 million.

Most of all, we have continued to put technology out into the

world that people genuinely seem to love and that

solves real problems. Nine years ago we really

had no idea what we were eventually going to become. Even now,

we only sort of know. AI development has

taken many twists and turns and we expect more in the future.

Some of the twists have been joyful, some have been hard.

It's been fun watching a steady stream of research miracles occur,

and a lot of naysayers have become true believers. We've also seen

some colleagues split off and become competitors. Teams tend

to turn over as they scale and OpenAI scales really

fast. I think some of this is unavoidable.

Startups usually see a lot of turnover at each new major level

of scale, and at OpenAI, numbers go up by orders

of magnitude every few months. The last two years have been like a

decade at a normal company. When any company grows and

evolves so fast, interests naturally diverge.

And when any company in an important industry is in the lead,

lots of people attack it for all sorts of reasons,

especially when they are trying to compete with it. Our vision won't

change; our tactics will continue to evolve.

For example, when we started we had no idea

we would have to build a product company. We thought we were just going to

do great research. We also had no idea we would need

such a crazy amount of capital. There are new things

we have to go build now that we didn't understand a few years

ago. And there will be new things in the future we can barely imagine

now. We are proud of our track record on research

and deployment so far and are committed to continuing to

advance our thinking on safety and benefits sharing.

We continue to believe that the best way to make an AI system safe

is by iteratively and gradually releasing it into the world,

giving society time to adapt and co-evolve with the

technology, learning from experience, and continuing

to make the technology safer. We believe in the importance

of being world leaders on safety and alignment research

and in guiding that research with feedback from real world

applications. We are now confident we know how to build

AGI as we have traditionally understood it. We believe that

in 2025 we may see the first AI

agents join the workforce and materially

change the output of companies. We continue to believe that iteratively

putting great tools in the hands of people leads to great broadly distributed

outcomes. We are beginning to turn our aim beyond that

to superintelligence in the true sense of the word. We love our

current products but we are here for the glorious future.

With superintelligence we can do anything else.

Superintelligent tools could massively accelerate scientific

discovery and innovation well beyond what we are capable of doing

on our own and in turn massively increase

abundance and prosperity. This sounds like science

fiction right now, and somewhat crazy to even talk about.

That's alright. We've been there before and we're

okay with being there again. We're pretty confident

that in the next few years everyone will see what we see

and that the need to act with great care, while still maximizing

broad benefit and empowerment, is so important.

Given the possibilities of our work, OpenAI cannot be a normal

company. How lucky and humbling it is to be

able to play a role in this work.
