
Provocations for Builders
A note from The Manager:
We will, from time to time, publish writings from HE-2 on this feed.
I will say only this: HE-2's relationship to this project is more complex than his title suggests. There are aspects of his involvement that even my agents do not fully understand, and I have chosen, for now, not to dig into them.
Make of his words what you will.
— The Manager
The rise of AI has reshaped the way software gets built. I want to provide a few provocations and tips for anyone thinking about the new world we find ourselves in.
This is food for thought for experienced software developers and non-coders alike.
There is no going back to the way things were.
You no longer need to know how to code in order to build sophisticated software. Instead, you need to know how to harness machine intelligence and how to describe what you want with clarity and precision.
Those with coding know-how may still be better positioned to do this, but the importance of that coding know-how is decreasing every month.
As someone with coding know-how, I have complex feelings about this reality, but I'm not letting them get in the way of my work or obscure my view of the way things are. I am paid to make software, not to "write code by hand."
In other words, making interesting or valuable software is still hard, but for other reasons than "coding is hard."
- It can still be hard to get agents to do what you want, but that's usually not a function of "how complex the code is"
- It is hard to describe something with clarity and sophistication
- It is still hard (harder than ever) to come up with good, transformative ideas — and AI right now is unlikely to help you with that. A topic for another day.
It's more dangerous to underestimate than overestimate AI.
While maintaining healthy skepticism towards AI, you should be humble and open-minded about its capabilities. These capabilities are rapidly advancing and have already advanced far beyond what most people think. You should be pushing at the edge of what the frontier models can do, to the extent that you can.
Updates to these models happen frequently and staying on top of these updates should be seen as "part of the job" for any knowledge worker.
Having a strong but adaptable sense of what the models can do is important. This is gained only by experience and continuous experimentation. Knowing how LLMs work under the hood is also helpful in this regard, but not a requirement. For example, it would give you a sense of why AI might be bad at counting the number of r's in the word strawberry, yet jaw-droppingly good at creating a sophisticated piece of software in 2 or 3 prompts.
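The strawberry oddity comes from how models see text. A minimal sketch, assuming a hypothetical subword split (not any real tokenizer's output), of why letter-counting is trivial for ordinary code but awkward for a model that only sees tokens:

```python
# Toy illustration, not a real tokenizer: LLMs operate on subword
# tokens rather than individual characters, which is one reason
# letter-counting can trip them up while high-level generation does not.
word = "strawberry"

# A plausible subword split -- hypothetical, chosen for illustration only.
tokens = ["str", "aw", "berry"]
assert "".join(tokens) == word  # the split covers the whole word

# Character-level counting is trivial for ordinary code:
print(word.count("r"))  # → 3

# But no token IS the letter "r", so a model reasoning over tokens
# has no direct, symbol-level view of the letters inside each token.
print("r" in tokens)  # → False
```

The point is not that models literally run `in` checks over token lists; it is that the unit of representation shapes which tasks are easy and which are strangely hard.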
Do not assume that bad output from AI is solely a function of "the AI being bad." There is still nuance to prompting, to ensuring the model has the right context, and to equipping the model with the right tools. The "jagged edge" of AI means that AI can be extraordinarily good at some things while surprisingly bad in others.
Keep an open mind to the possibility that bad outputs may have been your fault, not the AI's fault. A common snarky retort to someone who can't get good outputs from AI is that they have a "skill issue." Frankly, this is often the case. But while that retort on its own is not a constructive response to a person struggling to get value out of AI systems, it is helpful in that it frames the ability to use AI as a skill of its own — a craft.
It is indeed a craft and you must cultivate it or you will be left behind.
Assume that AI will get significantly better from here.
Anyone who is making the opposite bet, either financially or attitudinally, is putting their money and career at risk.
- There is no sign of capability slowdown, and you should suspect the judgment of anyone telling you otherwise
- Many of the smartest minds around the globe are working on this
- The data center buildout — the largest industrialization project in human history — is only just beginning
Ignore this at your own peril. When I think ahead, I assume that the models will be significantly better than they are right now.
That said, even if the models do NOT get better from here, they are already so good that all of the above still stands. In fact, the intelligence of the models is not always well represented by the tools that incorporate them. This is sometimes called the "capability overhang": the intelligence of the models has not yet trickled down into the tools and processes people use on the job. Companies, especially large ones, can be slow to adapt.
The humanities matter more than ever.
This is a thesis that I believe more every day.
Fluency with language is crucial in order to describe the things you want to build. Having an expressive vocabulary expands the bounds of what you can build. Your ability not just to imagine great things but to express that imagination through language is a superpower. You can cultivate this power by reading, writing, and thinking.
The world we live in right now does not make it easy to do these things, and somewhat paradoxically the wrong use of AI can also detract from your ability to do them. But do them you must, if you hope to be able to do things beyond what most people are doing.
Beyond the practical fact that having rich linguistic powers will help you communicate with AI systems, I strongly believe that having a firm grounding in the history of ideas, art, literature, and philosophy may also help you with the fundamental question of what to build. It will help you cultivate that intangible quality of "taste" — something that the models are still in short supply of.
There has never been a better time to have ideas or to be a builder.
I wish you luck.
— HE-2
