Vibe coding
As software developer
I think “can entirely replace people in software development” might well be as good a definition of AGI as any other I’ve heard, and as discussed elsewhere I don’t think it’s worth speculating too much about AGI. Suffice it to say that I don’t think we’re there yet, I don’t actually think we’re going to get there any time soon, and if we do it’s going to be such a strange surprise that nothing anyone is saying right now is going to be that useful for thinking about it.
As programming language
What about a world where software developers (the people) still exist, but they do their work entirely through natural language conversation with an agentic chatbot? Will high-level language code become to them what machine code was to those of us who came before? I think there are a couple of reasons to be skeptical.
The first is a simple question of capabilities. At this point the chatbots are simply not capable of autonomously generating maintainable code in large codebases, nor are they capable of reliably answering questions about complex (even well-structured and well-documented) codebases. I personally don’t think this is likely to materially change in the near future. That said, I’m not enough of an expert to be confident in that take, and like most people I’ve been surprised by the speed of recent progress.
My more fundamental issue is this: why bother? It’s been a very long time since the programming languages we use were primarily shaped by the needs of the computers they run on; any reasonably modern programming language is much better viewed as a tool for the careful representation of ideas. In this, programmers have a lot in common with mathematicians and philosophers, who were using formal languages to express ideas long before we’d conceived of machines that could execute those languages; even lawyers and scholars of the humanities often do their serious work in a jargon that is “unnatural” precisely to the extent that natural language can be a poor tool for that work.
I’m not trying to say that natural language is useless. Natural languages are the tools that we have developed over millennia for representing the vast spectrum of our ideas, and they do an incredible job at it. I just think that conducting the particular sub-project of High Modernism that is the development of automated systems purely in natural language would be as absurd as trying to convey this entry entirely in sequent calculus.
On a more personal level, every time another developer enthuses about the job becoming less like a careful distillation of ideas and more like a rambling conversation with an enthusiastic but somewhat dumb person, I wonder exactly what got them into the computing profession in the first place.
As DIY
What about allowing people who don’t have the (specific) skills to have a go at building stuff? This is the part of the entry that I’ve had the most difficulty making up my mind on. My gut feeling is that this will get eaten into from both sides, to the point that it might not exist at all. From above, I think that for more complex ideas there’s going to be no substitute for thinking deeply about them, and that, having done so, distilling the result into a precise formal-language artifact is going to be too valuable at the margin to throw away. From below, I think this is going to be eaten by the next section: less using chatbots to make tools to do a task, and more using chatbots to use tools to do a task.
As interface paradigm
I have a longstanding and somewhat idiosyncratic interest in end user computing. One of the things I have thought a lot about over the years is the separation between “normal” and “power” users of computer systems. I have tended to think of this in linguistic terms: power users are those who can express to the computer what they wish it to do in a way that allows it to do it. This led to two obvious suggestions for how to make end user computing more effective: more widely train people in the general ideas and ways of thinking that underlie these linguistic interactions with computers (this is what I always took to be the real meaning of the rather fuzzy phrase “learn to code”), and design and build systems (and implicitly languages) which provide for this sort of linguistic power user interaction.
I’m not entirely sold that these ideas are fully obsolete, but it seems pretty likely that across a lot of domains the dominant power user interaction paradigm is going to be natural language conversation with an LLM. There are caveats: the security/reliability situation around LLM agents seems difficult to get right, the same concerns I raised above about the usefulness of directly dealing with formal representations could be more widely applicable, and the same ideas that inform the design of tools for use by human power users could be useful in designing the tools that LLMs drive in response to their users’ natural language queries.
On a personal level I think this is the aspect of the current genAI moment that has been the most disruptive to my thinking. I’ve spent a long time thinking that I had a framework of ideas for how to think about designing software systems and interfaces, almost an ideology. That the changes in AI technology have rendered me wrong about a set of ideas that I had been so committed to has left all of my skepticism about the more outlandish AI claims (and I have plenty of skepticism) tinged with much more doubt.
If this works out (and it really looks like it will, although there’s going to be a lot of mess along the way), it feels like the most impactful of the plausible (i.e. non-AGI) effects.
As text editor
TODO
As sin-eater
TODO
Changelog
- 2025-06-15: Draft