Let me share a problem I solved recently (well, a year ago; god, I'm getting old)!
I had to write this template loader for my space sim... reference resolution, type mapping, and YAML parsing. This isn't the code I wanted to write. The code I wanted to write was behavior trees for AI traders, I'm playing with an idea where successful traders can combine behavior trees yada yada, fun side project.
But before I could touch any of that, I had to solve this reference resolution problem. I had to figure out how to handle cross-references between YAML files, map string types to Python classes, recursively fix nested references. Is this "journey programming"? Sure, technically. Did I learn something? I guess. But what I really learned is that I'd already solved variations of this problem a dozen times before.
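If you're curious what that plumbing looks like, here's roughly the shape of it. This is a from-memory sketch, not my actual code; the "$ref" convention, TYPE_REGISTRY, and the field names are just illustrative:

    # Hypothetical sketch of the template loader described above; the
    # "$ref" marker, the registry, and field names are illustrative.
    import yaml

    TYPE_REGISTRY = {}  # maps type-name strings from YAML to Python classes

    def register(cls):
        """Class decorator: makes a class addressable by name from YAML."""
        TYPE_REGISTRY[cls.__name__] = cls
        return cls

    def resolve_refs(node, templates):
        """Recursively replace {"$ref": "<id>"} nodes with the referenced
        template. Assumes references are acyclic; a real loader would
        need cycle detection."""
        if isinstance(node, dict):
            if "$ref" in node:
                return resolve_refs(templates[node["$ref"]], templates)
            return {k: resolve_refs(v, templates) for k, v in node.items()}
        if isinstance(node, list):
            return [resolve_refs(v, templates) for v in node]
        return node

    def load_templates(paths):
        """Load YAML docs from several files, resolve cross-file
        references, then map each template's "type" string to a class."""
        raw = {}
        for path in paths:
            with open(path) as f:
                for doc in yaml.safe_load_all(f):
                    raw[doc["id"]] = doc
        resolved = {tid: resolve_refs(doc, raw) for tid, doc in raw.items()}
        return {tid: TYPE_REGISTRY[doc["type"]](**doc.get("fields", {}))
                for tid, doc in resolved.items()}

None of this is hard; it's the same plumbing every data-driven project needs.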
This is exactly where I'd use Claude Code or Aider + plan.md now - not because I'm lazy or don't care about the journey, but because THIS isn't my journey. My journey is watching AI merchants discover trade routes, seeing factions evolve new strategies, debugging why the economy collapsed when I introduced a new resource.
OP treats all implementation details as equally valuable parts of "the journey," but that's like saying a novelist should grind their own ink. Maybe some writers find that meaningful. Most just want to write. I don't want to be a "destination programmer" - I want to be on a different journey than the one through template parsing hell.
hexhowells 10 hours ago [-]
OP here, I whipped this up in like 10 minutes after modelling the problem from a new perspective (I want to be less of a perfectionist with my blogs) so there are definitely grey areas I didn't consider/cover.
I do think LLMs can be good for certain boilerplate code whilst still allowing you to enjoy the problems you care about, and as far as my binary definitions this is more of a grey area.
I guess for me, this has introduced a slippery slope: if the LLM can also code the "fun" stuff, I'll be more inclined to use it, which defeats the whole purpose for me. Perhaps being able to identify which type of project I'm working on can help me avoid LLMs there and enjoy programming more again!
throwaway31131 9 hours ago [-]
Maybe you could ask the LLMs to stub out whatever you consider fun leaving you with a LeetCode style problem to solve. I could see that being fun. I actually really like LeetCode in the same way some people like doing Sunday crossword puzzles.
drbojingle 9 hours ago [-]
I'm 100% in the same boat. Bring on the brave new world and let me go higher
hnlmorg 10 hours ago [-]
> OP treats all implementation details as equally valuable parts of "the journey,"
Do they? That wasn’t my take away from the article.
My impression was that the author missed the enjoyment of problem solving because they overused AI. Not that they think all problems are equal.
For what it’s worth though, I do agree with your more general point about AI use. And in fact that’s how I’ve used AI code generation too. “Solve the tedious problem quickly so you can focus on the interesting one”.
Fraterkes 8 hours ago [-]
I get your point. I think the difficult thing is that these tools are not delineated: pre-ground ink does not have the capability to write your stories for you, but with LLMs we constantly have to reassess which parts of the thing we are building merit our attention.
If llms get better, will you have to decide whether you actually care about writing decision trees, or if instead you just want to, more generally, curate procedural interactions (or something)?
My point is: if, for the next few years, every project becomes an exercise in soul-searching over which parts of my work actually interest me, it is maybe less work not to use these tools, or alternatively, to find something fulfilling that doesn't involve making something.
hintymad 9 hours ago [-]
A trajectory question: has anyone thought about becoming a journeyman in their day-to-day work? Like a backend engineer switching to building machine learning models. Or a frontend engineer moving into optimizing LLM serving infrastructure. The challenge isn’t so much technical—it’s social.
Here's a typical scenario: you're a well-respected senior engineer at your company. Say you're an E8 at Meta. You spend your days in meetings, write great documentation, and read more papers than most, which helps you solve high-level architectural problems. You’ve built deep expertise in your domain and earned a strong reputation, both internally and in the industry.
But deep down, you know you’re rusty with tools. You haven’t written production code in years. You’re solid in math and machine learning theory from all the reading, but you’ve never actually built and shipped production ML models. You're fluent in linear algebra and whatnot, but you don't know shit about writing CUDA libraries, let alone optimizing them. When you check the job specs at companies like OpenAI, you see they’re using Rust. You might be able to write a doubly linked list in Rust, but let’s be honest: you’d struggle to write a basic web service in it.
So switching domains starts to feel daunting. To say the least, you'll lose your edge in influence. Even if you’re willing to take a pay cut, the hiring company might not even want you. Your experience may help a little, but not enough. You’d have to give up your comfortable zone of leading through influence and dive back into the mess of writing code, fixing elusive bugs, and building things from scratch: stuff you used to love.
But now? You’ve got a family of five. You get distracted more often. Leadership fits your life better—you can rely more on experience, communication, intuition. Still, a part of you misses being a journeyman.
So how does someone actually make that move? Do you just bite the bullet and try? Stick to adjacent areas to play it safe? Join a company doing the kind of work you want, but stay in your current domain at first—say, a backend engineer goes to OpenAI but still works on infra? Or is there another path?
pyman 10 hours ago [-]
Feels like we're heading towards a world where computer languages disappear, and we just use human language to tell machines what to do. Kinda like how typewriters got replaced by computers in the 80s. Back then, people spent so much time making sure there were no typos, they'd lose focus on the actual story they were trying to write.
Same thing's happening now with code. We waste so much time dealing with syntax, fixing bugs, naming variables, setting up configs, etc, and not enough time thinking about the real problem we're trying to solve.
From Assembly to English. What do you reckon?
sanderjd 10 hours ago [-]
As much as I'm finding LLMs incredibly useful, this "world where computer languages disappear" doesn't resonate with me at all. I have yet to see any workflows where the computer language is no longer a critical piece of the puzzle, or even significantly diminished in importance.
I think there is an important difference between LLM-interpreted English, and compiler-emitted Assembly, which is determinism.
The reason we're still going from human prompt to code to execution, rather than just prompt to execution, is that the code is the point at which determinism can be introduced. And I suspect it will always be useful to have this determinism capability. We certainly spend a lot of time debugging and fixing bugs, but we'd spend even more time on those activities if we couldn't encode the solutions to those bugs in a deterministic language.
Now, I won't be at all surprised if this determinism layer is reimplemented in totally different languages, that maybe are not even recognizable as "computer language". But I think we will always need some way to say "do exactly this thing" and the current computer languages remain much better for this than the current techniques to prompt AI models.
lubujackson 9 hours ago [-]
I predict we'll enter a world where these wand-waving prompts are backed by well-structured frameworks that eliminate the need to dig into the code.
Originally I thought LLMs would add a new abstraction layer, like C++ -> PHP, but now I think we will begin replacing swaths of "logically knowable" processes one by one, with dynamic and robust interfaces. In other words, LLMs, if working under the right restrictions, will add a new layer of libraries.
A library for auth, a library for form inputs, etc. Extensible in every way, with easy translation between languages. And you can always dig into the code of a library, but mostly they just work as-is. LLMs thrive with structure, so I think the real next wave will be adding various structures on top of general LLMs to achieve this.
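To sketch what I mean, purely hypothetically (complete() below is a stand-in for whatever model API you'd use, not a real library call):

    # Hypothetical "LLM as library" interface: the model call is wrapped
    # in a fixed contract, and schema validation plus retries make the
    # result usable like an ordinary library function.
    import json

    def complete(prompt: str) -> str:
        raise NotImplementedError("swap in a real model call here")

    REQUIRED_FIELDS = {"name": str, "email": str}

    def extract_contact(text: str, retries: int = 3) -> dict:
        """Ask the model for structured output; accept it only if it
        parses as JSON and matches the expected field types."""
        prompt = ("Return ONLY a JSON object with string fields "
                  f"{sorted(REQUIRED_FIELDS)} extracted from:\n" + text)
        for _ in range(retries):
            try:
                data = json.loads(complete(prompt))
            except json.JSONDecodeError:
                continue  # malformed output: ask again
            if all(isinstance(data.get(k), t) for k, t in REQUIRED_FIELDS.items()):
                return data
        raise ValueError("model never produced output matching the schema")

The schema check and the retry loop are the "structure" part; the model supplies the rest.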
sanderjd 4 hours ago [-]
This is possible. But when I read something like this, I just wonder: Why would this be more efficient than doing this with the same component we already call "libraries" - that is, a normal library or component created with some computer language - and just using AI to create and perfect those libraries more quickly?
I'm not even sure I disagree with your comment... I agree that I think LLMs will "add a new layer of libraries" ... but I think it seems fairly likely that they'll do that by generating a bunch of computer code?
thenoblesunfish 9 hours ago [-]
English is not well-specified or unambiguous. Programming languages aim to be. This is a massive difference. Recall that laws are specified in English.
pyman 4 hours ago [-]
This is an interesting debate. For me, the real question is: What's the goal of any language (human or programming)?
In my opinion, it's to communicate intent, so that intent can be turned into action. And guess what? LLMs are incredibly good at picking up intent through pattern matching.
So, if the goal of a language is to express intent, and LLMs often get our intent faster than a software developer, then why is English considered worse than Python? For an LLM, it's the same: just patterns.
quesera 4 hours ago [-]
Laws attempt to solve this problem with verbosity. It works pretty well but of course the exceptions are always interesting.
But I think the domain of an AI-first PL would or could be much smaller. So the language is "lower-level" than English, but "higher-level" than any existing PL including AppleScript etc, because it would not have to follow the same kinds of strict parser rules.
With a smaller domain, I think the necessary verbosity of an AI-first PL could be acceptable and less ambiguous than law.
Disposal8433 9 hours ago [-]
> We waste so much time dealing with syntax, fixing bugs, naming variables, setting up configs
I definitely don't do that. It's a very small part of my job. And AFAIK, LLMs cannot generate assembly language yet, and CPUs don't understand English.
pyman 4 hours ago [-]
We live in a world with 7,000 human languages and around 8,000 programming languages. Most people only learn a handful, which limits how effectively they can express intent.
This is inefficient.
In theory, one universal language would solve that, for both humans and machines.
Maybe the best solution isn't one language (English, Spanish, Golang, or Python), but one interface that understands all of them. And that's what LLMs might become.
sitzkrieg 7 hours ago [-]
I've used various LLMs to generate x86, MIPS, and RISC-V assembly with mostly usable results. You tend to see what it was trained on pretty quickly if you go deep, though.
raincole 9 hours ago [-]
> Back then, people spent so much time making sure there were no typos, they'd lose focus on the actual story they were trying to write.
Were you a published author in the 80s?
Because I highly doubt this was how writers in the 80s thought of their job.
pyman 9 hours ago [-]
No, but I've studied the history of computers and keyboards. There's plenty of evidence that writing with typewriters was much slower than using a computer. Writers were also more limited creatively, since they couldn't easily edit or move things around once the page was written.
ofjcihen 9 hours ago [-]
Slow doesn’t necessarily mean less creative. In fact it’s been argued that being slow and deliberate actually pulls you out of automated patterns of thinking and gives you time to mull over what you want to say.
This is even enhanced when you create a superficial barrier such as writing in all caps.
pyman 4 hours ago [-]
I'm not saying fast is better than slow, or slow is better than fast. I'm just saying time changes the shape of creativity.
Shakespeare wrote under pressure because he had deadlines. His creativity was shaped by the need to deliver.
Einstein, on the other hand, had no real deadlines. His creativity was shaped by the need to understand. He had time to sit with ideas, rethink assumptions, and see patterns no one else saw.
Shakespeare would say: "Creativity is all about time. And writing by hand takes time."
And Einstein would reply: "Time does not exist my friend. So take your time and write it again."
suzzer99 9 hours ago [-]
> Feels like we're heading towards a world where computer languages disappear, and we just use human language to tell machines what to do.
I agree, but it feels like we need a new type of L_X_M. Like an LBM (Large Behavior Model), which is trained on millions of different actions, user flows, displays, etc.
Converting token weights into text-based code designed to ease the cognitive load on humans seems wildly inefficient compared to converting tokens directly into UI actions and behaviors.
syx 9 hours ago [-]
While I agree with all the previous comments, your comment sparked an idea in me. I started imagining a future where we develop a new programming language optimized for LLMs to write and understand. In this hypothetical scenario, we would still need developers to debug and review the code to ensure deterministic outputs. Maybe this isn't so far-fetched after all. Of course, this is just speculation and imagination on my part.
You'd need a training set covering all the useful cases, something that we don't have even now for mainstream languages.
dataviz1000 9 hours ago [-]
Another good analogy is how "calculators", the people who performed mathematical calculations for a living, were replaced by machines. Sure, they were eventually put out of work; nonetheless, mechanical and then electronic calculators eventually made entire industries so efficient that it increased everyone's wealth and created new positions and jobs.
We will be fine.
knutwannheden 10 hours ago [-]
I reckon that while my programming has become more productive with LLMs, it has at the same time gotten a bit more frustrating and boring.
I think it is difficult to know in advance when the LLM will do a reasonable or good job and when it won't. But I am slowly learning when and how to use the tools while still enjoying using them.
danaris 9 hours ago [-]
Sorry, this is implausible.
English is just too poorly-specified. Programs need to be able to know exactly what they're supposed to do next, what their output is supposed to be, etc. Even humans need to ask each other for clarification and such all the time.
If you want to use English to specify a program, by the time you've adjusted it to be clear and specific enough to actually be able to do that...it turns out you've made a programming language.
pyman 3 hours ago [-]
We live in a world with 7,000 human languages and around 8,000 programming languages. Most people only learn a handful, which limits how effectively they can express intent. This is inefficient.
In theory, one universal language would solve that, for both humans and machines.
Maybe the best solution isn't one language (English, Spanish, Golang, or Python), but one interface that understands all of them. And that's what LLMs might become.
I think this can be resolved with verbosity, our old friends abstraction and modularization, and an unfamiliarly flexible parser.
PartiallyTyped 10 hours ago [-]
Perhaps solving the real problem implies using programming languages?
danielbln 10 hours ago [-]
Or perhaps it doesn't. An architect also solves a real problem, even though he's not laying brick.
prerok 9 hours ago [-]
I think this is a good point. But just as the execution of the architect's solution in the real world is often subpar, the "debugging" involves both the architectural specs and the builder's execution.
I think that in programming we will still have to understand the builder's execution, which should remain deterministic, hopefully not at the level of assembly.
somewhereoutth 9 hours ago [-]
It is the blueprints (the detailed design, plans, sections, etc.) that are analogous to code, not the bricks. Software designers (compared to building designers) are lucky that the process of turning design (code) into artifact (running software) is virtually free in terms of cost and time. However, software designers are unlucky in that what they do is so misunderstood, not least by the designers themselves.
throwaway31131 9 hours ago [-]
I could never relate to the programmers who wrote code for the sake of writing code. I write a lot of code, but for me the code is a means, not an end.
So I look at tools like LLMs as just the latest incarnation of tools to reduce the number of hours the human has to spend to get to the end.
When I very first started programming, a very long time ago, the programmer actually had to consider where in memory, like at what physical address, things were. Then tools came along and it’s not a thing. You were not a programmer unless you knew all about sorting and the many algorithms and tradeoffs involved. Now people call sort() and it’s fine. Now we have LLMs. For some things people think they’re great. Me personally I have not found utility in them yet (mostly because I don’t work on web, front end, or in python) but I can see the potential. But dynamic loaders and sort() didn’t replace me, I’m sure LLMs won’t either, and I’ll be grateful if it helps me get to the end with less time invested.
cube2222 9 hours ago [-]
Yeah, this.
LLMs to me are primarily:
1. A way to get over writers block; they can quickly get the first draft down, which I can then iterate on; I’m one of those people who generally first implement something in a dirty way just to get it working, and then do a couple more iterations / rewrites on it, so this suits my workflow perfectly. Same for writing a first draft of a design doc based on my brain dump.
2. A faster keyboard.
Generally, both of these mean that energetically, coding is quite a bit less mentally tiring for me, and I can spend more energy on the important/hard things.
jackdoe 7 hours ago [-]
> hollow destination.
I can say that in the last 2 years ChatGPT/Claude have added more code to my projects than I have, and I have been programming for 25 years (counting the rejected tokens as well).
When I use copilot/cursor it is so violent, it interrupts my thoughts, it makes me a computer that evaluates its code instead of thinking about how my code is going to interact with the rest of the system, how it evolves and how it is going to fail and so on.
Accept/Reject/Accept/Reject... and at the end of the day, I look back, and there is nothing.
One day it lagged a bit and the code did not come out, and I swear I didn't know what to type, as if it was not my code. The next day I took time off work to just code without it. During that time I used it to write a st7796s SPI driver, and it did an amazing job: I just gave it 300 pages of docs and told it what API to make, and it made an amazing driver. I read it, I used it, and it easily saved me half a day of work.
Life is what overcomes itself, as the poet said, I am not sure "destination programmers" exist. Or even if they do, I don't know what their "destination" means. If you want to get better, reflect on what you do and how you do it, and you will get better.
I wrote https://punkx.org/jackdoe/misery.html recently out of frustration; maybe you will resonate with it.
PS: there is no way we will be able to read LLMs' code in the near future; it will easily generate millions of lines for you per day, so we will need to find an interface to debug it, a bit like Geordi from Star Trek. LLMs will be our lens into complexity.
Students of ancient languages fall into one of two camps: those who use translations for 'assistance' and those who don't. Classroom experiences have shown me that the two groups of students learn vastly different skills.
The group who struggle through texts by themselves without relying on any shortcuts -- they just sit with the text -- probably won't become top-shelf philologists, but when you give them a sentence they haven't seen before from an author they've read, the chances are very good that they'll be able to make sense of it without assistance. These students learn, in other words, how to read ancient languages.
The group who rely on translations learn to do precisely that: rely on a translation. If you give them a text by an author they've 'read' before and deny them use of side-by-side translation, they almost never had any clue how to proceed, even at the level of rudimentary parsing. Is that word the second-person-singular aorist imperative middle or is it the aorist infinitive active? They probably won't even know how to identify the difference -- or that there is one.
Our brains are built for energy conservation. They do what, and only what, we ask of them. Learning languages is hard. Reading a translation is easy. Given the choice between the harder skill and the easier, the brain will always learn the easier. The only way to learn the harder one is to remove the option: sit with the text; struggle.
So far I've been able to avoid LLMs and AI. I've written in other comments on HN about this. I don't want to talk to an anthropomorphic chat UI, which I call "meeting-based programming." I want to work with code. I want to become a more skillful SWE and better at working with programming languages, software, and systems. LLMs won't help me do this. All the time they save me -- all the time they steal from reading code, thinking about it, and consulting documentation -- is time they've stolen from the work I actually want to do. They'll make me worse at what I do and deprive me of the joy I find in it.
I've argued with teammates about this. They don't want to do the boring stuff. They say AI will do it for them. To me that's a Faustian bargain. Every time someone hands off the boring stuff to the machine, I'd wager they're weakening and giving up the parts of themselves that they'll need to call upon when they find something 'interesting' to work on (edit: and I'd wager that what they consider interesting will be debased over time as well, as programming effort itself becomes foreign and a less common practice.)
xandrius 6 hours ago [-]
One could say this about absolutely any technology.
Using a hoe makes you weaker than if you just used your bare hands. Using a calculator makes your brain lose the skill of doing complicated arithmetic in your head.
Most people have never built a fire completely from scratch; they surely lack certain skills, but do (or should) they care?
But as with everything else, you can take technology to do more, things that might be impossible for you to do without it, and that's ok.
globnomulous 41 minutes ago [-]
> One could say this about absolutely any technology.
What do I become worse at when I learn metallurgy, woodworking, optics, painting, or cooking?
> But as with everything else, you can take technology to do more, things that might be impossible for you to do without it, and that's ok.
Whether LLMs are helpful or enable anybody to do 'more' is beside the point.
I don't care about doing more -- or the 'more' I care about is only tangentially related to my actual output as an engineer. I care about developing my skill as an SWE and deepening my understanding. LLMs stand in the way of that. They poison it. Anybody who loves and values the skill as I do does themselves a disservice by letting an LLM do the work, particularly the thinking and problem solving. And even if you don't care about the skill, and are delighted to find that LLMs increase your output while you're using them, I'd wager you'll pay a hefty long-term intellectual and personal cost, in that you'll become a worse, lazier, less engaged engineer.
That's what this guy's post is about: losing the ability to do the work, or finding yourself bewildered by it, because you're no longer practicing it.
If code is just an obstacle to your goals but also the means of reaching them, and LLMs help you reach your goals, great, more power to you. My goal is to program. I just want to continue to do what I love and, day by day, problem by problem, become better at it. When I can no longer do that as an SWE, and I'm expected (let alone required) to let an obnoxious, chipper chatbot do the work, while I line the pockets of some charlatan 'thought leader,' I'll retire or blow my brains out. I can't imagine a worse fate, other than having to work with systems built by people who want to work this way.
sotix 6 hours ago [-]
Does the hoe operate itself?
I took a statistics course in high school where we learned how to do everything on a calculator. I was terrible and didn’t understand statistics at the end of it. My teacher gave me a gentleman’s C. I decided to retake the course in college where my teacher taught us how to calculate the formulas by hand. After learning them by hand, I applied everything on exams with my calculator. I finished the class with a 100/100, and my teacher said there was no need for me to take the final exam. It was clear I understood the concept.
What changed between the two classes? Well, I actually learned statistics rather than how to let a tool do the work for me. Once I learned the concept, then I was able to use the tool in a beneficial way.
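To make that concrete, here's the kind of by-hand-then-tool workflow I mean (made-up numbers; sample variance, the sort of formula an intro course covers):

    # Work the sample-variance formula by hand, then verify with the tool.
    data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

    n = len(data)
    mean = sum(data) / n                   # mean = 40 / 8 = 5.0
    squared_dev = [(x - mean) ** 2 for x in data]
    variance = sum(squared_dev) / (n - 1)  # s^2 = sum((x - mean)^2) / (n - 1)

    import statistics                      # the "calculator" step
    assert abs(variance - statistics.variance(data)) < 1e-12
    print(mean, variance)                  # 5.0 4.571428571428571

Once the top half makes sense on paper, the calculator stops being a black box.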
bsder 6 hours ago [-]
> To me that's a Faustian bargain. Every time someone hands off the boring stuff to the machine, I'd wager they're weakening the parts of themselves that they call upon when they want to work on the 'interesting' stuff.
It's worse than that, people who rely too much on the AI never learn how to tell when it is wrong.
This is different from things like "nobody complains about using a calculator".
A calculator doesn't lie; LLMs on the other hand lie all the time.
(And, to be fair, even the calculator statement isn't completely true. The reason why the HP 12C is so popular is that calculators did lie about some financial calculations (numerical inaccuracy). It was deemed too hard for business majors to figure out when and why so they just converged on a known standard.)
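A quick illustration of that kind of numerical lie, using ordinary binary floats rather than the 12C's specific case:

    # Binary floats can't represent most decimal cents exactly, which is
    # precisely the kind of small "lie" that matters in financial math.
    from decimal import Decimal

    payments = [0.10, 0.10, 0.10]
    print(sum(payments) == 0.30)     # False: the sum is 0.30000000000000004

    exact = sum(Decimal("0.10") for _ in range(3))
    print(exact == Decimal("0.30"))  # True: decimal arithmetic is exact here

The point is that you only notice the False if you already know what the answer should be.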
charlie0 10 hours ago [-]
The solution to this, imo, is to expand the definition of what it means to "program". I'm increasingly realizing that AI tools are the new programming substrate. I've been able to heavily automate workflows and I use the word workflows loosely here.
It's allowed me to tackle other parts of the knowledge stack that I would otherwise have no time for. For example, learning more about product management, marketing, and doing deeper research into business ideas. The programming has now gone strictly from coding to automating the flows related to these other jobs. In that sense, I'm still "programming"; it just looks different and doesn't always involve an IDE. The bonus is that my leverage has dramatically increased.
einpoklum 10 hours ago [-]
> I'm increasingly realizing that AI tools are the new programming substrate
Human programming is the old, and new, programming substrate, and the literal substrate for what AI tools do: they're trained on it.
MarkusQ 9 hours ago [-]
To be honest, I had the same reaction when I started using high-level languages. I wasn't touching the metal (certainly not as much as I had been when solving problems sometimes involved things like repurposing unused bits on a multiplexed bus to talk to a new peripheral) and it somehow felt less real. But pretty quickly the range of problems I was addressing shifted, and everything clicked back into focus. I'd never _really_ been touching the metal, and I always had been (and still was) in touch with it.
Ditto giving up stick shift. And I imagine at some point artists felt the same thing when they transitioned to commercially prepared oil paint.
lelele 7 hours ago [-]
This. The mental shift resembles the one away from machine language, then away from assembly, then away from C... But programmers who still knew how things worked at lower levels had an edge on others.
socalgal2 10 hours ago [-]
I sense similar things to the OP. This feeling of not really thinking through some of the things I would have thought through before
At the same time, at least at the moment, this feels like just another tool. I'm old; I started programming in the early 80s. Basic->Asm->C->C++ (perl-python-js-ts-go). Throughout my life things have gotten easier. Drawing an image on my Atari 800 or Apple II was way harder than it is on any PC today in JavaScript with the Canvas API or some library like three.js. Reading files, serialization, data structures: I used to have to write all that code by hand. I learned how to parse files, how to deal with endian issues, alignment issues, write portable code, etc., but today I can play a video in 3 lines of JavaScript. I'm much happier just writing those 3 lines than writing video encoders/decoders by hand (did that in the 90s), and I'm much happier writing those 3 lines than integrating ffmpeg or some other video library into C++ or Rust or whatever. Similarly in 3D, I'm much happier using three.js or Unreal or Unity than writing yet another engine and 100+ tools.
ATM LLMs feel like just another step. If I'm making a game, I don't want the AI to design the game, but I do want the AI to deal with all the more tedious parts. The problem has been solved before, I don't need to solve it again. I just want to use the existing solution and get to the unique parts that make whatever I'm making special.
Ciantic 10 hours ago [-]
I often think this too. I'm both. When working for a client, I'm clearly a destination programmer: choosing boring tech that I know to get things done, which these days coincides with tech that is also good for LLMs, as they are famously good with boring tech.
However, when I don't have deadlines, as in my Github creations, I'm clearly a journey programmer; I usually don't get anything fully finished. In these projects, the tech I use is something I usually wouldn't pick if I were working for a client.
danielbln 9 hours ago [-]
I love solving problems, ideally with somewhat creative solutions. Code is one way of accomplishing that, and there are many fun parts to that process. The composition of functionality, the design and structure and so on. The most enjoyment however I get from getting something solved, and if I have to leave the intricate dance with the code to the machine to get there faster and often better, I'll happily do it.
analog31 6 hours ago [-]
>>> Like many people I've become more reliant on LLM tools as time has passed...
"Time has passed", indeed. Like 9 months. This just reminded me in a quaint way how we've gotten used to such rapid progress.
bowsamic 10 hours ago [-]
LLMs offered a much needed contrast that allowed me to understand the true value of what I do. Before LLMs I took it for granted
d3ckard 10 hours ago [-]
This is a very good point.
I have very similar thoughts after working with Cursor for a month and reviewing a lot of “vibe” code. I see the value of LLMs, but I also see what they don’t deliver.
At the same time, I am fully aware of different skill levels, backgrounds and approaches to work in the industry.
I expect two trends: salaries will become much higher, as individual leverage will continue to grow. At the same time, demand for relatively low-skill work will go to zero.
sanderjd 10 hours ago [-]
This is well put and something I have tried to express to people.
Long before LLMs came onto the scene, I was telling people (like friends and family trying to understand what I do at work) that the actual coding part of the job is the least valuable, but that you still have to be able to write the code once you've done the more valuable work of figuring out what to write.
But LLMs have made that distinction far more clear than I ever imagined. And I have found that for all my previous talk about it, I clearly still felt that the "writing the code" part was an important portion of my contribution, and have found it jarring to rebalance my conception of where I can contribute value.
dottedmag 9 hours ago [-]
Except that every month bits of that value get chipped off.
furyofantares 10 hours ago [-]
> LLMs offered a much needed contrast that allowed me to understand the true value of what I do. Before LLMs I took it for granted
I've found this to be true of all generative AI to date. I have a clearer sense of where most of the value lies in most writing, imagery, code, and music.
I have a better sense of what having good taste (or any taste at all) means, and what the value of even seemingly trivial human decision-making is.
sublinear 9 hours ago [-]
Does this not just prove how much the software industry has ignored building "boring" libraries for decades? Why do we want this stuff to be written by AI?
mdaniel 9 hours ago [-]
In my opinion, this falls into the "apologies the letter is so long, I did not have time to write a shorter one" camp, because turning boring stuff into a write-only approach via an intern doing codegen is way easier than doing the deep thinking required to make a good, secure, stable library for just about anything.
rednafi 9 hours ago [-]
I never found the act of coding something profound. LLMs are just tools, like sed, awk, or xargs, albeit with more range.
So no, I don’t miss the days of dealing with some douchebag on Stack Overflow or some neckbeard on a random subreddit telling me to pick up a different career. They can now die in peace with their “hard-earned KnOwleDgE.”
Fiddling with directory structures or bikeshedding over linter configs never felt artistic to me. It just felt like getting overly poetic about doing bullshit. LLMs and agents are amazing at this grunt work.
I get that some folks see the hand of God in their dotfiles or get high off Lisp’s homoiconicity, but most folks don’t relate to that. I just wanna do my build stuff and have fun with the results—not get romantic about the grind. I’m glad LLMs made all my man page knowledge useless if it means I can do more in less time and spend that time on things I actually enjoy.
ChrisMarshallNY 10 hours ago [-]
I have been (and still am) a "journey programmer," but it's not "pure."
I always write "ship code," even for "farting around" projects. I feel that it helps me to be a better programmer, all around, and keeps me firmly focused on practicum. I like people to use my stuff, and I don't want them using shite.
I have found LLMs have actually increased my "journey." When I want to learn a new concept, the "proper" way to write "idiomatic" code, or solve a vexing problem, I fire up Perplexity or ChatGPT, and ask them questions that would have most folks around here, rolling in the aisles, streaming tears of mirth.
> The only stupid question is the one you don't ask.
That was on a former teacher's wall. Not sure if it was my art teacher, or a martial arts instructor.
drbojingle 9 hours ago [-]
Well, don't stop believin'.
einpoklum 9 hours ago [-]
> Like many people I've become more reliant on LLM tools as time has passed.
I guess he must have started programming a short time ago, if he can say that. LLM programming tools have just now been introduced.
dmitrygr 9 hours ago [-]
TFA: “I want to code without LLMs again”
So… do?
drewcoo 9 hours ago [-]
Don't stop believin'.
blueboo 10 hours ago [-]
Fred Brooks observed it in '75.
As software engineers, we work with “pure thought-stuff”. We build puzzle-like objects. It’s satisfying to make useful tools. It’s an ever-renewing, stimulating task.
revskill 10 hours ago [-]
It depends on what programming means to you, too.
zabzonk 10 hours ago [-]
Journeyman, perhaps? I dunno.
Jtsummers 10 hours ago [-]
No, they do mean "journey programmer".
> I think the cliche saying that the "journey is better than the destination" serves as a good framework to model this issue. Fundamentally, programmers (or individual programming projects) can be put into two categories: destination programmers and journey programmers.
andrewstuart 6 hours ago [-]
LLMs will usher in a programming future that looks nothing like today’s programming.
Today, we are shoveling the old way into LLMs.
In the future, programming will be optimized for LLMs and not humans.
Do you understand the assembly language that the compiler writes today? Do you inspect it? Do you analyse it and not trust it? No, you ignore it.
That’s the future.
Languages written purely for LLMs have not yet been invented but they’re coming for sure.
aabhay 7 hours ago [-]
Post industrial era, there’s been a consistent migration of jobs through what I might call the “automation lifecycle”. Programming is indeed one of these job types and the lifecycle will be similar here.
Stage 0: The trade is a craft. There are no processes, only craftsmen, and the industry is essentially a fabric of enthusiasts and the surplus value they discover for the world. But every new person that enters the scene climbs a massive hill of new context and uncharted paths.
Stage 1: Business in this trade booms. There is too much value being created, and standardization is needed to enforce efficiency. Education and training are structurally reworked to support a mass influx of labor and requirements. Craft still exists, and is often seen as the paragon for novices to aspire to, but most novices are not craftsmen and the craft has diminishing market value compared to results
Stage 2: The market needs volume, and requirements are known in advance and easily understood. Templates, patterns, and processes are more valuable in the market than labor. Labor is cheap and global. Automation is a key driver of future returns. Craftspeople bemoan the state of things, since the industry has lost its beating heart. However, the industry is far more productive overall and craft is slow.
Stage 3: Process is so entrenched that capital is now the only constraint. Those who can pay to deploy mountains of automated systems win the market, since craft is so expensive that one can only sell craft to a market that wants it as a luxury, for ethics, or for aesthetics. A new kind of “craft” emerges that merges the raw industrial output with a kind of humane touch. Organic forms and nostalgia grip the market from time to time, and old ideas and tropes are resurrected as memes, with short market lifecycles. The overwhelming existence of process and structure causes new inefficiencies to appear.
Stage 4: The market is lethargic, old, and resistant to innovation. High-quality labor does not appear, as more craft-driven markets now exist elsewhere in cool, disruptive, untapped domains. Capital flight occurs as it's clear that the market can't sustain new ideas. Processes are worn and despised, and all the key insights and innovations are so old that nobody knows how to build upon them. Experts from yesteryear run boutique consultancies maintaining these dinosaur systems, but otherwise there's no real labor market for these things. Governments using them are now at risk, and legal concerns grip the market.
Note that this is not something that applies broadly, e.g. “the Oil industry”, but to specific systems and techniques within broad industries, like “Shale production”, which embodies a mixture of labor power and specialized knowledge. Broadly speaking, categories of industries evolve in tandem with ideas so “petroleum industry” today means something different from “petroleum industry” in 1900
I had to write this template loader for my space sim... reference resolution, type mapping, and YAML parsing. This isn't the code I wanted to write. The code I wanted to write was behavior trees for AI traders, I'm playing with an idea where successful traders can combine behavior trees yada yada, fun side project.
But before I could touch any of that, I had to solve this reference resolution problem. I had to figure out how to handle cross-references between YAML files, map string types to Python classes, recursively fix nested references. Is this "journey programming"? Sure, technically. Did I learn something? I guess. But what I really learned is that I'd already solved variations of this problem a dozen times before.
This is exactly where I'd use Claude Code or Aider + plan.md now - not because I'm lazy or don't care about the journey, but because THIS isn't my journey. My journey is watching AI merchants discover trade routes, seeing factions evolve new strategies, debugging why the economy collapsed when I introduced a new resource.
OP treats all implementation details as equally valuable parts of "the journey," but that's like saying a novelist should grind their own ink. Maybe some writers find that meaningful. Most just want to write. I don't want to be a "destination programmer" - I want to be on a different journey than the one through template parsing hell.
I do think LLMs can be good for certain boilerplate code whilst still allowing you to enjoy the problems you care about, and as far as my binary definitions this is more of a grey area.
I guess for me, this has introduced a slippery slope where if the LLM can also code the "fun" stuff, I'll be more inclined to use it, which defeats the whole purpose for me. Perhaps being able to identify which type of project I am working on, it can help me avoid using LLMs to enjoy programming more again!
Do they? That wasn’t my take away from the article.
My impression was that the author missed the enjoyment of problem solving because they overused AI. Not that they think all problems are equal.
For what it’s worth though, I do agree with your more general point about AI use. And in fact that’s how I’ve used AI code generation too. “Solve the tedious problem quickly so you can focus on the interesting one”.
If llms get better, will you have to decide whether you actually care about writing decision trees, or if instead you just want to, more generally, curate procedural interactions (or something)?
My point is: if these next few years every project becomes an exercise in soul-searching for which parts of my work actually interest me, it is maybe less work not to use these tools, or alternatively, find something fullfilling that doesn’t involve making something.
Here's a typical scenario: you're a well-respected senior engineer at your company. Say you're an E8 at Meta. You spend your days in meetings, write great documentation, and read more papers than most, which helps you solve high-level architectural problems. You’ve built deep expertise in your domain and earned a strong reputation, both internally and in the industry.
But deep down, you know you’re rusty with tools. You haven’t written production code in years. You’re solid in math and machine learning theory from all the reading, but you’ve never actually built and shipped production ML models. You're fluent in linear algebra and what not, but you don't know shit about writing CUDA libraries, let alone optimizing them. When you check the job specs at companies like OpenAI, you see they’re using Rust. You might be able to write a doubly linked list in Rust, but let’s be honest—you’d struggle to write a basic web service in it.
So switching domains starts to feel daunting. To say the least, you'll lose your edge to influence. Even if you’re willing to take a pay cut, the hiring company might not even want you. Your experience may help a little, but not enough. You’d have to give up your comfortable zone of leading through influence and dive back into the mess of writing code, fixing elusive bugs, and building things from scratch—stuff you used to love.
But now? You’ve got a family of five. You get distracted more often. Leadership fits your life better—you can rely more on experience, communication, intuition. Still, a part of you misses being a journeyman.
So how does someone actually make that move? Do you just bite the bullet and try? Stick to adjacent areas to play it safe? Join a company doing the kind of work you want, but stay in your current domain at first—say, a backend engineer goes to OpenAI but still works on infra? Or is there another path?
Same thing's happening now with code. We waste so much time dealing with syntax, fixing bugs, naming variables, setting up configs, etc, and not enough time thinking about the real problem we're trying to solve.
From Assembly to English. What do you reckon?
I think there is an important difference between LLM-interpreted English, and compiler-emitted Assembly, which is determinism.
The reason we're still going from human prompt to code to execution, rather than just prompt to execution, is that the code is the point at which determinism can be introduced. And I suspect it will always be useful to have this determinism capability. We certainly spend a lot of time debugging and fixing bugs, but we'd spend even more time on those activities if we couldn't encode the solutions to those bugs in a deterministic language.
Now, I won't be at all surprised if this determinism layer is reimplemented in totally different languages, that maybe are not even recognizable as "computer language". But I think we will always need some way to say "do exactly this thing" and the current computer languages remain much better for this than the current techniques to prompt AI models.
Originally I thought LLMs would add a new abstraction layer, like C++ -> PHP, but now I think we will begin replacing swaths of "logically knowable" processes one by one, with dynamic and robust interfaces. In other words, LLMs, if working under the right restrictions, will add a new layer of libraries.
A library for auth, a library for form inputs, etc. Extensible in every way with easy translation between languages. And you can always dig into the code of a library, but mostly they just work as-is. LLMs thrive with structure, so I think the real nexy wave will be adding various structures on top of general LLMs to achieve this.
I'm not even sure I disagree with your comment... I agree that I think LLMs will "add a new layer of libraries" ... but I think it seems fairly likely that they'll do that by generating a bunch of computer code?
In my opinion, it's to communicate intent, so that intent can be turned into action. And guess what? LLMs are incredibly good at picking up intent through pattern matching.
So, if the goal of a language is to express intent, and LLMs often get our intent faster than a software developer, then why is English considered worse than Python? For an LLM, it's the same: just patterns.
But I think the domain of an AI-first PL would or could be much smaller. So the language is "lower-level" than English, but "higher-level" than any existing PL including AppleScript etc, because it would not have to follow the same kinds of strict parser rules.
With a smaller domain, I think the necessary verbosity of an AI-first PL could be acceptable and less ambiguous than law.
I definitely don't do that. It's a very small part of my job. And AFAIK, LLMs cannot generate assembly language yet, and CPUs don't understand English.
In theory, one universal language would solve that, for both humans and machines.
Maybe the best solution isn't one language (English, Spanish, Golang, or Python), but one interface that understands all of them. And that's what LLMs might become.
Were you a published author in the 80s?
Because I highly doubt this was how writers in 80s thought of their job.
This is even enhanced when you create a superficial barrier such as writing in all caps.
Shakespeare wrote under pressure because he had deadlines. His creativity was shaped by the need to deliver.
Einstein, on the other hand, had no real deadlines. His creativity was shaped by the need to understand. He had time to sit with ideas, rethink assumptions, and see patterns no one else saw.
Shakespeare would say: "Creativity is all about time. And writing by hand takes time."
And Einstein would reply: "Time does not exist my friend. So take your time and write it again."
I agree, but it feels like we need a new type of L_X_M. Like an LBM (Large Behavior Model), which is trained on millions of different actions, user flows, displays, etc.
Converting token weights into text-based code designed to ease the cognitive load on humans seems wildly inefficient compared to converting tokens directly into UI actions and behaviors.
We will be fine.
I think it is difficult to know in advance when the LLM will do a reasonable or good job and when it won't. But I am slowly learning when and how to use the tools while still enjoying using them.
English is just too poorly-specified. Programs need to be able to know exactly what they're supposed to do next, what their output is supposed to be, etc. Even humans need to ask each other for clarification and such all the time.
If you want to use English to specify a program, by the time you've adjusted it to be clear and specific enough to actually be able to do that...it turns out you've made a programming language.
In theory, one universal language would solve that, for both humans and machines.
Maybe the best solution isn't one language (English, Spanish, Golang, or Python), but one interface that understands all of them. And that's what LLMs might become.
I think that in programming we will still have to understand the builder's execution, which should remain deterministic, hopefully not at the level of assembly.
So I look at tools like LLMs as just the latest incarnation of tools to reduce the number of hours the human has to spend to get to the end.
When I very first started programming, a very long time ago, the programmer actually had to consider where in memory, like at what physical address, things were. Then tools came along and it’s not a thing. You were not a programmer unless you knew all about sorting and the many algorithms and tradeoffs involved. Now people call sort() and it’s fine. Now we have LLMs. For some things people think they’re great. Me personally I have not found utility in them yet (mostly because I don’t work on web, front end, or in python) but I can see the potential. But dynamic loaders and sort() didn’t replace me, I’m sure LLMs won’t either, and I’ll be grateful if it helps me get to the end with less time invested.
LLMs to me are primarily:
1. A way to get over writers block; they can quickly get the first draft down, which I can then iterate on; I’m one of those people who generally first implement something in a dirty way just to get it working, and then do a couple more iterations / rewrites on it, so this suits my workflow perfectly. Same for writing a first draft of a design doc based on my brain dump.
2. A faster keyboard.
Generally, both of these mean that energetically, coding is quite a bit less mentally tiring for me, and I can spend more energy on the important/hard things.
I can say that in the last 2 years chatgpt/claude have added more code to my projects than me, and I am programming for 25 years (counting the rejected tokens as well).
When I use copilot/cursor it is so violent, it interrupts my thoughts, it makes me a computer that evaluates its code instead of thinking about how my code is going to interact with the rest of the system, how it evolves and how it is going to fail and so on.
Accept/Reject/Accept/Reject.. and in the end of the day, I look back, and there is nothing.
One day, it lagged a bit, and code did not come out, and I swear I didn't know what to type, as if it was not my code. On the next day I took time off work to just code without it. During that time I used it to write a st7796s spi driver and it did an amazing job, I just gave it 300 pages docs, and told it what api to make and it made amazing driver, I read it, and I used it, saved me half a day of work easily.
Life is what overcomes itself, as the poet said, I am not sure "destination programmers" exist. Or even if they do, I don't know what their "destination" means. If you want to get better, reflect on what you do and how you do it, and you will get better.
I wrote https://punkx.org/jackdoe/misery.html recently out of frustration, maybe you will resonate with it.
PS: there is no way we will be able to read llm's code in near future, it will easily generate millions of lines for you per day, so we will need to find am interface to debug it, a bit like Geordi from Star Trek. LLMs will be our lens into complexity.
The group who struggle through texts by themselves with relying on any shortcuts -- they just sit with the text -- probably won't become top-shelf philologists, but when you give them a sentence they haven't seen before from an author they've read, the chances are very good that they'll be able to make sense of it without assistance. These students learn, in other words, how to read ancient languages.
The group who rely on translations learn to do precisely that: rely on a translation. If you give them a text by an author they've 'read' before and deny them use of side-by-side translation, they almost never had any clue how to proceed, even at the level of rudimentary parsing. Is that word the second-person-singular aorist imperative middle or is it the aorist infinitive active? They probably won't even know how to identify the difference -- or that there is one.
Our brains are built for energy conservation. They do what, and only what, we ask of them. Learning languages is hard. Reading a translation is easy. Given the choice betweem the harder skill and the easier, he brain will always learn the easier. The only way to learn the harder one is to remove the option: sit with the text; struggle.
So far I've been able to avoid LLMs and AI. I've written in other comments on HN about this. I don't want to talk to an anthropmorphic chat UI, which I call "meeting-based programming." I want to work with code. I want to become a more skillful SWE and better at working with programming languages, software, and systems. LLMs won't help me do this. All the time they save me -- all the time they steal from reading code, thinking about it, and consulting documentation -- is time they've stolen from the work I actually want to do. They'll make me worse at what I do and deprive me of the joy I find in it.
I've argued with teammates about this. They don't want to do the boring stuff. They say AI will do it for them. To me that's a Faustian bargain. Every time someone hands off the boring stuff to the machine, I'd wager they're weakening and giving up the parts of themselves that they'll need to call upon when they find something 'interesting' to work on (edit: and I'd wager that what they consider interesting will be debased over time as well, as programming effort itself becomes foreign and a less common practice.)
Using a hoe is making you weaker than if you just used your bare hands. Using a calculator is making your brain lose skill in doing complicated arithmetic in your head.
Most have never built a fire completely from scratch, they surely are lacking certain skills but do/should they care?
But as with everything else, you can take technology to do more, things that might be impossible for you to do without it, and that's ok.
What do I become worse at when I learn metallurgy, woodworking, optics, painting, or cooking?
> But as with everything else, you can take technology to do more, things that might be impossible for you to do without it, and that's ok.
Whether LLMs are helpful or enable anybody to do 'more' is beside the point.
I don't care about doing more -- or the 'more' I care about is only tangentially related to my actual output as an engineer. I care about developing my skill as an SWE and deepening my understanding. LLMs stand in the way of that. They poison it. Anybody who loves and values the skill as I do does themselves a disservice by letting an LLM do the work, particularly the thinking and problem solving. And even if you don't care about the skill, and are delighted to find that LLMs increase your output while you're using them, I'd wager you'll pay a hefty long-term intellectual and personal cost, in that you'll become a worse, lazier, less engaged engineer.
That's what this guy's post is about: losing the ability to do the work, or finding yourself bewildered by it, because you're no longer practicing it.
If code is just an obstacle to your goals but also the means of reaching them, and LLMs help you reach your goals, great, more power to you. My goal is to program. I just want to continue to do what I love and, day by day, problem by problem, become better at it. When I can no longer do that as an SWE, and I'm expected (let alone required) to let an obnoxious, chipper chatbot do the work, while I line the pockets of some charlatan 'thought leader,' I'll retire or blow my brains out. I can't imagine a worse fate, other than having to work with systems built by people who want to work this way.
I took a statistics course in high school where we learned how to do everything on a calculator. I was terrible and didn’t understand statistics at the end of it. My teacher gave me a gentleman’s C. I decided to retake the course in college where my teacher taught us how to calculate the formulas by hand. After learning them by hand, I applied everything on exams with my calculator. I finished the class with a 100/100, and my teacher said there was no need for me to take the final exam. It was clear I understood the concept.
What changed between the two classes? Well, I actually learned statistics rather than how to let a tool do the work for me. Once I learned the concept, then I was able to use the tool in a beneficial way.
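To put the same lesson in code: work the formula by hand first, then let the tool confirm it. A minimal sketch, with sample standard deviation standing in for the coursework (the data here is made up, purely illustrative):

    import math
    import statistics

    data = [4.0, 7.0, 9.0, 12.0, 13.0]  # illustrative sample

    # By hand: s = sqrt(sum((x - mean)^2) / (n - 1))
    mean = sum(data) / len(data)
    s_by_hand = math.sqrt(sum((x - mean) ** 2 for x in data) / (len(data) - 1))

    # With the tool: the standard library computes the same quantity
    s_by_tool = statistics.stdev(data)

    print(s_by_hand, s_by_tool)  # both ~3.674; the tool's answer is now checkable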
It's worse than that: people who rely too much on AI never learn how to tell when it's wrong.
This is different from things like "nobody complains about using a calculator".
A calculator doesn't lie; LLMs on the other hand lie all the time.
(And, to be fair, even the calculator statement isn't completely true. The reason the HP 12C is so popular is that calculators did lie about some financial calculations through numerical inaccuracy. It was deemed too hard for business majors to figure out when and why, so the industry converged on a known standard.)
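A tiny illustration of the same class of lie in everyday software, using ordinary binary floating point (the generic pitfall, not the 12C's specific algorithms) against simple decimal money math:

    from decimal import Decimal

    # Binary floating point "lies" about simple decimal sums:
    print(0.1 + 0.2 == 0.3)   # False: 0.1 + 0.2 is actually 0.30000000000000004

    # Exact decimal arithmetic, the standard fix for money:
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True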
It's allowed me to tackle other parts of the knowledge stack that I would otherwise have no time for: for example, learning more about product management and marketing, and doing deeper research into business ideas. My programming has shifted from pure coding to automating the flows around these other jobs. In that sense I'm still "programming"; it just looks different and doesn't always involve an IDE. The bonus is that my leverage has dramatically increased.
Human programming is the old, and new, programming substrate - and the literal substrate for what AI tools do. They're trained on it.
At the same time, at least at the moment, this feels like just another tool. I'm old; I started programming in the early 80s. Basic->Asm->C->C++ (perl-python-js-ts-go). Throughout my life things have gotten easier. Drawing an image on my Atari 800 or Apple II was way harder than it is on any PC today in JavaScript with the Canvas API or some library like three.js. Reading files, serialization, data structures: I used to have to write all that code by hand. I learned how to parse files, how to deal with endian issues, alignment issues, how to write portable code, etc., but today I can play a video in 3 lines of JavaScript. I'm much happier just writing those 3 lines than writing video encoders/decoders by hand (did that in the 90s), and I'm much happier writing those 3 lines than integrating ffmpeg or some other video library into C++ or Rust or whatever. Similarly in 3D, I'm much happier using three.js or Unreal or Unity than writing yet another engine and 100+ tools.
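One concrete example of that easing, for the endian issues mentioned above: the byte-swapping we once wrote by hand is a one-line library call now. A sketch (the bytes are illustrative):

    import struct

    raw = b"\x01\x00\x00\x00"  # four bytes off the wire, say

    # Interpreting these once meant hand-rolled shifts and masks; now:
    (little,) = struct.unpack("<I", raw)  # little-endian uint32 -> 1
    (big,)    = struct.unpack(">I", raw)  # big-endian uint32    -> 16777216

    print(little, big)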
Right now, LLMs feel like just another step. If I'm making a game, I don't want the AI to design the game, but I do want the AI to deal with all the more tedious parts. The problem has been solved before; I don't need to solve it again. I just want to use the existing solution and get to the unique parts that make whatever I'm making special.
However, when I don't have deadlines, as in my GitHub creations, I'm clearly a journey programmer; I usually don't get anything fully finished. In these projects, the tech I use is something I usually wouldn't pick if I were working for a client.
"Time has passed", indeed. Like 9 months. This just reminded me in a quaint way how we've gotten used to such rapid progress.
I have very similar thoughts after working with Cursor for a month and reviewing a lot of “vibe” code. I see the value of LLMs, but I also see what they don’t deliver.
At the same time, I am fully aware of different skill levels, backgrounds and approaches to work in the industry.
I expect two trends: salaries will become much higher, as individual leverage will continue to grow; at the same time, demand for relatively low-skill work will go to zero.
Long before LLMs came onto the scene, I was telling people (like friends and family trying to understand what I do at work) that the actual coding part of the job is the least valuable, but that you still have to be able to write the code once you've done the more valuable work of figuring out what to write.
But LLMs have made that distinction far clearer than I ever imagined. And I've found that, for all my previous talk about it, I clearly still felt that the "writing the code" part was an important portion of my contribution, and it's been jarring to rebalance my conception of where I can contribute value.
I've found this to be true of all generative AI to date. I have a clearer sense of where most of the value lies in most writing, imagery, code, and music.
I have a better sense of what having good taste (or any taste at all) means, and what the value of even seemingly trivial human decision-making is.
So no, I don’t miss the days of dealing with some douchebag on Stack Overflow or some neckbeard on a random subreddit telling me to pick a different career. They can now die in peace with their “hard-earned KnOwleDgE.”
Fiddling with directory structures or bikeshedding over linter configs never felt artistic to me. It just felt like getting overly poetic about doing bullshit. LLMs and agents are amazing at this grunt work.
I get that some folks see the hand of God in their dotfiles or get high off Lisp’s homoiconicity, but most folks don’t relate to that. I just wanna do my build stuff and have fun with the results—not get romantic about the grind. I’m glad LLMs made all my man page knowledge useless if it means I can do more in less time and spend that time on things I actually enjoy.
I always write "ship code," even for "farting around" projects. I feel that it helps me to be a better programmer, all around, and keeps me firmly focused on practicum. I like people to use my stuff, and I don't want them using shite.
I have found LLMs have actually increased my "journey." When I want to learn a new concept, the "proper" way to write "idiomatic" code, or solve a vexing problem, I fire up Perplexity or ChatGPT, and ask them questions that would have most folks around here, rolling in the aisles, streaming tears of mirth.
> The only stupid question is the one you don't ask.
That was on a former teacher's wall. Not sure if it was my art teacher, or a martial arts instructor.
I guess he must have started programming only a short time ago if he can say that; LLM programming tools have only just been introduced.
So… do?
As software engineers, we work with “pure thought-stuff.” We build puzzle-like objects. It’s satisfying to make useful tools. It’s an ever-renewing, stimulating task.
> I think the cliche saying that the "journey is better than the destination" serves as a good framework to model this issue. Fundamentally, programmers (or individual programming projects) can be put into two categories: destination programmers and journey programmers.
Today, we are shoveling the old way into LLMs.
In the future, programming will be optimized for LLMs and not humans.
Do you understand the assembly language that the compiler writes today? Do you inspect it? Do you analyse it and withhold trust? No, you ignore it (see the sketch below).
That’s the future.
Languages written purely for LLMs have not yet been invented but they’re coming for sure.
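For what it's worth, the compiler analogy is easy to demo today. Here's a sketch using CPython's bytecode, via the standard dis module, as a stand-in for the assembly almost nobody reads:

    import dis

    def add(a, b):
        return a + b

    # Few of us ever inspect the compiled form our tools emit:
    dis.dis(add)
    # Prints something like LOAD_FAST a / LOAD_FAST b / BINARY_OP (+) / RETURN_VALUE
    # (exact opcodes vary by Python version) -- and we happily ignore it.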
Stage 0: The trade is a craft. There are no processes, only craftsmen, and the industry is essentially a fabric of enthusiasts and the surplus value they discover for the world. But every new person who enters the scene climbs a massive hill of new context and uncharted paths.
Stage 1: Business in this trade booms. There is too much value being created, and standardization is needed to enforce efficiency. Education and training are structurally reworked to support a mass influx of labor and requirements. Craft still exists, and is often seen as the paragon for novices to aspire to, but most novices are not craftsmen, and the craft has diminishing market value compared to results.
Stage 2: The market needs volume, and requirements are known in advance and easily understood. Templates, patterns, and processes are more valuable in the market than labor. Labor is cheap and global. Automation is a key driver of future returns. Craftspeople bemoan the state of things, since the industry has lost its beating heart. However, the industry is far more productive overall and craft is slow.
Stage 3: Process is so entrenched that capital is now the only constraint. Those who can pay to deploy mountains of automated systems win the market, since craft is so expensive that one can only sell it to a market that wants it as a luxury, for ethics, or for aesthetics. A new kind of “craft” emerges that merges raw industrial output with a kind of humane touch. Organic forms and nostalgia grip the market from time to time, and old ideas and tropes are resurrected as memes, with short market lifecycles. The overwhelming presence of process and structure causes new inefficiencies to appear.
Stage 4: The market is lethargic, old, and resistant to innovation. High-quality labor does not appear, as more craft-driven markets now exist elsewhere in cool, disruptive, untapped domains. Capital flight occurs as it’s clear that the market can’t sustain new ideas. Processes are worn, despised, and all the key insights and innovations are so old that nobody knows how to build upon them. Experts from yesteryear run boutique consultancies maintaining these dinosaur systems, but otherwise there’s no real labor market for these things. Governments using them are now at risk, and legal concerns grip the market.
Note that this is not something that applies broadly, e.g. “the Oil industry”, but to specific systems and techniques within broad industries, like “Shale production”, which embodies a mixture of labor power and specialized knowledge. Broadly speaking, categories of industries evolve in tandem with ideas, so “petroleum industry” today means something different from “petroleum industry” in 1900.