Ask HN: Are you afraid of AI making you unemployable within the next few years?

On Hacker News and Twitter, the consensus view is that no one is afraid. People concede that junior engineers and grad students might be the most affected, but they still seem to treat their own situations as sustainable. My question is: is this just wishful thinking and human nature, trying to fight off the inevitable? I ask because I seriously don't see a future where there's a bunch of programmers anymore. I see mass unemployment for programmers. People are in denial, and all of these claims that AI can't write code without making mistakes could be invalidated overnight by a model that writes flawless code. Claude 4.5 is a good example of how fast this is moving. I just don't see any valid argument that the technology won't reach a point where it doesn't so much make the job irrelevant as completely change its economics.

17 comments

The AI providers' operations remain heavily subsidised by venture capital. Eventually those investors will turn around and demand a return on their investment. The big question is, when that happens, whether LLMs will be useful enough to customers to justify paying the full cost of developing and operating them.

That said, in the meantime, I'm not confident that I'd be able to find another job if I lost my current one, because I'd not only have to compete against every other candidate, but also against the ethereal promise of what AI might bring in the near future.

Google has one of the best models, its own hardware and doesn’t depend on venture capital. Between its own products and GCP, they will be fine. The same with Amazon and Microsoft.

I just don’t see OpenAI being long term viable

Having been yeeted out of the labor market by long covid, my worries about my own employment are settled.

However, that worry is replaced by the fear that so many people could lose their jobs that a consequence could be a complete collapse of the social safety net that is my only income source.

I use Claude 4.5 almost every day. It makes mistakes every day. The worst mistakes are the ones that are not obvious; only by careful review do you see the flaws. At the moment, even the best AI can't be relied on to do even a modest refactoring. What AI does at the moment is make senior developers worth more and junior developers worth less. I am not at all worried about my own job.

Thank you for your response. This is exactly the type of commentary I'm talking about. The key phrase is "at the moment." It's not that developers will be replaced outright; I just think there will be far less need for developers.

I think the flaws are going to be solved, and if that happens, what then? I do believe there needs to be a human in the loop, but I don't think there need to be humans, plural. Eventually.

I believe this is denial. The statement that the best AI can't reliably do a modest refactoring is not correct. Yes, it can. What it currently cannot do is write a full app from start to finish, but they're working on longer task execution. And this is before any of the big data centers have even been built. What happens then? You get the naysayers who say, "Well, the scaling laws don't apply," but there are a lot of people who think they do.

The best AI (and I do believe that Claude is one of the best) is able to hold a conversation, maintain context, and respond to simple requests. The key is understanding that not every dev knows what questions to ask or when the answers are bad. Call it delusional if you like, but I don't see that changing any time soon, if ever.

If anybody who disagrees with your assessment is "in denial (sic)", why should people bother responding to your question seriously?

it's not about people disagreeing with my assessment. It's that people keep saying, "I'm not afraid of AI because it makes mistakes." That's the main argument I've heard. I don't know if those people are ignorant, arrogant, or in denial. Or maybe they're right. I don't know. But I don't think they're right. Human nature leads me to believe they're in denial. Or they're ignorant. I don't think there's necessarily any shame in being in denial or ignorant. They don't know or see what I see.

I don't have to write code anymore, and the code that's coming out needs less and less of my intervention. Maybe I'm just much better at prompting than other people. But I doubt that.

The two things I hear are:

1. You'll always need a human in the loop

2. AI isn't any good at writing code

The first one sounds more plausible, but it means fewer programmers over time.

Claude regularly loses its mind when refactoring or generating code. I'm talking about failures to the point where files are unrecoverable except by falling back to the head of main. I see this even with Opus 4.5, every day. I can't imagine anyone "not writing code anymore" and still being able to pass a code review if they are just committing what Claude vibe coded. If you feel good enough about the code Claude wrote for you that you're going to commit it with your name on it, more power to you. But if you worked for me, did that, and the result was a failed deployment you couldn't justify beyond "I committed what Claude wrote," I would simply fire you.

There are two competing issues. AI and commoditization. AI is just making the problem worse.

I predicted commoditization happening back in 2016, when I saw that no matter what I learned, it was going to be impossible to stand out from the crowd on the enterprise dev side of the market or to demand decent top-of-market raises. [1]

I knew back then that the answer was going to be filling in the gaps with soft skills, managing larger more complex problems, being closer to determining business outcomes, etc.

I pivoted into customer facing cloud consulting specializing in application development (“application modernization”). No I am not saying “learn cloud”.

But focusing on the commoditization angle: when I was looking for a job in late 2023, after being Amazoned, I submitted literally hundreds of applications as a Plan B. Each open req had hundreds of applicants, and my application, let alone my resume, was viewed maybe 5 times (LinkedIn shows you).

My plan A of using my network and targeted outreach did result in 3 offers within three weeks.

The same pattern emerged in 2024 when I was out looking again.

I'm in the interviewer pool at my current company; our application-to-offer rate is 0.4%.

[1] I am referring to the enterprise dev market where most developers in the US work

I am in management of enterprise API development. AI might replace coders, but it won't eliminate people who can work between teams and make firm decisions that drive complex projects forward. Many developers appear to struggle with this, and when completely lost they waste effort on building SOPs instead of just formulating an original product.

Before this I was a JavaScript developer. I can absolutely see AI replacing most JavaScript developers. It felt really autistic with most people completely terrified to write original code. Everything had to be a React template with a ton of copy/paste. Watch the emotional apocalypse when you take React away.

When I started, there used to be database analysts and server administrators. There still are, but they're in far shorter supply because developers have mostly taken on those roles.

And I think you're right. Working cross-functionally is super important. That's why I think the next consolidation is going to roll up into product development. Basically, the product developers who can use AI and manage the full stack are going to be successful, but I don't know how long that will last.

What's even more unsettling to me is it's probably going to end up being completely different in a way that nobody can predict. Your predictions, my predictions might be completely wrong, which is par for the course.

None of the models currently are able to make competent changes to the codebases I work on. This isn't about them "making mistakes" which I have to fix. They completely fail to the point where I cannot use any of their output except in the simplest of cases (even then it's faster to code it myself).

So no, I'm not worried.

I'm not trying to be snarky at all, but maybe you're less experienced at prompting than me, or you just have to work on some really gnarly code. Is that a possibility? Because yours is the one argument I can't counter from my own experience.

The codebases I work on, I can delegate more and more to AI as time goes on. There's no doubt about that. Admittedly, they're not big, unwieldy codebases with lots of technical debt, but maybe those get replaced quickly over time.

I just don't see the argument that the AI will always make mistakes holding up over time. I would love to be proven wrong, though, with a counterargument that jibes.

I have tried plenty of times to get it to work. My colleagues have done the same. It just doesn't work for the kind of coding we are doing, I guess.

Maybe I am terrible at prompting. But I use AI all the time in the form of chat (rather than having it code directly) and find it very useful for that. I have also used code gen in other contexts (making websites/apps) and been able to generate lots of great stuff quickly. I also compare my style of prompting to that of people who claim it writes all their code for them and don't see much difference. So while it's possible my prompts aren't perfect in my scenarios at work, it doesn't seem likely that they are so bad that I could ever improve them enough to change the output from literally useless to taking my job.

To give some context to this, the core of my job that I am referring to here is basically taking documentation about Bluetooth and other wireless protocols and implementing code that parses all that and shows it in a GUI that runs on desktop and Android.
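
For a heavily simplified, hypothetical flavour of the parsing side of that work: BLE advertising payloads are a sequence of length/type/value records, and the trivial core looks something like the sketch below. The names and sample bytes are my own illustration, not code from my actual codebase; the hard part of the job is everything around this.

```python
# Heavily simplified, hypothetical illustration of the parsing side of this work.
# A BLE advertising payload is a sequence of [length, AD type, data] records;
# the length byte counts the type byte plus the data bytes that follow it.
def parse_advertising_data(payload: bytes) -> list[tuple[int, bytes]]:
    records, i = [], 0
    while i < len(payload):
        length = payload[i]
        if length == 0:                 # a zero length terminates the payload
            break
        ad_type = payload[i + 1]
        data = payload[i + 2 : i + 1 + length]
        records.append((ad_type, data))
        i += 1 + length
    return records

# Example: a Flags record (type 0x01) followed by a Complete Local Name (type 0x09).
payload = bytes.fromhex("020106") + bytes([8, 0x09]) + b"HN demo"
print(parse_advertising_data(payload))   # [(1, b'\x06'), (9, b'HN demo')]
```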

There are a lot of immediate barriers for gen AI. Half of my debugging involves stuff like repeatedly pairing Bluetooth speakers to my phone or physically plugging stuff in, and AI just can't do that.

Second, the documentation about how these protocols work is gnarly. Giving AI the raw PDFs and asking basic questions about them yields very poor results. I assume there aren't enough of these kinds of documents in their training data. Also, a lot of the information is not text-based and is instead contained in diagrams, which I don't think the AI parses at all. This is all assuming there is an actual document to work with. For the latest Bluetooth features, what you actually have is a bunch of Word documents where half the argument happens in the tracked changes and the other half in the email chain.

Maybe I could take all that information and condense it into a form that the AI can parse? Not really. The information is already very dense and specific, and I don't see how I could explain it to an AI in a way that would be any less ambiguous than just writing the code myself. That would also assume I actually understand the specification, which I never do until I write the code.

Maybe I could just choose one specific little feature I need and get AI to do it? That works, but the feature was probably only 5 lines of code anyway, so I spent more time writing the prompt than I would have spent writing the code. It was the 2 hours of reading the spec beforehand that would actually be useful to automate. Maybe AI could have written all the code in my 1k-line PR, but when the PR took 4 months and me literally flying to another country to test it with other hardware, writing the code is not the bottleneck.

Maybe the AI models will get better and be able to do all this. But that isn't just a case of AI models continuing to get the kinds of incremental improvements we have been seeing. They would need a leap forward to something people might call AGI to be able to do all this. Maybe that will happen tomorrow, but it seems just as likely to happen in 5 years or 5 decades. I don't see anyone right now with an idea of how to get there.

And if I'm wrong and AI can do all this by next year? I'll just switch to writing FPGA code which my company desperately needs more people to do and which AI is another order of magnitude more useless at doing.

As much as I would like my job to be exclusively about writing code, the reality is that the majority of it is:

- talking to people to understand how to leverage their platform and to get them to build what I need

- working in closed-source codebases. I know where the traps and the footguns are. Claude doesn't

- telling people no, that's a bad idea, don't do that. This is often more useful than a "you're absolutely right" followed by the perfect solution to the wrong problem

In short, I can think and I can learn. LLMs can’t.

Well, with things like skills and proper memory, these things can become better. Remember 2 years ago when AI coding wasn’t even a thing?

You're right that it won't replace everyone, but businesses will need fewer people for maintenance.

right, I think in the near term the worry isn't about replacing people wholesale but about replacing many or most of them and causing serious economic disruption. In the limit, you would have a CEO who commands the AI to do everything, but that seems less plausible

What’s the CEO for in that case?

> telling people no, that's a bad idea, don't do that. This is often more useful than a "you're absolutely right" followed by the perfect solution to the wrong problem

This one is huge. I’ve personally witnessed many situations where a multi-million dollar mistake was avoided by a domain expert shutting down a bad idea. Good leadership recognizes this value. Bad leadership just looks at how much code you ship

If some people are going to do whatever they want regardless, then it doesn't matter whether the advice is coming from a human expert or an AI

I think people should be very afraid: the jobs are only safe if AI peaks in adoption and stops improving, and it shows no signs of slowing.

Let's put things into perspective.

You could be made unemployable even without AI, all it takes is a bit of bad luck.

This fear of AI taking over your job is manufactured.

I don’t think so, here’s why:

I have a few coworkers who are deep into the current AI trends. I also have the pleasure of reviewing their code. The garbage that gets pushed is insane. I feel I can't comment on a lot of the issues I see, because there's so much slop that hasn't been thought through that addressing it would mean rewriting half of their PR. Maybe it speaks more to their coding ability that they accept that stuff. I see comments that are clearly AI-written and pushed as if they haven't been reviewed by a human. I guard public-facing infrastructure and apps as much as I can for fear of this having preventable impacts on customers.

I think this is just more indicative that AI assistance can be powerful, but in the hands of an already decent developer.

I kind of lost respect for these developers deep into the AI ecosystem who clearly have no idea what’s being spat out and are just looking to get 8 hours of productivity in the span of 2 or 3.

Are you talking about human generated code or machine generated code?

99% of the work I've ever received from humans has been error-riddled garbage. Less than 1% of the work I've received from a machine has been. Granted, I don't work in code; I work in a field that's more difficult for a machine.

Denial is the first stage

It seems like a good thing when the AI service is being subsidized and sold to you for 1/10,000th of what it costs.

What's your plan when today's AI functionality costs 10,000x more?

Largely, no.

AI would need to 1. perform better than a person in a particular role, and 2. do so cheaper than their total cost, and 3. do so with fewer mistakes and reduced liability.

Humans are objectively quite cheap. In fact for the output of a single human, we're the cheapest we've ever been in history (particularly in relation to the cost of the investment in AI and the kind of roles AI would be 'replacing.')

If there are any economic shifts, they will be increases in per-person efficiency, requiring a smaller workforce. I don't see that changing significantly in the next 5-10 years.

> Humans are objectively quite cheap.

I disagree with that statement when it comes to software developers. They are actually quite expensive. They typically enter the workforce with 16 years of education (assuming they have a college degree) and may also have a family and a mortgage. They have relatively high salaries, plus health insurance, and they can't work when they're sleeping, sick, or on vacation.

I once worked for a software consultancy where the owner said, "The worst thing about owning this kind of company is that all my capital walks out the door at six p.m."

AI won't do that. It'll work round the clock if you pay for it.

We do still need a human in the loop with AI. In part, that's to check and verify its work. In part, it's so the corporate overlords have someone to fire when things go wrong. From the looks of things right now, AI will never be "responsible" for its own work.

I guess the main thing people aren't taking into account, from what I see, is that the models are substantially improving. Claude Opus 4.5 is markedly better than Claude Sonnet 3.7. If the jump to version 5 represents a similar leap, I see it as pretty much game over. You'll just need one person to manage all your systems, or the subsystems if the entire system is extremely large. And then I can't think past that. I don't know how long it will be before AI replaces that central orchestrator and takes the human out of the loop, or if it ever does, but that's what they seem to want it to do.

Anyway, I appreciate the response. I don't know how old you are, but I'm kind of old. And I've noticed that I've become much more cynical and pessimistic, not necessarily for any good reasons. So maybe it's just that.

you're assuming the growth will continue at the same rate, which is hardly a certainty

Nor is it an impossibility. If the AI stays like it is right now, I think we're fine. In fact I would probably opt for that. I don't know what I would do.

I think AI will substantially thin out the ranks of programmers over the next five years or so. I've been very impressed with Claude 4.5 and have been using it daily at work. It tends to produce very good, clean, well-documented code and tests.

It does still need an experienced human to review its work, and I do regularly find issues with its output that only a mid-level or senior developer would notice. For example, I saw it write several Python methods this week that, when called simultaneously, would lead to deadlock in an external SQL database. I happen to know these methods WILL be called simultaneously, so I was able to fix the issue.
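
To make that failure mode concrete, here is a minimal, hypothetical sketch of the kind of pattern I mean, not the actual code it produced; the table, ids, and psycopg2 driver are just placeholder assumptions. Each method is fine in isolation, but they lock the same rows in opposite order, so overlapping calls can deadlock.

```python
# Hypothetical sketch of the deadlock pattern described above (not the real code).
# Two methods update the same two rows but in opposite order. Run concurrently
# against a database with row-level locking, transaction A can hold a lock B
# needs while B holds a lock A needs, and the database aborts one with a deadlock.
import psycopg2  # assumed driver; any row-locking SQL database behaves similarly

def transfer_a_to_b(conn, amount):
    with conn:                                   # one transaction per call
        with conn.cursor() as cur:
            cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = 1", (amount,))
            cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = 2", (amount,))

def transfer_b_to_a(conn, amount):
    with conn:
        with conn.cursor() as cur:
            # Same two rows, locked in the opposite order -- the subtle part a
            # reviewer has to catch, because nothing fails until the calls overlap.
            cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = 2", (amount,))
            cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = 1", (amount,))

# The boring fix: have every writer touch rows in one agreed-upon order (e.g. by id).
```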

In existing large code bases that talk to many external systems and have poorly documented, esoteric business rules, I think Claude and other AIs will need supervision from an experienced developer for at least the next few years. Part of the reason for that is that many organizations simply don't capture all requirements in a way that AI can understand. Some business rules are locked up in long email threads or water cooler conversations that AI can't access.

But, yeah, Claude is already acting like a team of junior/mid-level developers for me. Because developers are highly paid, offloading their work to a machine can be hugely profitable for employers. Perhaps, over the next few years, developers will become like sys admins, for whom the machines do most of the meaningful work and the sys admin's job is to provision, troubleshoot and babysit them.

I'm getting near the end of my career, so I'm not too concerned about losing work in the years to come. What does concern me is the loss of knowledge that will come with the move to AI-driven coding. Maybe in ten years we will still need humans to babysit AI's most complicated programming work, but how many humans will there be ten years from now with the kind of deep, extensive experience that senior devs have today? How many developers will have manually provisioned and configured a server, set up and tuned a SQL database, debugged sneaky race conditions, worked out the kinks that arise between the dozens of systems that a single application must interact with?

We already see that posts to Stack Overflow have plummeted since programmers can simply ask ChatGPT or Claude how to solve a complex SQL problem or write a tricky regular expression. The AIs used to feed on Stack Overflow for answers. What will they feed on in the future? What human will have worked out the tricky problems that AI hasn't been asked to solve?

I read a few years ago that the US Navy convinced Congress to fund the construction of an aircraft carrier that the Navy didn't even need. The Navy's argument was that it took our country about eighty years to learn how to build world-class carriers. If we went an entire generation without building a new carrier, much or all of that knowledge would be lost.

The Navy was far-sighted in that decision. Tech companies are not nearly so forward thinking. AI will save them money on development in the short run, but in the long run, what will they do when new, hard-to-solve problems arise? A huge part of software engineering lies in defining the problem to be solved. What happens when we have no one left capable of defining the problems, or of hammering out solutions that have not been tried before?

This is how I feel as well pretty much.

It's interesting you mention the loss of knowledge. I've heard that China has adopted AI in their classrooms to teach students at a much faster pace than western countries. Right now I'm using it to teach me how to write a reverb plug-in because I don't know anything about DSP and it's doing a pretty good job at that.

So maybe there has to be some level of understanding. I need to understand how reverb works and how DSP works in order to make decisions about it, not necessarily to implement it myself. And some things are hard enough just to understand; maybe that's where the differentiation comes in.
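
For instance, the kind of building block it has been walking me through is the feedback comb filter used in a classic Schroeder reverb. A rough sketch is below; the delay and gain values are just numbers I picked for illustration, not anything canonical.

```python
# Rough sketch of a feedback comb filter, one building block of a Schroeder-style
# reverb: y[n] = x[n] + g * y[n - delay]. Delay and gain here are illustrative only.
def feedback_comb(x: list[float], delay: int = 1116, g: float = 0.84) -> list[float]:
    y = [0.0] * len(x)
    for n in range(len(x)):
        fed_back = y[n - delay] if n >= delay else 0.0   # delayed output fed back in
        y[n] = x[n] + g * fed_back
    return y

# A real reverb sums several of these with different delays, then passes the
# result through a chain of allpass filters to diffuse the echoes.
```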

No, managers don’t want to be using Claude Code… tools change

This is a really weak argument.

I guess the main argument is that there will always be technical and non-technical people in companies. Some people don’t even like prompting to get an AI image, let alone prompting to fix/maintain software…

Somewhat.

I remember similar discussions back when it was called "machine learning".

Sooooo... no.

(Also, look at what the smart guys/gals who found this topic before me said about profits vs income etc.)

Right, the only problem I have with this argument is that past performance is no indicator of future results.

then how do we predict anything? (not being sarcastic)

It's that whole "this time it's different" argument, I guess. My worry is that this time it really does feel different.