This morning, I came across a Reddit post titled "Completely losing interest in the career due to AI and AI-pilled people". The author describes how, in the span of just two months, their corporate job went from "I'll be here for life" to "Time to switch careers?". And this post isn't alone; it's part of a deeper, darker pattern.
When CTOs or project managers suggest that the programmers on their team use AI assistance from Copilot, ChatGPT, or other LLMs to improve productivity, that's totally understandable. But once it's no longer voluntary and is instead enforced as policy, you enter sinister territory. Worse, that usage is now being monitored, and performance appraisals have started to depend on AI usage instead of (or at least in addition to) traditional metrics like the number of priority bugs raised, code reviews, Function Point Analysis, and so on.
If they're really so confident in the LLMs' effectiveness, why not just keep it voluntary? Why force it on people? The results will show up in the shipped product for all to see. By forcing LLM usage on programmers for every tiny implementation detail, are they trying to make us so dependent on LLMs that, in the new scheme of things, programmers are reduced to mere approvers of LLM-generated code; rubber stamps, if you will, who just label the commits and annotate the tags as a formality?
Needless to say, they'd still want you to take responsibility. If bugs or tickets get raised against the shipped code, it's you who gets fired, not Copilot or ChatGPT, though the larger narrative and the next day's news headlines would still read, "AI is eating jobs"!
If the essence of programming shifts from creating to merely approving, we risk losing not just a profession but a craft. What do you think is going on here? Let me know your thoughts in the comments.