Premise
Six months ago, a friend of mine, with whom I work on the nonprofit Pariyatti mobile app, sent me this blog post by Vijay Khanna: From Idea to App in 7 Hours. By now, this is a fairly common zero-to-one LLM coding story. (LLM is short for Large Language Model, but for the purposes of this essay we'll use it as a substitute for what is broadly categorized as "generative AI" in early 2026. These systems are trained on large bodies of text, images, video, and so on, which enables them to produce meaningful responses when prompted.)
The question was posed: could this help us implement new features in the Pariyatti app more quickly?
Indeed it could. But there are ethical concerns to consider before diving into the deep end with LLMs and, unfortunately, they aren't simple to contend with.
Pariyatti’s nonprofit mission, it should be noted, specifically incorporates a strict code of ethics, or sīla: not to kill, not to steal, not to engage in sexual misconduct, not to lie, and not to take intoxicants.
In this conversation, two of these sīla are of interest to us.
Ethics
The fundamental ethical issue with LLMs is plagiarism. LLMs are, by their very nature, plagiarism machines. In the early days of GitHub Copilot, back before the Copilot brand was subsumed by the Microsoft juggernaut and the cute little Sopwith flying helmet-plus-goggles logo was replaced with a meaningless rainbow tilde, it would sometimes regurgitate training data verbatim. That's been patched in the years since, but it's important to remember a time, not that long ago, when the robots weren't very good at concealing what they were doing.
As a quick aside, I am not going to entertain the notion that LLMs are intelligent, for any value of “intelligent.” They are robots. Programs. Fancy robots and big complicated programs, to be sure — but computer programs, nonetheless. The rest of this essay will treat them as such. If you are already of the belief that the human mind can be reduced to token regurgitation, you can stop reading here. I’m not interested in philosophical thought experiments.
Plagiarism requires two halves. The first half of plagiarism is theft: taking something which is not one's own. It's that peculiar kind of theft where the victim may not even know they're being stolen from: copyright violation. The second half is dishonesty. Plagiarism requires that the thief take the stolen work and also lie about its origins. Most plagiarists make minor modifications, but all plagiarists pass the borrowed work off as their own.