@danthe1st
Last active December 14, 2025 21:29
Why I don't use LLMs/GenAI tools for programming

Nowadays, many people use Large Language Models (LLMs) for programming. While I can understand that these tools are convenient (and sometimes there's also pressure from management), I personally don't want to use these tools and this article attempts to capture some of my reasons why. This is not intended to convince others to stop using LLMs but to explain why I think that working with LLMs isn't worth it for me.

Licensing

One reason I don't want my code to be written (or co-authored) by AI tools is licensing. When I write code, it is undoubtedly my own. With code written by an LLM, it's a bit more complicated. At least at the time of writing, I am not convinced that generating code with an LLM would result in me fully "owning" the code in terms of copyright/licensing. What's worse is that (especially when generating bigger snippets of code) it is possible for an LLM to reproduce code from other people that may be licensed in a way I don't want. When an LLM generates code for me, it might be a verbatim copy of some code that I am not allowed to use. For example, it might give me a GPL- or AGPL-licensed piece of code that I want to use in a project where that license is not acceptable. And even if such code uses a permissive license, I couldn't provide attribution to the original author because I wouldn't know where it came from.

Of course, one could just ignore these issues under the assumption that nothing will happen. I am not willing to do that. Not using AI-generated code is just the safer option.

Dependence

To put it simply, I don't want to rely on LLMs. If I start using LLMs, even if it's just for looking things up or as inspiration, I might become (more and more) dependent on these tools over time, which is something I want to avoid.

Because I can't predict the future, I can't know in what ways LLMs will be available going forward. If an LLM or an LLM-based tool is available to me today, it might not be in the future, or it might only be available under different conditions (e.g. at a higher price). While running an Open Weights model locally is possible, that would (at least for many models) require a lot of resources or run pretty slowly, and I could just as well use those resources for something else (or I might not want to buy more powerful hardware just to use LLMs for coding when the hardware I have works perfectly fine). In addition to that, even if I could reasonably use Open Weights models, I might be inclined to use models provided online instead for various reasons (e.g. those models being more powerful or the tooling based on them being better).

At the time of writing this, I also see insufficient evidence that LLMs (for programming or otherwise) are sustainable, especially when it comes to the really big models. At present, OpenAI and other AI companies aren't profitable (with some of them probably being worse off than others). As long as they aren't profitable, there is an imminent risk of their models and tooling becoming unavailable, or of licensing changes that are hostile towards users. This is not the only sustainability issue LLMs and LLM tools are facing, either. These tools are also highly questionable when it comes to environmental sustainability, as they use a significant amount of energy and water (a real concern that I don't like seeing ignored).

We also don't know whether LLMs will get better or worse overall in the future. From what we know, feeding output from LLMs as training data to future LLMs makes those LLMs worse, which is often called "model collapse". The internet is becoming full of AI-generated content, so I think it is likely (if not certain) that quite a bit of the current AI slop on the internet will be used as training data for the next generation(s) of LLMs, possibly causing their quality to deteriorate. If LLMs and other GenAI systems degrade over time and become worse and worse (which we don't know, but it's certainly a possibility), I don't want to be in a situation where my programming skills depend on them.

Yet another issue in that area is that I might be allowed to use LLMs in some environments but not in others. For personal/private projects, I can probably do whatever I want (within reason). At work, there may be additional restrictions coming from different places. There could be restrictions on what tooling can be used (only local models, only under a certain contract, or whatever) or on whether LLMs can be used for programming at all. There could also be different guidelines across different teams or projects. If someone were to switch teams/projects within a company or even switch jobs, that could impact their ability to use AI tools (which isn't a good thing for people who depend on these tools).

To be honest, I just don't want to deal with the regulations on the usage of LLM/AI tools, with keeping track of when I can use what, with potential compliance risks and processes, and with the risk of losing access to these systems after becoming dependent on them.

Productivity

The main argument for using LLM-based tools is productivity. To be honest, I'm not entirely convinced that this holds up in the long term. We have had powerful LLMs for a few years now, but that doesn't convince me that this is the only possible future or that productivity actually increases significantly because of it in the long run.

One reason why I think that using LLM tools may not come with a significant productivity increase in the long term is that I'd learn a lot more by actually researching a problem on my own (using a search engine that shows me the results normally instead of an LLM handing me answers), writing the code myself and debugging/trying things until the problem is fixed. When I solve a problem that way, I understand it properly (as opposed to when I ask an LLM to explain it to me), and I know how to fix similar (and possibly more complicated) problems in the future, especially ones an LLM might not be able to solve. While using an LLM might let me write code quicker in the short term, it makes me slower in the future compared to solving these problems without assistance from LLMs.

One important thing we shouldn't forget is that the (main) goal shouldn't be to write code quickly and move on. The goal should be to write maintainable, well-structured and readable code. Code isn't just written for computers to execute without it ever being touched again (otherwise we wouldn't need to store the source code anywhere). Code is written to be read by humans, modified, extended and refactored. If I write code, I want to make sure I actually understand it and that I will be able to understand it in the future as well. This works much better when I write the code myself than when I let an LLM do part of the work.

That being said, it is far from certain whether AI even makes software developers faster. It seems that developers often overestimate their productivity gains from LLM tools, that the outputs often contain hard-to-catch mistakes, and that humans may trust AI systems more than they trust other humans (which is not just a problem with AI for programming). With LLMs introducing subtle mistakes that are hard to find and people trusting LLM output more than human-written content, the quality of the written code suffers, resulting in people having to spend more time reviewing, debugging and fixing issues in it, not to mention the time spent telling an LLM that its code doesn't do what the developer prompting it asked for.

Oversight/Review

That brings me to the topic of overseeing and reviewing all outputs of LLM systems. As mentioned, LLMs make many mistakes, some of them fairly problematic and hard to find. I want my mistakes to actually be my mistakes that I can learn from and improve on. I don't want to be responsible for (possibly dangerous) mistakes made, or faulty code written, by an LLM. While I am confident in reviewing Open Source contributions, I can't say the same for AI-generated code. This is especially because LLM outputs look very plausible, and when they seem correct often enough, I'd be at risk of trusting them more and more, becoming less careful when reviewing their code until this eventually backfires. I am not willing to take that risk.

Until now, I haven't mentioned what people refer to as "agents", which are basically LLMs that have certain capabilities to access things in their environment. What I wrote above generally also applies to "agentic" tools, but there are some problems that are unique to or especially prevalent with agents. An LLM can give me any output; it is unpredictable. LLMs are susceptible to poisoned training data, prompt injections and a lot of other risks, which can result in unwanted or malicious output. With agents getting access to my systems or to things I can access (even if that access is limited), this becomes a big risk. If an LLM is allowed to search something on the internet and has access to my system (e.g. can execute commands in a terminal), that can allow malicious actors to take over my computer.

Of course, many agentic systems ask the user for confirmation before executing potentially dangerous actions. But I doubt that everyone reviews everything carefully enough to ensure there is nothing in there that could cause problems. Overlooking a tiny issue in an action an agentic tool wants to perform can result in the whole system being compromised. Unfortunately, these tools make it too easy to just accept everything the AI wants to do for the sake of productivity. For example, JetBrains' Junie tool comes with a "brave mode" that basically bypasses all user confirmation. In my opinion, having features like this is grossly negligent and a security nightmare, which I consider unacceptable. And the issue is that this doesn't just affect the user who allows the LLM to execute dangerous actions; it also affects other people and systems in their environment. To be honest, I don't feel comfortable with a coworker who might have access to some systems we are working on together giving an LLM-based tool access to their terminal.
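
To illustrate why I consider confirmation bypasses so dangerous, here is a minimal, purely hypothetical sketch of what the command-execution gate of an agentic tool could look like. The names (AgentCommandGate, braveMode) are my own illustration and this is not how Junie or any other specific tool is actually implemented; the point is only that a single flag flips the tool from "a human sees every command" to "whatever the model produces gets executed".

```java
import java.util.Scanner;

// Hypothetical sketch of an agent's command-execution gate.
// Not the implementation of any real tool; names are illustrative.
public class AgentCommandGate {

    private final boolean braveMode; // assumption: auto-approve everything when true

    public AgentCommandGate(boolean braveMode) {
        this.braveMode = braveMode;
    }

    // The LLM proposes a shell command (possibly influenced by malicious
    // content it just read on the internet); this gate decides whether it runs.
    public void runProposedCommand(String command) throws Exception {
        if (!braveMode && !userConfirms(command)) {
            System.out.println("Rejected: " + command);
            return;
        }
        // With braveMode == true, anything the model produces is executed
        // without a human ever looking at it.
        new ProcessBuilder("sh", "-c", command).inheritIO().start().waitFor();
    }

    private boolean userConfirms(String command) {
        System.out.println("Agent wants to run: " + command + " [y/N]");
        return new Scanner(System.in).nextLine().trim().equalsIgnoreCase("y");
    }

    public static void main(String[] args) throws Exception {
        // With braveMode = true, a command picked up via prompt injection
        // would run with no human in the loop.
        new AgentCommandGate(false).runProposedCommand("echo hello");
    }
}
```

In a sketch like this, the only thing standing between a prompt-injected command and my machine is that one confirmation prompt, and "brave mode"-style options remove exactly that.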

Conclusion

To sum up, I don't consider using LLMs for programming to be worth the risks that come with it. All in all, I don't see the benefits outweighing the issues that come with using these tools. If other people can use LLMs responsibly, so be it, but I can't confidently do that myself. You may call me paranoid, but the real question is: Am I paranoid enough?
