Latest Tech News

Stay updated with the latest in technology, AI, cybersecurity, and more

Filtered by: llm

I tried coding with AI, I became lazy and stupid

Around April 2025, my boss at $dayjob insisted we try AI tools for coding. It wasn’t toxic pressure or anything like “20% of your code needs to be AI”, just a concern of his that we could be missing out on something. I understand why he asked, and I don’t blame him. We are in a difficult economic period, even for software, and we have salaries to pay. If AI can increase our productivity or our margins, it should at least be put on the negotiating table…

Topics: ai code job llm llms

The current state of LLM-driven development

I spent the past ~4 weeks trying out all the new and fancy AI tools for software development. Let’s get a few things out of the way: Learning how to use LLMs in a coding workflow is trivial; there is no learning curve, and you can safely ignore them if they don’t fit your workflows at the moment. LLMs won’t magically make you deliver production-ready code. If you can’t read the code and spot issues, they’re hard to use past the PoC stage. They have terrible code organization skills, making them…

Programming with AI: You're Probably Doing It Wrong

2025 is the year of Artificial Intelligence. With GPT-5 just released, many developers will re-evaluate their use of large language models for assisting in their daily work. I’m here to tell you: you’re probably doing it wrong, and you’re missing out on the real power that AI-assisted development can give you. What does “doing it wrong” look like? Let’s kick off with a (non-exhaustive) list of symptoms that you’re using your AI coding assistant wrong…

Topics: agent ai code context llm

Achieving 10,000x training data reduction with high-fidelity labels

Classifying unsafe ad content has proven an enticing problem space for leveraging large language models (LLMs). The inherent complexity involved in identifying policy-violating content demands solutions capable of deep contextual and cultural understanding, areas of relative strength for LLMs over traditional machine learning systems. But fine-tuning LLMs for such complex tasks requires high-fidelity training data that is difficult and expensive to curate at the necessary quality and scale…

An LLM does not need to understand MCP

Model Context Protocol (MCP) has become the standard for tool calling when building agents, but contrary to popular belief, your LLM does not need to understand MCP. You might have heard of the term “context engineering”, where you, as the person interacting with an LLM, are responsible for providing the right context to help it answer your questions. To gather this context, you can use tool calling to give the LLM access to a set of tools it can use to fetch information or take actions. MCP…
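The point generalizes: from the model’s side, a tool call is just structured text. A minimal sketch of a host dispatching a model-emitted tool call (the tool name, schema, and JSON shape here are illustrative, not MCP’s actual wire format):

```python
import json

# Hypothetical tool registry: the model never sees this code, only the
# plain-JSON schema below. Names are illustrative, not from MCP itself.
def get_weather(city: str) -> str:
    # Stubbed lookup standing in for a real API or MCP server call.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

# What the host puts in the model's context: just text describing tools.
TOOL_SCHEMA = [{
    "name": "get_weather",
    "description": "Return current weather for a city",
    "parameters": {"city": "string"},
}]

def dispatch(tool_call_json: str) -> str:
    """Execute a tool call the model emitted as JSON.

    The model only produces {"name": ..., "arguments": ...}; the host,
    not the model, knows how to reach the tool (via MCP or anything else).
    """
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

print(dispatch('{"name": "get_weather", "arguments": {"city": "Oslo"}}'))
```

The host can swap MCP in or out behind `dispatch` without the model ever noticing.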

Blocking LLMs from your website cuts you off from next-generation search

Why blocking LLMs from your website is dumb, by John Wang. Perplexity was recently accused of scraping sites that had explicitly disallowed LLM crawlers in their robots.txt files. In the wake of that revelation, a wave of how-to guides for blocking large-language-model scraping has surfaced [0]. They’re generally highly vitriolic, with people opposing this both on moral grounds (“AI is stealing your content”) and out of a general distaste…

LLM Inflation

One of the signal achievements of computing is data compression: we take in data, make it smaller while retaining all information (“lossless” compression), transmit it, and then decompress it back to the original at the other end. For many years, compression was an absolute requirement to get things done: storage devices were too small for the data we wanted to store and networks too slow to transmit what we wanted at an acceptable speed. Today, compression is less often an absolute requirement…
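The round trip the article describes is easy to demonstrate with any lossless codec; here with Python’s stdlib zlib (the choice of codec and payload are illustrative):

```python
import zlib

# Lossless round trip: compress, transmit, decompress back to the original.
original = b"the quick brown fox jumps over the lazy dog " * 100
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original            # no information lost
print(len(original), len(compressed))  # repetitive text compresses well
```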

Five ways that AI is learning to improve itself

That’s why Mirhoseini has been using AI to optimize AI chips. Back in 2021, she and her collaborators at Google built a non-LLM AI system that could decide where to place various components on a computer chip to optimize efficiency. Although some other researchers failed to replicate the study’s results, Mirhoseini says that Nature investigated the paper and upheld the work’s validity, and she notes that Google has used the system’s designs for multiple generations of its custom AI chips…

Topics: ai google human llm llms

Ask HN: What trick of the trade took you too long to learn?

Every week for the last 3 months I’ve learned a new trick for getting whatever LLM I’m using at the time to produce better output. That’s my trade, but lots of HNers have more interesting trades than that. In my case, only recently did I learn the value of getting an LLM to write and refine a plan.md architecture doc first, break that doc down into testable phases, and then implement phase by phase. Seems obvious in hindsight, but it took me too long to learn…

Do LLMs identify fonts?

Spoiler: not really. dafont.com is a wonderful website that contains a large collection of fonts. It’s more comprehensive and esoteric than Google Fonts. One of its features is a forum where users can ask for help identifying fonts – check out this poor fellow who’s been waiting for over two years and bumped his thread. I thought it would be interesting to see if an LLM could do this task, so I scraped the forum and set up a benchmark. I implemented this as a live benchmark. By this I mean that…

Anthropic beats OpenAI as the top LLM provider for business - and it's not even close

ZDNET’s key takeaways: Programming is AI’s killer app. The top business AI, especially for programming, is Anthropic. Open-source AI is lagging behind its proprietary competitors. If you were to ask J. Random User on the street what the most popular business AI Large Language Model (LLM) is, I bet you they’d say OpenAI’s ChatGPT. As of mid-2025, however, Anthropic is the leading enterprise LLM provider, with 32% of enterprise usage, according to Menlo Ventures, an early-stage…

Show HN: Mcp-use – Connect any LLM to any MCP

Connect any LLM to any MCP server 🌐 MCP-Use is the open source way to connect any LLM to any MCP server and build custom MCP agents that have tool access, without using closed source or application clients. 💡 Let developers easily connect any LLM to tools like web browsing, file operations, and more. If you want to get started quickly, check out the mcp-use.com website to build and deploy agents with your favorite MCP servers. Visit the mcp-use docs to get started with the mcp-use library…

LangChain’s Align Evals closes the evaluator trust gap with prompt-level calibration

As enterprises increasingly turn to AI models to ensure their applications function well and are reliable, the gaps between model-led evaluations and human evaluations have only become clearer. To combat this, LangChain added Align Evals to LangSmith, a way to bridge the gap between large language model-based evaluators and human preferences…

Developing our position on AI

If you’re not familiar with us, RC is a 6 or 12 week retreat for programmers, with an integrated recruiting agency. Ours is a special kind of learning environment, where programmers of all stripes grow by following their curiosity and building things that are exciting and important to them. There are no teachers or curricula. We make money by…

Working on a Programming Language in the Age of LLMs

I’ve been working on Rye since 2018. It’s a project of joy — but also because I believe there is the potential to create something of value to others, eventually. Even people living under a rock know we’ve entered the age of LLMs. I don’t jump aboard new ships too soon, but eventually even I had to admit: code can be generated from prompts, and in many situations — with a smart prompter — the results are quite OK. Even if you disagree, the genie can’t be put back in the bottle. Technical progress generally…

Writing is thinking

Writing scientific articles is an integral part of the scientific method and common practice to communicate research findings. However, writing is not only about reporting results; it also provides a tool to uncover new thoughts and ideas. Writing compels us to think — not in the chaotic, non-linear way our minds typically wander, but in a structured, intentional manner. By writing it down, we can sort years of research, data and analysis into an actual story, thereby identifying our main message…

Will AI think like humans? We're not even close - and we're asking the wrong question

Artificial intelligence may have impressive inferencing powers, but don’t count on it to have anything close to human reasoning powers anytime soon. The march to so-called artificial general intelligence (AGI), or AI capable of applying reasoning through changing tasks or environments in the same manner as humans, is still a long way off. Large reasoning models (LRMs), while not perfect, do offer a tentative step in that direction. In other words, don’t count on…

Show HN: Any-LLM – Lightweight router to access any LLM Provider

any-llm: a single interface to use and evaluate different LLM providers. Key features any-llm offers: Simple, unified interface - one function for all providers; switch models with just a string change. Developer friendly - full type hints for better IDE support and clear, actionable error messages. Leverages official provider SDKs when available…
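Note this is not any-llm’s actual API surface, but the “switch models with a string change” pattern can be sketched as follows (provider names, model names, and the stub responses are all made up):

```python
# Hypothetical sketch of a unified provider interface -- NOT any-llm's
# real API. Each stub stands in for a call to an official provider SDK.

def _openai_complete(model: str, prompt: str) -> str:
    return f"[openai:{model}] echo: {prompt}"

def _anthropic_complete(model: str, prompt: str) -> str:
    return f"[anthropic:{model}] echo: {prompt}"

_PROVIDERS = {"openai": _openai_complete, "anthropic": _anthropic_complete}

def completion(model: str, prompt: str) -> str:
    """One function for all providers; 'provider/model' is the only switch."""
    provider, model_name = model.split("/", 1)
    return _PROVIDERS[provider](model_name, prompt)

print(completion("openai/gpt-4o", "hi"))
print(completion("anthropic/claude-3", "hi"))  # swapped with a string change
```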

Any-LLM: A unified API to access any LLM provider

When it comes to using Large Language Models (LLMs), it’s not always a question of which model to use: it’s also a matter of choosing who provides the LLM and where it is deployed. Today, we announce the release of any-llm, a Python library that provides a simple unified interface to access the most popular providers. As we’ve written about…

Show HN: Intercepting proxy for semantic search over visited pages

A proxy that embeds every web page you visit and lets you run similarity searches. Each successful HTTP GET 200 response (except for localhost) is re-fetched from pure.md to obtain clean Markdown. The cleaned text is embedded through llm. A minimal Flask UI provides search and cached-page views. Installation: this is not a stand-alone program; it is a plugin for llm. If you are not using llm yet, install it with pipx first: pipx install llm. Now you can install this plugin: llm install git+h…
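The search side of the proxy boils down to embed-then-rank. A toy sketch of that idea (bag-of-words vectors stand in for the llm library’s real embedding models; the URLs and page texts are invented):

```python
import math
from collections import Counter

# Embed each cleaned page, then rank pages by cosine similarity to the
# query. A real setup would use learned embeddings; word counts suffice
# to show the mechanics.

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

pages = {
    "example.com/cats": "cats purr and chase mice",
    "example.com/rust": "rust is a systems programming language",
}
index = {url: embed(text) for url, text in pages.items()}

query = embed("programming languages")
best = max(index, key=lambda url: cosine(index[url], query))
print(best)  # prints: example.com/rust
```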

Coding with LLMs in the summer of 2025 – an update

antirez. Frontier LLMs such as Gemini 2.5 PRO, with their vast understanding of many topics and their ability to grasp thousands of lines of code in a few seconds, are able to extend and amplify the programmer’s capabilities. If you are able to describe problems in a clear way, and to accept the back and forth needed in order to work with LLMs, you can reach incredible results such as: 1. Eliminating bugs you introduced in your code before it ever hits any…

Topics: code coding llm llms work

Psychiatric Researchers Warn of Grim Psychological Risks for AI Users

Without even looking at medical data, it’s pretty clear that “artificial intelligence” — a vast umbrella term for various technologies over the years, but currently dominated by the data-hungry neural networks powering chatbots and image generators — can have life-altering effects on the human brain. We’re not even three years out from the release of the first commercially available LLM, and AI users have already been driven to paranoid breaks from reality, religious mania, and even suicide…

Rethinking CLI interfaces for AI

We need to augment our command line tools and design APIs so they can be better used by LLM agents. The designs are inadequate for LLMs as they are now – especially if you’re constrained by the tiny context windows available with local models. Agent APIs: Like many developers, I’ve been dipping my toes into LLM agents. I’ve done my fair share of vibe coding, but I’ve also been playing around with using LLMs to automate reverse-engineering tasks, mostly using mrexodia’s IDA Pro MCP, including ex…
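One concrete version of “agent-friendly CLI” is compact, structured output that costs few tokens to parse. A hedged sketch of the idea (the tool name, flag, and stubbed symbol lookup are all hypothetical):

```python
import argparse
import json

# Sketch of an agent-friendly CLI: terse, structured output instead of
# prose, so a small-context LLM can parse results cheaply.

def main(argv=None):
    p = argparse.ArgumentParser(prog="lookup")
    p.add_argument("name")
    p.add_argument("--json", action="store_true",
                   help="machine-readable output for LLM agents")
    args = p.parse_args(argv)

    # Stubbed lookup standing in for a real symbol resolver.
    result = {"name": args.name, "address": hex(0x401000)}
    if args.json:
        # One compact line: trivial to parse, few tokens in the context.
        print(json.dumps(result, separators=(",", ":")))
    else:
        print(f"Symbol {result['name']} is at {result['address']}")

if __name__ == "__main__":
    main()
```

The same data, two renderings: prose for humans, one JSON line for agents.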

Local LLMs versus offline Wikipedia

Two days ago, MIT Technology Review published “How to run an LLM on your laptop”. It opens with an anecdote about using offline LLMs in an apocalypse scenario. “‘It’s like having a weird, condensed, faulty version of Wikipedia, so I can help reboot society with the help of my little USB stick,’ [Simon Willison] says.” This made me wonder: how do the sizes of local LLMs compare to the sizes of offline Wikipedia downloads? I compared some models from the Ollama library to various downloads on Kiw…

I avoid using LLMs as a publisher and writer

Now for my more detailed arguments. Reason 1: I don’t want to become cognitively lazy. A recent study by MIT researchers (Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task) demonstrated that using LLMs when writing essays reduces the originality of the resulting work. More notably, when measured using an EEG, LLMs also diminished brain connectivity compared to when participants were allowed to use only their brains or a search engine. People who…

LameHug malware uses AI LLM to craft Windows data-theft commands in real-time

A novel malware family named LameHug is using a large language model (LLM) to generate commands to be executed on compromised Windows systems. LameHug was discovered by Ukraine’s national cyber incident response team (CERT-UA), which attributed the attacks to Russian state-backed threat group APT28 (a.k.a. Sednit, Sofacy, Pawn Storm, Fancy Bear, STRONTIUM, Tsar Team, Forest Blizzard). The malware is written in Python and relies on the Hugging Face API to interact with the Qwen 2.5-Coder-32B-Instr…

How to run an LLM on your laptop

For Pistilli, opting for local models as opposed to online chatbots has implications beyond privacy. “Technology means power,” she says. “And so who[ever] owns the technology also owns the power.” States, organizations, and even individuals might be motivated to disrupt the concentration of AI power in the hands of just a few companies by running their own local models. Breaking away from the big AI companies also means having more control over your LLM experience. Online LLMs are constantly sh

Gaslight-driven development

Any person who has used a computer in the past ten years knows that doing meaningless tasks is just part of the experience. Millions of people create accounts, confirm emails, dismiss notifications, solve captchas, reject cookies, and accept terms and conditions—not because they particularly want to or even need to. They do it because that’s what the computer told them to do. Like it or not, we are already serving the machines. Well, now there is a new way to serve…

Topics: api just llms new unique