Fine-tuning vs. in-context learning: New research guides better LLM customization for real-world tasks

Published on: 2025-07-15 16:23:09

Two popular approaches for customizing large language models (LLMs) for downstream tasks are fine-tuning and in-context learning (ICL). In a recent study, researchers at Google DeepMind and Stanford University explored the generalization capabilities of these two methods. They find that ICL generalizes better, though it comes at a higher computation cost during inference. They also propose a novel approach to get the best of both worlds. The findings can help developers make crucial decisions when building LLM applications on their bespoke enterprise data.

Testing how language models learn new tricks

Fine-tuning involves taking a pre-trained LLM and further training it on a smaller, specialized dataset. This adjusts the model's internal parameters to teach it new knowledge or skills. In-context learning (ICL), on the other hand, doesn't ch…
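To make the contrast concrete, here is a minimal sketch of the ICL side of the comparison: instead of updating any weights, the task is demonstrated entirely inside the prompt. The function name, the translation task, and the prompt format are illustrative assumptions, not part of the study.

```python
def build_icl_prompt(examples, query):
    """Assemble a few-shot prompt for in-context learning.

    The model's weights stay fixed; the 'learning' happens entirely
    in the context window, which is why inference costs more (the
    demonstrations are reprocessed on every call). Fine-tuning would
    instead bake the examples into the parameters via gradient updates.
    """
    parts = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    parts.append(f"Input: {query}\nOutput:")  # model completes this line
    return "\n\n".join(parts)


# Hypothetical usage: a tiny Spanish-to-English demonstration set.
examples = [("gato", "cat"), ("perro", "dog")]
prompt = build_icl_prompt(examples, "pájaro")
```

The resulting string would be sent to any LLM completion endpoint; swapping tasks requires only new demonstrations, with no training run.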