
LangChain’s Align Evals closes the evaluator trust gap with prompt-level calibration



As enterprises increasingly rely on AI models to check that their applications work well and reliably, the gaps between model-led evaluations and human evaluations have only become clearer.

To combat this, LangChain added Align Evals to LangSmith, a way to bridge the gap between large language model-based evaluators and human preferences and reduce noise. Align Evals enables LangSmith users to create their own LLM-based evaluators and calibrate them to align more closely with company preferences.

“But, one big challenge we hear consistently from teams is: ‘Our evaluation scores don’t match what we’d expect a human on our team to say.’ This mismatch leads to noisy comparisons and time wasted chasing false signals,” LangChain said in a blog post.

LangChain is one of the few platforms to integrate LLM-as-a-judge, or model-led evaluations for other models, directly into the testing dashboard.
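For readers unfamiliar with the pattern, an LLM-as-a-judge evaluator is simply a model prompted to grade another model's output. The snippet below is an illustrative sketch only; the judge prompt, the model name, and the binary 0/1 scoring scale are assumptions made for the example, not LangSmith's actual implementation.

```python
# Illustrative sketch of an LLM-as-a-judge evaluator.
# The prompt, model, and 0/1 scale are assumptions, not LangSmith's implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_PROMPT = """You are grading an AI assistant's answer.
Question: {question}
Answer: {answer}
Reply with a single digit: 1 if the answer is correct and helpful, 0 otherwise."""

def llm_judge(question: str, answer: str) -> int:
    """Score one question/answer pair with an LLM acting as the evaluator."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(question=question, answer=answer)}],
        temperature=0,
    )
    text = response.choices[0].message.content.strip()
    return 1 if text.startswith("1") else 0
```

The problem Align Evals targets is that a judge like this can drift from what a human reviewer on the team would say, which is what the calibration step is meant to surface.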


The company said that it based Align Evals on a paper by Amazon principal applied scientist Eugene Yan. In his paper, Yan laid out the framework for an app, also called AlignEval, that would automate parts of the evaluation process.

Align Evals lets enterprises and other builders iterate on evaluation prompts, compare alignment scores from human evaluators against LLM-generated scores, and measure both against a baseline alignment score.
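The article does not spell out how the alignment score is computed; a simple way to think about it is agreement between the LLM judge and a human-graded set of examples. The sketch below works under that assumption, comparing a current evaluator prompt (the baseline) against a revised one on the same human-graded data.

```python
# Illustrative sketch only: "alignment" treated as plain agreement between the
# LLM judge's scores and human grades. The actual metric is not detailed here.
def alignment_score(human_scores: list[int], judge_scores: list[int]) -> float:
    """Fraction of examples where the LLM judge agrees with the human grader."""
    assert len(human_scores) == len(judge_scores)
    matches = sum(h == j for h, j in zip(human_scores, judge_scores))
    return matches / len(human_scores)

# Example: one human-graded set, two candidate evaluator prompts.
human = [1, 0, 1, 1, 0, 1]
baseline_judge = [1, 1, 1, 0, 0, 1]  # scores from the current evaluator prompt
revised_judge = [1, 0, 1, 1, 0, 0]   # scores after iterating on the prompt

print(alignment_score(human, baseline_judge))  # ~0.67
print(alignment_score(human, revised_judge))   # ~0.83
```

Re-scoring the same human-graded examples after each prompt revision shows whether alignment is moving above or below the baseline.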

LangChain said Align Evals “is the first step in helping you build better evaluators.” Over time, the company aims to add analytics for tracking performance and automatic prompt optimization that generates prompt variations on its own.

How to start

... continue reading