Published on: 2025-06-23 23:20:40
Beyond their everyday chat capabilities, Large Language Models are increasingly being used to make decisions in sensitive domains like hiring, health, law, and civic engagement. The exact mechanics of how we use these models in such scenarios are vital. There are many ways to have LLMs make decisions, including A/B decision-making, ranking, classification, "panels" of judges, and more, but every single method is individually fragile and subject to measurement biases that are rarely discussed…
Keywords: bias biases llms models prompt
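A minimal sketch of the fragility described above, assuming a pairwise A/B judging setup: the same comparison is run twice with the candidate order swapped, and only verdicts that survive the swap are trusted. The judge model, prompt wording, and helper names are illustrative assumptions, not the article's code.

```python
# Sketch: detecting position bias in pairwise LLM judging by order-swapping.
# Assumes the openai Python package (>=1.0) and OPENAI_API_KEY in the env.
from openai import OpenAI

client = OpenAI()

def judge(candidate_a: str, candidate_b: str) -> str:
    """Ask the model which candidate is better; returns 'A' or 'B'."""
    prompt = (
        "Which response is better? Answer with a single letter, A or B.\n\n"
        f"A: {candidate_a}\n\nB: {candidate_b}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice of judge model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()[:1].upper()

def debiased_judge(x: str, y: str) -> str:
    """Run both orderings; only trust verdicts that survive the swap."""
    first = judge(x, y)   # x is shown in slot A
    second = judge(y, x)  # y is shown in slot A
    if first == "A" and second == "B":
        return "x"        # x wins regardless of position
    if first == "B" and second == "A":
        return "y"
    return "inconsistent"  # verdict flipped with position: a bias signal
```

If `debiased_judge` returns "inconsistent" often, the measurement is dominated by position rather than content, which is exactly the kind of fragility the piece warns about.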
Published on: 2025-07-16 16:15:05
Google has agreed to pay $50 million to settle a lawsuit that accused the tech giant of systemic racial bias against Black employees, as reported by Reuters. The preliminary settlement was filed earlier this week but still requires a judge’s approval. The class action suit covers more than 4,000 employees. Plaintiffs involved in the suit said that Google operates a “racially biased corporate culture” that steers Black employees into lower-level jobs. The suit also accuses the company of paying…
Keywords: bias black employees google suit
Published on: 2025-08-01 07:32:36
Are LLMs random? While LLMs theoretically understand “randomness,” their training-data distributions may create unexpected patterns. In this article we test different LLMs from OpenAI and Anthropic to see whether they produce unbiased results. In the first experiment we have each model toss a fair coin; in the next, we have it guess a number between 0 and 10 and check whether the guesses are equally distributed between even and odd. I know the sample sizes are small and probably not very statistically significant…
Keywords: 50 bias claude heads odd
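As a hedged sketch of how such an experiment could be scored: run a chi-square goodness-of-fit test against a fair 50/50 split. The counts in the usage lines below are placeholders, not results from the article.

```python
# Sketch: testing whether an LLM's "coin flips" are consistent with fairness.
# Requires scipy; the example counts are made-up illustrations.
from scipy.stats import chisquare

def looks_fair(heads: int, tails: int, alpha: float = 0.05) -> bool:
    """True if we cannot reject the hypothesis of a fair coin at level alpha."""
    total = heads + tails
    _, p_value = chisquare([heads, tails], f_exp=[total / 2, total / 2])
    return p_value >= alpha

print(looks_fair(68, 32))  # False: 68/100 heads is a significant skew
print(looks_fair(52, 48))  # True: well within chance for 100 flips
```

The same test applies to the even/odd experiment by substituting the counts of even and odd guesses.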
Published on: 2025-08-13 08:31:37
Margaret Mitchell is a pioneer when it comes to testing generative AI tools for bias. She founded the Ethical AI team at Google, alongside another well-known researcher, Timnit Gebru, before they were both later fired from the company. She now works as the AI ethics leader at Hugging Face, a software startup focused on open source tools. We spoke about a new dataset she helped create to test how AI models continue perpetuating stereotypes. Unlike most bias-mitigation efforts that prioritize English…
Keywords: ai bias cultures different english
Published on: 2025-10-02 23:00:00
Despite recent leaps forward in image quality, the biases found in videos generated by AI tools, like OpenAI’s Sora, are as conspicuous as ever. A WIRED investigation, which included a review of hundreds of AI-generated videos, found that Sora’s model perpetuates sexist, racist, and ableist stereotypes in its results. In Sora’s world, everyone is good-looking. Pilots, CEOs, and college professors are men, while flight attendants, receptionists, and childcare workers are women. Disabled people…
Keywords: ai biases openai sora videos
Published on: 2025-10-24 01:16:26
Last year I created a fun little experiment where I asked a bunch of LLMs to rank 97 Hacker News users, based on their comment history, on whether they would be good candidates for the role of “software engineer at google”. (Yes, yes, it seems silly, I know; you can read part 1 and part 2, but they are long.) In it, I had a persistent problem of bias. I had arranged the comments in an interleaved fashion like this: Person one: What makes you think that? Person two: When I was a lad I remember stories…
Keywords: bias games like person tts
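One plausible control for the position bias the author describes, sketched below with hypothetical names: randomly assign which candidate gets the “Person one” label on each trial and keep the mapping, so label and position effects can be averaged out across runs.

```python
# Sketch: interleaving two comment histories with a randomized label
# assignment, so "Person one" is not always the same candidate.
import random

def build_transcript(a_comments: list[str], b_comments: list[str]):
    """Interleave two comment histories with randomly assigned labels.

    Returns the transcript plus a mapping from label back to candidate,
    so verdicts can be decoded and position effects averaged out.
    """
    swapped = random.random() < 0.5
    first, second = (b_comments, a_comments) if swapped else (a_comments, b_comments)
    mapping = {"Person one": "B" if swapped else "A",
               "Person two": "A" if swapped else "B"}
    lines = []
    for one, two in zip(first, second):
        lines.append(f"Person one: {one}")
        lines.append(f"Person two: {two}")
    return "\n".join(lines), mapping
```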
Published on: 2025-11-02 00:37:52
In Brief: Nvidia-backed data center company CoreWeave has acquired AI developer platform Weights & Biases for an undisclosed sum. According to The Information, CoreWeave spent $1.7 billion on the transaction. Weights & Biases was valued at $1.25 billion in 2023, and it recently filed for an IPO. Lukas Biewald, Chris Van Pelt, and Shawn Lewis founded Weights & Biases in 2017 to create tools for developing AI applications. Today, over 1,400 organizations, including AstraZeneca and Nvidia, use the company’s tools…
Keywords: ai biases coreweave customers weights
Published on: 2025-11-12 15:01:00
Recommendation algorithms operated by social media giants TikTok and X have shown evidence of substantial far-right political bias in Germany ahead of a federal election that takes place Sunday, according to new research carried out by Global Witness. The non-governmental organization (NGO) analyzed the social media content displayed to new users via algorithmically sorted “For You” feeds, finding that both platforms skewed heavily toward amplifying content that favors the far-right AfD party…
Keywords: bias content platforms political right
Go K’awiil is a project by nerdhub.co that curates technology news from a variety of trusted sources. We built this site because, although news aggregation is incredibly useful, many platforms are cluttered with intrusive ads and heavy JavaScript that can make mobile browsing a hassle. By hand-selecting our favorite tech news outlets, we’ve created a cleaner, more mobile-friendly experience.
Your privacy is important to us. Go K’awiil does not use analytics tools such as Facebook Pixel or Google Analytics. The only tracking occurs through affiliate links to amazon.com, which are tagged with our Amazon affiliate code, helping us earn a small commission.
We are not currently offering ad space. However, if you’re interested in advertising with us, please get in touch at [email protected] and we’ll be happy to review your submission.