Latest Tech News

Stay updated with the latest in technology, AI, cybersecurity, and more

Filtered by: model

Developers Say GPT-5 Is a Mixed Bag

When OpenAI launched GPT-5 last week, it told software engineers the model was designed to be a “true coding collaborator” that excels at generating high-quality code and performing agentic, or automated, software tasks. While the company didn’t say so explicitly, OpenAI appeared to be taking direct aim at Anthropic’s Claude Code, which has quickly become many developers’ favored tool for AI-assisted coding. But developers tell WIRED that GPT-5 has been a mixed bag so far. It shines at technical …

Open-Sourced AI Models May Be More Costly in the Long Run, Study Finds

As more businesses adopt AI, choosing which model to go with is a major decision. While open-source models may seem cheaper initially, a new study warns that those savings can evaporate fast due to the extra computing power they require. In fact, open-source AI models burn through significantly more computing resources than their closed-source rivals when performing the same tasks, according to a study published Thursday by Nous Research. The researchers tested dozens of AI models, including …
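The cost dynamic the study describes can be sketched with toy arithmetic: a model that is cheaper per token can still cost more overall if it burns more tokens on the same task. The prices and token counts below are invented for illustration, not figures from the Nous Research study.

```python
# Illustrative only: hypothetical per-token prices and token counts,
# not figures from the Nous Research study.
def inference_cost(tokens_used: int, price_per_million: float) -> float:
    """Total cost in dollars for a given number of tokens."""
    return tokens_used / 1_000_000 * price_per_million

# Suppose a closed model charges more per token but answers tersely...
closed = inference_cost(tokens_used=2_000_000, price_per_million=3.00)
# ...while an open model is cheaper per token but "thinks out loud".
open_ = inference_cost(tokens_used=9_000_000, price_per_million=1.00)

print(f"closed: ${closed:.2f}, open: ${open_:.2f}")
# The cheaper per-token model ends up more expensive overall.
```

The point is that per-token price alone is a misleading unit of comparison; total tokens consumed per task matters just as much.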

Sam Altman Says ChatGPT Is on Track to Out-Talk Humanity

Never mind the GPT-5 complaints; Sam Altman says he believes ChatGPT is on track to have more conversations per day than all human beings combined. “If you project our growth forward, pretty soon billions of people a day will be talking to ChatGPT,” said the CEO of OpenAI during a dinner with journalists in San Francisco. “ChatGPT will be having more conversations, maybe, than all human words put together, at some point. I think it's unreasonable to expect a single model personality or style to …”

GPT-5 failed the hype test

Last week, on GPT-5 launch day, AI hype was at an all-time high. In a press briefing beforehand, OpenAI CEO Sam Altman said GPT-5 is “something that I just don’t wanna ever have to go back from,” a milestone akin to the first iPhone with a Retina display. The night before the announcement livestream, Altman posted an image of …

That ‘cheap’ open-source AI model is actually burning through your compute budget

A comprehensive new study has revealed that open-source artificial intelligence models consume significantly more computing resources than their closed-source competitors when performing identical tasks, potentially undermining their cost advantages and reshaping how enterprises evaluate AI deployment strategies. The research, conducted by …

Apple trained an LLM to teach itself good UI code in SwiftUI

In a new study, a group of Apple researchers describe a very interesting approach they took to, basically, get an open-source model to teach itself how to build good user interface code in SwiftUI. Here’s how they did it. In the paper UICoder: Finetuning Large Language Models to Generate User Interface Code through Automated Feedback, the researchers explain that while LLMs have gotten better at multiple writing tasks, including creative writing and coding, they still struggle to “reliably generate …”
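The loop the paper's title hints at (generate candidate programs, keep only those that pass automated checks, finetune on the survivors) can be sketched as follows. The function names and the toy "compiler" check are invented stand-ins, not Apple's actual pipeline:

```python
# Schematic generate-filter-finetune loop in the spirit of UICoder.
# All names and the toy pass/fail rule are invented for illustration.
def generate_candidates(model, prompt, n=3):
    # Stand-in for sampling n SwiftUI programs from the LLM.
    return [f"{prompt}-candidate-{i}" for i in range(n)]

def passes_feedback(program: str) -> bool:
    # Stand-in for automated feedback (e.g. compiling the output
    # and scoring the rendered UI); here, a toy rule keeps some outputs.
    return program.endswith("0")

def build_finetune_set(model, prompts):
    """Keep only (prompt, program) pairs that survive the checks."""
    kept = []
    for prompt in prompts:
        for cand in generate_candidates(model, prompt):
            if passes_feedback(cand):
                kept.append((prompt, cand))
    return kept

dataset = build_finetune_set(model=None, prompts=["login-screen", "settings"])
print(dataset)  # pairs that would seed the next finetuning round
```

In the real pipeline the filtered dataset is used to finetune the model, and the improved model regenerates a cleaner dataset, repeating over several rounds.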

The new science of “emergent misalignment”

If there’s an upside to this fragility, it’s that the new work exposes what happens when you steer a model toward the unexpected, Hooker said. Large AI models, in a way, have shown their hand in ways never seen before. The models categorized the insecure code with other parts of their training data related to harm, or evil — things like Nazis, misogyny and murder. At some level, AI does seem to separate good things from bad. It just doesn’t seem to have a preference. …

GPT-5's rollout fell flat for consumers, but the AI model is gaining where it matters most

Sam Altman turned OpenAI into a cultural phenomenon with ChatGPT. Now, three years later, he's chasing where the real money is: enterprise. Last week's rollout of GPT-5, OpenAI's newest artificial intelligence model, was rocky. Critics bashed its less-intuitive feel, ultimately leading the company to restore its legacy GPT-4o model for paying chatbot customers. But GPT-5 isn't about the consumer. It's OpenAI's effort to crack the enterprise market, where rival Anthropic has enjoyed a head start …

OpenAI relaxes GPT-5 rate limit, promises to improve the personality

OpenAI is slowly addressing the concerns around GPT-5, including rate limits and now its personality, which has been criticized for being less affirmative. In a support document, OpenAI confirmed it has restored the older models for paid customers, so you can now use GPT-4o, o3, and more; you just need to open the model selector and choose one of the models under “Legacy models.” In addition, GPT-5 automatically switches between Fast and Thinking, and you can also choose additional GPT-5 options …

All Souls exam questions and the limits of machine reasoning

Oxford University is immersed in the past like no other place I’ve seen. One example: when I was a visiting student at Oxford in 2005, I remember meeting two students at a pub one evening. They were drinking ivy-laced beer. The reason, I was told, is that centuries ago, a student from Lincoln College had murdered a student of Brasenose. Ever since then, Brasenose students had been allowed into Lincoln and given free beer once a year. …

Gartner: GPT-5 is here, but the infrastructure to support true agentic AI isn’t (yet)

Here’s an analogy: freeways didn’t exist in the U.S. until after 1956, when President Dwight D. Eisenhower’s administration envisioned the Interstate system. Yet super-fast, powerful cars from Porsche, BMW, Jaguar, Ferrari and others had been around for decades. You could say AI is at that same pivot point: while models are becoming increasingly capable …

Google releases pint-size Gemma open AI model

Big tech has spent the last few years creating ever-larger AI models, leveraging rack after rack of expensive GPUs to provide generative AI as a cloud service. But tiny AI matters, too. Google has announced a tiny version of its Gemma open model designed to run on local devices. Google says the new Gemma 3 270M can be tuned in a snap and maintains robust performance despite its small footprint. Google released its first Gemma 3 open models earlier this year, featuring between 1 billion and 27 billion …

Google unveils ultra-small and efficient open source AI model Gemma 3 270M that can run on smartphones

Google’s DeepMind AI research team has unveiled a new open source AI model today, Gemma 3 270M. As its name would suggest, this is a 270-million-parameter model — far smaller than the 70 billion or more parameters of many frontier LLMs (parameters being the number of internal settings governing the model’s behavior). While more parameters …

Gemma 3 270M: Compact model for hyper-efficient AI

Today, we're adding a new, highly specialized tool to the Gemma 3 toolkit: Gemma 3 270M, a compact, 270-million parameter model designed from the ground up for task-specific fine-tuning, with strong instruction-following and text-structuring capabilities already trained in. The last few months have been an exciting time for the Gemma family of open models. We introduced Gemma 3 and Gemma 3 QAT, delivering state-of-the-art performance for single cloud and desktop accelerators. Then, we announced …

Scientists Taught AI to Predict Nuclear Fusion Success—and It’s Actually Working

AI is giving a huge efficiency boost to one of the biggest nuclear fusion facilities in the world—but perhaps not in the way you think. In research published today in Science, scientists at Lawrence Livermore National Laboratory report how their newly developed deep learning model accurately predicted the results of a 2022 fusion experiment at the National Ignition Facility (NIF). The model, which assigned a 74% probability of ignition in that experiment, outperforms traditional supercomputing methods …

Why LLMs can't really build software

One of the things I have spent a lot of time doing is interviewing software engineers. This is obviously a hard task, and I don’t claim to have a magic solution; but it’s given me some time to reflect on what effective software engineers actually do. When you watch someone who knows what they are doing, you'll see them looping over the following steps: build a mental model of the requirements; write code that (hopefully?!) does that; build a mental model of what the code actually does; identify the differences …
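The loop described above can be caricatured in code. Everything here is an invented stand-in (toy sets play the role of "mental models"); the point is only the shape of the iteration:

```python
# Toy sketch of the engineer's loop: compare the mental model of the
# requirements against the mental model of the code, and reconcile.
def understand(requirements):        # mental model of the requirements
    return set(requirements)

def write_code(model):               # first attempt covers part of it
    return set(list(model)[:1])

def read_code(code):                 # mental model of what the code does
    return set(code)

def diff(wanted, actual):            # where the two models disagree
    return wanted - actual

def revise(code, gaps):              # reconcile: address one gap at a time
    return code | {next(iter(gaps))}

def engineer(requirements, max_iters=10):
    wanted = understand(requirements)
    code = write_code(wanted)
    for _ in range(max_iters):
        gaps = diff(wanted, read_code(code))
        if not gaps:
            return code              # the two mental models now agree
        code = revise(code, gaps)
    return code

print(engineer(["parse input", "handle errors", "log output"]))
```

The author's argument, in these terms, is that LLMs struggle to hold `wanted` and `read_code(code)` in mind simultaneously and to notice when they diverge.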

Stop using these ESR power banks that have been recalled for fire and explosion risks

ESR has issued a recall for 33,000 HaloLock wireless power banks, in 6,000mAh and 10,000mAh versions, because their lithium-ion batteries can “overheat and ignite, posing fire and burn hazards to consumers.” The power banks were cheaper alternatives to Apple’s …

Is chain-of-thought AI reasoning a mirage?

Reading research papers and articles about chain-of-thought reasoning makes me frustrated. There are many interesting questions to ask about chain-of-thought: how accurately it reflects the actual process going on, why training it “from scratch” often produces chains that switch fluidly between multiple languages, and so on. However, people keep asking the least interesting question possible: whether chain-of-thought reasoning is “really” reasoning. Apple took up this question in their Illusion …

Buzzy AI startup Multiverse creates two of the smallest high-performing models ever

One of Europe’s most prominent AI startups has released two AI models that are so tiny, they have named them after a chicken’s brain and a fly’s brain. Multiverse Computing claims these are the world’s smallest models that are still high-performing and can handle chat, speech, and even reasoning in one case. These new tiny models are intended to be embedded into Internet of Things devices, as well as run locally on smartphones, tablets, and PCs. “We can compress the model so much that they can …”
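For a flavor of what "compressing the model" can mean in general, here is a minimal sketch of 8-bit weight quantization, one common shrinking technique. This is not Multiverse's actual method, and all the numbers are invented; it only illustrates how reducing numeric precision cuts model size:

```python
# Generic illustration of model compression via 8-bit quantization.
# NOT Multiverse's technique; invented numbers for illustration only.
def quantize(weights, bits=8):
    """Map floats to integers in [-(2**(bits-1)-1), 2**(bits-1)-1]."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [x * scale for x in q]

w = [0.03, -1.27, 0.5, 0.999]
q, s = quantize(w)
restored = dequantize(q, s)
# 8-bit ints take a quarter of the space of 32-bit floats, at the cost
# of small rounding error in the restored weights.
print(q, [round(x, 3) for x in restored])
```

Production compression pipelines combine tricks like this with pruning, distillation, and more exotic factorizations, but the size/accuracy trade-off is the same.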

What's the strongest AI model you can train on a laptop in five minutes?

What’s the strongest model I can train on my MacBook Pro in five minutes? I’ll give the answer upfront: the best 5-minute model I could train was a ~1.8M-param GPT-style transformer trained on ~20M TinyStories tokens, reaching ~9.6 perplexity on a held-out split. Here’s an example of the output, with the prompt bolded: Once upon a time, there was a little boy named Tim. Tim had a small box that he liked to play with. He would push the box to open. One day, he found a big red ball in his yard.
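The perplexity figure quoted above is the exponential of the mean per-token negative log-likelihood on the held-out split. A minimal sketch, with made-up probabilities:

```python
import math

# Perplexity = exp(mean negative log-likelihood per token).
# The probabilities below are made up for illustration.
def perplexity(token_probs):
    """token_probs: model probability assigned to each actual next token."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# A model that always assigns p = 0.104 to the true token has
# perplexity 1/0.104, matching the ~9.6 figure quoted above.
print(round(perplexity([0.104] * 100), 1))  # 9.6
```

Intuitively, perplexity ~9.6 means the model is, on average, about as uncertain as if it were choosing uniformly among ~10 tokens at each step.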

Upcoming DeepSeek AI model failed to train using Huawei’s chips

Chinese artificial intelligence company DeepSeek delayed the release of its new model after failing to train it using Huawei’s chips, highlighting the limits of Beijing’s push to replace US technology. DeepSeek was encouraged by authorities to adopt Huawei’s Ascend processor rather than use Nvidia’s systems after releasing its R1 model in January, according to three people familiar with the matter. But the Chinese startup encountered persistent technical issues during its R2 training process …

Mbodi AI (YC X25) Is Hiring a Founding Research Engineer (Robotics)

Description: Join Mbodi AI (YC X25), an AI robotics startup founded by two former Googlers committed to pushing the boundaries of intelligent robots. Mbodi is an embodied-AI platform that lets robots learn like humans, through natural language, so anyone can teach robots new skills by talking to them and have them execute the learned skills reliably in production, in minutes. We are pioneering the next wave of robotics, where advanced generative models meet real-world applications. Backed by top investors …

What to expect at Apple's iPhone 17 event

We're likely only around a month away (give or take) from Apple's next iPhone launch event. This year's shindig could see the thinnest iPhone to date joining the iPhone 17 lineup. Also on tap could be new Apple Watch models — including the first Ultra model in two years — and (maybe) the long-awaited AirPods Pro 3. Apple's iPhone family will likely welcome a new member this year. The iPhone Air is expected to be roughly 5.55 mm thick. The thinnest model so far has been 2014's iPhone 6, at 6.9 mm …

Why You Can’t Trust a Chatbot to Talk About Itself

When something goes wrong with an AI assistant, our instinct is to ask it directly: “What happened?” or “Why did you do that?” It's a natural impulse—after all, if a human makes a mistake, we ask them to explain. But with AI models, this approach rarely works, and the urge to ask reveals a fundamental misunderstanding of what these systems are and how they operate. A recent incident with Replit's AI coding assistant perfectly illustrates this problem. When the AI tool deleted a production database …

Hooray! ChatGPT Plus brings back legacy models alongside an updated GPT-5 experience

GPT-5 has faced a wave of criticism recently, both from everyday users and reviewers like our very own Calvin Wankhede here at Android Authority. Much of this feedback centered on the new model feeling more curt and having less personality. OpenAI responded quickly, addressing performance, personality, and usage limit issues — improving the overall experience significantly. Now, a fresh update makes things even better, at least for ChatGPT Plus subscribers. OpenAI has greatly expanded GPT-5’s …

After owning every Google Pixel flagship, here's why 2025 will be a turning point for me

ZDNET's key takeaways: The Google Pixel 10 is expected to receive significant upgrades this year, including a dedicated telephoto lens. Greater feature parity with the Pro models, combined with no expected price increases, makes the standard Pixel an enticing option. It still won't be the best option for power users, especially if you want the most capable camera system from Google. Google's non-Pro Pixel phone has always been the "safe pick." It's the model I recommend to …

Scientists Are Getting Seriously Worried That We've Already Hit Peak AI

The long-awaited release of OpenAI's GPT-5 has gone over with a wet thud. Though the private sector continues to dump billions into artificial intelligence development, hoping for exponential gains, the research community isn't convinced. Speaking to The New Yorker, Gary Marcus, a cognitive scientist and longtime critic of OpenAI, said what many have come to suspect: despite years of development at a staggering cost, AI doesn't seem to be getting much better. Though GPT-5 technically performs …

I tested GPT-5 and now I get why the Internet hates it. Is it time to ditch ChatGPT?

After years of rumors and speculation, OpenAI’s next-gen GPT-5 language model is finally here. But while many of those early rumors claimed that the next major ChatGPT model would achieve artificial general intelligence, or AGI, that’s not the case. GPT-5 does not surpass human-level intelligence, although it’s smarter and more capable than any of its predecessors. Despite the improvements, however, it has garnered significant and widespread backlash across the …