Chinese startup DeepSeek's latest experimental model promises to increase efficiency and improve AI's ability to handle large amounts of information at a fraction of the cost, but questions remain over how effective and safe the architecture is.

DeepSeek sent Silicon Valley into a frenzy when it launched its first model, R1, out of nowhere last year, showing that it is possible to train large language models (LLMs) quickly, on less powerful chips, using fewer resources.

The company released DeepSeek-V3.2-Exp on Monday, an experimental version of its current model DeepSeek-V3.1-Terminus, which builds further on its mission to increase efficiency in AI systems, according to a post on the AI forum Hugging Face.

"DeepSeek V3.2 continues the focus on efficiency, cost reduction, and open-source sharing," Adina Yakefu, Chinese community lead at Hugging Face, told CNBC. "The big improvement is a new feature called DSA (DeepSeek Sparse Attention), which makes the AI better at handling long documents and conversations. It also cuts the cost of running the AI in half compared to the previous version."

"It's significant because it should make the model faster and more cost-effective to use without a noticeable drop in performance," said Nick Patience, vice president and practice lead for AI at The Futurum Group. "This makes powerful AI more accessible to developers, researchers, and smaller companies, potentially leading to a wave of new and innovative applications."

The pros and cons of sparse attention

An AI model makes decisions based on its training data and new information, such as a prompt. Say an airline wants to find the best route from A to B: while there are many options, not all are feasible. By filtering out the less viable routes, the airline dramatically reduces the time, fuel and, ultimately, money needed to make the journey.
That is exactly what sparse attention does: it factors in only the data it thinks is important given the task at hand, whereas models until now have crunched all of the data available to them.

"So basically, you cut out things that you think are not important," said Ekaterina Almasque, the cofounder and managing partner of new venture capital fund BlankPage Capital.

Sparse attention is a boon for efficiency and the ability to scale AI, given that fewer resources are needed, but one concern is that it could make models less reliable, because there is little oversight of how and why a model discounts information.

"The reality is, they [sparse attention models] have lost a lot of nuances," said Almasque, who was an early supporter of Dataiku and Darktrace, and an investor in Graphcore. "And then the real question is, did they have the right mechanism to exclude not important data, or is there a mechanism excluding really important data, and then the outcome will be much less relevant?"

This could be particularly problematic for AI safety and inclusivity, the investor noted, adding that it may not be "the optimal one or the safest" AI model to use compared with competitors or traditional architectures.
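For readers who want to see the idea in concrete terms, the filtering described above can be sketched as "top-k" attention: score every stored token against the query, keep only the few highest-scoring ones, and compute the answer from those alone. The NumPy sketch below is a toy illustration of that general principle, not DeepSeek's actual DSA mechanism, whose internals are not described here; the function names and the choice of top-k selection are illustrative assumptions.

```python
import numpy as np

def dense_attention(q, K, V):
    """Standard attention: the query attends to every stored key."""
    scores = K @ q / np.sqrt(q.shape[0])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

def sparse_attention(q, K, V, k=8):
    """Toy sparse attention: keep only the k highest-scoring keys
    (the data 'judged important') and ignore the rest, cutting the
    work done per query on long sequences."""
    scores = K @ q / np.sqrt(q.shape[0])
    top = np.argsort(scores)[-k:]              # indices of the k best-matching keys
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()
    return weights @ V[top]                    # attend only to the selected subset

rng = np.random.default_rng(0)
K = rng.normal(size=(64, 16))   # 64 stored keys of dimension 16
V = rng.normal(size=(64, 16))   # 64 stored values
q = rng.normal(size=16)         # one query vector

dense = dense_attention(q, K, V)
sparse = sparse_attention(q, K, V, k=8)
print(dense.shape, sparse.shape)   # both produce a 16-dimensional output
```

The trade-off Almasque raises is visible in the sketch: the quality of the output depends entirely on whether `argsort` surfaces the genuinely important keys, since everything outside the top k is discarded before the weighted sum is ever computed.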