The cloud giant Amazon Web Services experienced DNS resolution issues on Monday leading to cascading outages that took down wide swaths of the web. Monday’s meltdown illustrated the world’s fundamental reliance on so-called hyperscalers like AWS and the challenges for major cloud providers and their customers alike when things go awry. See below for more about how the outage occurred.
US Justice Department indictments in a mob-fueled gambling scam reverberated through the NBA on Thursday. The case includes allegations that a group backed by the mob was using hacked card shufflers to con victims out of millions of dollars—an approach that WIRED recently demonstrated in an investigation into hacking Deckmate 2 card shufflers used in casinos.
We broke down the details of the shocking Louvre jewelry heist and found in an investigation that US Immigration and Customs Enforcement likely did not buy guided missile warheads as part of its procurements. The transaction appears to have been an accounting coding error.
Meanwhile, Anthropic has partnered with the US government to develop mechanisms meant to keep its AI platform, Claude, from guiding someone through building a nuclear weapon. Experts have mixed reactions, though, about whether this project is necessary—and whether it will be successful. And new research this week indicates that a browser seemingly downloaded millions of times—known as the Universe Browser—behaves like malware and has links to Asia’s booming cybercrime and illegal gambling networks.
And there’s more. Each week, we round up the security and privacy news we didn’t cover in depth ourselves. Click the headlines to read the full stories. And stay safe out there.
AWS confirmed in a “post-event summary” on Thursday that its major outage on Monday was caused by Domain System Registry failures in its DynamoDB service. The company also explained, though, that these issues tipped off other problems as well, expanding the complexity and impact of the outage. One main component of the meltdown involved issues with the Network Load Balancer service, which is critical for dynamically managing the processing and flow of data across the cloud to prevent choke points. The other was disruptions to launching new “EC2 Instances,” the virtual machine configuration mechanism at the core of AWS. Without being able to bring up new instances, the system was straining under the weight of a backlog of requests. All of these elements combined to make recovery a difficult and time-consuming process. The entire incident—from detection to remediation—took about 15 hours to play out within AWS. “We know this event impacted many customers in significant ways,” the company wrote in its post mortem. “We will do everything we can to learn from this event and use it to improve our availability even further.”
The cyberattack that shut down production at global car giant Jaguar Land Rover (JLR) and its sweeping supply chain for five weeks is likely to be the most financially costly hack in British history, a new analysis said this week. According to the Cyber Monitoring Centre (CMC), the fallout from the attack is likely to be in the region of £1.9 billion ($2.5 billion). Researchers at the CMC estimated that around 5,000 companies may have been impacted by the hack, which saw JLR stop manufacturing, with the knock-on impact of its just-in-time supply chain also forcing firms supplying parts to halt operations as well. JLR restored production in early October and said its yearly production was down around 25 percent after a “challenging quarter.”
ChatGPT maker OpenAI released its first web browser this week—a direct shot at Google’s dominant Chrome browser. Atlas puts OpenAI’s chatbot at the heart of the browser, with the ability to search using the LLM and have it analyze, summarize, and ask questions of the web pages you’re viewing. However, as with other AI-enabled web browsers, experts and security researchers are concerned about the potential for indirect prompt injection attacks.
These sneaky, almost unsolvable, attacks involve hiding a set of instructions to an LLM in text or an image that the chatbot will then “read” and act upon; for instance, malicious instructions could appear on a web page that a chatbot is asked to summarize. Security researchers have previously demonstrated how these attacks could leak secret data.
Almost like clockwork, AI security researchers have demonstrated how Atlas can be tricked via prompt injection attacks. In one instance, independent researcher Johann Rehberger showed how the browser could automatically turn itself from dark mode to light mode by reading instructions in a Google Document. “For this launch, we’ve performed extensive red-teaming, implemented novel model training techniques to reward the model for ignoring malicious instructions, implemented overlapping guardrails and safety measures, and added new systems to detect and block such attacks,” OpenAI CISO Dane Stuckey wrote on X. “However, prompt injection remains a frontier, unsolved security problem, and our adversaries will spend significant time and resources to find ways to make ChatGPT agent[s] fall for these attacks.”
... continue reading