20 seconds to approve a military strike; 1.2 seconds to deny a health insurance claim. The human is in the AI loop. Humanity is not.

The cost of hollowing out human accountability. In the first twenty-four hours of the war with Iran, the United States struck a thousand targets. By the end of the week, the total exceeded three thousand, twice as many as in the "shock and awe" phase of the 2003 invasion of Iraq, according to Pete Hegseth. This unprecedented volume of strikes was made possible by artificial intelligence. U.S. Central Command (CENTCOM) insists that humans remain in the loop on every targeting decision, and that AI is there to help them make "smarter decisions faster." But what role humans can actually play when the systems operate at this pace is unclear.
Why This Matters
This article examines the rapid decision-making that AI now enables in military and civilian contexts, and the erosion of human accountability that can follow. As AI systems compress timelines for everything from military strikes to insurance claims, the human role grows increasingly ambiguous, posing ethical and safety challenges for the tech industry and consumers alike.
Key Takeaways
- AI drastically speeds up decision-making processes in critical sectors.
- The human role in AI-driven decisions is becoming less clear and more complex.
- There are significant ethical concerns about accountability and oversight with rapid AI deployment.