ZDNET's key takeaways
Anthropic published its Education Report, analyzing educators' Claude usage.
Teachers are using Claude to help grade students, a controversial use case.
AI companies are doubling down on tools for education.

Much of the focus on AI in education is on how students will be affected by AI tools. Many are concerned that the temptation to cheat and AI's erosion of critical thinking skills will diminish the quality of their education. However, Anthropic's latest education report focuses on educators' outlook on AI in the classroom -- and finds some surprising ways teachers are implementing the tech.

Also: The tasks college students are using Claude AI for most, according to Anthropic

AI companies are aware of the tension users experience between using AI as a copilot or support and letting it automate certain parts of their work. Anthropic's analysis shows how educators are navigating that tension, and how those choices vary on a case-by-case basis.

To conduct the report, Anthropic analyzed anonymized conversations on Claude.ai, its chatbot, from Free and Pro accounts associated with higher-education email addresses, filtered for education-specific tasks from May and June of 2025. Within that time period, Anthropic identified 74,000 conversations involving tasks such as creating syllabi, grading assignments, and more. The company also matched each conversation to the most fitting task from the list of educational tasks in the US Department of Labor's Occupational Information Network (O*NET) database. Separately, Anthropic said it bolstered its analysis with survey data and qualitative research from 22 Northeastern University faculty members who are early adopters of AI. For a complete breakdown of the setup, you can read the report. Now for the findings.

Some educators are automating grading

Anthropic found that the most common use cases of AI for educators were curriculum development (57%) and academic research (13%). In a smaller but notable use case, another 7% of educators used Claude to "assess student performance," which includes giving students feedback, grading against rubrics, and summarizing evaluations -- despite the view many teachers share that grading is a poor fit for AI.

Also: AI agents arrive in US classrooms

When educators used Claude to grade, they relied on it to the point of automation nearly half the time -- 48.9%. "That's despite educator concerns about automating assessment tasks, as well as our surveyed faculty rating it as the area where they felt AI was least effective," Anthropic said.

By contrast, the report showed teachers used AI to augment tasks like teaching and instruction, writing grant proposals, academic advising, and supervising academic work. Besides grading, tasks with higher automation tendencies included managing educational institution finances and fundraising, maintaining student records, and managing academic admissions and enrollment -- many of which are more admin-heavy.

The pattern shows that educators are more willing to automate tedious, technical tasks. For tasks that require more complex and critical thinking, however, educators use AI to collaborate instead.
Also: Where AI educators are replacing teachers - and how that'll work

Anthropic added that the high rate of automation in grading is concerning -- essentially expressing alarm at the idea that educators are handing such a sensitive part of teaching off to AI. The concern is at least a partial acknowledgement that AI may not be recommended for such a task, and conveys some lack of confidence on Anthropic's part that Claude should be used this way.

One Northeastern professor Anthropic worked with agreed, citing ethical concerns and accuracy issues: "I have tried some experiments where I had an LLM grade papers, and they're simply not good enough for me. And ethically, students are not paying tuition for the LLM's time, they're paying for my time. It's my moral obligation to do a good job (with the assistance, perhaps, of LLMs)."

Anthropic noted that even though assessing student performance was the smallest use case, it was the second most automated task. "While it's not clear to what degree these AI-generated responses factor into the final grades and feedback, the interactions surfaced by our research do show some amount of delegation to Claude," Anthropic wrote.

Other ways teachers use AI

Other distinctive use cases found in the data included creating mock legal scenarios for educational simulations, developing workforce training content, drafting recommendation letters, and creating meeting agendas. While the Northeastern faculty reported using AI for their own learning as another common use case, the Claude.ai analysis was not able to confirm it because of limitations in the filtering mechanism.

The Northeastern faculty did suggest that educators are leveraging AI for these tasks because it can automate tedious work, collaborate as a thought partner, and personalize learning experiences for students, according to the report.

Also: My top 5 free AI tools for school - and how they can help supercharge your learning

Beyond using existing tools for classroom help, teachers are also building their own AI tools. For example, Anthropic said teachers often use its Artifacts feature, which allows users to create an app without coding, to build "interactive educational materials." These creations include interactive educational games, assessment and evaluation tools, data visualizations, academic calendars and scheduling tools, budget planners, and more.

AI's education creep

Just in time for back-to-school season, AI companies have been on a tear to release tools marketed toward both students and teachers. Anthropic recently launched a new Learning Mode in its Claude.ai chatbot and in Claude Code, a complement to OpenAI's Study Mode -- both employ the Socratic method to create a back-and-forth with the user rather than spitting out answers. Elsewhere, text-to-speech app Speechify launched a competitor to NotebookLM's AI podcast tool, and Google made its $20-per-month suite of AI tools free to college students.

Also: Why AI chatbots make bad teachers - and how teachers can exploit that weakness

Putting the debate about AI's role in education aside for a moment, a university contract can be lucrative -- as can creating a student dependency on your tools to get through a tough semester. Given how burnt out many teachers are, especially after the COVID-19-related exodus from the profession, is it really a surprise that some educators are changing their tune on automating parts of their job with AI?
Can an industry gunning to be in everyone's workflow, including the classroom, credibly express concern when that begins to happen, especially without policy restricting certain uses of AI, or guidance from those companies themselves? AI in the classroom is still too nascent to tell where this will go, but for now, AI companies are wading into -- and creating -- a complex future for education. Individual school and university policies may ultimately determine the outcome, but even then their control will be limited, as these tools remain so easily accessible.