Tech News

Canada's Immigration Rejected Applicant Based On AI-Invented Job Duties

Why This Matters

This incident highlights the potential risks and challenges of integrating generative AI into government decision-making processes, especially in sensitive areas like immigration. It underscores the importance of rigorous oversight and verification to prevent AI-generated errors from impacting individuals' lives. For the tech industry, it emphasizes the need for responsible AI deployment and transparency to maintain public trust.


New submitter haroldbasset writes: Canada's Immigration Department rejected an applicant because the duties of her current job did not match the Canadian work experience she had claimed -- but the department's AI assistant had invented that work experience. She has been working in Canada as a health scientist, with a Ph.D. in the immunology of aging, but the AI genius instead described her as "wiring and assembling control circuits, building control and robot panels, programming and troubleshooting."

"It's believed to be the first time that the department explicitly referred to the use of generative AI to support application processing in immigration refusals," reports the Toronto Star. "The disclaimer also noted that all generated content was verified by an officer and that generative AI was not used to make or recommend a decision." The applicant's lawyer was shocked at "how any human being could make this decision." "Somehow, it hallucinated my client's job description," he said. "I would love to see what the officer saw. Something seriously went wrong here."

The refusal came just as Canada's Immigration Department released its first AI strategy, which frames artificial intelligence as a way to improve efficiency, service delivery, and program integrity. The department says it has long used digital tools like analytics and automation to flag fraud risks and triage applications, and is now also experimenting with generative AI for tasks such as research, summarizing, and analysis. In this case, however, the department insisted that the decision was made by a human officer and that generative AI was not involved in the final decision.

Read more of this story at Slashdot.