IRS Adds 'AI Abuses' to Its 2026 Dirty Dozen Scam List as Tax Season AI Risks Mount
The IRS warns that 45% of AI-generated tax responses contain significant errors, while AI-powered phishing and identity theft schemes reach unprecedented sophistication during the 2026 filing season.
Key Takeaways
- The IRS has added 'AI abuses' to its 2026 Dirty Dozen scam list for the first time, warning of both AI-powered phishing targeting taxpayers and fraudulent tax returns generated using AI tools, a shift that highlights how generative AI has lowered the barrier to financial fraud.
For the first time, the Internal Revenue Service has included 'AI abuses' on its annual 'Dirty Dozen' list of the most common tax scam categories. The move reflects the growing risks artificial intelligence poses to taxpayers during the 2026 filing season, both from malicious fraud schemes and from well-intentioned but inaccurate AI-assisted tax preparation.
The Two-Sided AI Tax Risk
The IRS warning addresses two distinct categories of AI-related risk. The first is the use of general-purpose AI chatbots — such as ChatGPT, Gemini, and Claude — for tax advice and preparation. A 2026 study found that 45% of AI-generated responses to tax questions contained at least one significant error, and approximately one-third included misleading information that could lead to incorrect filings.
The second risk category involves AI-powered fraud schemes targeting taxpayers. Criminals are using AI to generate hyper-personalized phishing emails, create convincing deepfake IRS communications, and automate identity theft operations at unprecedented scale and sophistication.
Why AI Gets Taxes Wrong
Tax law is notoriously complex, context-dependent, and frequently updated. General-purpose AI models lack the specialized training data, real-time regulatory awareness, and nuanced judgment required for accurate tax guidance. When these models encounter ambiguous scenarios, which make up the majority of nontrivial tax questions, they tend to generate plausible-sounding but incorrect advice.
Critically, the IRS has made clear that taxpayers bear full legal responsibility for the accuracy of their returns, regardless of whether they relied on AI assistance. Receiving incorrect advice from a chatbot such as ChatGPT is not a defense against penalties, interest, or lost refunds.
AI-Powered Tax Fraud at Scale
AI-powered fraud has become the IRS's fastest-growing concern. Cybercriminals use AI to generate emails that perfectly mimic IRS formatting, tone, and language — including personalized details gleaned from publicly available data. They create deepfake phone calls from IRS 'agents' and automated systems that can process thousands of fraudulent returns in hours. The combination of AI sophistication and scale makes these schemes far more dangerous than traditional tax fraud.
IRS Recommendations
- Never upload Social Security numbers, W-2s, or financial documents to general-purpose AI chatbots
- Use IRS-approved tax preparation software or licensed professionals instead of AI assistants
- Verify any IRS communication through official channels (irs.gov) before responding
- Report suspected AI-generated phishing by forwarding suspicious emails to phishing@irs.gov
- Remember that the IRS never initiates contact via email, text, or social media