Artificial Intelligence Development: Business Risks and Proposed Mitigations
The world is witnessing a surge in AI-generated fraud, particularly deepfakes: detected deepfakes increased tenfold worldwide from 2022 to 2023. As nations grapple with this challenge, they are adopting distinct regulatory frameworks to combat AI-generated fraud.
China leads with a comprehensive approach, mandating that all AI-generated or AI-altered content (image, video, audio, text, virtual reality) be labeled both visibly (e.g., a watermark or caption) and invisibly (e.g., an encrypted digital signature embedded in the file) to ensure traceability and combat fraud. Removing or altering these labels is illegal. Chinese law also explicitly prohibits producing or distributing AI-generated fake news and requires consent from individuals whose biometric data (such as facial features or voice) is edited to create deepfakes. Furthermore, data used to train AI must be obtained lawfully, and must not be acquired through fraud or by circumventing technical safeguards.
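To make the dual-labeling idea concrete, the sketch below stamps a visible caption onto an image and embeds machine-readable provenance metadata in the same file. This is a minimal illustration in Python, not an implementation of China's standard: the actual rules prescribe specific label formats and metadata fields, and the `provider_id` and digest fields here are hypothetical stand-ins for a properly signed provenance payload.

```python
# Illustrative sketch of dual labeling for AI-generated images: a visible
# caption plus invisible, machine-readable metadata. Field names are
# hypothetical; real regimes specify exact formats.
import hashlib
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_ai_image(in_path: str, out_path: str, provider_id: str) -> None:
    img = Image.open(in_path).convert("RGB")

    # Visible label: a caption drawn directly onto the image.
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), "AI-generated", fill=(255, 255, 255))

    # Invisible label: metadata embedded in the PNG file. A real system
    # would carry a standardized, cryptographically signed provenance
    # payload (e.g., C2PA-style) rather than a bare content hash.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("provider_id", provider_id)
    meta.add_text("content_digest", hashlib.sha256(img.tobytes()).hexdigest())

    img.save(out_path, pnginfo=meta)  # out_path should end in .png
```

Embedding the label in metadata rather than only in pixels is what makes downstream traceability checks possible, which is why tampering with these labels is itself prohibited.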
The European Union (EU) adopted the EU Artificial Intelligence Act in June 2024, a comprehensive legal framework for AI. It classifies AI systems into four risk tiers (minimal, limited, high, and unacceptable) and prohibits applications that pose unacceptable risk. All AI-generated media, including deepfakes, must be labeled to reduce misinformation. The regulation also imposes transparency, human oversight, non-discrimination, and environmental safety requirements on AI development and use.
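The tiering logic can be summarized in a few lines of code. The four tiers below come from the Act itself; the mapping of specific use cases to tiers is simplified and purely illustrative, not legal guidance.

```python
# Minimal sketch of the EU AI Act's four-tier risk model. The tiers are
# from the Act; the example use-case mapping is illustrative only.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1       # e.g., spam filters: no extra obligations
    LIMITED = 2       # e.g., chatbots, deepfakes: transparency/labeling duties
    HIGH = 3          # e.g., credit scoring: conformity assessment, oversight
    UNACCEPTABLE = 4  # e.g., social scoring: prohibited outright

USE_CASE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "synthetic_media_generator": RiskTier.LIMITED,
    "credit_scoring": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

def check_deployment(use_case: str) -> RiskTier:
    # Default conservatively to HIGH for unmapped use cases.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError(f"{use_case}: prohibited under the unacceptable-risk tier")
    return tier
```

Note that deepfake generators fall under the limited tier's transparency duties, which is where the Act's labeling requirement attaches.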
In contrast, the United States (US) currently has limited federal legislation on AI-generated content and addresses AI fraud and deepfakes primarily at the state level. Federal proposals such as the No AI FRAUD Act and the NO FAKES Act aim to close these gaps, but comprehensive federal rules remain pending.
The United Kingdom (UK) takes a similarly narrow approach, with laws targeting specific uses of deepfakes, such as legislation criminalizing sexually explicit deepfake content made without consent. There is no broad statutory framework mandating labeling or otherwise regulating AI-generated content, as there is in China or the EU.
| Jurisdiction | Key Regulation Highlights on AI-Generated Fraud | Focus/Scope |
|--------------|--------------------------------------------------|-------------|
| China | Mandatory visible/invisible labeling; ban on AI-generated fake news; biometric consent; lawful data sourcing; watermark protection | Comprehensive, broad AI content control; strong enforcement from September 2025 |
| EU | EU AI Act (2024): risk-based classification, mandatory labeling, transparency, human oversight, prohibition of unacceptable-risk AI | Comprehensive risk-based AI regulation, including labeling of deepfakes |
| US | Mostly state-level laws on specific deepfake types (e.g., audio, election propaganda); pending federal bills such as the No AI FRAUD Act | Limited federal regulation; fragmented state laws targeting certain fraud types |
| UK | Law criminalizing non-consensual sexually explicit deepfakes only | Narrow, issue-specific regulation of AI-generated fraud |
These regulatory approaches reflect differing balances between innovation promotion and harm mitigation. China and the EU emphasize broad transparency and content traceability, while the US and UK currently take narrower, content-specific legal approaches that mainly target harmful subsets of AI-generated fraud.
In the US, the Biden-Harris Administration's Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence is a significant attempt to govern the use of AI by federal agencies. In the EU, the AI Act (AIA) is a comprehensive, horizontal, cross-sector regulation of AI systems and models; the EU's broader regulatory package pairs it with the Revised Product Liability Directive and the proposed AI Liability Directive. China has enacted several major AI regulations, including the "Administrative Measures for Generative Artificial Intelligence Services", the "Internet Information Service Algorithmic Recommendation Management Provisions", and the "Regulations on the Administration of Deep Synthesis of Internet Information Services".
AI cuts both ways for businesses and criminals. In Slovakia, an AI-generated audio recording impersonated a politician and a journalist discussing how to rig an election, and Sumsub's Identity Fraud Report named AI-powered fraud the most trending type of attack in 2023. On the defensive side, AI can verify identities with biometrics. In the US, the NIST AI Risk Management Framework offers sound guidance for AI development and use, although adherence is voluntary.
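As a sketch of how biometric verification works in practice, the snippet below compares a live face embedding against an enrolled template using cosine similarity. The embedding model and the 0.6 threshold are assumptions for illustration; production systems tune thresholds per model and add liveness and deepfake detection, which is exactly where AI-generated fraud attacks such pipelines.

```python
# Hedged sketch of embedding-based biometric verification. The threshold
# and embedding source are illustrative assumptions, not a standard.
import numpy as np

MATCH_THRESHOLD = 0.6  # illustrative; tuned per model and false-match targets

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_identity(enrolled: np.ndarray, live: np.ndarray) -> bool:
    """Return True if the live capture matches the enrolled template."""
    return cosine_similarity(enrolled, live) >= MATCH_THRESHOLD
```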
The UK's AI strategy emphasizes a sectoral approach, in which the government delegates AI regulation to existing regulatory agencies, alongside a focus on AI safety, specifically the existential risks AI might present. In the EU, the proposed AI Liability Directive would allow non-contractual compensation claims against any person (providers, developers, users) for harm caused by AI systems.
In short, AI is at once a fraud vector and a fraud defense, and businesses must navigate a regulatory landscape that ranges from China's and the EU's comprehensive labeling and transparency regimes to the narrower, issue-specific rules of the US and UK.