The EU AI Act, which entered into force in August 2024, is the world's first comprehensive, risk-based AI regulation. It bans certain "unacceptable risk" practices (e.g., manipulative techniques), imposes strict compliance obligations on "high-risk" AI systems (such as those used in healthcare or law enforcement), and applies lighter transparency requirements to general-purpose AI (GPAI) models like those behind ChatGPT. Compliance deadlines are phased in through 2027, giving businesses and regulators time to adapt. Critics argue the law may hinder innovation, but the EU views it as essential for safe AI adoption. Penalties for the most serious violations reach up to 7% of global annual turnover.