Understanding Business Obligations: Global Privacy Regulations in 2026

18 February 2026

In 2026, compliance with global privacy and AI regulations is no longer optional. Businesses can no longer put these rules off the way they once did: following them is now essential to staying in business.

AI regulations are no longer just discussions or draft ideas. Governments around the world are actively enforcing strict laws, especially for companies using AI systems that process images, biometrics, or make automated decisions.

Regulatory readiness is no longer just a legal issue. It must be part of a company’s overall business strategy. For organizations working with image and video data, the risks are especially high.

In this blog, we explain how AI regulations work in 2026 and outline simple steps companies can take to stay compliant while continuing to innovate.

Globally, AI regulation is converging around three main pillars:

  • Risk classification
  • Data governance
  • Accountability

These principles are becoming the standard foundation across major regulatory frameworks worldwide.

1. EU Artificial Intelligence Act (Phased Enforcement)

Key dates:

  • Proposed by the European Commission: April 21, 2021
  • Prohibited AI practices banned: February 2, 2025
  • General-purpose AI/foundation model obligations: August 2025
  • High-risk AI obligations: 2026–2027 rollout

What changed in 2025: The EU AI Act moved from law on paper to active enforcement. AI uses deemed unacceptably risky, such as certain forms of remote biometric identification, became illegal in early 2025. Obligations for general-purpose AI models began during 2025, and stricter requirements for high-risk AI systems will continue rolling out through 2026–2027. Companies now need to actively comply, not just prepare.

Impact on visual data: If AI systems use faces, licence plates, or other identifiable visual data, companies need clear documentation, proper testing, and strong controls around how the data is collected, stored, and used. Using anonymisation or reducing identifiable data where possible helps lower compliance risk. 
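The idea of reducing identifiable data can be sketched in a few lines. The following is an illustrative example only, not a production anonymisation pipeline: it pixelates a rectangular region of a grayscale image represented as nested lists, and assumes the region coordinates come from a separate face or plate detector.

```python
# Illustrative sketch: pixelate a rectangular region of a grayscale image
# (list of rows of integer values). Region coordinates are assumed to come
# from a separate detection step; real systems use imaging libraries.

def pixelate_region(image, top, left, height, width, block=4):
    """Replace each block x block tile inside the region with its mean value."""
    out = [row[:] for row in image]  # copy so the original stays untouched
    for by in range(top, top + height, block):
        for bx in range(left, left + width, block):
            tile = [
                out[y][x]
                for y in range(by, min(by + block, top + height))
                for x in range(bx, min(bx + block, left + width))
            ]
            mean = sum(tile) // len(tile)
            for y in range(by, min(by + block, top + height)):
                for x in range(bx, min(bx + block, left + width)):
                    out[y][x] = mean
    return out

# Example: anonymise the top-left 2x2 region of a 4x4 image
image = [[i * 4 + j for j in range(4)] for i in range(4)]
anonymised = pixelate_region(image, top=0, left=0, height=2, width=2, block=2)
```

The detail inside the region is irreversibly averaged away while the rest of the frame stays usable, which is the property that makes anonymisation attractive for compliance.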

Source: European Commission – AI Act Framework

4. Québec Law 25 (Canada)

Adopted (Royal Assent): September 22, 2021

Final phase in force: September 22, 2024 (data portability right)

What changed in 2025: By 2025, the law was no longer being phased in. It became fully active. Companies are expected to follow it in practice, not just prepare for it.

Impact on visual data: Under Law 25, identifiable visual data such as faces counts as personal information. Organisations need a clear purpose and transparency around collection, must assess privacy impacts for higher-risk processing, and face specific obligations when biometric identification is used. Anonymising or minimising identifiable data reduces these obligations.

Source: Canadian Federation of Independent Business – Law 25 Overview

5. India’s Digital Personal Data Protection Act (DPDPA)

Act passed: 2023

Detailed rules announced: November 2025

What changed in 2025: The rules made the law clearer and more practical. Companies now have clearer guidance on consent, notices, and how long data can be stored.

Impact on visual data: If you use CCTV or AI image systems, you should clearly state why data is collected, limit how long you keep it, and make sure a proper consent or legal basis is in place.
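A retention limit like the one described above can be enforced with a simple scheduled check. This is a hedged sketch: the 30-day window and the record fields are illustrative assumptions, not values taken from the DPDPA; the actual period depends on your stated purpose and legal basis.

```python
# Sketch of a CCTV retention check. RETENTION_DAYS is an assumed policy
# value for illustration, not a statutory number from the DPDPA.
from datetime import datetime, timedelta

RETENTION_DAYS = 30

def expired_recordings(recordings, now=None):
    """Return IDs of recordings held past the retention window."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [r["id"] for r in recordings if r["captured_at"] < cutoff]

recordings = [
    {"id": "cam1-0001", "captured_at": datetime(2026, 1, 1)},
    {"id": "cam1-0002", "captured_at": datetime(2026, 2, 10)},
]
to_delete = expired_recordings(recordings, now=datetime(2026, 2, 18))
```

Running such a check on a schedule, and logging what was deleted and why, is exactly the kind of demonstrable practice regulators look for.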

Source: Press Information Bureau – DPDPA Rules Notification

6. US State Privacy Laws Taking Effect in 2025–2026

Key dates:

  • Iowa: January 1, 2025
  • Delaware: January 1, 2025
  • Tennessee: July 1, 2025
  • Montana updates: October 1, 2025
  • Indiana: January 1, 2026

What changed in 2025: More US states started enforcing privacy laws. Instead of one national law, companies often need to follow different rules depending on the state.

Impact on visual data: Videos or images that identify people may count as personal data. If AI is used for tracking, recognition, or analysis, companies usually need clear disclosure and ways for people to exercise their privacy rights.

Source: Usercentrics – US State Privacy Laws Overview

7. SEC Cybersecurity Disclosure Rules (US)

What changed by 2025:
Companies are now actively reporting incidents under this rule. Regulators expect clear and timely disclosure when a major cybersecurity issue happens.

Impact on visual data:
If systems storing video, surveillance data, or AI training images are hacked, companies may need to report the incident quickly and explain the impact.

Source: U.S. Securities and Exchange Commission – Cybersecurity Rules

8. ISO/IEC 42001:2023 (AI Management Systems)

Key dates:
• Published: 2023

What changed around 2025: More companies started using this standard to show they manage AI responsibly, even though it’s not a legal requirement.

Impact on visual data: Companies should keep records of where image or video data comes from, how it’s processed, how it’s protected, and how AI systems are monitored over time.
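The record-keeping described above can start as something very simple. The sketch below shows one possible shape for a dataset provenance entry; the field names are illustrative assumptions, not terms defined by ISO/IEC 42001.

```python
# Illustrative provenance record for a visual dataset, of the kind an AI
# management system under ISO/IEC 42001 encourages. Field names are
# assumptions for illustration, not terminology from the standard.
from dataclasses import dataclass, asdict

@dataclass
class DatasetRecord:
    name: str
    source: str       # where the image/video data came from
    processing: str   # e.g. what anonymisation was applied
    protection: str   # how the data is secured
    monitoring: str   # how downstream AI use is reviewed

record = DatasetRecord(
    name="street-scenes-2026",
    source="vehicle-mounted cameras, consented routes",
    processing="faces and licence plates anonymised before training",
    protection="encrypted at rest, role-based access",
    monitoring="quarterly model and data-use review",
)
log_entry = asdict(record)  # dict form, ready to write to an audit log
```

Even a lightweight record like this answers the questions an auditor asks first: where did the data come from, what was done to it, and who checks it over time.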

Source: ISO – ISO/IEC 42001:2023, Artificial Intelligence Management Systems

9. COPPA Amendments (US)

Key dates:
• Final rule published: April 22, 2025
• Effective: June 23, 2025
• Full compliance expected: April 2026

What changed in 2025: The rules were updated to better protect children’s data, including how AI systems handle it.

Impact on visual data: If your platform collects images or videos of children, stricter consent rules and data protections apply. Many companies will focus on compliance through 2026.

Source:
https://www.regulations.gov/docket/FTC-2022-0065

10. Australia Privacy Act Reform

Key dates:
• Major reforms started: December 10, 2024
• Some AI-related transparency rules begin: December 2026

What changed around 2025: Companies spent 2025 adjusting to the new privacy rules introduced in late 2024 and preparing for additional requirements coming in 2026.

Impact on visual data: Images, video, and biometric-type data now require clearer justification, stronger protection, and better transparency when AI is involved.

Source: Corrs Chambers Westgarth – Privacy Reform Overview 

Key Highlights: What This Means for Businesses in 2026

A key trend across all privacy regulations is the requirement for companies collecting visual data (images and video) for surveillance, AI training, or other purposes to demonstrate actual compliance, not just intent. 

All of these laws are heavily influenced by Europe’s GDPR and are expected to keep spreading globally. They share a common direction: visual data (images and videos) is increasingly treated as sensitive personal data. Together, they require businesses to understand their risks, manage data responsibly, and accept accountability for how it is used.

Faces, licence plates, behavioural patterns, and contextual details in images or video can trigger regulatory obligations across multiple regions. Treating such data as sensitive by default is an approach brighter AI has long advocated.

The practical approach is straightforward: understand the rules that apply to your market, document how your AI systems use data, minimise identifiable information where possible, and build transparency into how your technology operates. This reduces risk while allowing continued innovation. It is exactly where brighter AI can help, anonymising visual data so organisations can move forward with AI projects while staying aligned with evolving privacy requirements.

How brighter AI can help

If your organisation works with image or video data, anonymisation is often one of the simplest ways to reduce compliance risk while keeping data usable for AI development. brighter AI provides anonymisation solutions for faces and licence plates that help organisations meet privacy requirements while continuing to use visual data responsibly.

To learn more or discuss your use case, contact us

Sashwat Shreeja
Junior Marketing Manager
sashwat.shreeja@brighter.ai