The Multi-Million Dollar Snapshot: Why Visual Data Breaches Are Costing Companies Fortunes and How to Avoid Them

7 February 2026

Today, image and video data play an important role across industries such as healthcare, security, mobility, and smart-city infrastructure. They help optimise daily operations and generate valuable AI-driven insights, for example making patient care in healthcare systems more accurate and efficient. At the same time, they represent one of the highest-risk data types under the GDPR.

How image and video data are classified under the GDPR:

Article 4(1) – Personal Data
Images and videos are considered personal data whenever a person can be identified, directly or indirectly. As we have explained in our previous blogs, this includes identification through facial features, body shape, tattoos, licence plates, locations, or contextual cues. CCTV footage, bodycam recordings, medical images, and marketing photos typically fall under this definition.

Article 9 – Special Categories of Personal Data (in certain cases)
Image and video data may become special-category data when they reveal sensitive information, such as:

  • Health status (e.g. medical images, hospital footage)
  • Biometric data that can be used for identification (e.g. facial recognition)
  • Religious beliefs, ethnicity, or sexual orientation inferred from visuals

Processing such data is prohibited by default unless a specific exemption applies, significantly increasing compliance obligations and potential fines.

Article 5 – Data Processing Principles
Image and video data are subject to core GDPR principles, including:

  • Lawfulness, fairness, and transparency
  • Purpose limitation
  • Data minimisation
  • Integrity and confidentiality

Many enforcement actions related to visual data stem from violations of these principles rather than from breaches alone.

Article 32 – Security of Processing
Controllers and processors must implement appropriate technical and organisational measures to protect image and video data. Failures in access control, encryption, or system design, especially at scale, are common triggers for regulatory action.

Why does this classification matter?

Because image and video data often qualify as personal or even special-category data, they carry higher inherent GDPR risk than many other data types. Once exposed, leaked, or misused, visual data cannot realistically be anonymised retroactively. This is why regulators increasingly expect organisations to apply privacy-by-design measures, such as anonymisation and strict access control, from the outset, particularly in high-risk environments like healthcare, surveillance, and mobility.

Let’s look at some real cases involving image and video data failures:

Lehigh Valley Health Network (2024): medical images exposed through ransomware

In 2024, LVHN agreed to a USD 65 million settlement after a 2023 ransomware attack led to the exfiltration and publication of nude medical photographs of its cancer patients. These images are among the most sensitive forms of personal data, revealing both health status and directly identifying features such as faces. The incident is a prime example of a fundamental failure to protect visual data at rest: once attackers gained access, the data were immediately usable. Had the images been anonymised before storage or processing, the breach would still have been serious, but the privacy impact and regulatory consequences would have been significantly reduced. (Source)

Verkada (2024): live camera feeds exposed through compromised credentials

In 2024, Verkada was fined USD 2.95 million by the Federal Trade Commission (FTC) after attackers gained access to administrative credentials and viewed live video feeds from approximately 150,000 cameras across customer locations, including hospitals, factories, offices, and schools.

The breach exposed live footage of employees, patients, and visitors. Although compromised credentials were the immediate cause, the core problem was centralised, insufficiently protected access to large volumes of identifiable video data. This illustrates how weak security and a lack of data minimisation amplify the impact of access control failures. (Source, Report from the Company)

Cadia Healthcare (2025): patient photos reused for marketing

In 2025, the U.S. HHS Office for Civil Rights (OCR) settled with Cadia Healthcare Facilities for USD 182,000 over HIPAA violations. Cadia had posted “success stories” containing patient names, photos, conditions, treatments, and recovery details on its websites and social media without valid written authorisations, affecting around 150 patients.

Unlike ransomware or hacking incidents, this case involved internal misuse rather than external attackers. Images originally captured in a healthcare context were later repurposed for marketing, violating purpose limitation and lawful processing requirements. It demonstrates that regulatory risk arises not only from breaches, but also from everyday workflows in which systems allow identifiable visual data to be reused beyond their original purpose. (Source)

Miljø- og Kvalitetsledelse AS (2021): CCTV footage shared without a lawful basis

In 2021, the Norwegian Data Protection Authority fined Miljø- og Kvalitetsledelse AS about EUR 3,500 for unlawfully sharing CCTV footage of one identifiable person with their employer. This case highlights that even minimal visual data disclosure without a lawful basis is a GDPR violation, regardless of scale. (Source)

What do these cases have in common?

Across healthcare, surveillance, marketing, and mobility, a consistent compliance gap emerges: identifiable image, video, or location data were stored or processed in ways that made them accessible beyond their intended use.

Once exposed, the data could not be removed from circulation, compounding both the privacy harm and the regulatory consequences. Data linked to identifiable individuals sit firmly within high-risk GDPR territory.

Beyond breaches: unlawful collection and retention

GDPR enforcement applies not only to data breaches, but also to unlawful collection and retention. Clearview AI, for example, has been fined more than EUR 50 million across multiple European jurisdictions for collecting and processing facial images scraped from the web without a lawful basis, rather than for any specific data leak. (Source)

This demonstrates that regulatory risk exists both when personal data are exposed and when they should never have been collected in identifiable form. 

How could these incidents have been prevented?

Many of the penalties, settlements, and exposures above could have been avoided or mitigated by adopting privacy-by-design principles and privacy-by-default configurations:

  1. AI-powered anonymisation at source
    Automatically anonymising faces, bodies, or licence plates before storage or processing ensures that even if systems are breached, the data cannot be used to re-identify individuals (a minimal sketch follows this list).
  2. Built-in lawful basis enforcement and purpose limitation
    Embedding brighter AI’s TISAX-aligned technology into workflows ensures that personal data are only processed when a documented lawful basis exists, and only for defined purposes (see the second sketch below).
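
As a minimal sketch of the first measure, the Python example below blurs detected faces before a frame is ever written to storage. It uses OpenCV’s bundled Haar cascade detector for brevity; production-grade anonymisation relies on far more robust deep-learning detection, and the file names here are placeholders.

```python
import cv2

# OpenCV ships a pre-trained Haar cascade for frontal face detection.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def anonymise_frame(frame):
    """Blur every detected face in a BGR frame before it is stored."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        region = frame[y:y + h, x:x + w]
        # A heavy Gaussian blur makes the face unrecognisable while the
        # rest of the frame stays usable for analytics.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 30)
    return frame

# Anonymise an image before it ever reaches storage (placeholder file names).
image = cv2.imread("camera_capture.jpg")
if image is not None:
    cv2.imwrite("camera_capture_anon.jpg", anonymise_frame(image))
```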

When visual data is anonymised and secured from the outset, incidents like ransomware exfiltration or cloud misconfiguration are far less likely to result in GDPR violations or regulatory action.
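
The second measure, purpose limitation, can likewise be enforced in code rather than in policy documents alone. The sketch below is hypothetical (the register, function, and exception names are invented for illustration): every processing call must declare its purpose, which is checked against a documented record of processing activities before any data are touched, so a reuse like the Cadia marketing case fails before it happens.

```python
# Hypothetical register of processing activities: each dataset documents its
# lawful basis (GDPR Art. 6) and the purposes it was collected for.
PROCESSING_REGISTER = {
    "cctv_entrance": {
        "lawful_basis": "legitimate_interest",
        "allowed_purposes": {"security_monitoring"},
    },
}

class PurposeViolation(Exception):
    """Raised when processing lacks a lawful basis or exceeds its purpose."""

def require_purpose(dataset_id: str, purpose: str) -> None:
    """Refuse processing unless the declared purpose matches the record."""
    record = PROCESSING_REGISTER.get(dataset_id)
    if record is None or not record.get("lawful_basis"):
        raise PurposeViolation(f"No documented lawful basis for '{dataset_id}'")
    if purpose not in record["allowed_purposes"]:
        raise PurposeViolation(f"Purpose '{purpose}' is not covered for '{dataset_id}'")

require_purpose("cctv_entrance", "security_monitoring")  # passes silently
try:
    require_purpose("cctv_entrance", "marketing_campaign")  # blocked
except PurposeViolation as err:
    print(f"Blocked: {err}")
```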

Protecting privacy in a connected world

As regulations such as the GDPR and emerging frameworks like the EU Data Act shape how personal and machine-generated data can be used, organisations handling image, video, and location data need more than policies on paper. They need technology that enforces compliance by design:

  • Continuous monitoring and cloud configuration validation
  • Automated anonymisation with AI models
  • Enforcement of secure defaults in development and storage workflows
  • Purpose limitation embedded in processing pipelines
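
To make the first of these bullets concrete, here is a minimal sketch of cloud configuration validation. It assumes visual data lands in an AWS S3 bucket (the bucket name is a placeholder) and refuses uploads unless the bucket blocks all forms of public access; equivalent checks exist for other cloud providers.

```python
import boto3
from botocore.exceptions import ClientError

def assert_not_public(bucket: str) -> None:
    """Fail fast unless the bucket blocks every form of public access."""
    s3 = boto3.client("s3")
    try:
        config = s3.get_public_access_block(Bucket=bucket)[
            "PublicAccessBlockConfiguration"
        ]
    except ClientError as err:
        # No public-access-block configuration at all: treat as non-compliant.
        raise RuntimeError(f"Bucket '{bucket}' has no public access block") from err
    if not all(config.values()):
        raise RuntimeError(f"Bucket '{bucket}' allows public access: {config}")

# Run before every upload of image or video data (placeholder bucket name).
assert_not_public("visual-data-archive")
```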

Assess your organisation’s compliance posture, especially if you work with image and video data; understanding where you stand is crucial. brighter AI’s Compliance Check helps organisations assess how they currently process image, video, and location data, and provides a personalised report with risk highlights and clear next steps after you answer 11 simple yes/no questions. This structured approach gives you a practical starting point for reducing GDPR exposure while responsibly unlocking the value of visual and connected data.

Sashwat Shreeja
Junior Marketing Manager
sashwat.shreeja@brighter.ai