How The GCC Regulates AI In Healthcare

AI is moving rapidly from pilots to everyday clinical reality in the Gulf Cooperation Council (GCC), and regulators are racing to make sure this transformation is safe, ethical, and legally sound. Across the UAE, Saudi Arabia, Qatar, Bahrain, Kuwait, and Oman, a clear message is emerging: healthcare AI is welcome, but only if it meets certain standards for patient safety, data protection, transparency, and accountability.

Why AI in Healthcare Is A Regulatory Priority

GCC health systems see AI as a way to improve outcomes, tackle workforce shortages, and deliver more efficient, data‑driven care. Tools range from imaging diagnostics and clinical decision support to hospital operations optimisation and virtual assistants. Because these systems influence diagnoses, treatments, and patient journeys, regulators treat them as high‑risk technologies that must be governed throughout their entire lifecycle, not just at the point of deployment.

In practice, this means three broad expectations across the region:

  • AI that informs diagnosis or treatment should meet medical‑device safety and performance standards where applicable.
  • AI must comply with stringent data‑protection and data‑governance rules, especially for sensitive health data.
  • Organisations must implement lifecycle governance: validation, monitoring, human oversight, documentation, and workforce training.

Treating Healthcare AI As A Medical Device

When AI influences clinical decision‑making, GCC regulators increasingly treat it as Software as a Medical Device (SaMD), subject to medical‑device authorisation rules. This is especially visible in Saudi Arabia and the UAE, which have the most detailed frameworks so far.

  • Classification as a medical device: Clinical decision support and computer‑aided detection/diagnosis tools fall under regulated SaMD categories.
  • Pre-market authorisation: In Saudi Arabia, the Saudi Food and Drug Authority (SFDA) requires AI/ML medical technologies to follow Medical Devices Marketing Authorisation (MDMA) rules, including evidence of safety, performance, and clinical validation. The SFDA has also issued specific guidance, such as MDS-G010 and MDS-G027, which sits under the Law of Medical Devices and clarifies requirements for AI/ML-based SaMDs and digital health products seeking marketing authorisation.
  • Evidence of accuracy and robustness: Regulators expect validation using large, representative data sets, such as health records, medical images, biometrics, and genetic data, and the ability to handle real‑world variability, as illustrated in the sketch below.
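
To make that expectation concrete, the sketch below computes sensitivity and precision separately for each demographic subgroup in a local validation set, flagging cohorts where performance slips. It is a minimal illustration under stated assumptions, not an SFDA‑prescribed method: the column names (subgroup, label, prediction) and the 0.90 sensitivity target are invented for the example.

```python
# Minimal sketch of subgroup validation for a diagnostic model.
# Assumptions: a validation DataFrame with binary "label" and "prediction"
# columns and a "subgroup" column (e.g. age band, sex, nationality); the
# 0.90 sensitivity target is illustrative, not a regulatory figure.
import pandas as pd
from sklearn.metrics import recall_score, precision_score

def validate_by_subgroup(df: pd.DataFrame, target_sensitivity: float = 0.90) -> pd.DataFrame:
    rows = []
    for group, sub in df.groupby("subgroup"):
        sensitivity = recall_score(sub["label"], sub["prediction"])
        precision = precision_score(sub["label"], sub["prediction"])
        rows.append({
            "subgroup": group,
            "n": len(sub),                                       # cohort size
            "sensitivity": sensitivity,
            "precision": precision,
            "meets_target": sensitivity >= target_sensitivity,   # flags weak cohorts
        })
    return pd.DataFrame(rows)
```

Run on local data before deployment and re‑run after each model update, a report of this kind is the sort of documented evidence the validation expectation points to.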

Abu Dhabi’s Department of Health (DOH) reinforces this approach through its Policy on Use of Artificial Intelligence in the Healthcare Sector and Responsible AI Standard. Similarly, the Dubai Health Authority (DHA) has issued its Policy for Use of Artificial Intelligence in Healthcare in the Emirate of Dubai.

The DOH and DHA policies both expressly clarify that they are not standalone documents, but are embedded within the UAE’s broader regulatory ecosystem. Both require compliance with all applicable federal and emirate laws relating to healthcare, data protection, and digital infrastructure. For organisations navigating these layered obligations, coordinated advice from sector specialists, including an artificial intelligence (AI) law firm in the UAE, is increasingly relevant when structuring regulatory strategy and vendor arrangements.

This includes the requirements for health information exchange interoperability (Malaffi in Abu Dhabi and NABIDH in Dubai). Both regulators also link compliance to national cybersecurity and information security standards, recognising that AI safety depends as much on secure infrastructure as on sound algorithms.
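
For developers, interoperability usually comes down to exchanging structured clinical resources with the relevant platform. The sketch below shows an HL7 FHIR R4‑style Observation payload as a plain Python dict; it assumes a FHIR‑shaped exchange and uses placeholder identifiers and values, whereas the actual integration profiles and onboarding requirements are defined by Malaffi and NABIDH themselves.

```python
# Minimal sketch of a FHIR R4-style Observation resource as a plain dict.
# Assumptions: the target exchange accepts FHIR-shaped payloads; the patient
# reference, timestamp, and values below are placeholders, not real data.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "718-7",   # LOINC code for haemoglobin [mass/volume] in blood
            "display": "Hemoglobin [Mass/volume] in Blood",
        }]
    },
    "subject": {"reference": "Patient/example-patient-id"},
    "effectiveDateTime": "2025-01-15T09:30:00+04:00",
    "valueQuantity": {"value": 13.2, "unit": "g/dL"},
}
```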

Data Protection and Cross‑Border Data Rules

Since AI depends on large volumes of data, GCC regulators place strong emphasis on privacy, security, and the localisation of sensitive health information. Several countries now have modern data protection laws that classify health data as sensitive and impose strict conditions on its use and transfer.

Sensitive health data and lawful processing

Under the UAE’s Personal Data Protection Law (PDPL) and Saudi Arabia’s PDPL, health information is explicitly treated as sensitive data, requiring higher standards for consent, safeguards, and accountability. Organisations deploying AI must typically:

  • Maintain records of processing activities.
  • Apply data minimisation, encryption, masking, tokenisation, and strong access controls (see the sketch after this list).
  • Ensure a lawful basis for processing patient data, appoint accountable roles such as Data Protection Officers, and conduct Data Protection Impact Assessments.
  • Align AI systems with sectoral health and ICT‑in‑health laws that emphasise confidentiality and data localisation.
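
As a concrete illustration of the tokenisation point above, the sketch below replaces a patient identifier with a keyed, irreversible token before the record enters an AI pipeline. It is a minimal example under stated assumptions: the field names are invented for the sketch, and in practice the key would be fetched from a managed secrets store rather than appear in code.

```python
# Minimal sketch of keyed pseudonymisation (tokenisation) of a patient ID.
# Assumptions: the key and field names are illustrative; a real deployment
# would load the key from a secrets manager, never hard-code it.
import hashlib
import hmac

SECRET_KEY = b"replace-with-key-from-secrets-manager"

def pseudonymise(patient_id: str) -> str:
    # HMAC-SHA256 is deterministic (same patient -> same token) but cannot
    # be reversed without the key, supporting data minimisation.
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "MRN-0012345", "finding": "suspected pulmonary nodule"}
safe_record = {**record, "patient_id": pseudonymise(record["patient_id"])}
```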

Qatar’s data protection law similarly imposes strict controls on the storage, sharing, and use of health, genetic, and biometric data, requiring safeguards such as anonymisation or encryption before the data is accessed or used to train and run AI models.

Cross‑border data transfer restrictions

One of the most challenging aspects for AI developers is cross‑border data movement. For example, the international transfer of health data is prohibited in the UAE unless it falls within the exceptions enumerated in Ministerial Decision No. (51) of 2021.

Because adequacy lists of approved countries remain limited or unpublished in some jurisdictions, this creates a de facto data‑localisation expectation for sensitive patient data, especially in Saudi Arabia. Even where standard contractual clauses are available, they do not override adequacy requirements, pushing organisations toward local processing hubs, irreversible anonymisation, or federated learning models that avoid moving identifiable patient data across borders.
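
To make the last option concrete, here is a minimal sketch of one federated‑averaging round: each site trains on its own in‑country data, and only model parameters cross borders. The single gradient step stands in for whatever training routine a real deployment would use, and the site structure is an assumption made for the example.

```python
# Minimal sketch of one federated-averaging (FedAvg-style) round: patient
# data stays on-site; only model weights are shared and aggregated. The
# single gradient step below is a stand-in for real, site-local training.
import numpy as np

def train_locally(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                  lr: float = 0.1) -> np.ndarray:
    # One gradient step of least-squares regression on local data only.
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights: np.ndarray, sites: list) -> np.ndarray:
    updates, sizes = [], []
    for site in sites:
        local = train_locally(global_weights, site["X"], site["y"])  # data never leaves the site
        updates.append(local)
        sizes.append(len(site["y"]))
    total = sum(sizes)
    # Size-weighted average of the site updates; only parameters moved.
    return sum(w * (n / total) for w, n in zip(updates, sizes))
```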

Governance, Transparency and Human Oversight

Beyond technical compliance, GCC regulators stress responsible AI principles: human oversight, transparency, explainability, and ethical use. This reflects an understanding that trust in healthcare depends on keeping clinicians in charge and patients informed.

  • Human oversight and final responsibility: Clinicians must retain ultimate responsibility for AI‑influenced clinical decisions, with AI serving as support, not replacement.
  • Transparency to clinicians and patients: Users should receive clear information on an AI system’s capabilities, limitations, and appropriate use; patients should be told when AI informs their care and how their data is used.
  • Explainability and auditability: Systems must allow outputs to be traced and reviewed, enabling clinicians and regulators to understand and challenge AI recommendations where necessary (a sketch of such an audit record follows below).
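
A hedged sketch of what such an audit record might contain follows: a hash of the inputs (so no raw patient data sits in the log), the model version, and the clinician’s final action. The field names are assumptions made for illustration, not terms defined by any GCC regulator.

```python
# Minimal sketch of an audit record for an AI recommendation, enabling
# outputs to be traced to a model version and reviewed later. Field names
# are illustrative assumptions, not regulator-defined terms.
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class AIAuditRecord:
    model_name: str
    model_version: str
    input_hash: str        # SHA-256 of inputs: traceable without raw patient data
    output_summary: str
    clinician_id: str
    clinician_action: str  # "accepted", "modified", or "rejected"
    timestamp: str

def log_recommendation(model_name: str, model_version: str, inputs: dict,
                       output_summary: str, clinician_id: str,
                       clinician_action: str) -> str:
    record = AIAuditRecord(
        model_name=model_name,
        model_version=model_version,
        input_hash=hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        output_summary=output_summary,
        clinician_id=clinician_id,
        clinician_action=clinician_action,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))  # append to a write-once audit store in practice
```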

Abu Dhabi’s Responsible AI Standard goes further by requiring formal AI policies, governance structures, Data Protection Impact Assessments, and contractual audit rights over third‑party vendors, as well as AI literacy and training programmes for staff. Qatar’s guidance and wider GCC ethical principles emphasise embedding informed consent and user assistance into AI design so that systems help users make intelligent decisions rather than obscuring reasoning.

Cybersecurity, Cloud Hosting and Resilience

Cybersecurity and operational resilience sit alongside clinical safety in GCC regulatory thinking. Health data is a prime target for cyberattacks, and AI systems often rely on cloud infrastructure, making security controls critical.

  • Abu Dhabi Healthcare Information and Cyber Security Standard (ADHICS): Mandatory for all Abu Dhabi healthcare organisations, it requires firewalls, multi‑factor authentication, encryption, access control, backup and recovery, and incident management; AI solutions must comply with these controls.
  • UAE National Cloud Security Policy: Applies to critical sectors including healthcare and mandates risk‑based, data‑sensitivity‑driven cloud security controls and continuous improvement.
  • SFDA cybersecurity guidance: The SFDA mandates compliance with its pre-market cybersecurity guidance (MDS-G38) and post-market cybersecurity guidance (MDS-G37) as part of device authorisation.

Regulators also expect business‑continuity planning for AI: fallback mechanisms if systems fail, safe roll‑back options for model updates, resilience and availability, and clear handling of major upgrades. These requirements ensure that AI does not become a single point of failure in clinical workflows.
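
A minimal sketch of such a fallback chain is shown below: requests go to the newly deployed model, fall back to the last validated version if it fails, and finally route to manual clinical review. The class and method names are assumptions for the example; a real deployment would add health checks, alerting, and audit logging.

```python
# Minimal sketch of a fallback chain for a clinical AI service: try the new
# model, fall back to the last validated version, and finally hand off to
# the manual clinical pathway so AI is never a single point of failure.

class ModelRouter:
    def __init__(self, current_model, previous_model):
        self.current = current_model    # newly deployed model version
        self.previous = previous_model  # last validated version, kept warm for roll-back

    def predict(self, features):
        try:
            return {"source": "current", "result": self.current.predict(features)}
        except Exception:
            pass  # new version failed; fall back rather than block the clinical workflow
        try:
            return {"source": "previous", "result": self.previous.predict(features)}
        except Exception:
            # Both versions unavailable: flag the case for manual clinical review.
            return {"source": "manual_review", "result": None}
```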

Practical Implications for Organisations

While each GCC country is moving at its own pace, several themes are converging:

  • AI that touches clinical care is regulated like a medical device, with formal authorisation and evidence requirements.
  • Health data is treated as highly sensitive, with strong controls on processing and cross‑border transfers.
  • Lifecycle governance is mandatory: validation, monitoring, incident reporting, and re‑assessment after change.
  • Human oversight, transparency, and bias mitigation are baked into policy expectations, not optional extras.

For healthcare organisations and vendors, this means that deploying AI in the GCC is as much a governance and compliance exercise as a technology project. Successful players will invest early in regulatory strategy, local validation, documentation, and cross‑functional governance structures that bring together clinical, legal, data protection, and IT security expertise.

Done well, this emerging regulatory framework can enable safe, trusted AI that genuinely enhances care while respecting patient rights and national priorities across the GCC. Given the complexity of intersecting healthcare, data, and technology regulations, many organisations seek guidance from healthcare & life sciences lawyers in the UAE to stay aligned with evolving regulatory standards and manage risk.

Authors: Shantanu Mukherjee, Varun Alase
