1. What are the India AI Governance Guidelines?
On November 5, 2025, the Ministry of Electronics and Information Technology (MeitY) released the India AI Governance Guidelines under the IndiaAI Mission. Unlike the prescriptive regulatory models adopted elsewhere, India has chosen to leverage existing legal frameworks rather than create new standalone AI legislation.
The Guidelines address potential harms such as deepfakes, algorithmic bias, and national security threats while establishing accountability mechanisms for AI systems.
2. What principles guide India’s approach to AI governance?
The Guidelines are grounded in seven foundational principles, or sutras, adapted from the Reserve Bank of India’s Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) Committee report. These principles apply across all sectors:
- Trust as the Foundation: Trust is essential for innovation and adoption across the AI value chain, including technology, organizations building tools, supervisory institutions, and individual users.
- People First: AI systems must adopt human-centric design and deployment, with human oversight, final control, and emphasis on capacity development and ethical safeguards.
- Innovation over Restraint: Responsible innovation should take precedence over cautious restraint, supporting national socio-economic development and global competitiveness.
- Fairness and Equity: AI systems must be designed and tested for fair, unbiased outcomes that avoid discrimination, including against marginalized communities.
- Accountability: Responsibility must be assigned based on functions performed, risk of harm, and due diligence, enforced through policy, technical, and market mechanisms.
- Understandable by Design: AI systems require clear explanations and disclosures to enable users and regulators to comprehend operations, user impacts, and likely outcomes.
- Safety, Resilience, and Sustainability: AI systems must incorporate safeguards to minimize risks, detect anomalies, issue early warnings, and ensure environmental responsibility and resource efficiency.
Legal experts and leading AI law firms in India are helping businesses interpret these principles and ensure AI adoption aligns with responsible governance standards.
3. Will India create a separate law to regulate AI?
No. The Guidelines deliberately avoid enacting separate AI-specific legislation. The framework's position is that existing laws can adequately address many AI-related risks if enforced consistently and in a timely manner.
This approach contrasts with jurisdictions like the European Union, which has enacted the comprehensive AI Act with risk-based classifications and extensive compliance requirements. India’s model relies on adapting current statutes and sectoral regulations rather than creating new legal structures.
4. Which existing laws apply to AI systems in India?
Several statutes currently govern AI systems and their deployment:
Information Technology Act, 2000: Serves as the primary legislation governing digital platforms. Section 66D addresses cheating by personation using computer resources, and applies to AI-generated impersonations and deepfakes. Section 79, read with the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, places due diligence obligations on platforms to monitor and remove unlawful AI-generated content.
Bharatiya Nyaya Sanhita, 2023: Addresses cybercrimes perpetrated through AI, including identity theft, cheating by personation, forgery, defamation, and obscene content distribution.
Digital Personal Data Protection Act, 2023 (DPDP Act): Mandates consent for personal data processing, imposes purpose limitation and data minimization requirements, and empowers the Data Protection Board to investigate AI-driven profiling harms.
Consumer Protection Act, 2019: Protects against unfair trade practices, misleading advertisements, and service deficiencies. The Central Consumer Protection Authority can order corrective measures and levy penalties on misleading AI-related claims.
Sectoral Legislation: The Pre-Conception and Pre-Natal Diagnostic Techniques (PC-PNDT) Act, which prohibits prenatal sex determination, requires review of AI models that analyze radiology images. The Telecommunications Act, 2023 includes provisions on cybersecurity, critical infrastructure protection, and incident reporting that extend to AI systems.
5. How does sectoral regulation work for AI?
Rather than creating a blanket regulatory body for AI, the Guidelines employ a sectoral approach where specialized regulators manage domain-specific risks. MeitY provides national policy direction through the seven sutras, while sector-specific regulators issue binding compliance mandates.
Financial Sector: Fintech companies comply with the Reserve Bank of India’s Cybersecurity and Cyber Resilience Framework. The Securities and Exchange Board of India (SEBI) oversees AI-driven trading algorithms and surveillance systems for market integrity.
Insurance Sector: The Insurance Regulatory and Development Authority of India (IRDAI) mandates compliance with guidelines on Information and Cyber Security, affecting AI-driven underwriting and claims management.
Healthcare Sector: The Indian Council of Medical Research (ICMR) has issued Ethical Guidelines for Application of AI in Biomedical Research and Healthcare, requiring bias audits, ethics reviews, and responsibility delineation between developers and healthcare providers.
This structure recognizes that AI applications face different risks across domains and allows regulators with the relevant expertise to manage them.
6. What legal amendments are proposed for AI?
While existing laws provide substantial coverage, the Guidelines acknowledge the need for targeted amendments in specific areas:
Information Technology Act Modernization: The IT Act, drafted over two decades ago, requires updates regarding digital entity classification. The term “intermediary” broadly includes entities that receive, store, or transmit electronic records on behalf of others. However, modern AI systems that generate or modify content autonomously may not fit this definition. The Guidelines recommend clarifying how AI systems should be classified, what obligations they bear, and how liability should be imposed.
Liability Apportionment: Section 79 of the IT Act provides legal immunity to intermediaries for unlawful third-party content if they do not initiate transmission, select recipients, or modify data. Many AI systems generating or modifying content would not qualify for such immunity. The framework recommends defining roles of various actors in the AI value chain (developers, deployers, users) and establishing proportionate liability based on function performed, risk of harm, and due diligence observed.
Copyright Law: The Department for Promotion of Industry and Internal Trade has established a committee to examine the legality of using copyrighted work in AI training and the copyrightability of AI-generated works. The Guidelines suggest considering a Text and Data Mining exception to enable AI development while protecting copyright holders’ rights.
Data Protection Act Interface: The interface between the DPDP Act and AI workflows requires examination. Open issues include the scope of exemptions for training AI models on publicly available personal data, the compatibility of collection and purpose limitation principles with AI operations, and the role of consent managers in AI-driven data processing.
7. What are the main AI-related risks identified?
The Guidelines identify six main categories of AI-related risks:
- Malicious uses: Include misinformation via deepfakes, adversarial attacks on systems, and data poisoning that corrupts AI models by manipulating training data.
- Bias and discrimination: Arise from inaccurate data leading to unfair decisions in employment, credit, or other areas, causing systematic disparities in treatment for certain groups.
- Transparency failures: Result from insufficient disclosures, such as the use of personal data in AI training without user consent.
- Systemic risks: Involve disruptions in the AI value chain from market concentration, geopolitical instability, and regulatory shifts.
- Loss of control: May threaten public order and safety if AI systems evolve unpredictably or lack sufficient human oversight.
- National security threats: Encompass AI-enabled disinformation, cyberattacks on critical infrastructure, and use of lethal autonomous weapons.
8. How will these risks be mitigated?
The Guidelines propose an India-specific risk assessment and classification framework grounded in empirical evidence of harm. This includes a national AI incidents database that tracks real-world harms, recording the type of harm, the AI system’s role, when it occurred, and its causes, to guide policy responses.
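For illustration only, here is a minimal sketch of what a record in such an incidents database might capture, assuming hypothetical field and category names (the Guidelines specify the kinds of information to track, not a schema):

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class HarmType(Enum):
    # Risk categories mirroring the six listed in the Guidelines (see Q7).
    MALICIOUS_USE = "malicious_use"
    BIAS_DISCRIMINATION = "bias_discrimination"
    TRANSPARENCY_FAILURE = "transparency_failure"
    SYSTEMIC = "systemic"
    LOSS_OF_CONTROL = "loss_of_control"
    NATIONAL_SECURITY = "national_security"


@dataclass
class AIIncident:
    """Hypothetical record for a national AI incidents database."""
    harm_type: HarmType                              # type of harm observed
    ai_role: str                                     # the AI system's role in the harm
    occurred_on: date                                # when the incident took place
    causes: list[str] = field(default_factory=list)  # identified root causes


# Example: logging a deepfake-based impersonation incident.
incident = AIIncident(
    harm_type=HarmType.MALICIOUS_USE,
    ai_role="generated a deepfake video used for impersonation",
    occurred_on=date(2025, 11, 5),
    causes=["insufficient provenance labelling"],
)
```

The enum values simply restate the six risk categories from Q7; any actual schema would be defined by MeitY or the body administering the database.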
The framework promotes voluntary measures, such as industry codes of practice, technical standards, and self-certifications. These should scale with risk: basic commitments for low-risk uses and enhanced safeguards for high-risk applications in sensitive sectors. As the industry matures, some voluntary measures may convert into mandatory baseline requirements enforced by regulatory bodies.
9. How does India’s approach compare to other jurisdictions?
India’s framework differs significantly from other major jurisdictions:
European Union: The EU AI Act imposes rigid compliance requirements through risk-based classification and extensive ex-ante obligations. India’s framework instead leverages existing statutes and explicitly prioritizes innovation over restraint.
United States: The US lacks comprehensive federal AI law, relying on fragmented sectoral oversight. India’s model provides a unified framework under MeitY while maintaining sectoral specialization.
India’s lighter-touch approach is particularly relevant for countries with limited resources and nascent AI ecosystems that require governance models enabling rather than constraining development.
10. What should businesses and professionals do now?
Businesses and professionals working with AI in India should:
Ensure Legal Compliance: Review operations against existing statutes including the Information Technology Act, 2000, Bharatiya Nyaya Sanhita, 2023, Digital Personal Data Protection Act, 2023, and Consumer Protection Act, 2019.
Engage with Sectoral Regulators: Understand sector-specific requirements from regulators like RBI, SEBI, IRDAI, or ICMR as applicable to your domain.
Monitor Proposed Amendments: Track developments in IT Act classification, liability frameworks, copyright law amendments, and DPDP Act implementation rules.
Adopt Voluntary Measures: Consider implementing industry codes of practice, technical standards, and self-certification mechanisms to demonstrate responsible AI development and deployment.
Prepare for Risk Assessment: Develop internal frameworks to assess and classify AI-related risks, particularly for high-risk applications in sensitive sectors; a minimal sketch of such a classification rule follows below.
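To illustrate that last point, here is a minimal sketch of an internal classification rule, assuming a simple two-factor heuristic (sector sensitivity and impact on individuals) that is our own construction, not a methodology prescribed by the Guidelines:

```python
# Hypothetical set of sensitive sectors; not an official list.
SENSITIVE_SECTORS = {"finance", "insurance", "healthcare"}


def classify_ai_risk(sector: str, affects_individuals: bool) -> str:
    """Illustrative two-factor risk tiering; not prescribed by the Guidelines.

    High-risk applications in sensitive sectors warrant enhanced safeguards,
    while low-risk uses can rely on basic voluntary commitments (see Q8).
    """
    sensitive = sector.lower() in SENSITIVE_SECTORS
    if sensitive and affects_individuals:
        return "high"    # e.g. AI-driven underwriting or medical diagnosis
    if sensitive or affects_individuals:
        return "medium"  # e.g. back-office automation in a bank
    return "low"         # e.g. internal document search


# Example: an AI claims-management tool in insurance is treated as high risk.
assert classify_ai_risk("insurance", affects_individuals=True) == "high"
```

An organization would replace this heuristic with criteria drawn from its sectoral regulator (RBI, SEBI, IRDAI, ICMR) and the national risk framework once it is finalized.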
For tailored legal advice and compliance strategies, many organizations seek guidance from an experienced technology law firm in India that understands both AI and digital regulatory landscapes.
The Guidelines build on existing legal frameworks rather than introducing standalone legislation, reflecting confidence in the current regulatory architecture while pursuing targeted amendments where needed. The sectoral regulatory approach, supported by voluntary measures, creates a flexible governance system that can adapt to technological developments.
Authors: Shantanu Mukherjee, Alan Baiju