As clinical technology evolves, establishing guardrails for AI in healthcare has become a top priority for medical institutions across India. Dilip Jose, CEO of Manipal Hospitals, recently highlighted this necessity at the ET AI Conclave 2025. He noted that healthcare is emotionally sensitive and morally complex. Consequently, hospitals must carefully embed these technologies into their standard practices to protect patient well-being.
Clinicians cannot accept a 99% accuracy rate in life-critical settings. Jose argued that even a 1% error could cost a life, and that hospitals cannot afford hallucinations when diagnosing or treating patients. Medical professionals must therefore treat artificial intelligence as a force multiplier within existing workflows: AI should support doctors rather than replace their expertise.
Informed consent remains a cornerstone of ethical medical practice. Laina Emmanuel of BrainSight AI emphasized that patients must understand how their data serves the diagnostic process. Meanwhile, Abhijeet Vijayvergiya of Nektar.ai pointed out that many AI projects fail because of poor data quality, reminding the audience that "garbage in, garbage out" defines the success of clinical models. Precision builds the trust required for long-term adoption.
Implementing Robust Guardrails for AI in Healthcare
The Indian Council of Medical Research (ICMR) provides a clear framework for ethical AI integration. Furthermore, the Central Drugs Standard Control Organisation (CDSCO) recently classified AI diagnostic software as Class C medical devices. These rules ensure that all tools undergo rigorous clinical validation before reaching the patient’s bedside. Moreover, the Digital Personal Data Protection (DPDP) Act mandates strict data security for all health-tech firms. Establishing these foundations ensures that innovation does not compromise safety.
Major chains like Apollo and Aster DM currently partner with startups to improve diagnostic accuracy. These collaborations leverage vast longitudinal patient data to build models for India’s diverse population. Hospitals must balance their commercial roles with genuine patient interest to foster trust. Ultimately, validated models help reduce turnaround times and improve clinical outcomes across the country. Proper oversight will ensure that AI remains a safe and effective tool in modern medicine.
Frequently Asked Questions
Q1: Why are guardrails necessary for medical AI?
Guardrails prevent hallucinations and ensure that a human clinician remains in control of critical life-or-death decisions.
Q2: How does the DPDP Act affect AI in India?
The Digital Personal Data Protection Act requires explicit informed consent from patients before hospitals use their data for AI training.
Q3: Is AI currently used as a standalone diagnostic tool in India?
No. Medical experts recommend using AI as a force multiplier within clinical workflows rather than as a standalone diagnostic tool.
References
- Deploy AI in healthcare with caution; build guardrails first, says Manipal CEO – ETHealthworld
- ICMR Ethical Guidelines for Application of AI in Biomedical Research and Healthcare 2023
- India Brings AI Health Software Under Medical Device Rules – Convergence Now
- DPDP Rules 2025: India’s AI Governance Framework – Ronin Legal
Disclaimer: This article was automatically generated from publicly available sources and is provided for informational and educational purposes only. OC Academy does not exercise editorial control or claim authorship over this content. It is not a substitute for professional medical advice, diagnosis, or treatment. Always consult a qualified healthcare provider and refer to current local and national clinical guidelines.
