As more people turn to artificial intelligence (AI) tools for medical guidance, experts are cautioning that such technology cannot substitute for professional healthcare and may pose significant risks if used without proper oversight.
Prof Dr Ainuddin Wahid Abdul Wahab of Universiti Malaya's Computer Systems and Technology Department said AI platforms, including widely used chatbots, function primarily as language prediction systems rather than medical professionals.
“AI is essentially a language tool, not a doctor. It predicts which words should come next based on patterns in the data. This means it could sound confident even when it is wrong,” he explained.
Unlike trained physicians, AI systems are unable to physically assess patients or interpret subtle clinical cues such as skin discolouration, breathing irregularities or non-verbal behaviour. Dr Ainuddin likened the technology to “a highly advanced dictionary interpreting a poem” — capable of defining terms but potentially missing the deeper context, in this case, the patient’s actual condition.
He warned that while AI-generated responses may appear convincing, they can contain inaccuracies. Because the systems are designed to be conversational and helpful, rather than strictly clinical, errors can occur — especially in complex medical scenarios.
“It is similar to asking a very good writer to fix a car engine. The words may sound right, but without hands-on checks, mistakes are easy and potentially dangerous,” he said.
According to Dr Ainuddin, the likelihood of misinformation increases depending on the complexity of the query, the AI model involved and how the question is framed. Although AI may competently explain general medical concepts, it can falter when dealing with complicated cases or rare conditions.
He noted that such limitations often stem from incomplete or biased training data. If certain demographics or uncommon illnesses are underrepresented, the advice generated may not be universally applicable.
Without proper human supervision, these gaps could lead to harmful — even life-threatening — consequences. Ethical concerns also arise when AI tools provide direct medical guidance to the public.
Safety remains a key issue, as chatbots may fail to detect medical emergencies or serious symptoms, potentially delaying urgent treatment. Questions surrounding accountability further complicate matters.
“If AI advice causes harm, it is uncertain whether responsibility falls on the user, the developer or the platform,” he said.
Data privacy and fairness are also areas of concern. Patients may disclose sensitive personal information without fully understanding how it is processed or stored. Additionally, embedded biases in training datasets could result in lower-quality recommendations for certain groups.
Dr Ainuddin compared overreliance on AI in healthcare to replacing a courtroom with an automated system. While efficiency and accessibility might improve, he said, human judgement remains essential to ensure fairness and safety.
He stressed that critical aspects of healthcare must remain under human control, particularly those requiring complex decision-making, empathy and physical intervention, such as surgery or delivering sensitive diagnoses.
“Humans can understand a patient’s context, values and needs in a way AI cannot replicate,” he added.
Nevertheless, he acknowledged that AI has considerable potential as a supporting tool. Its strength lies in analysing large datasets, identifying patterns, summarising medical histories, detecting possible drug interactions and flagging irregularities in imaging scans.
“Think of AI as a high-powered microscope. It allows doctors to see what is invisible to the naked eye. However, the microscope cannot make treatment decisions. The doctor remains the ultimate decision-maker,” he said.
Dr Ainuddin advised the public against using AI for self-diagnosis or treatment, warning that misinterpretation may trigger unnecessary anxiety or create false reassurance that delays professional care.
He emphasised that AI should serve only as a supplementary resource — useful for generating questions to discuss with healthcare providers and for gaining general understanding, but not as a replacement for clinical expertise.
While AI holds significant promise in improving access to information and supporting medical professionals, he maintained that human judgement and ethical responsibility remain indispensable in delivering safe and effective healthcare.