Before a new physician can practice medicine, they take what is known as the Hippocratic Oath. Named after Hippocrates, the ancient Greek physician, the oath enumerates a set of ethical principles often summarized as “first, do no harm.” Munjal Shah drew on this well-known concept to name his new healthcare artificial intelligence (AI) tool: Hippocratic AI.
The bot consists of a large language model (LLM) that gives patients medical guidance in the early stages of their treatment. Shah, along with seven co-founders, raised $50 million in seed funding for the project. The co-founders include doctors and AI researchers from Stanford and Johns Hopkins, as well as Google and Nvidia Corp. employees. They believe their Palo Alto-based startup fills a pressing need in the healthcare sector.
Why this AI is needed
The co-founders’ confidence in the tool stems from the current state of the American healthcare system and how it is expected to change in the near future. Within the next few years, the country is projected to face a shortage of three million healthcare workers. This is largely because much of the workforce belongs to the aging Baby Boomer generation, leaving a disproportionately small working-age population to replace retiring staff.
As such, Shah and his co-founders saw an opportunity for AI to assist healthcare specialists and mitigate the strains of this demographic transition. Shah warns that the staff shortage is “one of the biggest risks to the quality of care in America.” The demographic shift will also drive up healthcare costs for the general population. By implementing AI bots in healthcare, hospitals could democratize access to medical attention and lower costs.
The WHO’s announcement
On the same day as Hippocratic AI’s announcement, the World Health Organization released a statement calling for caution with AI in healthcare. The organization reiterated that any LLM used in medicine must be transparent, inclusive, and subject to expert supervision. It also listed four key concerns: bias in the training data, inadequate protection of patient data, incorrect information, and regulation. The statement referred to guiding principles for AI that the WHO published in 2021, which range from protecting human autonomy to fostering accountability. The WHO’s call for caution serves as a reminder of the possible dangers of LLMs in an environment as vulnerable as healthcare.
Addressing the dangers of AI in healthcare
Nevertheless, Hippocratic AI has taken the WHO’s concerns into account and is putting strategies in place to ensure the technology is deployed ethically. The LLM will be tested in three ways before release: it must pass certification exams, be trained with human feedback, and demonstrate adequate “bedside manner.” The last of these refers to expressing compassion for the patient, as would be expected from a medical professional.
Hippocratic AI will be rolled out to healthcare sectors one by one, as it is proven safe to use in each. The program for each sector will have to pass the exam required of human practitioners in that field. Next, a doctor specializing in that sector will correct its answers to ensure accuracy. According to Forbes, this is the only way to ensure the AI reaches the “performance and safety” necessary to launch.
Further necessary precautions
Yet these tests on their own will not be sufficient, says Shah. LLMs such as ChatGPT are trained on vast amounts of data and still, on occasion, fabricate false information or “hallucinate.” The WHO cited this as one of the dangers: the AI constructs answers that are incorrect but “authoritative and plausible to an end user.”
According to David Sontag, a professor of computer science at MIT, it is crucial to teach these systems to decline to answer, or to say that they do not know. By doing so, the system avoids constructing false information from fragments of data it assembles. He suggests, for instance, that the machines could be taught to tell patients to call 911 when they cannot find the answer to a question.
Hippocratic AI is a promising tool for the healthcare sector. It arrives at a time when demographic changes are placing ever greater strain on medicine and patient care. The technology could help address understaffing in hospitals and doctors’ offices and make patient care faster, cheaper, and more effective. Nevertheless, it could have dire consequences if not programmed correctly. Hippocratic AI is promising for the future of healthcare, but it must be implemented with caution.
Cover image by: Jessica Hagen from Mobile Health News