Just five days after ChatGPT was introduced to the world, the natural language AI chatbot became the fastest consumer application in history to reach a million users. Since then that number has grown to 100 million, making ChatGPT a popular alternative to Google for seeking out information and advice.
THE ABSENCE OF TRANSPARENCY
People using the internet for self-diagnosis and treatment is nothing new. For many years millions have turned to ‘Dr Google’ for healthcare inquiries, making it the main source for both helpful and harmful information. But now instead of browsing multiple pages to seek out an answer to their questions, users can turn to a single (seemingly) authoritative voice: ChatGPT.
ChatGPT is built on an AI model that has been ‘trained’ to recognise patterns in language, allowing it to make ‘predictions’ based on that learning. The result is a tool that draws on available information to produce well-articulated answers to almost any question entered into its chat bar, however broad or narrow the question and however detailed the expected response.
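The idea of ‘predicting’ from learned language patterns can be illustrated with a deliberately toy sketch: count which word tends to follow each word in a tiny made-up corpus, then ‘predict’ the most frequent follower. This is a vast simplification (real models like ChatGPT learn from billions of examples and far richer context), and the corpus and function names here are invented for illustration only.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration.
corpus = (
    "the chatbot answers questions . "
    "the chatbot predicts words . "
    "the chatbot predicts answers ."
).split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))      # "chatbot" -- it follows "the" every time
print(predict_next("chatbot"))  # "predicts" -- seen twice, vs. "answers" once
```

Note what this sketch also shows: the prediction is only as good as the text it was trained on. If the corpus over-represents one phrasing, the model will confidently repeat it, which is the small-scale version of the bias concern discussed below.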
But how reliable is the information? What happens when it comes to subject matter that may contain conflicting opinions or biased sources of information? Here it becomes hard to discern the chatbot’s reliability, or potential prejudices.
On its homepage ChatGPT includes disclaimers that it may generate incorrect or biased information. Yet there is no way to know whether a given response is correct, how it may have been influenced by the way a question was phrased, or when and how it may be shaped by vested interests. When it comes to health advice, this could amplify certain falsehoods or points of view, or overlook blind spots in the information. Unlike a Google search, which leads you to specific websites, ChatGPT points to no single source of information, which makes its credibility hard to ascertain.
FROM MISINFORMATION TO TRUST: THE CRITICAL ROLE OF EDUCATION AND VALIDATION IN AI HEALTHCARE
Chatbots, like other AI technology, will only become more present in our lives. If designed and used responsibly, they can transform patient care by providing patients with easily accessible, personalised healthcare information, conveniently available anywhere, at any time. Besides being of particular use to those in remote areas, chatbots offer opportunities to reduce the burden on healthcare systems by supporting patients in the self-diagnosis and self-management of minor health conditions. Their interactive nature allows patients to take a more active role in managing their health, with advice tailored to their specific needs.
However, all of this comes with significant risks, a key concern being the amplified spread of misinformation. If the data chatbots rely on is inaccurate or biased, or if the algorithm favours some information over other sources, the output may be incorrect and indeed dangerous.
To ensure this technology is used safely and effectively, data sources need careful selection, outputs must be monitored and validated on an ongoing basis, and users must be educated about chatbot limitations. It is critical that we remain vigilant and advocate for the safe and effective use of this technology.