“The need for regulation is of course especially high in a high-risk area such as healthcare, where developing trusted AI technology promises many benefits. These include addressing a lack of access to healthcare facilities and a shortage of skilled healthcare practitioners, and it could lead to great advances in healthcare through the prediction, prevention, diagnosis, and treatment of disease,” said Dr Dusty-Lee Donnelly, a law lecturer at the University of KwaZulu-Natal (UKZN).
“But there are barriers. Technical, regulatory, and ethical concerns, and a lack of access to robust open data sets, are holding AI back, particularly in Africa,” Dr Donnelly explained during a webinar on regulating AI in healthcare in SA, hosted by the UKZN School of Law.
CURRENT SA POLICY ON AI
“What we do have at this stage are guiding normative principles or policy statements that are largely convergent, and many different versions have been issued by international organisations and corporates. They encapsulate values that are well set out in the Organisation for Economic Co-operation and Development’s (OECD’s) five principles on AI, which were published in 2019.”
- Beneficence: develop AI to benefit people and the planet
- Respect for human rights and freedoms: appropriate safeguards are in place to enable human intervention where necessary; particularly important in the healthcare context
- Transparency and fairness: primarily an issue of making responsible disclosures to patients and research participants in a healthcare setting about how the technology will function, in a way that can be understood by them and challenged by them when it operates unfairly
- Security: develop technology that can function in a robust, secure, and safe way. In the medical context it’s important to understand that this applies throughout the lifecycle of the product and requires that potential risks are continually assessed and managed
- Accountability: the organisations and individuals developing AI systems must ensure that they operate in accordance with all of these principles.
The problem with regulation of AI of any kind, but particularly in healthcare, is the lack of clear definitions and understanding of what AI is. According to SAHPRA and the Medicines and Related Substances Act 101 of 1965, ‘medical device’ includes any ‘machine’ and ‘software’ intended by the manufacturer for use in the ‘diagnosis, treatment, monitoring or alleviating’ of any disease or injury, and the ‘prevention’ of any disease.

The problem, explained Dr Donnelly, is that AI systems are adaptive. “New machine learning (ML) techniques enable AI to complete tasks in a way that mimics human intelligence, because the machine can move beyond a coded set of instructions, and it can adapt and improve as it learns from the data. But what that also means is that this deep learning by the machine, powered by adaptive algorithms, makes how the machine will respond to and interpret data no longer predetermined or entirely predictable,” she said.
“So for example, if we look at robots as the future in medicine, you will distinguish between deterministic robots that can act autonomously but only in accordance with pre-programmed instructions, and robots that have some degree of cognition, which, powered by adaptive algorithms, can respond adaptively to their environment.”
LONG AND SLOW ROAD AHEAD
With the need for ethical guidelines for practitioners around telemedicine, the challenge of determining what constitutes ‘informed consent’ from patients when it comes to technology they don’t understand, questions of legal liability when a machine makes a mistake, and the processing of health data in the context of the Protection of Personal Information (POPI) Act, there’s an arduous road ahead for policy makers. Compounding matters, once regulations are in place, the speed at which technology advances makes it difficult to keep them up to date and relevant to the AI products and programmes available.