Although it is widely assumed that Artificial Intelligence (AI) will revolutionise healthcare in the near future, considerable progress is still needed to earn the trust of healthcare professionals and patients. Improving AI transparency is a promising avenue for addressing these trust issues.
Writing in the journal Applied Sciences, in an article entitled ‘Transparency of Artificial Intelligence in Healthcare: Insights from Professionals in Computing and Healthcare Worldwide’, Insight collaborator Claudia Mazo, with her co-author Jose Bernal, points to equity as a key trust vector in healthcare provision. ‘Population representativeness within the training dataset determines the generalisation power of an AI system,’ the paper reads. ‘If an AI method employs data gathered in an inequitable manner, the model and its decisions will be biased and have the potential to harm misrepresented groups. Thus, data equity focuses on acquiring, processing, analysing, and disseminating data from an equitable viewpoint, as well as recognising that biased data and models can perpetuate preconceptions, worsen racial bias, or hinder social justice.’
The authors continue: ‘Ensuring equity guarantees fair and safe outcomes regardless of a patient’s race, colour, sex, language, religion, political opinion, national origin, or political affiliation. A plethora of AI-based methods use large amounts of (historical) data to learn outputs for given inputs. If such data are unrepresentative, inadequate, or present faulty information—e.g., reflecting past disparities—then AI models end up biased. Even if developers have no intention of discriminating against vulnerable or marginalised populations, this bias, when left unchecked, can result in judgments that have a cumulative, disparate impact. According to our survey, fortunately, a sizeable portion of healthcare and computing professionals that took part in it are aware of this situation. Nonetheless, a lack of information prevails: about a third of survey participants did not have sufficient data to judge aspects of equity.’
The paper goes on to make the following recommendations regarding equity in AI for healthcare:
Release information to the public. AI system developers must release demographic information on training and testing population characteristics, together with data acquisition and processing protocols. This information makes it possible to judge whether developers paid attention to equity.
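As a minimal sketch of what such a release might look like in practice, the snippet below computes a demographic breakdown of a training cohort that could be published alongside a model. The column names ("sex", "ethnicity", "site") and the toy cohort are hypothetical stand-ins for whatever attributes a real dataset records; a real disclosure would also describe acquisition sites, scanners, and processing protocols.

```python
# Sketch: producing a publishable demographic summary of a training cohort.
# Attribute names and the toy data below are illustrative assumptions.
import pandas as pd

def demographic_summary(df: pd.DataFrame,
                        attributes=("sex", "ethnicity", "site")) -> pd.DataFrame:
    """Return counts and percentages per demographic attribute,
    the kind of table a developer could release with a model."""
    rows = []
    for attr in attributes:
        counts = df[attr].value_counts(dropna=False)
        for value, n in counts.items():
            rows.append({
                "attribute": attr,
                "value": value,
                "count": int(n),
                "percent": round(100.0 * n / len(df), 1),
            })
    return pd.DataFrame(rows)

# Toy cohort purely for illustration.
cohort = pd.DataFrame({
    "sex": ["F", "M", "F", "F", "M"],
    "ethnicity": ["A", "B", "A", "C", "A"],
    "site": ["hospital_1", "hospital_1", "hospital_2", "hospital_2", "hospital_2"],
})
print(demographic_summary(cohort))
```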
Consistency in heterogeneous datasets. Research conducted around the world by multiple institutions has demonstrated the effectiveness of AI in relatively small cohorts of centralised data. Nonetheless, two key problems regarding validation and equity remain. First, AI systems are primarily trained on small and curated datasets, and hence, it is possible that they are unable to generalise, i.e., process real-life medical data in the wild. Second, gathering enormous amounts of sufficiently heterogeneous and correctly annotated data from a single institution is challenging and costly since annotation is time-consuming and laborious, and heterogeneity is evidently limited by population, pathologies, raters, scanners, and imaging protocols. Moreover, sharing data in a centralised configuration requires addressing legal, privacy, and technical considerations to comply with good clinical practice standards and general data protection regulations and prevent patient information from leaking. The use of federated learning, for example, can help overcome data-sharing limitations and enable training and validating AI systems on heterogeneous datasets of unprecedented size.
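To illustrate the data flow that makes federated learning attractive here, the sketch below implements a bare-bones federated averaging loop for a simple linear model: each hypothetical hospital trains on its own locally held data, and only model weights, never raw patient records, are sent to the server for aggregation. This is an assumption-laden toy, not the method described in the paper; production systems add secure aggregation, privacy safeguards, and far richer models.

```python
# Minimal federated averaging (FedAvg) sketch with a least-squares model.
# Sites, cohort sizes, and the linear model are illustrative assumptions.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's training round on its private data (least-squares loss)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(site_weights, site_sizes):
    """Server-side step: average site models weighted by sample count."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three hypothetical hospitals with differently sized, locally held cohorts.
sites = []
for n in (50, 120, 80):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    sites.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds: only weights travel, not data
    updates = [local_update(global_w, X, y) for X, y in sites]
    global_w = federated_average(updates, [len(y) for _, y in sites])

print("federated estimate:", global_w)  # approaches true_w without pooling raw data
```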
To read more, visit https://www.mdpi.com/2076-3417/12/20/10228