What is the impact of sex and gender differences on Artificial Intelligence (AI) used in biomedicine and healthcare? How can AI biases threaten our health and exacerbate inequalities? According to Dr. Antonella Santuccione Chadha, founder of the Women’s Brain Project, the ambitious goals set by precision medicine will be achieved using the latest advances in AI to identify the role of inter-individual differences. Yet we need to consider, more than ever, that our decisions significantly impact the future treatment of patients and society. Despite scientific progress, most biomedical solutions do not account for the detection of bias, whether desirable or undesirable. Furthermore, most algorithms neglect sex and gender and their relevance to individual variation in health and illness. Failure to account for these differences could result in errors and discriminatory consequences (Cirillo et al., 2020).
A solution to this issue would be the “no-code movement”: a type of web development that allows non-programmers, or people with strong creative rather than technical skills, to build immersive online interactions, such as 2D (Web 2.0) and 3D (Web 3.0) applications, and to create software through a user-friendly graphical interface instead of writing code. The no-code movement rests on the fundamental belief that technology should enable and facilitate creation and understanding, not act as a barrier to entry. In this regard, user experience (UX) approaches can make it inclusive, adaptable, and customized to represent each sex and gender variation.
Historical sex and gender inequities and prejudices can infect health research and practice (Cirillo et al., 2020). Thus, developing AI precision systems in healthcare should enable the differentiation of vulnerability to disease and response to treatment among individuals while avoiding discriminatory biases. Furthermore, the no-code movement allows visualizations, logical statements, and dimensionality reduction techniques to be implemented to achieve interpretability (Cirillo et al., 2020). Therefore, developing and applying fair approaches is critical for building unbiased and interpretable models for everyone.
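As a toy illustration of what such a fairness check can look like (this is a sketch, not a method from the cited work), the snippet below tests for a demographic parity gap: whether a model's rate of positive predictions differs across sex groups. The predictions, group labels, and tolerance are all hypothetical.

```python
# Toy demographic-parity check: compare the rate of positive model
# predictions across sex groups. All data here are hypothetical.

def positive_rate(predictions, groups, target_group):
    """Share of positive (1) predictions within one group."""
    in_group = [p for p, g in zip(predictions, groups) if g == target_group]
    return sum(in_group) / len(in_group)

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any two groups."""
    rates = {g: positive_rate(predictions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs (1 = recommended for treatment) and sex labels.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["F", "F", "F", "F", "F", "M", "M", "M", "M", "M"]

gap = demographic_parity_gap(preds, groups)
print(f"Parity gap: {gap:.2f}")  # flag the model if the gap exceeds a tolerance
```

A no-code platform could surface exactly this kind of check as a visual dashboard widget, so that non-programmers can inspect group-level disparities without writing the logic themselves.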
The no-code approach also saves money, since it requires less upkeep and a smaller upfront investment. Funds previously spent on coding could instead support well-designed research that includes all sexes and genders, since such trials are essential to scientific advancement (Rich-Edwards et al., 2018).
Furthermore, the no-code approach allows for decentralized autonomy and innovation. By decentralizing development, that is, by removing the barriers that prevent healthcare users and professionals from experimenting and testing, no-code platforms encourage autonomy and innovation. They also enable systems to be known inside and out, thus avoiding the “black box” effect. Indeed, the EU General Data Protection Regulation (GDPR, Regulation 2016/679) establishes a “right to an explanation” of the output of an algorithm. In traditional development, developers create a website, app, or other tool and then hand it over to the third parties that requested it. No matter how much a development team explains about the tool, there will always be aspects of it that the receiver is less familiar with, creating digital literacy gaps. With no-code, health professionals and patients will no longer be held back by such gaps.

Only by incorporating key ethical considerations at every stage of technological development can we ensure that these systems maximize the wellbeing and health of the population. That is what we want AI to be: human, trustworthy, explainable, and inclusive, leaving no one behind.
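To make the “right to an explanation” concrete, a transparent model can report how each input contributed to its output. The sketch below uses a hypothetical linear risk score with illustrative weights (not a real clinical score) to produce a human-readable explanation:

```python
# Hypothetical linear risk score whose output can be decomposed into
# per-feature contributions. Feature names and weights are illustrative only.

WEIGHTS = {"age": 0.03, "systolic_bp": 0.01, "smoker": 0.5}

def risk_score(patient):
    """Weighted sum of the patient's features."""
    return sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)

def explain(patient):
    """List each feature's contribution to the score, largest first."""
    contributions = {k: WEIGHTS[k] * patient[k] for k in WEIGHTS}
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return [f"{k}: {v:+.2f}" for k, v in ranked]

patient = {"age": 60, "systolic_bp": 140, "smoker": 1}
print(f"score = {risk_score(patient):.2f}")
for line in explain(patient):
    print(line)
```

Because every contribution is visible, a patient or clinician can see why the score came out as it did; this is the kind of transparency a black-box model cannot offer and that the GDPR's explanation requirement points toward.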