Ethical Challenges of AI Integration into Biology

Artificial Intelligence and Bioscience | Rahma Iftikhar

This article examines the intricate ethical challenges surrounding AI decision-making in healthcare, from addressing biases to navigating the absence of emotional intelligence in AI systems. We delve into the implications of AI's limitations in abductive reasoning compared to human cognition and explore the broader societal impacts, including job displacement and ethical dilemmas in AI-driven industries. Through a comprehensive analysis of current research and ethical frameworks, this article provides valuable insights into the ethical dimensions of AI in healthcare and advocates for the responsible development and implementation of AI to uphold patient well-being and societal values.

Ethical dimensions of artificial intelligence (AI) in biology and healthcare are intricate and multifaceted, spanning issues of fairness, emotional involvement, decision-making capabilities, job displacement, and the very essence of humanity. AI's lack of genuine emotions raises questions about its suitability in roles involving vulnerable individuals [1]. Additionally, AI's inability to engage in abductive reasoning may lead to hasty and potentially inappropriate decisions, particularly in healthcare [2]. Beyond that, the pervasive influence of AI extends across sectors from agriculture to the military, posing ethical questions about job displacement, data sovereignty, and the potential for harm. AI's integration into everyday life also challenges human self-perception and raises existential inquiries about the nature of humanity in the face of increasingly autonomous and capable AI systems [1]. These issues prompt us to consider whether AI should be integrated into our everyday lives, or whether we first need more reflection and policymaking to ensure its appropriate and responsible use.

Ensuring ethical and unbiased decision-making is of utmost importance within the healthcare sector. However, fairness is a subjective ethical concept that is only vaguely defined even within human society and often causes controversy [3]. Embedding such a concept into AI-based decision-making is therefore nearly impossible [3]. This is a crucial issue in academic and health-related decision-making, as there is no guarantee that ethnicity, gender, and disability biases will not unfairly influence algorithmic results. For instance, people using assistive technologies are disadvantaged in online skills testing because the varied forms and degrees of disability make it challenging to gather a balanced set of training data [4]. Algorithmic biases may be corrected by ensuring that the datasets used to develop AI tools are diverse and inclusive; however, achieving this is complicated in both practical and ethical terms [5]. Many organisations also lack the data infrastructure required to train algorithms optimally for their population and to interrogate models for bias [6]. Ethical principles such as respect for autonomy, non-maleficence, beneficence, and justice may therefore be better upheld by medical practitioners than by AI [3].
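To make "interrogating a model for bias" concrete, the minimal Python sketch below compares a model's false-negative rates across demographic subgroups, a common first step in a fairness audit. The records, the subgroup labels, and the predict function are entirely hypothetical illustrations introduced here, not part of any system cited in this article.

```python
# Minimal fairness-audit sketch: compare a model's error rates across
# demographic subgroups. The model, data, and group labels are
# hypothetical illustrations, not a real clinical system.
from collections import defaultdict

# Hypothetical records: (features, true_label, group). In practice these
# would come from a held-out dataset with protected attributes recorded.
records = [
    ({"age": 64, "marker": 1.8}, 1, "group_a"),
    ({"age": 58, "marker": 1.1}, 1, "group_b"),
    ({"age": 41, "marker": 0.4}, 0, "group_a"),
    ({"age": 47, "marker": 0.9}, 0, "group_b"),
    ({"age": 70, "marker": 1.6}, 1, "group_b"),
    ({"age": 66, "marker": 2.0}, 1, "group_a"),
]

def predict(features):
    """Stand-in for a trained model: flags patients with a high marker."""
    return 1 if features["marker"] > 1.5 else 0

# Tally false negatives (missed positive cases) per subgroup.
positives = defaultdict(int)
misses = defaultdict(int)
for features, label, group in records:
    if label == 1:
        positives[group] += 1
        if predict(features) == 0:
            misses[group] += 1

for group in sorted(positives):
    fnr = misses[group] / positives[group]
    print(f"{group}: false-negative rate = {fnr:.2f}")
```

In this toy run the model misses half of the positive cases in one group and none in the other; a disparity of exactly this kind is what a bias audit is meant to surface before deployment.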

As we delve into the complex topic of fairness in AI, it becomes evident that ethical decision-making often benefits from a certain level of emotional involvement [7]. Since current AI lacks genuine emotion, it may not be in our best interest to let it replace humans in activities that involve assisting vulnerable people in childcare, healthcare, and senior care settings [1]. For example, unemotional AI caregivers may deprive young children of essential emotional experiences during development [1]. Moreover, current AI-based systems can express simulated emotions without experiencing them [8], [9]. This raises issues such as the permissibility of deception, which may have serious unintended consequences. Even when robots serve as companions for people who require care, their display of emotion must be not only believable but also acceptable [10]. For instance, an expression of anger from a robot after a young child pushes it around may scare the child [10]. Even where emotional display is needed, it may foster attachments that disadvantage humans, as seen in war zones where US soldiers risked their lives for their PackBots [10]. Making robots genuinely experience emotions would also imply a higher level of intelligence, raising debates about granting robots rights. This, however, can be avoided by building robots with enough capability to fulfil our needs while avoiding qualities that would make them deserve rights [10].

The lack of emotions is not the only difference between artificial and natural intelligence. Despite successfully responding to environmental changes, AI currently lacks the capacity for abductive inference that the human brain possesses [2]. For example, a human can recognise other possible reasons why a baby is crying instead of instantly assuming that the baby is in pain [1]. Because AI is incapable of abductive reasoning, it may make quick but inappropriate decisions, especially in healthcare, where physiological parameters alone cannot reveal a patient's decision-making processes [1].

Moreover, AI's capabilities transcend mere decision-making: AI-driven robots are used in agriculture, healthcare, transportation, and the military, replacing humans in many jobs [10]. This may cause a shortage of middle- and lower-income jobs, increasing inequality in society alongside other ethical disadvantages [10]. Although agricultural robots work faster, more cheaply, and more efficiently to produce food for a growing world population, their use raises ethical discussions about data sovereignty, farmers' loss of control over their farms, pollution, and harm to the natural environment [10]. Using AI in healthcare likewise enables more effective services, including accurate diagnosis and treatment, and reduces the immense work pressure medical workers face. However, AI would also affect doctor-patient relationships and lead to further dehumanisation of healthcare [5]. Moreover, AI-based systems can form lethal autonomous weapons possessing physical platforms, perception, navigation, motor control, tactical decision-making, mapping, and long-term planning [11]. While the superior effectiveness and selectivity of autonomous weapons could minimise civilian casualties by targeting only combatants, we must consider the utterly disadvantageous position this technology's agility and lethality would place humans in, increasing paranoia in everyday life [11]. Automating our tasks with AI will also inevitably erode human skills and increase our dependence on technology [10]. In addition, when AI makes mistakes, it is often impossible to investigate why those mistakes occurred or how to rectify them, especially in sensitive fields like healthcare, where errors can have immediate and tragic impacts on lives; on this view, refraining from using medical AI could save more human lives [10]. This also raises questions about who is liable and responsible when mistakes occur [10].

Alongside increasing paranoia about the loss of jobs, AI may affect human self-perception, making us question what it means to be human and in what sense humans are distinct from AI systems [1]. The continued development of robots with human characteristics, despite increasing trust and acceptance in some cases, may leave humans feeling alienated and uncomfortable as distinguishing between robots and humans becomes challenging in everyday life [10]. Furthermore, as robots become more autonomous, their capacity to make independent decisions grows [10]. Robot and human agency thus become hard to distinguish, raising further doubts about whether robots should be regarded as moral agents [10]. The progression of AI also equips these systems with long-term planning, autonomous behaviour, and the capacity to recognise the relevance of achieving a specific goal, further increasing psychological uncertainty and unease [1].

Although well-regulated AI technologies can provide monumental benefits to society, ethical controversies surround their applications, particularly over whether people would accept AI as part of the community. These challenges include issues of fairness, accountability, emotional engagement, and societal impact. There is debate over whether, in creating AI, humans are showcasing their own skills and abilities, or merely training systems for efficiency while risking their own unemployment. The future design of AI is unknown; its future viability, however, is expected to depend not just on the increasing use of hybrid or synthetic data but also on the ability to transform difficult tasks into complex ones [12]. As a result, the problem ahead will not be so much digital innovation as digital regulation and governance, especially concerning AI [12]. Whether AI-based technologies can be integrated into the community depends crucially on how much people trust AI to be part of society [13]. Ultimately, it is our collective responsibility to steer AI development in a direction that aligns with our shared values and ensures the well-being of all individuals and communities.

[1] M. Farisco, K. Evers, and A. Salles, “Towards Establishing Criteria for the Ethical Analysis of Artificial Intelligence,” Science and Engineering Ethics, vol. 26, no. 5, pp. 2413–2425, Jul. 2020, doi: https://doi.org/10.1007/s11948-020-00238-w.

[2] K. Friston, “The free-energy principle: a unified brain theory?,” Nature Reviews Neuroscience, vol. 11, no. 2, pp. 127–138, Jan. 2010, doi: https://doi.org/10.1038/nrn2787.

[3] A. Vellido, “Societal Issues Concerning the Application of Artificial Intelligence in Medicine,” Kidney Diseases, vol. 5, no. 1, pp. 11–17, Sep. 2018, doi: https://doi.org/10.1159/000492428.

[4] S. Trewin, “AI Fairness for People with Disabilities: Point of View,” arXiv (Cornell University), Nov. 2018, doi: https://doi.org/10.48550/arxiv.1811.10670.

[5] A. Kerasidou, “Ethics of artificial intelligence in global health: Explainability, algorithmic bias and trust,” Journal of Oral Biology and Craniofacial Research, vol. 11, no. 4, pp. 612–614, Oct. 2021, doi: https://doi.org/10.1016/j.jobcr.2021.09.004.

[6] T. Panch, H. Mattie, and L. A. Celi, “The ‘inconvenient truth’ about AI in healthcare,” npj Digital Medicine, vol. 2, no. 1, Aug. 2019, doi: https://doi.org/10.1038/s41746-019-0155-4.

[7] D. Antos and A. Pfeffer, “Using Emotions to Enhance Decision-Making,” in Proc. 22nd Int. Joint Conf. on Artificial Intelligence (IJCAI), 2011. [Online]. Available: https://www.ijcai.org/Proceedings/11/Papers/016.pdf (accessed May 24, 2023).

[8] L. Hall, “How We Feel About Robots That Feel,” MIT Technology Review, Oct. 24, 2017. [Online]. Available: https://www.technologyreview.com/s/609074/how-we-feel-about-robots-that-feel/ (accessed Jan. 27, 2020).

[9] S. Lavelle, “The Machine with a Human Face: From Artificial Intelligence to Artificial Sentience,” Advanced Information Systems Engineering Workshops, vol. 382, pp. 63–75, Apr. 2020, doi: https://doi.org/10.1007/978-3-030-49165-9_6.

[10] M. Ryan, S. van der Burg, and M.-J. Bogaardt, “Identifying key ethical debates for autonomous robots in agri-food: a research agenda,” AI and Ethics, Oct. 2021, doi: https://doi.org/10.1007/s43681-021-00104-w.

[11] S. Russell, “Robotics: Ethics of artificial intelligence,” Nature, vol. 521, no. 7553, pp. 415–418, May 2015, doi: https://doi.org/10.1038/521415a.

[12] L. Floridi, “What the Near Future of Artificial Intelligence Could Be,” Philosophy & Technology, vol. 32, no. 1, pp. 1–15, Mar. 2019, doi: https://doi.org/10.1007/s13347-019-00345-y.

[13] L. Yu and Y. Li, “Artificial Intelligence Decision-Making Transparency and Employees’ Trust: The Parallel Multiple Mediating Effect of Effectiveness and Discomfort,” Behavioral Sciences, vol. 12, no. 5, p. 127, Apr. 2022, doi: https://doi.org/10.3390/bs12050127.

Rahma is a Technology Graduate at Digital Services UoA and a Teaching Assistant at the University of Auckland. She is completing her Bachelor of Science in Biological Sciences and Computer Science alongside her work, and currently serves as Head of Outreach for the Scientific. Due to her diverse interests, Rahma explores various fields within her majors and studies the connection between life sciences and technology.

Rahma Iftikhar - BSc, Biological Sciences, Computer Science