Redefining Empathy: The Ethics of Artificial Intelligence in Psychiatric Treatments
- Miranda Santiago

- Jul 15, 2025
- 34 min read
Updated: Aug 7, 2025

As a result of common misconceptions and misinformation surrounding mental health, psychiatric care is often stigmatized. Notions invalidating its legitimacy have been perpetuated throughout history and, more recently, by social media and the healthcare system itself. It is often difficult for a person to receive adequate psychiatric care, as many healthcare systems lack the supplies and practitioners needed to provide this support. During the COVID-19 pandemic, patients were advised against leaving their homes to prevent the spread of the virus; therefore, many turned to virtual healthcare options such as telehealth. This development proved that there were alternative ways of receiving medical advice, namely using technology to connect the healthcare provider to the patient.
Moreover, artificial intelligence (AI) has shown great promise as an innovative way to provide mental health care services on a grander scale. AI applications are being designed to diagnose and treat complex mental health conditions much as a human professional would. However, these applications have sparked ethical controversies over how society values patient-physician relationships, considering that technology has the potential to either replace or strengthen the human empathy at the heart of those relationships. Within psychiatry, a branch of healthcare that relies heavily on conversational therapy between the patient and healthcare provider, how should AI be implemented to maximize its benefit to society? Utilizing the ethical frameworks of deontology and utilitarianism, this paper will analyze the harms and benefits of AI in psychiatric care.
As technology becomes increasingly relevant to society, revolutionizing our way of life through newfound convenience and efficiency, it is essential to consider its drawbacks just as one would its benefits. This paper will explore the ethical ramifications of implementing conversational artificial intelligence (CAI) into psychiatry and its effects on healthcare and society. Furthermore, I plan to explore how this technology will inevitably augment or alter how patients regard their practitioners, especially regarding the differences between their responses to human empathy and to empathy demonstrated by technology while receiving care. By analyzing recent examples in conjunction with current reports and statistics, I aim to explore how CAI will affect current and future practices in psychiatry and highlight several perspectives on its integration in order to form a stance on the ethicality of this use of technology.
Table of Contents
Abstract
Introduction
Preface
Brief History
Inaccessibility of Current Psychiatric Treatment
AI in Psychiatry
What is Artificial Intelligence?
AI Integration: Diagnosis
AI Integration: Treatment
Woebot
Assistance in Facilities
Regulation
Ethical Analysis
Ethical Question
Stakeholders
The Duty of AI
Human vs. AI Responsibility and Accountability
Data Collection and Privacy
Issues of Justice
Effects on the Patient-Physician Relationship
Trust and Empathy
The Dangers of Trusting AI?
Conclusion
Proposed Future Applications
Final Thoughts
Introduction
Human health and wellness stem from an individual's physical and mental state, and maintaining both is essential to one's quality of life. With a healthy outlook, one can better maintain one's physical wellness; conversely, one's mental well-being can improve with proper physical care. While many may be quick to focus on their bodily health, psychiatry is just as key to ensuring a patient's overall quality of life. Psychiatry is the branch of medicine that focuses on diagnosing and treating mental, emotional, and behavioral disorders. This branch has a unique set of complexities, as a psychiatrist must receive training beyond medical school to understand the optimal way to deduce the condition of another individual's mind.
Alongside the process of diagnosing mental illness, the primary forms of treatment within psychiatry that I will discuss in this paper are conversational therapy and specialized treatment facilities, along with the ability to diagnose and treat these illnesses in the long term. Artificial intelligence has a recognized potential to replace the humans administering these forms of psychiatric treatment, as this technology can simulate human conversation while remaining easily accessible.
Brief History
The first instances of psychiatric treatment can be traced back to the 17th century through institutions known as mental asylums. While these facilities were advertised as a way for people to receive care for their mental illness, they were often used to hide patients away from society. ("A History of Mental Illness") In early discussions of mental health, many found it jarring to view the mind as another critical component of one's well-being. Because symptoms do not manifest physically or quantifiably, people became afraid of what they could not comprehend for themselves. As a result, many shunned their confusion instead of addressing it head-on, removing these patients from their respective communities. This failure to grasp the harms of mental illness manifested in a lack of attention or urgency when approaching these issues. Today, however, technology continues to evolve alongside society through various innovations, from telecommunication to social media and artificial intelligence.
Inaccessibility of Current Psychiatric Treatment

Following the evolution of psychiatric treatments, it is necessary to consider how new technology can be utilized to address the needs of the present. Mental health has historically been met with widespread stigmatization and misunderstanding, and these misconceptions have deep historical roots that seep into the way mental health is regarded in the present. I hope to expand on the various factors that prevent patients from receiving adequate care for mental illness in order to preface how artificial intelligence fits into this picture, focusing in turn on physician shortages, unequal distribution of services, socioeconomic disparities, and underlying systemic biases regarding race. By understanding the factors that render psychiatric treatment accessible only to select demographics, these barriers can be weighed against what artificial intelligence might realistically address.
Firstly, the shortage of practitioners stems in part from declining interest in, or hesitation to enter, the field of psychiatry due to factors such as physician burnout. A population analysis that tracked the supply of mental health care providers found that between 2003 and 2013, the median number of psychiatrists per 100,000 U.S. citizens declined by 10.2%. In addition, in 2011, 55% of psychiatrists were 55 or older, suggesting that a significant portion of the existing psychiatrist population is approaching retirement age. The study concludes by estimating a range from "a shortage of 17,705 psychiatrists to a surplus of 3,428." (Satiani) In other words, projected models predict that the population of psychiatrists will continue to decline or remain limited.
A 1999 study questioned 223 first-year medical students from Southwestern medical schools within the first two weeks of their official medical training to assess the careers they were interested in, the four categories being internal medicine, surgery, psychiatry, and pediatrics. The results indicated that one in 221 respondents signified that psychiatry was their career of choice, and psychiatry ranked "poorest by any ordinal measure." Furthermore, the study attributed these biases against psychiatry to students being "less inclined to positively view careers in psychiatry." ("Decline of U.S. Medical Student Career Choice of Psychiatry") When students considered these questions, they were told to consider whether they would genuinely enjoy the job, want to be directly involved with patients, or build on their scientific foundations in research. These results suggest that the students knew little of the benefits of psychiatry, having heard only of the financial burdens and severe physician burnout that come with working in the field.
Secondly, a prominent barrier is the unequal distribution of practitioners, specifically in rural areas. In 2020, the Office for Disparities Research and Workforce Diversity at the National Institute of Mental Health published a call to action to address the disparities in mental health care in rural areas. About 20% of the U.S. population resides in rural areas, and within these populations, 6.5 million individuals have a mental illness. Despite these figures, the study found that 65% of nonmetropolitan counties do not have a psychiatrist and that over 60% of rural residents live in areas with mental health provider shortages. (Morales)
Furthermore, specific characteristics of rural culture, such as religion and poverty, play a substantial role in whether patients take the first step of seeking help. Patients are often dissuaded from continuing to search for proper psychiatric services by the pressure and isolation of these communities. The price of a single therapy session in the United States ranges from $65 to over $250. Even with the provisions of the Affordable Care Act, insured patients often face co-pays of $50 or more per session. ("How Much Does Therapy Cost?") This pricing for an hour-long conversation is a clear example of how psychiatric treatment is only accessible to those with a certain degree of financial stability.
Regarding socioeconomic disparities, a 2005 study sought to explore the underlying correlation between socioeconomic status and mental illness. The study represented the population of the Commonwealth of Massachusetts that had undergone "acute psychiatric hospitalization" from 1994 to 2004, drawing on 750,000 records of discharged patients. (American Journal of Orthopsychiatry) It concluded that, within the state of Massachusetts, lower socioeconomic status was directly correlated with a higher risk of "mental disability and psychiatric hospitalization." This article demonstrates not only the vulnerability of this specific population but also the external factors that contribute to the inaccessibility of treatment such as talk therapy. While government-funded programs such as Medicare and Medicaid currently offer mental health services, many low-income patients cannot afford to take time off work to attend therapy, as doing so would mean forgoing wages. According to the study, these external factors included "economic stress [and] lack of family integration," referring to younger, less financially established families entering the workforce. (Hudson)
Lastly, the inaccessibility of psychiatric care also affects many communities of color. Multiple studies indicate disparities in the quality of mental health care along racial lines. Specifically, a survey conducted by the U.S. Department of Health and Human Services in 1991 concluded that ethnic minorities not only have less access to mental health services but are also "more likely to receive poor quality when treated." The study found that minority patients were less likely to reach out for mental health treatment and that African American patients were more likely than White patients to end treatment prematurely. (Barer) While there has been no current research quantifying practitioner bias or stereotyping, this article highlighted the difference in the quality of care patients receive based on racial factors.
These factors regarding the current state of psychiatric health care reinforce the central idea that psychiatric treatment is becoming less and less accessible to the general population, and to some minority populations at significantly disproportionate rates. However, there is great potential to address these systemic inequities by using CAI applications to alleviate the pressure on the limited population of practitioners while offering a more comfortable option for those in minority communities who are hesitant to seek help.
AI in Psychiatry
What is Artificial Intelligence?
With the increasing prevalence of practitioner shortages and of disparities facing minority communities seeking psychiatric treatment, many have begun to turn to technology to help mitigate these healthcare issues, especially to the potential of artificial intelligence. Artificial intelligence (AI) is a technology that processes and interprets external data, learning from those findings to carry out specific tasks. (Tai) Far from being limited to healthcare, these technologies have gradually been integrated into daily life: people experimenting with ChatGPT, students using Grammarly's AI features to polish their work, and teenagers frequenting Snapchat's My AI or Instagram's Meta AI. In the context of the United States healthcare system, many hope to harness AI's ability to collect and synthesize data. It can accomplish tasks usually delegated to practitioners, such as interpreting cardiac rhythms or radiographs and, more recently, assisting in surgeries alongside human professionals.
Conversational artificial intelligence (CAI) refers specifically to AI's ability to exchange dialogue between humans and machines through written, spoken, or multimodal natural language. CAI technology generally comprises natural language processing and machine learning programs. Natural language processing programs manipulate natural language or text, utilizing machine learning to understand and adapt to human language. (Tai) CAI is a potential solution to the shortage of resources and professionals in psychiatric healthcare because it could stand in for psychiatrists when conversing with patients and gathering data. Because psychiatry relies heavily on the qualitative data gathered in psychiatrist-patient conversations, CAI applications are especially well suited to this branch of treatment, though CAI is by no means exclusive to psychiatric practice.
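To make the "natural language processing plus machine learning" description above concrete, the toy sketch below shows the basic loop that many text-based conversational agents share: classify the intent of a user's message, then select a response. This is my own illustrative assumption about the general structure, not the design of Woebot or any application discussed in this paper, and real CAI systems are vastly more sophisticated.

```python
# Minimal illustrative sketch of a text-based conversational agent:
# an intent classifier (TF-IDF + logistic regression) paired with
# scripted responses. Purely hypothetical; not any commercial system's design.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labeled training set: user messages mapped to intents.
training_messages = [
    "I can't sleep and I feel anxious all the time",
    "everything feels hopeless lately",
    "I had a pretty good day today",
    "what can I do when I start panicking",
]
training_intents = ["anxiety", "low_mood", "positive", "coping_request"]

# Natural language processing (TF-IDF features) plus machine learning
# (logistic regression) form the core of the classifier.
intent_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
intent_model.fit(training_messages, training_intents)

# Scripted responses keyed by predicted intent.
responses = {
    "anxiety": "That sounds stressful. Would you like to try a breathing exercise?",
    "low_mood": "I'm sorry you're feeling this way. Can you tell me more?",
    "positive": "I'm glad to hear that. What went well today?",
    "coping_request": "One option is grounding: name five things you can see right now.",
}

def reply(user_message: str) -> str:
    """Classify the message's intent and return a scripted reply."""
    intent = intent_model.predict([user_message])[0]
    return responses[intent]

print(reply("I keep worrying and my heart races at night"))
```

Everything such a system "knows" comes from its training examples, which is precisely why the concerns about data representation and bias discussed next matter so much.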
While defining these technologies' abilities is necessary for the context of this paper, it is equally essential to recognize the dangers of current AI technology, which are most acute where it sits at the forefront of interpersonal interactions. An essential consideration when implementing artificial intelligence in any branch of healthcare is how it can perpetuate bias through its programming. When an AI is trained on data that does not represent an entire population, it risks producing biased results that surface as prejudicial or stereotypical responses. These inherent errors are especially concerning if built into a system of standardized psychiatric treatment, because the sources of the mistakes, whether in data collection, representation, or misuse at the hands of the practitioner, remain unclear.
Another recent finding in artificial intelligence is the phenomenon of AI hallucinations. The term "AI hallucination" describes how AI can generate content that is not based on preexisting data but is instead created through the creative interpretations of machine learning algorithms. These spontaneous instances exemplify how artificial intelligence will not always produce accurate results, regardless of its programming. Because these conversational agents rely primarily on machine learning to adapt and grow as humans use them, hallucinations pose a grave threat: they are inherent to AI's generative qualities. Such hallucinations are not the product of faulty or biased coding; they appear because of inherent gaps in the program's interpretation. Regardless of programming, bias and hallucinations will virtually always be present in these AI applications, leading to issues such as misdiagnosis or the perpetuation of misconceptions throughout society.
AI Integration: Diagnosis
There are many possible areas for CAI implementation within psychiatry. This paper will focus on its use in diagnosis and treatment, specifically through conversational therapy sessions and in-patient treatment under the supervision of a human practitioner. Diagnosing a patient with mental illness requires great nuance, as symptoms are not always evident, and it is essential to recognize the gravity of CAI's potential in each of these aspects of psychiatric care. Conventionally, the process of diagnosing mental illness involves a series of tests designed to examine the patient's physical condition or genetics, followed by an examination of their behaviors. After ruling out any physical conditions, psychiatrists use a psychological evaluation to "[talk] about [a patient's] symptoms, thoughts, feelings, and behavior patterns." ("Mental Illness") From these deductions, the psychiatrist decides the direction of treatment.
Many see a potential to boost efficiency by allowing AI to handle preliminary interactions with a patient, including collecting data and, more notably, proposing a diagnosis of the patient's condition. After this initial exchange, the physician could interpret the AI's analysis of the patient and decide how to proceed.
In the context of psychiatric diagnosis, CAI in its current state can initiate conversations with a patient, for example by guiding an individual through an initial survey to identify areas of stress or concern, or by suggesting a diagnosis based on the patient's responses in conversation about their circumstances. These first steps are crucial to determining the course of action for an individual's road to recovery, and CAI can vastly improve this process by making diagnosis and prognosis more efficient and straightforward.
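As an illustration of what the "initial survey" step could look like in code, the sketch below scores a PHQ-9-style depression questionnaire (nine items rated 0 to 3, giving a total of 0 to 27) and maps the total to the standard severity bands. The PHQ-9 is a widely used screening instrument, but its appearance here is my own illustrative assumption rather than a tool described in this paper, and such a score would inform, not replace, a clinician's diagnosis.

```python
# Hypothetical sketch: scoring a PHQ-9-style intake questionnaire that a
# conversational agent might walk a patient through before clinician review.
# Each of the nine items is answered 0-3 ("not at all" to "nearly every day").

PHQ9_SEVERITY_BANDS = [
    (0, 4, "minimal"),
    (5, 9, "mild"),
    (10, 14, "moderate"),
    (15, 19, "moderately severe"),
    (20, 27, "severe"),
]

def score_phq9(answers: list[int]) -> tuple[int, str]:
    """Sum the nine item scores and return (total, severity label)."""
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("Expected nine answers, each between 0 and 3.")
    total = sum(answers)
    for low, high, label in PHQ9_SEVERITY_BANDS:
        if low <= total <= high:
            return total, label
    raise AssertionError("unreachable")

# Example intake: the result would be surfaced to a human practitioner,
# not acted on autonomously.
total, severity = score_phq9([2, 1, 3, 2, 1, 0, 2, 1, 0])
print(f"PHQ-9 total: {total} ({severity})")
```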
AI Integration: Treatment
Traditional treatment options for those diagnosed with mental illness almost always involve either conversation-based therapy or constant monitoring in specialized facilities. Often, a psychiatrist will prescribe psychotherapy, a type of treatment for patients "experiencing a wide array of mental health conditions and emotional challenges," to initiate the recovery process. (Bhatia) Psychotherapy can enable a patient to trace the root of their psychological issues and understand themselves better while alleviating their symptoms.
Based on the patient's progress during this initial period, the psychiatrist will determine whether the patient requires further treatment in a residential psychiatric facility, where they can be monitored around the clock by professionals. ("The Psychiatric Bed Crisis") This constant monitoring enables patients to receive highly effective treatment, as professionals can witness all aspects of their progress. Psychiatric beds capitalize on this individualized approach, requiring the direct supervision of a professional over the patient.

When doctors work in tandem with CAI technology, the CAI can interact with patients while the human psychiatrist oversees and interprets the diagnosis for the patients in question. Whether through regular conversation or periodic check-ins, patients would receive constant attention while undergoing treatment, and psychiatrists would retain the final say.
In recent years, a genre of free-to-download phone applications that use CAI to alleviate symptoms of mental illness has gained newfound relevance as society recovers from the isolation of the COVID-19 pandemic. Relevant programs include Woebot, Mentat AI, Replika, and many more. These apps originally rose to prominence as a way to help people in quarantine maintain a healthy headspace. This class of applications began with programmed spaces that guided the user through meditation or relaxation; following the success of telemedicine and the broader shift toward virtual care, new applications were developed that use CAI to administer care directly.
It is essential to consider how CAI technology can act as a preventative measure by supporting a patient in addressing their mental conditions before those illnesses develop into deeper concerns, for example for one's own safety or the safety of others. Putting CAI technology on the market enables vast numbers of people to access a baseline level of treatment. Many of the leading reasons people are admitted into specialized facilities stem from problems that could have been addressed through psychotherapy, a progression of gradual check-ins to work through internal problems regularly. (Ahad) However, many are unable to maintain this care. This use of CAI technology therefore presents itself as a convenient means of managing an individual's mental illness before it can develop into a condition that would warrant constant monitoring.
For patients who are still prescribed treatment in psychiatric facilities, many have gravitated toward using AI to assist the human professionals in these establishments. Care in these settings depends on constant patient observation, meaning practitioners are responsible for consistently collecting valuable data. In this context, AI would monitor or observe patients, either through CAI interactions or through passive data collection, and synthesize the results for human professionals to oversee.
This new dynamic alleviates much of the burden many practitioners feel over their assigned patients, increasing the number of services available. In these treatment facilities, practitioners experience heightened pressure from constantly observing their patients, and AI can assist them by acting as a second, reliable source of data collection.
Woebot
Using Woebot as an example of these CAI programs in practice, we can evaluate its efficacy and analyze the shortcomings and benefits of current CAI applications. Developed by Stanford psychologist Allison Darcy, Woebot utilizes conversational AI technology to deliver cognitive behavioral therapy through daily conversations and mood tracking. To obtain the full functionality of the application, a patient must visit their physician to receive a one-time code granting unlimited access to the technology.
A 2017 study explored the potential efficacy of a conversational agent, in this case Woebot, in administering self-help programs to college students, specifically to treat anxiety and depression. The 70 young adults in the study were sorted into two randomized groups: one received care from the Woebot application, and the other was referred to a National Institute of Mental Health ebook. While the results indicated significant benefits in the treatment of depression, the study also gathered extensive feedback pointing to the greatest limitation of the Woebot technology: lack of intimacy. A common theme in participants' feedback was confused or unexpected answers, coupled with technical glitches in general, including involuntary looping. However, these challenges should not overshadow the promising potential of CAI technology in mental health treatment.
To unpack the effects this use of CAI technology has on its users, it is necessary to know how these technologies are created in the first place. In the case of Woebot, Dr. Allison Darcy, a former Stanford University clinical research psychologist, developed the program with her team, suggesting that her professional experience guided its design (Harris, Forbes). Despite her expertise in this field, the programming was still notably flawed, as reflected in the responses to the study above.
Assistance in Facilities
Furthermore, it has recently been shown that AI can be applied alongside professionals in specialized psychiatric facilities. A 2020 service improvement project at the National Institute for Health Research Oxford Health Biomedical Research Centre sought to test the efficacy and safety of artificial intelligence in assisting nurse practitioners in psychiatric treatment facilities. Given the growing evidence of a correlation between sleep disruption and mental illness, it has become harder for practitioners to monitor their patients' well-being during night shifts without disturbing them. Sleep deprivation, for patients and professionals alike, is associated with an increased risk of aggression in psychiatric care units. (Barrera) These professionals therefore turned to this trial to see whether AI was safe to use in this context, as the technology promised to ease a significant emotional strain in their practice.
In this study, sensors were programmed to collect observations of a sample of patients from 9:00 p.m. to 9:00 a.m., replacing a practitioner who would otherwise have had to stay up all night to check on these patients. Using a new protocol built on the trust's preexisting observation policy, the study concluded that these sensors were as accurate as the observations that would have been carried out in person. (Barrera)
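To give a rough sense of how sensor-based overnight observations might be turned into something a practitioner reviews in the morning, the hedged sketch below flags readings that fall outside preset vital-sign ranges. It is a simplified illustration of the general workflow only; the thresholds, data structures, and logic are my own assumptions, not the actual protocol or software used in the Oxford study.

```python
# Hypothetical sketch: summarizing overnight sensor readings (e.g., pulse and
# breathing rate sampled contactlessly) into a morning report for nursing staff.
# The thresholds and data below are illustrative, not clinical guidance.
from dataclasses import dataclass

@dataclass
class Reading:
    time: str             # e.g., "02:00"
    pulse_bpm: int        # heart rate in beats per minute
    breaths_per_min: int  # respiratory rate

# Illustrative "normal" ranges used only to decide what to flag for review.
PULSE_RANGE = (50, 110)
BREATH_RANGE = (10, 22)

def overnight_summary(readings: list[Reading]) -> list[str]:
    """Return human-readable flags for readings outside the preset ranges."""
    flags = []
    for r in readings:
        if not (PULSE_RANGE[0] <= r.pulse_bpm <= PULSE_RANGE[1]):
            flags.append(f"{r.time}: pulse {r.pulse_bpm} bpm outside {PULSE_RANGE}")
        if not (BREATH_RANGE[0] <= r.breaths_per_min <= BREATH_RANGE[1]):
            flags.append(f"{r.time}: breathing {r.breaths_per_min}/min outside {BREATH_RANGE}")
    return flags or ["No readings flagged overnight."]

night = [Reading("23:00", 72, 14), Reading("02:00", 118, 16), Reading("05:00", 68, 12)]
for line in overnight_summary(night):
    print(line)  # a nurse reviews these flags rather than observing in person all night
```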
The response from practitioners was generally positive; they found the sensors a "fast and easy" means of collecting data on their patients while maintaining their own health and getting their sleep. However, many patients felt that the sensors defeated the purpose of their stay. A former patient claimed, "I hate them, what is the point of observing me…The slightest little noise would wake me up, and I do not get much privacy." (Barrera) This study demonstrates the conflicting perspectives on AI-assisted treatment: while it makes the recovery process much more convenient, the patient-physician relationship loses a significant element of connection.
Regulation

However, these cases, along with many other similar reports, raise the bigger question of how these programs are being developed in the first place. If a trained psychologist from an elite private institution encountered such shortcomings in developing these applications, what determines the most effective way of programming these tools for standardized use throughout healthcare?
With these applications available on the free-to-download market, greater attention has turned to how CAI, and artificially intelligent programs in general, are regulated by the Food and Drug Administration (FDA). The regulation of artificial intelligence, however, falls outside the scope of the FDA's traditional dominion over medical devices; the agency itself states that the "traditional paradigm of medical device regulation was not designed for adaptive [AI] and machine learning technologies." Because AI has only recently entered the market, the FDA proposes a different process for premarket review, one whose still-provisional definition is expected to change as AI technology advances. ("Artificial Intelligence and Machine Learning in Software as a Medical Device") Many policy proposals call for greater, stricter AI regulation as medical professionals navigate the new and early use of AI technology in their practices. A 2023 article contrasts general with precise regulation and finds that AI calls for precise regulation, developed within a specific framework to prevent future unethical use of the technology. Precise regulation would keep users safe while allowing them to explore their personal creativity fully. (Reddy)
However, while the use of AI in healthcare calls for stricter regulation in theory, there are as yet no structured models governing how these CAI applications are developed or brought to market. After reviewing the current state of psychiatric health care and how conversational artificial intelligence fits into this picture as a potential solution, it is necessary to consider the ethical ramifications of introducing this technology. Especially as this integration is already being realized through applications such as Woebot, it is more important now than ever to understand how to navigate this new use of CAI technology, weighing its potential benefits and harms to society.
Ethical Analysis
Ethical Question
Considering the factors that lead healthcare professionals and society towards a future reliant on innovation and technology, is using artificial intelligence in psychiatric care ethical?
Stakeholders
Before exploring the ethicality of utilizing AI in psychiatric treatment practices, it is essential to consider who will be affected by this integration. The main stakeholders that are bound to be affected include patients, healthcare professionals, AI developers, the government, and society.
Patients will be the stakeholders most directly affected by AI technology, as their care and, consequently, their health depend directly on the quality of these applications. Conversely, as healthcare professionals utilize these applications to assist in their work, it is just as important to understand AI's impact on their entire line of work. Moreover, a pressing consideration is how this technology is developed and regulated in healthcare. It is important to identify the parties responsible for producing this technology and bringing it to market: the companies that develop AI products such as Woebot for public consumption, and the government, specifically organizations such as the Food and Drug Administration, which could have a say in what devices are approved for public use. Lastly, yet most importantly, it is vital to consider society, and more specifically how society's views of AI, mental illness, and patient-physician relationships may change.
The Duty of AI
By using CAI programs to treat psychological conditions, these applications allow virtually anyone access to care, providing the greatest good for the greatest number of people. This new technology would massively broaden access for those with mental illness who are disadvantaged in seeking help, allowing treatment to be accessed anywhere at any time. However, it is just as necessary to factor in the societal shifts that would accompany standard implementation, which directly raises the question of quantity versus quality of care. While more people would gain access to care, the factors determining the overall quality of these newfound treatments remain opaque to the public. As of now, applications such as Woebot are accessible to virtually everyone, yet there need to be checks to ensure that the care they administer matches the care a human professional would have provided.
Utilizing the ethical framework of deontology, it is essential to assess whether CAI applications can assume the duties and obligations of a human practitioner. A significant issue with using AI in any system is the inability to assign human obligation to this technology in any full sense. When humans officially become professionals, they swear oaths and sign legally binding contracts to ensure that they will administer a certain quality of care and protect a patient's privacy. Beyond these checks and balances that hold practitioners accountable, many enter healthcare out of a personal or moral code, in other words, a need to care for and serve other human beings. Because of both systemic standards and individual beliefs, human physicians feel obligated to do their best for their patients.
However, conversational agents, to state the obvious, are not human, and this technology is incapable of these distinctly human commitments. They cannot understand the depth or significance of these oaths, nor do they have any semblance of personal desire; these programs are made to imitate human cognitive ability without ever being able to feel these emotions or obligations for themselves.
Many may view CAI's inability to feel personal desires or emotions as a benefit, since it means there is no conflict of interest in the treatment it provides. While this claim holds some truth, it is also essential to consider the benefits of human judgment. Human practitioners can draw on their own experience of changing social norms and cues to deduce a patient's condition as they interpret and respond to the nuances of human behavior, such as the subtleties of body language or the intonation of speech. Much of this depth of interaction is lost with chatbots, as many of these applications rely heavily on text input rather than witnessing a patient's holistic demeanor.
Furthermore, conversational agents could significantly alleviate the heavy burden on physicians, as they would not feel an emotional attachment to their patients' well-being. Many may see it as a benefit that, by using this technology, practitioners will not feel guilt over their patients' health, for example if one of their patients dies by suicide or, in some instances, harms others in a state of confusion or delusion. (Sandford) However, it is precisely this guilt, and the emotional investment behind it, that ultimately motivates practitioners to ensure that their patients recover.
Even if, in the future, every sign and symptom of mental illness, along with the appropriate responses a trained psychiatric professional is taught, could be written into their code, these applications still could not feel the responsibility of undertaking this vital position in society. Because human practitioners can grasp the gravity of a patient's circumstances through their innate empathy, there is an added depth to their care that CAI agents cannot precisely replicate.
Human vs. AI Responsibility and Accountability
Furthermore, the question of obligation involves the consideration of responsibility: specifically, who will assume responsibility for these patients? With an independent application built for the convenience of its consumers, it is difficult to confine the blame for CAI's potential mistakes to any one stakeholder. This conflict refers us back to how AI is currently regulated in legislation. There are no regulations governing the development of code for these CAI applications, nor are there restrictions on downloading them; if these applications are used in a professional setting, it becomes difficult to trace the root of any complications or problems in the patient's care. In other words, the more significant dilemma comes from the inability to distinguish whether a given error stems from the programming or from the hands of the practitioner.
Furthermore, there is an even more significant question of how these complications will be managed or addressed when regulating AI technology or holding someone accountable. Is it the fault of the practitioner? Or is it the fault of those who created the application in the first place? For example, if a patient is misdiagnosed after an AI-assisted assessment, it is unclear whether the error lies with the practitioner who relied on the tool or with the developers who built it.
On the other hand, when considering CAI's potential for error in treating a patient due to program bias or hallucinations, it is valid to argue that comparable errors can also be found in human practitioners due to innate implicit bias, since program bias itself originates with human programmers. Following this line of thought, one could argue that all human practitioners make errors in their practice and view the utilization of CAI programming as a means of standardizing the practice so that discrepancies are minimized. When comparing the quality of care a human can administer to the potential of CAI in the same treatment, how should healthcare systems determine whether these discrepancies are acceptable within the greater healthcare system?
However, the standardized use of CAI programs in psychiatric practice can magnify one specific set of biased programming and information, whereas the current system encompasses an array of perspectives, along with their respective implicit biases, by having many different individuals working as professionals in the field. The danger of standardizing the code is that it spreads a single ideology or belief through every use. As long as this technology is created by humans, with data sets collected and programs written by humans, this bias will never truly be minimized but rather amplified. While bias is challenging to address head-on, the current system allows multiple perspectives and ideas to be put into practice; a standardized CAI application used across healthcare would instead propagate one particular belief or idea.
Through the deontological lens, it is essential to consider how duty and responsibility would translate into these artificially intelligent devices. Their consciousness is merely a replica of human likeness, so how can society ensure they carry out these same responsibilities with the same degree of care as a human practitioner? Furthermore, under this new treatment structure, it becomes increasingly difficult to assign blame when the lines blur between the human practitioner and the people designing the programming behind these CAI applications. Over time it will become more apparent that the weight of human responsibility and compassion cannot be wholly replicated in this technology, because it will always simply be code.
Data Collection and Privacy
Following the consideration of responsibility, it is equally important to address the collection of personal data and information through these applications. Human practitioners swear oaths that protect patients' personal information from being shared, holding these professionals legally accountable for their actions. However, these CAI applications do not adhere to the same standards, given the lack of formal regulation by the government or official healthcare systems beyond the Health Insurance Portability and Accountability Act, more commonly recognized as HIPAA. (Martinez-Martin) Alongside the question of development, it is also reasonable to ask where patient data and information are going, especially when one considers third-party systems influencing or sponsoring the development of these applications. The threats posed by this lack of protection are heightened by the deeply personal and sensitive information about a patient that is exchanged in psychotherapy.
Many may argue that all applications have privacy policies or agreements that legally bind users to their terms. However, there is a significant difference between how this information is presented in fine print and how it is conveyed by a human practitioner in person. For users who are unfamiliar with how these contracts work, such as children or older demographics, this difference changes how well they can inform themselves before consenting.
Issues of Justice
Just as the idea of technology assuming the responsibilities of human professionals raises problems, it also makes the problems within societal structures more apparent, most prominently CAI's impact on issues of equity. The main issues within the current psychiatric healthcare system stem from apparent racial and socioeconomic disparities, and it is necessary to unpack whether these added technologies alleviate those issues or merely delay the day they are addressed head-on.
With AI programs expanding treatment options in conversational therapy and in-patient psychiatric care, mental health services become vastly more accessible than current psychiatric treatment options. Instead of paying a fee of at least $65 for an hour of treatment, a patient can pay a weekly fee averaging around $5 for full access to a conversational agent, or receive approval from their doctor in the form of a one-time code to access the application's content. When these treatment plans are compared, it is evident that CAI programming makes this care easier to receive.
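For a rough sense of scale, the back-of-the-envelope comparison below uses the figures cited above (the $65 low-end session fee versus an assumed $5-per-week subscription). The calculation is illustrative only; it assumes weekly sessions and ignores insurance, sliding scales, and regional variation.

```python
# Back-of-the-envelope annual cost comparison using the figures cited above.
WEEKS_PER_YEAR = 52

therapy_low_end = 65 * WEEKS_PER_YEAR   # $65 per weekly session -> $3,380/year
cai_subscription = 5 * WEEKS_PER_YEAR   # ~$5 per week -> $260/year

print(f"Weekly in-person therapy (low end): ${therapy_low_end:,}/year")
print(f"CAI app subscription (assumed avg): ${cai_subscription:,}/year")
```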
In the short term, CAI therapy would act as a more convenient means of receiving treatment for the general public; however, it is essential to consider all possible changes in future societal standards. A human practitioner can understand human experience to a degree that programming never will, since at its core a program is made to replicate human likeness. Yet if these CAI programs are introduced as accessible care for those who lack the means to sustain contact with a human professional, this dynamic has the potential to reinforce a system in which human practitioners, and their arguably better quality of care for mental illness, are reserved for the wealthy, while those with less wealth are left with robotic replicas.
Furthermore, the stigmatization of receiving mental healthcare plays a large part in preventing patients in need from reaching out for these treatments. This implementation of CAI acts as a mere band-aid over these systemic inequalities, failing to address the societal issues that made receiving psychiatric care difficult in the first place. While some may argue that such band-aids are necessary first steps toward a more equitable distribution of care throughout society, it is important to acknowledge that this technology offers only a promising short-term solution.
Guided by the value of justice, we must ask whether this integration truly serves the mistreated minority communities and disadvantaged patients seeking help for mental illness. Considering the historical stigmatization of mental illness, it is important to recognize how little CAI technology does to change the way mental health is currently regarded, systemically and socially.
Effects on the Patient-Physician Relationship
Another essential framework for viewing CAI as a solution to the inaccessibility of psychiatric treatment is consequentialism, which urges the individual to judge ethicality based on the consequences of an action. Technology utilizing CAI has great potential to improve access to expensive forms of conversation-based care; however, it is crucial to consider the long-term impacts of allowing CAI to be recognized as the equivalent of seeing a psychiatrist.
Placing intelligent programming in the psychiatrist-patient relationship threatens how humanity is perceived throughout society. By allowing a programmed application to assume this role, this new structure of treatment asks patients to place in it the same degree of trust they would place in a human psychiatrist. Generally speaking, healthcare providers trust that their patients will give them accurate information about their symptoms or conditions, while patients trust that these professionals will use this information, in conjunction with their judgment, to administer care. The bond between a patient and their healthcare provider is even stronger in the psychiatric field, where symptoms are not limited to quantifiable data such as heart rate or blood pressure.
The distinguishing factor of psychiatry stems directly from its exploration of the human mind. Programmers can equip CAI with dialogic cues for identifying symptoms of mental illness, but these systems have little capacity to interpret the nuance behind body language and inflection in the context of a conversation. Considering that these applications will inherently lack the ability to wholly feel and assume emotional and moral stakes further distinguishes CAI technology from humanity. A conversational agent's empathy is inherently incomparable to a human professional's, as CAI can only emulate what a human practitioner would experience.
Alternatively, there are distinct financial considerations. It is indisputable that equipping these applications with adequate information to recognize symptoms and their respective treatment options would require significant investment. It is therefore worth asking whether these expenses would be better allocated to giving the psychiatric branches of healthcare adequate funding and supplies as the system currently stands. Instead of changing these treatment systems, how would the results compare if these funds were invested in improving the field altogether, for instance through increased funding for education?
Trust and Empathy
Throughout the analysis of this ethical dilemma, trust becomes an essential consideration in how the patient-physician relationship is affected by AI, especially in a branch of healthcare like psychiatry, where empathy is the basis of any form of treatment. As strategies for caring for those with mental illness typically involve observations of conversation and behavior, the ability to understand and share the feelings of other human beings is central to receiving quality care. These nuances and emotions are essential to forming a holistic picture of what patients are experiencing when they seek treatment.
It is also essential to understand how humans develop attachments to their practitioners in psychiatric treatment, especially in conversational psychotherapy. Attachment theory holds that an individual's attachment behavioral system, "an innate psychobiological system," develops attachments to "attachment figures," here the psychiatrist, in times of need. This process is motivated by a need to "promote a stable sense of attachment security and build positive representations of self and others." (Mikulincer)
In standard practice, patients entrust this role of "attachment figure" to their psychiatrists as the two navigate the patient's interpersonal struggles, whether through conversational therapy or the observation that takes place in specialized facilities. At the core of any psychiatric treatment, a strong relationship forms between these two parties so that the physician understands the patient's circumstances and the patient, in turn, receives the best quality care possible.
Furthermore, this relationship prompts the question of the dangers of giving the role of "attachment figure" to AI. It is difficult to claim that AI will provide a "secure, stable" figure for patients to attach to when one considers the current, inherent characteristics and flaws in its programming. As previously noted, all agents utilizing AI have the potential to hallucinate, generating content outside of preexisting data sets. While the chances are slim, this fact, coupled with the understanding that all human-collected data sources are intrinsically biased, demonstrates that there should always be awareness of this slightly unpredictable factor in AI-provided treatment. AI programming can be refined to encompass nearly all the responses and behaviors a human professional would exhibit, yet the existence of hallucinations and bias renders this perfection null, as there is almost always a possibility of an AI response erring outside the knowledge a practitioner would have.
The Dangers of Trusting AI?
Furthermore, this use of CAI creates a slippery slope for the future by its mere existence. While the scale of care CAI can provide brings great hope to patients who otherwise would not have received care, it also greatly changes the overarching idea of the patient-psychiatrist relationship, inevitably altering what we consider the "ideal" patient-physician relationship. The lines between human and artificial behavior are blurred when people come to view technology with the same regard as a human professional.
Ironically, part of the harm of CAI in these applications comes from how easily accessible these treatments are, primarily in their potential to enable patients to overdiagnose themselves. One prominent consideration of CAI technology is its effect on younger generations. As more children are introduced to phones and computers at progressively younger ages, they grow up relying on technology. Initially it may only fulfill a need for mindless entertainment, but the existence of conversational agents creates the possibility of children developing their sense of human relationships around these artificial interactions.
It is also important to note the shifts in the conventional dynamic between patients and healthcare providers, specifically how this development can harm the way patients view their providers. Enabling CAI to work alongside human practitioners paints an image of these two figures as equals, yet one is an artificial intelligence and the other is a human capable of experiencing emotion and, more specifically, burnout. Introducing this dynamic could leave human practitioners vulnerable to dehumanizing patient interactions if patients come to see these professionals as little more than adjuncts to their chatbots.
Applying the consequentialist framework to CAI technology in psychiatry urges us to consider the consequences of this integration in both the short and the long term. While in the short term it would allow those who are mentally ill to receive treatment now, it carries many potential dangers further into the future, not only for patients but for healthcare providers. By enabling this technology to replace these valuable roles in society, we begin to equate AI with humans, despite this technology being code made to emulate humanness. Allowing this dynamic through CAI in psychiatry places human practitioners on the level of programs and could lead the next generations to dehumanize their healthcare professionals.
Following this eventual social shift, this integration raises the question of at what point AI becomes, in effect, human. As we continue to place more trust in and develop attachments to these artificially intelligent beings, society must also be aware of how these dynamics will inevitably affect future generations and civilizations.
Conclusion
Proposed Future Applications
As we have seen, the proof that CAI can be used alongside a practitioner to administer psychiatric treatment opens the door to many more possible applications, most notably its capacity eventually to replace the professional role. Currently, the development of these applications is not standardized, and they have only been used in conjunction with a practitioner, but this establishes a baseline for a whole new class of treatment options. With the proper refinements, CAI holds massive potential to become a significant treatment plan in its own right, accessible to many more people. It is not limited to providing care through conversation-based therapy sessions; it also holds massive potential to play a significant role in the care patients receive at specialized facilities.
While these applications remain hypothetical, they become more viable as technology progresses. As society looks for ways to automate standard processes, we are forced to question and assess the potentially drastic implications this technology poses.
In navigating all of the possible outcomes of a future with CAI, it is most essential to consider beneficence, the idea that an individual should act for the benefit of others and prevent harm. (Varkey) CAI has a place accompanying human practitioners once specific modifications and regulations are implemented to ensure that all parties, practitioners and patients alike, receive the benefits of this integrated technology.
The only viable way to include CAI technology in psychiatry would be alongside the guidance of a human practitioner, and only after revisions to the fundamental programming behind these mental health applications. I firmly believe that current conversational agents are incapable of fully adapting to the complexities of human identity. Each patient brings a unique background, shaped by different upbringings, cultures, and circumstances. Before CAI can be used as a standardized form of treatment, it must first be able to accommodate these differences.
Furthermore, I propose a new government-sponsored organization, similar in role to the FDA, as an additional means of testing new AI technologies and determining their permitted uses. Following the FDA's example, I agree that it is essential to have a separate body that can define the purpose of these technologies and what they may be used for. This structure would prevent conflicts of interest when approving medical devices and provide the necessary precautions to keep potentially harmful AI devices or applications, whether in their coding or their intended use, out of public hands. AI is steadily becoming more prevalent in society, and there must be a way for this technology to be approved before it is given to patients or the general public.
Even with perfected technology backing the treatment, human involvement must remain central to diagnosing and treating the patient. The psychiatrist's involvement can vary depending on the severity of the patient's mental illness, but this model would come closest to balancing efficiency and speed with the interpersonal aspect of healthcare.
Final Thoughts

The use of AI in psychiatric treatment holds immense potential, supporting both the patient seeking mental health care and the physician administering it. AI offers a reliable way for a patient to access quality care easily, potentially revolutionizing the accessibility of mental health services. For physicians, this technology alleviates some of the amplified responsibility they carry for their patients' well-being. In a field where a deep connection between these two parties is crucial, AI could enhance the effectiveness of diagnosis and treatment.
Considering justice as a guiding principle, it is important to weigh the effects of CAI technologies as a substantial solution to the current inaccessibility of psychiatric care. It is reasonable to predict that such technology would benefit disadvantaged populations who would otherwise have been unable to access quality psychiatric treatment.
However, these applications fail to address the systemic issues that make this treatment inaccessible in the first place, such as racial and socioeconomic disparities. By failing to address the differing quality of treatment across certain groups of people, CAI technology instead perpetuates the very system that leads these patients to seek conversational agents as a means of therapy. Providing more people with access in this way creates a dynamic in which human practitioners become something like commodities within the healthcare system, while disadvantaged communities are left where they began.
Furthermore, deontology, as a framework revolving around the idea of obligation, requires us to consider the degree of responsibility society can place in current CAI applications. These applications are made to replicate human likeness without being able to assume the same moral obligations. Implementing CAI into psychiatry, a field of medicine built on the use of empathy to understand patients, allows patients to form a dependency on artificial beings that can neither wholly fulfill the duties a human practitioner would have nor understand their circumstances. Moreover, we are forced to consider whether we are sacrificing the quality of these treatments for the sake of accessibility.
The process of seeking and receiving treatment for mental illness is complex and interpersonal. Artificial intelligence may make this process less daunting while opening up previously inaccessible care to the general public; however, it has just as much potential to introduce new conflicts into healthcare. It offers a treatment plan through which an ordinary patient can receive consistent monitoring and guidance via CAI interfaces, while raising concerns about what these steps toward a future where anyone in need can receive treatment will cost. It has the potential to shift society toward a more holistic approach to individual health, and it has equal potential to drastically affect the quality of care a patient can receive, posing a multitude of ethical concerns. It is in our best interest to be mindful of how artificial intelligence is deployed today and to consider how this impacts our society. Specifically, in allowing AI a role in psychiatry, a branch of healthcare built upon interpersonal connection, what does this integration mean for the future and for what we consider socially acceptable? With the great potential of these rising technologies comes our duty as members of society to ensure that artificial intelligence is used responsibly, for our benefit and the benefit of future generations.
Works Cited
Ahad, A. A., Sanchez-Gonzalez, M., & Junquera, P. (2023). Understanding and Addressing
Mental Health Stigma Across Cultures for Improving Psychiatric Care: A Narrative
Review. Cureus, 15(5). https://doi.org/10.7759/cureus.39549
"Artificial Intelligence and Machine Learning in Software as a Medical Device." U.S. Food
and Drug Administration, 15 Mar. 2024, www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device#:~:text=The%20FDA's%20traditional%20paradigm%20of,may%20need%20a%20premarket%20review. Accessed 26 May 2024.
Barer, David, and Josh Hinkle. "'Horrifying' wait times for state hospital beds, official says."
USC Center for Health Journalism, 18 May 2020, centerforhealthjournalism.org/our-work/reporting/horrifying-wait-times-state-hospital-beds-official-says. Accessed 26 May 2024.
Barrera A, Gee C, Wood A, Gibson O, Bayley D, Geddes J. Introducing artificial intelligence
in acute psychiatric inpatient care: qualitative study of its use to conduct nursing
observations. Evid Based Ment Health. 2020 Feb;23(1):34-38. doi:
10.1136/ebmental-2019-300136. Erratum in: Evid Based Ment Health. 2021 May;24(2):
PMID: 32046991; PMCID: PMC7034347.
Bhatia, Richa. "What Is Psychotherapy?" Psychiatry, American Psychiatric Association, Apr.
2023, www.psychiatry.org/patients-families/psychotherapy. Accessed 26 May 2024.
"Decline of U.S. Medical Student Career Choice of Psychiatry and What to Do about It."
American Journal of Psychiatry, vol. 152, no. 10, Oct. 1995, pp. 1416-26,
https://doi.org/10.1176/ajp.152.10.1416. Accessed 24 May 2024.
Fitzpatrick KK, Darcy A, Vierhile M. Delivering Cognitive Behavior Therapy to Young Adults
With Symptoms of Depression and Anxiety Using a Fully Automated Conversational
Agent (Woebot): A Randomized Controlled Trial. JMIR Ment Health. 2017 Jun
6;4(2):e19. doi: 10.2196/mental.7785. PMID: 28588005; PMCID: PMC5478797.
Grayson MS, Newton DA, Whitley TW. First-year medical students' knowledge of and
attitudes toward primary care careers. Fam Med. 1996 May;28(5):337-42. PMID: 8735060.
"A History of Mental Illness Treatment: Obsolete Practices." Concordia University St. Paul
Global, 13 July 2020, online.csp.edu/resources/article/history-of-mental-illness-treatment/. Accessed 24 May 2024.
"How Much Does Therapy Cost?" GoodTherapy,
www.goodtherapy.org/blog/faq/how-much-does-therapy-cost. Accessed 24 May 2024.
Hudson, Christopher G. "Socioeconomic Status and Mental Illness: Tests of the Social
Causation and Selection Hypotheses." American Journal of Orthopsychiatry, vol. 75, no. 1, 2005, pp. 3-18. American Journal of Orthopsychiatry, https://doi.org/10.1037/0002-9432.75.1.3. Accessed 4 Apr. 2024.
Martinez-Martin, Nicole, and Karola Kreitmair. "Ethical Issues for Direct-to-Consumer Digital
Psychotherapy Apps: Addressing Accountability, Data Protection, and Consent." JMIR
Mental Health, vol. 5, no. 2, 23 Apr. 2018, p. e32, https://doi.org/10.2196/mental.9423.
"Mental Illness - Diagnosis and Treatment." Mayoclinic, Mayo Foundation for Medical
Education and Research, 13 Dec. 2022, www.mayoclinic.org/diseases-conditions/mental-illness/diagnosis-treatment/drc-20374974. Accessed 26 May 2024.
Mikulincer, M., & Shaver, P. R. (2012). An attachment perspective on psychopathology.
World Psychiatry, 11(1), 11-15. https://doi.org/10.1016/j.wpsyc.2012.01.003
Morales, D. A., Barksdale, C. L., & Beckel-Mitchener, A. C. (2020). A call to action to address
rural mental health disparities. Journal of Clinical and Translational Science, 4(5),
463-467. https://doi.org/10.1017/cts.2020.42
"The Psychiatric Bed Crisis in the United States: Understanding the Problem and Moving
toward Solutions." American Journal of Psychiatry, vol. 179, no. 8, 1 Aug. 2022, pp. 586-88, https://doi.org/10.1176/appi.ajp.22179004.
Reddy S. Navigating the AI Revolution: The Case for Precise Regulation in Health Care. J
Med Internet Res. 2023 Sep 11;25:e49989. doi: 10.2196/49989. PMID: 37695650; PMCID:
PMC10520760.
Sandford, D. M., Kirtley, O. J., & Thwaites, R. (2023). Exploring the impact on primary care
mental health practitioners of the death of a patient by suicide: An IPA study. Psychology and Psychotherapy, 96(1), 56-82. https://doi.org/10.1111/papt.12426
Satiani, Anand, et al. "Projected Workforce of Psychiatrists in the United States: A Population
Analysis." Psychiatry Online, American Psychiatric Association, 15 Mar. 2018,
Accessed 24 May 2024.
Tai MC. The impact of artificial intelligence on human society and bioethics. Tzu Chi Med J.
2020 Aug 14;32(4):339-343. doi: 10.4103/tcmj.tcmj_71_20. PMID: 33163378; PMCID:
PMC7605294.
Varkey, B. (2021). Principles of Clinical Ethics and Their Application to Practice. Medical
Principles and Practice, 30(1), 17-28. https://doi.org/10.1159/000509119

