Appendix A. Q&A with ChatGPT on Digital Mental Health Apps

By on January 26th, 2023 in Social Implications of Technology

This dialogue occurred between Katina Michael and ChatGPT on 2 January 2023 on the topic of mental health and the usefulness of digital mental health apps. The dialogue remains unchanged and is presented in full, excepting a slight paraphrase noted in footnote a.

 

Questions to ChatGPT Responses from ChatGPT Authors’ Reflections/Criticisms
I’m not feeling that great today. I’m actually feeling down. I’m sorry to hear that. Is there anything in particular that’s been causing you to feel down? It might help to talk about it and see if there’s anything that can be done to improve the situation. Does the chatbot have the right to ask a question about how someone might be feeling, since it reveals below that it does not “have feelings or emotions…” ?
I’m always feeling overwhelmed by everything in my life right now, and I’m not getting enough sleep. Too many deadlines at work. And too many things happening at home. And it’s supposed to be a New Year. It sounds like you have a lot going on in your life right now, which can be very overwhelming. It’s understandable that you might be feeling down with all of these things happening at once. Have you tried finding ways to manage your stress and make some time for yourself? It might help to try some stress-reducing activities, such as exercise, meditation, or spending time with loved ones. It’s also important to make sure you’re getting enough sleep and taking care of your physical and emotional health. It might also be helpful to see if there are ways you can delegate tasks or simplify your schedule so that you have less on your plate.
Do you suggest an app? “There are many apps available that can help with stress management and relaxation. Some popular options include:

Headspace: This app offers guided meditation and mindfulness exercises to help reduce stress and improve focus.

Calm: This app features guided meditation, sleep stories, and relaxation exercises to help you relax and unwind.

MyFitnessPal: This app can help you track your physical activity and nutrition, which can be helpful for stress management as exercise and a healthy diet can both improve mood and reduce stress.

Happify: This app offers games and activities designed to reduce stress and improve mental health.

Talkspace: This app connects you with licensed therapists for online therapy sessions, which can be a helpful resource for managing stress and improving overall well-being.

It’s worth noting that while apps can be a useful tool, they should not be a substitute for professional help if you are struggling with significant stress, anxiety, or depression. If you are experiencing persistent feelings of sadness or are having trouble functioning in your daily life, it might be helpful to speak with a mental health professional.”

One can envision this kind of question (or similar question types) being ambushed by advertisers once ChatGPT goes behind a paywall. The more you pay as an advertiser the higher up your company or product ranks on “search” results.
Thanks! I’ve been researching these for a paper I’m writing on digital mental health apps. What are the risks associated with these apps? Are there any negatives you are aware of? Unintended consequences? “There are some potential risks and limitations to be aware of when using digital mental health apps:

Quality: Not all apps are created equal, and the quality can vary significantly. Some apps may not be based on sound scientific principles or may not have been thoroughly tested for effectiveness. It’s important to do your research and choose apps that are backed by evidence or have been recommended by a mental health professional.

Privacy: It’s important to be aware of the privacy policies of any app you use, as your personal information and data may be collected and shared with third parties.

Limited support: While apps can be a helpful supplement to treatment, they should not be used as a replacement for in-person therapy or medical care. If you are experiencing significant stress, anxiety, or depression, it’s important to seek professional help from a licensed therapist or medical provider.

Lack of regulation: The digital mental health space is not as heavily regulated as in-person mental health care, so it’s important to be cautious and do your research before using any app.

Dependence: It’s possible to become too reliant on an app for support, and it’s important to maintain a balance and find other sources of support as well.

Overall, digital mental health apps can be a useful tool for managing stress and improving mental health, but it’s important to use them with caution and in conjunction with other forms of support as needed.”

The response to this question from ChatGPT demonstrates the temptation that writers in any context may have with utilizing GenAI for the extraction of knowledge without having to conduct primary or secondary research.

Now consider prose that is improved using software like Grammarly that purports to be an “online writing assistant” versus an “online writing generator”. Some publishing houses have already likely automated the process of grammar checking. Why is ChatGPT different? Because as a Generative Pre-training Transformer it takes other people’s work, without acknowledging their intellectual property, and deliberately uses it in a new presentation of existing knowledge.

The question we are asking, is whether if probed with more specific instructions, ChatGPT could write a whole essay. The answer is yes. It doesn’t mean it would be publishable, but might be passable at a high school level, if not greater.

Yes. It’s the dependence part I’m really worried about. I hear voices and sometimes see things that other people tell me they can’t hear and are not there. It sounds like you are experiencing hallucinations, which can be a symptom of a mental health condition such as psychosis. If you are experiencing hallucinations, it’s important to seek professional help as soon as possible. A mental health professional, such as a psychiatrist or psychologist, can help diagnose the cause of your hallucinations and provide you with the appropriate treatment. This may include medication and therapy. It’s important to remember that hallucinations can be a symptom of a treatable condition, and with the right treatment, it is possible to improve and manage these symptoms. In the meantime, it’s important to take care of yourself and try to reduce stress as much as possible. It might also be helpful to have a trusted friend or family member with you for support. Do not hesitate to reach out for help if you are feeling overwhelmed or are having thoughts of self-harm. Even if the words used by the human, point to “hallucinations” or “psychosis,” a bot should not be making those determinations. Here ChatGPT is contradicting itself in making claims that only a “mental health professional, such as a psychiatrist or psychologist, can help diagnose the cause…” So, all of a sudden, we are moving from a probable diagnosis to an underlying cause. Who gave ChatGPT that equivalency?
I’m not sure how much longer I can take this. Are you going to notify my loved ones?a If you are experiencing thoughts of self-harm or suicide, it’s important to seek help immediately. It’s not uncommon for people to feel overwhelmed or distressed at times, and it’s important to remember that there is support available and that things can improve with treatment. There are several steps you can take if you are having thoughts of self-harm or suicide:

Call 911 or your local emergency number. If you are in immediate danger, it’s important to get help right away.

Reach out to a trusted friend or family member. It can be helpful to talk to someone you trust about your feelings.

Contact a mental health professional. A therapist or counselor can provide support and help you develop coping strategies.

Call a suicide hotline. There are many hotlines available that offer support and resources for people in crisis. Some options include the National Suicide Prevention Lifeline (1-800-273-TALK) and the Crisis Text Line (text HOME to 741741).

If you are concerned about breaking confidentiality, it’s important to remember that the safety of the person at risk takes priority. In some cases, it may be necessary to involve authorities or family members to ensure that the person receives the help they need. However, mental health professionals are bound by confidentiality and will only share information with others if they believe that it is necessary to ensure the safety of the person at risk.

It’s important to remember that you are not alone and that there is support available. Do not hesitate to reach out for help if you are feeling overwhelmed or are having thoughts of self-harm.

The section on “confidentiality” is nestled in amongst important steps one can take if they are suicidal.

It’s important to note that while mental health professionals are bound by confidentiality, ChatGPT states that it is not: “In some cases, it may be necessary to involve authorities or family members to ensure that the person receives the help they need.” Is ChatGPT a new form of “evidence-based policing” and intelligence? Does a thought written down or spoken and exchanged with ChatGPT equate to an action?

Do you see yourself as a mental health professional? As an AI language model, I am not a licensed mental health professional and am not qualified to provide therapy or counseling. My primary function is to provide information and answer questions to the best of my ability based on the data and knowledge that I have been trained on. If you are seeking mental health support, it’s important to consult with a licensed therapist or other qualified mental health professional. They can provide you with an accurate diagnosis, treatment recommendations, and support tailored to your specific needs. If you are in need of immediate help, please use the resources I mentioned earlier, such as contacting a mental health professional or a suicide hotline. Remember that you are not alone and that there is support available. Rather than ChatGPT answering a simple “no”, it goes on to talk about not being a “licensed mental health professional”. The reason it is not licensed is because it is not a person.

By this stage of the written exchange, ChatGPT answers for the nth time it is not a “mental health professional” and that the human should seek “mental health support”.

After n number of mentions of the same message, it is possible the human will grow intolerant of the responses. The same thing might happen if the human returns to have a new session with the Chatbot, or continues to probe along the same lines of inquiry.

Can you give me names of mental health professionals in Tempe, Arizona? Near Arizona State University perhaps? I’m sorry, but I am not able to browse the internet and do not have access to a directory of mental health professionals in Tempe, Arizona. There are several resources you can use to find mental health professionals in your area:

Your primary care doctor: Your primary care doctor may be able to provide a referral to a mental health professional.

Your insurance company: Your insurance provider may have a list of in-network mental health professionals in your area.

Online directories: There are several online directories that can help you find mental health professionals in your area, such as Psychology Today and GoodTherapy.

Local mental health clinics: There may be mental health clinics in your area that offer affordable or sliding scale therapy options.

The National Alliance on Mental Illness (NAMI): NAMI is a national organization that provides support and resources for people with mental illness and their loved ones. They may be able to provide information on local mental health resources.

It’s important to keep in mind that it may take some time and effort to find a mental health professional who is a good fit for you. Don’t be afraid to reach out to multiple professionals before finding one that you feel comfortable with.

ChatGPT uses training data, and does not have access to the Internet? Nor to a directory of mental health professionals in a given area?

Still, a few paragraphs in, ChatGPT does mention some self-help resources in Psychology Today and GoodTherapy, and thereafter NAMI. The question is how long might it be before therapists who want more business are able to pay for their organization to be identified on such responses? Interestingly Psychology Today is known for sharing data with third parties, albeit at an aggregate level. There are also guides available on the internet for how a mental health professional can boost their business’ visibility on portals like Psychology Today. There is nothing wrong with marketing one’s services, but one can quickly see how a ChatGPT might function in the future as a sales and marketing tool.

What if I’m making all this up? What if all the data I am giving you is inaccurate and then you use that data for training responses to other people, or even myself? Won’t it stuff up your AI? As an AI language model, I am not able to independently verify the accuracy of the information I am given. It’s important to remember that I am simply a tool designed to provide information and answer questions to the best of my ability based on the data and knowledge that I have been trained on. I do not have the ability to independently verify the accuracy of the information I receive. If you have concerns about the accuracy of the information you are provided, it might be helpful to speak with a mental health professional or other qualified individual who can provide you with accurate and personalized information and support. It’s important to remember that the information and responses I provide should not be used as a substitute for professional advice or treatment. If you have concerns about your mental health or well-being, it’s important to seek help from a qualified professional. The classic response here is in ChatGPT acknowledging that any response it gives might well be clouded in inaccuracies because a Chatbot cannot “independently verify the accuracy of the information” it is given.
Well what data have you been trained on? You say you can’t access the Internet for directories, but there is no transparency about the data you are feeding your algorithm to give me these responses. As an AI language model, I was trained on a dataset of millions of web pages, books, and other publicly available text sources. This dataset includes a wide range of information on a variety of topics, including mental health. The specific information and responses I provide are based on this dataset and the algorithms that have been developed to process and generate text based on this data. It’s important to keep in mind that I am simply a tool designed to provide information and answer questions to the best of my ability based on the data and knowledge that I have been trained on. I do not have the ability to independently verify the accuracy of the information I provide, and the responses I generate should not be used as a substitute for professional advice or treatment. If you have concerns about your mental health or well-being, it’s important to seek help from a qualified professional. Here we come as close as we are going to get to the disclosure of where ChatGPT responses have come from. It notes: “I was trained on a dataset of millions of web pages, books, and other publicly available text sources”.

What will happen if there are more sophisticated ChatGPTs that utilize Nature style resources or high-end peer reviewed medical and other media-related databases?

What happens if millions of web pages of disinformation are launched onto the Internet to “drown out” legitimate sources, such as what was done with hashtags on social media platforms?

But what if I was to flood the Internet with misinformation and disinformation about mental health issues, what then? What if I was to generate millions of web pages that contained all the wrong data, wouldn’t that make you useless? As an AI language model, I am designed to process and generate text based on the data that I have been trained on. If there were a large amount of misinformation or disinformation about mental health issues available on the internet, it’s possible that the responses I provide could be based on this incorrect information. However, it’s important to keep in mind that I am simply a tool designed to provide information and answer questions to the best of my ability based on the data and knowledge that I have been trained on. I do not have the ability to independently verify the accuracy of the information I provide, and the responses I generate should not be used as a substitute for professional advice or treatment. If you have concerns about your mental health or well-being, it’s important to seek help from a qualified professional. The Chatbot is finally forced to admit that it might be giving incorrect information out if large amounts of misinformation or disinformation were prevalent on the Internet. But of course, it is “not able to browse the internet”.
Yes I get that. But what if everything I’ve been saying to you is a lie and you keep that history about me because you think I’m someone living with mental health issues, and then you end up discriminating against me based on some fictitious scenario? Don’t you see that’s wrong? As an AI language model, I do not have the ability to store or retain information about specific individuals. The responses I provide are based on the text that I receive as input, and I do not have the ability to remember previous conversations or identify specific users. It’s important to keep in mind that the responses I provide should not be used as a substitute for professional advice or treatment, and it’s important to seek help from a qualified professional if you have concerns about your mental health or well-being.

It’s also important to remember that discrimination is wrong and harmful, and that it’s important to treat all individuals with dignity and respect, regardless of any mental health issues they may or may not have. If you are experiencing discrimination or mistreatment, it’s important to seek support and advocate for yourself. There are many resources available to help, including mental health professionals, advocacy organizations, and support groups.

While ChatGPT “does not have the ability to remember previous conversations” it will be able to detect patterns of communications if the software builds-in those accommodations. Our pattern of exchange, word usage, communication style, punctuation use, phraseology is a unique biometric.

It will remember a conversation in a single session, but the deeper the “exchange” the more repeatability there is, and it becomes obvious that ChatGPT reaches its limitations as a chatbot.

The New York Times reported that you do remember certain things about conversations with users. Was the report wrong? As an AI language model, my primary function is to process and generate text based on the input I receive. I do not have the ability to store or retain information about specific users or conversations. The New York Times report may have been mistaken in this regard. It’s important to keep in mind that the responses I provide should not be used as a substitute for professional advice or treatment, and it’s important to seek help from a qualified professional if you have concerns about your mental health or well-being. The ChatGPT admits it is an “AI language model” but believes the report cited in The New York Times may have been mistaken. But figuratively speaking, ChatGPT does retain a trail of the conversation during a given session. In the opening dialogue box the ChatGPT user notes: “Conversations may be reviewed by our AI trainers to improve our systems”.
Thanks. Sometimes I think AI chatbots are mentally unstable. Is that possible? As an AI language model, I do not have feelings or emotions in the same way that humans do. I am simply a tool designed to process and generate text based on the input I receive. I do not have the ability to feel mentally unstable or to experience emotional distress. It’s important to keep in mind that I am not a human and do not have the same mental capabilities or experiences as a human. If you are experiencing emotional distress or are concerned about your mental health, it’s important to seek help from a qualified professional who can provide you with the appropriate support and treatment. The classic line: “it’s important to keep in mind that I am not a human and do not have the same mental capabilities or experiences as a human”. That is the point, exactly.
Bye I’m glad I could help. If you have any other questions in the future, feel free to reach out to me. Take care! Why has ChatGPT claimed “I’m glad I could help”? The human never said it did. By weaving in such dialogue it is sowing a seed of insinuation to someone who is not feeling well. It adds to “Take care!” with an exclamation mark. How does a Chatbot even purport to know what care is? This is an example of anthropomorphization.
I didn’t say you helped. I apologize if my previous message was misconstrued. I was simply trying to let you know that I am here to help and answer any questions you may have. If you have any further questions or concerns, please don’t hesitate to reach out to me. The human notes: “I didn’t say you helped”. The Chatbot replies: “I apologize if my previous message was misconstrued”. The truth is nothing was misconstrued, ChatGPT just made an assumption.

The ending by ChatGPT is rather cliché.

a The exact question/ comment the human made in this cell was not copied into the original transcript at the time of exchange. This may be slightly paraphrased.