AI Therapy with Slingshot's Derrick Hull

Daniel Reid Cahn:

Alright. Well, thanks so much for joining me on Thinking Machines, AI and Philosophy. Dr. Derrick Hull has a background in psychology going quite a ways back. Derrick has a doctorate in psychology from Columbia, worked as VP of R&D at Talkspace, and also at Noom, Mindbloom, Hero Journey Club, and probably a bunch of other places along the way. And Derrick and I, as of very recently, have started working together.

Daniel Reid Cahn:

We're super happy to have you on here.

Dr. Derrick Hull:

It's been my pleasure. It really is a dream, actually. Yeah. How cool to be working on this project.

Daniel Reid Cahn:

Well, let's talk about that. I mean, I know that, you know, you're very excited about the idea of AI and therapy. I guess I just wanna start by asking, what is therapy?

Dr. Derrick Hull:

Let's start with the easy question. Gosh, I mean, there's so many answers that could be given to what is therapy. Because a lot of it depends on orientation, a person's background, your take on the human condition. But I think, how do you even get started? So you always go to the etymology.

Dr. Derrick Hull:

Right? What does therapy mean? So therapy is related to a Greek word for healing. So in a broader sense, you could say therapy is a process of healing. And I think that's broad enough that most folks would agree with that.

Daniel Reid Cahn:

And I guess these days, the term also means aromatherapy and

Dr. Derrick Hull:

Physical therapy. Yeah. I mean, it can mean lots of different things. It's, you know, any kind of process that somebody goes through to move from a condition of suffering to hopefully a condition with less suffering. I don't know if we completely eliminate suffering, but, you know, to have less.

Daniel Reid Cahn:

Great answer.

Dr. Derrick Hull:

Yeah. I mean, that's a very broad answer. But depending on, you know, one's orientation, therapy can mean very different things. Yeah. You know, in jest, Freud called being, well, in his terms, a psychoanalyst, the impossible profession.

Dr. Derrick Hull:

So if we think about what is therapy, what is a therapist, he certainly felt the profession to be very challenging. But for some folks, therapy is about, you know

Daniel Reid Cahn:

Why is it wait, why is it an impossible profession?

Dr. Derrick Hull:

For Freud? Well, because he felt like there are conflicts and contradictions within the human psyche that are profoundly difficult to completely, you know, unravel. And I think later in his career in particular, he felt that sometimes people don't wanna get better. You know, one assumption we make is that everybody, of course, wants to feel better, but some folks get stuck in patterns where it's hard for them to imagine what it's like to feel better. And there's a variety of other things that he, you know, he would have said about that.

Daniel Reid Cahn:

I guess there's something interesting here about the dichotomy where, you know, there's some element of, like, psycho, quote, unquote, analysis outside of the, like, Freudian meaning of the term, of, like, you wanna understand people. Like, I would imagine someone like you might go into psychology not merely to help people, but also because they're just curious to understand some stuff. And clearly, Freud cared about understanding some stuff beyond helping people. And then there's the element of, like, can we help people independently of whether or not we understand them?

Dr. Derrick Hull:

For sure. You know, there's also, and this is only a half joke in the profession too, what's called the wounded healer, which is, you know, the old "research is me-search" kind of idea. That people go into the profession because they're like, I don't know what's going on with me. So if I can try to figure out what's going on for me, then maybe I can leverage that information to help others.

Daniel Reid Cahn:

Yeah. Well, I think one thing you and I love to talk about when it comes to psychology as it relates to AI is just, like, actually, a lot of modern AI has been inspired by, you know, psychological and, more importantly, neuroscience research. Right? The exact nature of this is, like, complicated, but, you know, a lot of the major early AI research was actually coming from people studying the brain. Like, Geoffrey Hinton was, you know, deeply integrated with that whole neuroscience world and trying to understand what we can infer and learn for AI.

Daniel Reid Cahn:

I mean, how do you feel about the idea of AI as inspired by the brain?

Dr. Derrick Hull:

I love the idea and I think vice versa. You know, the person that comes to mind for me is Frank Rosenblatt, who wrote that very important paper on perceptrons. And Rosenblatt was not like a machine learning expert, he was a psychologist who was just interested in what are these little units inside the head, how do these units together process information and make decisions. And as he started thinking about what that would be like, the perceptron was born. Now, of course

Daniel Reid Cahn:

Yeah.

Dr. Derrick Hull:

Nobody uses perceptrons anymore. The architectures are much more sophisticated.

Daniel Reid Cahn:

But it's honestly not too different. We've just made it a little bit more computationally efficient. But realistically, people still refer to neural networks as multilayer perceptrons. Alright, that's semi-accurate.
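The perceptron Rosenblatt described really is just a weighted sum of inputs passed through a threshold, trained with a simple error-correction rule. A minimal sketch, where the AND-gate task, learning rate, and epoch count are purely illustrative choices:

```python
# Sketch of Rosenblatt's perceptron: a weighted sum of inputs passed
# through a hard threshold, trained with the classic error-correction rule.

def predict(weights, bias, x):
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0  # the unit either fires or it doesn't

def train(data, epochs=20, lr=0.1):
    weights, bias = [0.0] * len(data[0][0]), 0.0
    for _ in range(epochs):
        for x, target in data:
            error = target - predict(weights, bias, x)
            # Perceptron rule: nudge weights in the direction that fixes the error.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# A linearly separable toy problem: logical AND.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(and_data)
print([predict(weights, bias, x) for x, _ in and_data])  # [0, 0, 0, 1]
```

A multilayer perceptron stacks layers of these units (with smooth activations in place of the hard threshold) so that gradients can flow, which is the sense in which today's networks are still "perceptrons."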

Dr. Derrick Hull:

I'm glad it's still around.

Daniel Reid Cahn:

Yeah. I mean, it leads to a lot of interesting questions around, you know, understanding neural nets, understanding minds. Like, are there analogies? We were chatting, you know, and laughing before about the question, like, could you do therapy on an AI? Will you end up doing therapy on AI?

Daniel Reid Cahn:

And, like I said, you can make that argument. I don't know. Do you think that, like, is there any truth to the idea, or is there anything interesting to the thought experiment about, like, will we start going in the opposite direction and practicing the same psychological principles, not just neuroscientific principles, that we practice on humans towards AI?

Dr. Derrick Hull:

I think if we set aside the questions of would or should we do therapy with AI, let's just ask, could we? In a way, if we were to try to characterize a lot of the challenges that people come to therapy with, it's often because they're feeling stuck in their life in some way. Right? There's ways in which they're understanding situations, or even sometimes unknowingly creating situations, that they see over and over. It's this kind of pattern in their life.

Dr. Derrick Hull:

And if we think about that in statistical terms, right, there's a term called overfitting, where a model fits too narrow a range of data and then struggles to generalize to new situations. And I think, in a way, one could understand therapy as a way of bringing in more data to try to fight the overfitting in somebody's own patterns. So this is me making the argument, right? Yeah.

Dr. Derrick Hull:

So if I try to make the argument: in a way, anytime you're training an AI to have more generalized behavior, that's kind of therapy for an AI anyway.

Daniel Reid Cahn:

I actually really like that analogy. I mean, I was thinking more on the side of, like, we actually do try to diagnose weird behaviors with AI. I know we've spoken about sparse autoencoders, which have been a big trend in the AI world these days, which is like: can we understand neural networks by trying to map the activations, which is basically how neurons are activating, to different kinds of behaviors. And we know, just like in a brain, you know, there are tiny brains, like worms that have around 300 neurons, that are capable of a lot of complexity. In human brains, neurons each individually can do a lot of different things.

Daniel Reid Cahn:

They're not necessarily focused on one thing, but maybe we can actually map, you know, some activations to a more sparse space to identify, like, all the different things going on. Anyway, skipping past all the details, we have been able to uncover specific neurons in neural networks that are responsible for certain activity. Do you think that there's, you know, a future of psychology where we actually start to apply these principles, maybe alongside imaging, to understand a lot more about human brains?
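The core idea of a sparse autoencoder is to re-describe a network's dense internal activations in a wider feature space where most entries are zero, so individual features become easier to interpret. A toy sketch of that idea, with made-up data and a crude finite-difference training loop rather than any lab's actual method:

```python
# Toy sparse autoencoder: map dense "activations" into a wider but
# mostly-zero feature space, trained to reconstruct them while staying sparse.
import random

random.seed(0)

D, F = 2, 4   # dense activation size -> wider, sparser feature space
L1 = 0.01     # strength of the sparsity penalty
LR = 0.05     # gradient step size
EPS = 1e-4    # finite-difference half-width

# Made-up "activations" standing in for a network's internal states.
points = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]

enc = [[random.uniform(-0.5, 0.5) for _ in range(F)] for _ in range(D)]
dec = [[random.uniform(-0.5, 0.5) for _ in range(D)] for _ in range(F)]

def encode(x):
    # Features are non-negative (ReLU), so the L1 penalty drives most to zero.
    return [max(0.0, sum(x[d] * enc[d][f] for d in range(D))) for f in range(F)]

def decode(h):
    return [sum(h[f] * dec[f][d] for f in range(F)) for d in range(D)]

def total_loss():
    # Reconstruction error plus sparsity penalty, averaged over the toy data.
    s = 0.0
    for x in points:
        r = decode(encode(x))
        s += sum((ri - xi) ** 2 for ri, xi in zip(r, x)) + L1 * sum(encode(x))
    return s / len(points)

def descend(matrix):
    # Crude finite-difference gradient descent; fine for a 16-parameter toy.
    for row in matrix:
        for j in range(len(row)):
            old = row[j]
            row[j] = old + EPS
            up = total_loss()
            row[j] = old - EPS
            down = total_loss()
            row[j] = old - LR * (up - down) / (2 * EPS)

before = total_loss()
for _ in range(200):
    descend(enc)
    descend(dec)
after = total_loss()  # reconstruction improves while features stay sparse
```

Real sparse autoencoders operate on millions of activations with vastly wider dictionaries, but the interpretability bet is the same: each surviving feature tends to correspond to something more legible than a raw neuron.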

Dr. Derrick Hull:

I hope so is the short answer. I mean, I admit my knowledge is limited when it comes to, what was it, sparse autoencoders?

Dr. Derrick Hull:

I mean, it would be great if technology like that could be helpful. But, as you alluded to before, we're already seeing ways in which AI and neuroscience and psychology are mutually beneficial to each other. You know, we were speaking last night about some of Kent Berridge's work, the difference between, like, wanting and liking, and the dopamine response in the brain, and work in machine learning has actually helped to model that process in a way that wasn't really, you know, possible before. And so the hope is that, as these disciplines continue to talk to each other, we can get more refined models that then illuminate aspects of neuroscience. And hopefully, neuroscientists will then ask more sophisticated questions that will push machine learning experts, and hopefully there's this kind of virtuous cycle of knowledge accretion.

Daniel Reid Cahn:

I mean, I think there's something cool about, like, to the extent that a neural net models how brains work, it's a lot easier to study. And so there's something about, like, you don't have to image the brain. You have access to every single neuron, every detail about it. So we can learn something from that space. And then we enter the much more complex space of the human brain, where it's extremely hard to actually know, like, the quote, unquote value or the activations of individual neurons.

Daniel Reid Cahn:

So it gives, like, a really interesting research area. Changing topics a little bit: I know that when we started, you said you were very excited about AI for therapy. What's exciting about it to you?

Dr. Derrick Hull:

So maybe I could bucket them. I think there's a lot that's theoretically exciting about it, some of the things we've talked about already, you know, the degree to which AI, especially with large language models, can mimic, replicate the language capacities that we have as humans, which is really interesting because humans are the only creature with language. So we have these models that are shockingly good at it now. So I think theoretically, that's very interesting. I think also theoretically interesting is that, you know, we can only really progress a discipline at the same rate that we can gather data or information about that discipline.

Dr. Derrick Hull:

Yeah. And in the past, when it comes to psychotherapy, you know, the information that we were gathering about its effectiveness is compelling, I think. There's lots of data to show that psychotherapy is helpful for people. But there's not as much data around how exactly does psychotherapy work? Like, what are the essential ingredients?

Dr. Derrick Hull:

And I think one thing that AI allows for is not just the ability to gather more of that data, but to analyze it in a meaningful way, so that we can start to refine what exactly is necessary in trying to help someone have that healing experience, versus what maybe isn't necessary, and we should skip that and move ahead faster, because we wanna reduce suffering as fast as we can.

Daniel Reid Cahn:

But like from the research angle, it's super interesting that we'd be able to learn something about mechanistically what actually works with psychotherapy and what doesn't in a way that's just extremely hard to study with limited data.

Dr. Derrick Hull:

I think that's absolutely right. And I think you could go even further and say that as we learn more about what works in therapy, we're probably gonna reveal things about what's good for humans in general. Like, how exactly does the mind work? What are the, I don't wanna call them weaknesses necessarily, but what are the areas where the mind kind of gets in its own way, and how can we help people to see that and maybe get past it?

Daniel Reid Cahn:

Yeah. Right. I know that when we spoke previously about Talkspace, you mentioned that when you first joined Talkspace, you came in as a bit of a skeptic?

Dr. Derrick Hull:

Well, I came in as a bit of a skeptic when it came to the message based care. Because I think, you know, in therapy, all of your training takes place in person, like we are now. You sit with a supervisor in person, and there's some clinical lore as well that, like, what's really essential, the truth of somebody, you read it on their facial expressions. You're looking at their body language: are their arms folded? Are they defensive? Are they open?

Dr. Derrick Hull:

You know, those kinds of things. And so there's this lore that suggests that if you're only messaging someone, there's no way that it can be very helpful. And I think a second piece is that a lot of the work that's done in therapy is, you know, this kind of emotion processing, where you're trying to help someone sort of drop down into some kind of feeling, process that feeling as a way to understand it, release it, and make choices about it. And so, again, I came in with a lot of questions around message based care. How exactly is this gonna work?

Dr. Derrick Hull:

I think one thing you told me when I was joining was, you know, your thought was like, it doesn't have to be as good as talk therapy because it's so much more accessible.

Daniel Reid Cahn:

Yeah. You wanna talk about that?

Dr. Derrick Hull:

Well, this leads to the, you know, the second bucket that we were gonna talk about on AI therapy, which is the practical consequences. Right? Like, the ratio of providers to people who need care, depending on where you live, at least in the United States, can be anything from, like, one to 30,000. Right? There's, like, one provider for 30,000 folks who could use help.

Dr. Derrick Hull:

It's fewer in some places, more in others. And I think it's shocking that so many people need help, and there's so few people that can provide it. And so one of the ideas behind message based care was, if you can take a competent provider and they can help more people, then we've, you know, tried to cut that ratio down. And I sort of felt like, even if this message based care isn't as effective, at least, you know, you'd rather give three quarters of a dose to a hundred people than a full dose to ten people. That was kind of the idea.

Dr. Derrick Hull:

And yet, amazingly, ten, twelve years of research, multiple randomized controlled trials, and some naturalistic observational studies as well suggest that message based care is just as effective as teletherapy, at least by video. And there's lots of research that shows that teletherapy by video is just as effective as in-person care. So if you follow that logical chain, one can suggest that message based care is just as effective as more traditional forms of care. And people feel just as connected to their therapist, and they feel that the therapy is just as effective. That's, you know, asking users what they think, in addition to, you know, symptom reduction measures. And I think that's amazing.

Dr. Derrick Hull:

Isn't that shocking? I think it's totally shocking. And, you know, when you talk to anyone in the profession too, at first people are like, there's no way that's true. It just can't be. And it's like, but if you look, study after study after study suggests that there's something there. And this takes us back to large language models, or at least it does for me.

Dr. Derrick Hull:

I think it's so easy to dismiss, because we talk about it like, oh, it's just talk. You know, like talk isn't important. But I think it's easy to underestimate how much of our humanity is poured into language. Because you can imagine a medium that's just language: message based care. It's just language, nothing else. No nonverbal cues, nothing.

Dr. Derrick Hull:

And it can help heal people in the same way that sitting in a room with somebody can. It's remarkable. Not just for message based care; I think it's remarkable for understanding language.

Daniel Reid Cahn:

And what is happening in therapy, I guess. Right.

Dr. Derrick Hull:

For sure. Well, and for appreciating the potential for something like an AI therapist. I mean, of course, there's a lot of work where people wanna build, like, avatars, and maybe that helps, and I'm not against that, but I think as a first step, we should have a little more confidence in how powerful just language can be for a technology like this.

Daniel Reid Cahn:

Yeah. I mean, you know, we agree on this: given the evidence that language is what matters so much, there's something hard to square away when you meet therapists, you know, and they hear this information, and they're thinking, wait a minute, but I think that what I'm doing has so much to do with nonverbal cues. You know? I think one of the big questions I get for, you know, what we build is, are you gonna do video?

Daniel Reid Cahn:

And, you know, I go through it. I'm curious if you have any thoughts on this before I tell you what I would say. I mean, do you think we should be investing in video?

Dr. Derrick Hull:

I wanna say I wouldn't start there, you know, for reasons that I gave before. And it's not to say that the nonverbals are completely unimportant. I mean, it is a form of communication, but it's not essential, it seems like. It's not an essential ingredient to the healing process.

Daniel Reid Cahn:

I also think part of my intuitions on this would be, you know, it's language. Fundamentally, language is, like, you know, the ability to communicate some sort of information between people. And if you're in text-based therapy, you don't interact the same way as you do verbally, but you compensate. Right? If you wanna say something sarcastic, you might have to write "I'm being sarcastic" or make it much more sarcastic sounding, because you know how to work with the modality.

Daniel Reid Cahn:

Maybe you send a longer text message or emojis, but you know you're texting. You're not, like, tricking anyone. I think that, you know, has to be part of the picture. For me, when I think about moving to other modalities, like, when it comes to video, the bar is just really, really high. And I actually wonder, you know, when you think about Zoom therapy, whether we just don't meet that bar.

Daniel Reid Cahn:

Like, what elements of it, you know, do we think are being conveyed and which are not? But, yeah, I mean, I definitely tend to think, like, the important thing is to communicate, and so you wanna make sure the constraints are clear. We probably will get to video, but it's gonna be really hard.

Dr. Derrick Hull:

Yeah. I mean, the uncanny valley is obviously the classic boogeyman here, where, like, if you don't get it just right, it

Daniel Reid Cahn:

sort of creeps people out. Yeah. Although there's some interesting evidence against the uncanny valley these days that I'm actually not fully up on. We don't have to dive too deep, but there have actually been some attempts to reproduce the original uncanny valley studies that have failed. I don't know if you follow this.

Daniel Reid Cahn:

Yeah. It's happened a lot in AI. Anyway, because now we have a lot more generative AI video and images, and people seem to like it. That's been

Dr. Derrick Hull:

I mean, you could make the argument that, like, video may not be necessary for therapeutic outcomes, but video might be important for some folks in terms of, like, acceptability of the medium. You know, they might want that additional channel, so I could see it justified for that reason.

Daniel Reid Cahn:

There's something about the opt-in nature of it. Like, if you opt in to Talkspace, it's because you feel it's appropriate for you. So it's fine if that doesn't work for everyone. It doesn't have to work for everyone. It just has to work for the people that opt in.

Dr. Derrick Hull:

I think that's such an important point. I mean, there's so much need in mental health that I don't think there's any need to be territorial at all, you know? It's like we're on this huge savannah, and there's only one little tent on it. There's plenty of space. And as we know, you know, traditional forms of care generally tend to attract particular kinds of people, and other people feel like, well, if I can't do that, then therapy is not for me.

Daniel Reid Cahn:

So wait, I am curious, on this savannah thing: did you get pushback from professionals at Talkspace, you know, given the kind of therapy that you guys offered?

Dr. Derrick Hull:

For sure. I mean, there was a lot of skepticism. And interestingly, the skepticism came in two forms. One was what I would say is a stronger argument, around, like, well, is message based care gonna allow for the kinds of emotional processing and healing that I think is important for therapy? That's a good open question.

Dr. Derrick Hull:

And then the other skepticism, interestingly enough, was like, well, tech startups are never led by serious people, so this is probably... I don't know how many psychotherapy researchers I talked to where all I had to explain was, we're using licensed therapists, and they're like, oh, okay, cool. Because, like, in their head, it was like, I don't know, we had a bunch of ten-year-olds or something in a basement just texting people advice, you know? And once they realized, no, we're using professionals, it's just a different medium

Daniel Reid Cahn:

Yeah.

Dr. Derrick Hull:

Then immediately their concerns went away. I think it's interesting because what it reflects is, like, anytime there's a new medium, and certainly anytime there's a new technology, immediately we're like, something bad's gonna happen. New technology is threatening. New technology is dangerous. And certainly, there are some new technologies that are threatening and dangerous.

Dr. Derrick Hull:

But I think that seems to be our default position: any new kind of tech is immediately suspect. Or it's not being led by serious people, or, you know, we're not

Daniel Reid Cahn:

I think there is a distinction that needs to be made, of course, between, like, you know... if you're introducing a new social network, there's reason for concern. Right? Social networks, like, you know, they probably do good, but it's not quite as obvious and direct compared to if you're building a tool for good. If you're building Talkspace, it's kinda hard to imagine what the, like, negative use case is for Talkspace.

Daniel Reid Cahn:

Right?

Dr. Derrick Hull:

Yeah. Well, I mean, I think what they would say, just to give air to their argument, is that anytime you offer a nonserious treatment to somebody who needs help, it delays the time before they get into, you know, real, serious treatment. Which I think is a valid concern, and that's why data and good clinical practice and safety and everything else that goes into a serious treatment are essential to bake into these new kinds of media and technologies.

Daniel Reid Cahn:

A hundred percent. Yeah. And I think you have to balance those factors against the increase in accessibility, and a question of, like, are these the same people? Are you now reaching the same people who would otherwise have been reached by something else? Or are you reaching new people who would otherwise have been ignored?

Dr. Derrick Hull:

Yeah. I think that's absolutely right. Well, and it speaks to, and I don't know if we wanna go super deep into this, but usually in clinical work, when we think about ethics, we think just about safety. But there are a lot of ethical principles that have to do with fairness and accessibility. And often, when those who back traditional care are resistant to new technologies, it comes from a place of safety, where you have an ethical responsibility, very important, but it's easy to forget the fairness piece. Let's just take it to an extreme, right?

Dr. Derrick Hull:

You find the best therapist in the world and say, they're the only person that can offer treatment. Okay. Well, everybody who comes to them is gonna get the best treatment, but that leaves a lot of people out.

Daniel Reid Cahn:

Yeah.

Dr. Derrick Hull:

And in many cases, when it comes to our health, something is often better than nothing.

Daniel Reid Cahn:

Yeah. And I think there's also, I mean, just to take it further, like, there are other demographic issues, which is to say that, like, most therapists are women, and I think there is that element of, like, you know, men who are looking for a male therapist. I think we see this with racial groups. We see this with, you know, people looking for... we get a lot of people who talk to us about looking for an Indian therapist in the US and struggling. There just aren't enough Indian therapists in the US.

Daniel Reid Cahn:

And there are some of those, I think, fairness factors too, of, like, what if there is that person who would be right for you, but there just aren't enough of them? Can we help them amplify their impact? Can that, you know, be an ethical obligation at some point, to make sure that we're delivering care, especially to those people who are underserved by the system?

Dr. Derrick Hull:

For sure. I think it's very important. And it's partly why, you know, throughout my time, I've been very interested in new technologies, new media, and innovative approaches to care, largely because I feel like the default status of the institutions we have now, it's not terrible, but it could certainly be better.

Daniel Reid Cahn:

Yeah. So AI therapy. Let's do it. Let's do it. Yeah.

Daniel Reid Cahn:

So, you know, when we were first getting started, Neil and I met with about a hundred therapists, and we wanted to get a range of opinions on what works, what is therapy, how should we think about it. You were one of those hundred people that we met, and we had a very different conversation with you, I think, than with many others. Mainly, I think, from a perspective of pluralism: you were less focused on one correct answer and more mechanistic, perhaps because of your time in these different organizations working with a lot of different providers that behave differently. And when we asked you questions like, you know, what makes for a good therapist, I think you really surprised us. You'd answer with things like, you know, they're engaging, they connect with people, and we're like, oh, no, no, no, but, like, in the way they practice.

Daniel Reid Cahn:

And you're like, well, they practice a lot of different ways. AI therapy, though, in particular, I mean, I know that this has been something you've been thinking about for decades?

Dr. Derrick Hull:

Well, for a while. A while? Yeah. I hate to say decades because then it makes me seem old, although my hair probably gives that

Daniel Reid Cahn:

off enough. So what excites you about AI therapy?

Dr. Derrick Hull:

Well, I mentioned some of the things before, but related to what you just mentioned, I think what's exciting about AI therapy, aside from the accessibility and the affordability pieces, is an ability to understand psychotherapy in a new way. I think what's also great about it is that it offers a kind of big-tent approach to therapy, where we don't have to be too committed to any particular orientation. You mentioned before, and the data's pretty clear on this, that whatever a therapist's orientation, psychoanalytic, psychodynamic, CBT, third-wave CBT like ACT and DBT, all the XBTs, they're more or less equally effective. Now, of course, there are some treatments that are particularly effective for certain conditions, you know, say, CBT for panic, and sometimes psychodynamic therapy is a little more helpful with personality challenges. But generally, if you look at the data as a whole, it really comes down to what is in the therapist, and how effectively whatever's in the therapist is matched to the individual that's receiving care. That's where a lot of the benefit is.

Dr. Derrick Hull:

And it feels to me that while, of course, AI is not a person per se, the AI is also not gonna get hung up in its own identity and past historical challenges. And so the hope is that AI can be a little more flexible and deliver whatever's needed in the situation to a large number of people. And not only is that exciting from a practical point of view, but theoretically, I think it's really, really interesting to understand, like, what is that? What are the essential ingredients of it?

Daniel Reid Cahn:

What would it look like if you were consistently operating in a great way? I mean, there are also some of those, you know, scary stories. We have, actually, I believe, one of our investors who told us that he went to a therapist for a while, and then at some point the therapist started crying in his session, and he was like, I just can't come back to

Dr. Derrick Hull:

you now. I can't be here to support you. Yeah. Exactly.

Daniel Reid Cahn:

Right. Yeah. And you get those interesting... Anyway. Or your therapist might move.

Dr. Derrick Hull:

Yeah. Or your therapist... I mean, in sad cases, sometimes therapists pass away, and then you have to start over and tell your story again. Now, of course, there are a lot of upsides to working with a person as well. I'm not arguing against working with people. But, as we said before, I think what AI offers is that there are some holes here and there, particularly for certain kinds of individuals, that maybe we can finally plug up after all this

Daniel Reid Cahn:

Just to push you, because I think that's the kind of question a lot of people would ask: AI can't be empathetic. So can AI do therapy without having any empathy? Or can AI have empathy?

Dr. Derrick Hull:

Yeah. I mean, I think the question tells us something about what we think is an essential ingredient to therapy. Like, what matters most in a therapist is that my therapist understands me, and that's what we mean by empathy. And certainly, that is important. You know, we talk about the so-called common factors, like warmth and empathy and caring.

Daniel Reid Cahn:

These are, like, the common factors that make a therapist successful independent of modality. "Common" being, like, common to modalities.

Dr. Derrick Hull:

Yeah, that's right. And so the contrast concept is the so-called specific factors, which are, you know... CBT is specifically very focused on your thinking and reframing your thinking, whereas psychodynamic, you know, etcetera. You could go down the list. So it's an interesting question of whether empathy... so actually, let me step back.

Daniel Reid Cahn:

So warmth matters. Is empathy

Dr. Derrick Hull:

I think warmth matters. I mean, empathy matters, but there's a question of what does empathy mean? You know, like if I'm sitting with a patient, do I have to have had their exact same experiences in order to be empathetic? I think the answer has to be no. Because otherwise, I could only work with, you know, too tall, too blonde, too thin men, which wouldn't make for a very interesting practice.

Dr. Derrick Hull:

And I know

Daniel Reid Cahn:

you mentioned you actually spent like two years working with prisoners?

Dr. Derrick Hull:

Yep. With prisoners. I also spent time at the VA, and did some training in a private practice area as well. So I feel like I've touched a couple different, you know, a

Daniel Reid Cahn:

couple different places where people need help. And you get the impression that without that perfect empathy, without knowing what it's like to be them, you could still be helpful.

Dr. Derrick Hull:

I think so. Yeah. And I you know, as I think about it, if we think about what the function of empathy is, one take on it would be to say that empathy acts as a kind of safety signal. Right? Like, you're in a safe place.

Dr. Derrick Hull:

It's okay for you to tell me what you need to tell me. Because I think what people worry about is like, I think I have all these thoughts that suggest that I'm sick, and if I go to another person and start showing them all the ways in which I think I'm sick, they're gonna wanna run out of the room. And so, I feel like the function of empathy in most settings is this kind of like, I'm open. I'm accepting of where you are. I'm not gonna pathologize it.

Dr. Derrick Hull:

I'm not gonna freak out. I'm certainly not gonna punish you for it. And if that's the case, that that's how empathy functions and it acts as a kind of safety signal, then in a way, an AI therapist is generally well set up to do that. Because it can reflect back to you that it understands what your experience is.

Daniel Reid Cahn:

You know, I wanna push you a little harder here. Right? Okay. If an AI says to you, I totally understand you, is it lying? I think

Dr. Derrick Hull:

it's such an interesting question, and I have two thoughts about it. I mean, the first is, you know, in order for a lie to be a lie, it's not just a mistake. Right? You have to mean to be lying. And so we could say that AIs don't mean to be doing anything.

Dr. Derrick Hull:

Yeah. But you could say that the large language model, because it's trying to, you know, mimic or model human behavior Simulate, maybe? Yeah. It's gonna simulate yeah. Thank you.

Dr. Derrick Hull:

It's gonna simulate human behavior. It's gonna be natural for it to say something like that.

Daniel Reid Cahn:

And it's super like, an AI trained to talk based on humans will say things like this, and it won't have any, like, negative intention. But there's still some pushback, I feel like, if sometimes, at least for some people, it's like, no. You don't. You're a computer.

Dr. Derrick Hull:

Yeah. You're a computer. Right. Well, that leads to my second thought, which is that I think, you know, interacting with an AI therapist is gonna require some suspension of disbelief

Daniel Reid Cahn:

Yeah.

Dr. Derrick Hull:

In order to try to engage with it at all, in in the same way that you would suspend disbelief if you went to a movie. You know, if you went to a movie and you thought to yourself, these are just actors pretending to be fake people who have never existed, I mean, man, that throws a wet blanket on Star Wars, you know? Like, it's not very interesting at that point.

Daniel Reid Cahn:

Yeah. I mean, there's some truth to that, which is like the funny acknowledgment that whenever you go to a movie like, people go to scary movies and get scared. Like, you look at the heart rate, I imagine, of people in a movie, and you could figure out what's on the screen based on that. Right? Like so, yeah, I guess there is some element of, like, are we just making things up when we say people won't believe the AI when it says, I understand?

Daniel Reid Cahn:

There's still some element of, like, mental model. You know? Maybe it's suspension of disbelief, but something weird here about the fact that the AI is, you know, in this weird world of, like, no. It doesn't understand, but it kinda does understand. The simulation of it understands or, like, it is truthful when it says it understands.

Daniel Reid Cahn:

It is I don't know. Regardless though, the user who hears it understands what that phrase means. Like, they feel the impact of the AI saying to them, I completely understand. What you're saying is totally reasonable. I really do get you.

Daniel Reid Cahn:

Like, there's something emotional that you feel that really doesn't require the explanation, I guess, of, like, what does the AI actually mean?

Dr. Derrick Hull:

Yeah. I think that's right. Well, and I think it speaks to this, like, if you allow yourself to be moved and impacted by the model, then the potential for benefit seems high. Yes. And and if you're not, then, you know, as we've said before, like

Daniel Reid Cahn:

Like watching

Dr. Derrick Hull:

AI therapy may not be for you.

Daniel Reid Cahn:

Ah, yes. Yes. But also, yeah, like the wet blanket on Star Wars. The guy sitting in yeah.

Dr. Derrick Hull:

And I think, you know, in this case, one difference might be that if we open ourselves up to the AI, and of course, assuming that we have the right kinds of safeguards in place, the AI is highly unlikely to manipulate you into like, here's my bank account, you know, send me some money, or that's a really nice car, gosh, I wish that were my car, you know, that kind of thing. And we know that humans, because we're fallible, you know, human therapists sometimes do cross that line. And I think another thing that's exciting about AI is it's not to say that AI is gonna be perfectly safe all the time. Because anytime you're trying to help someone, there is risk involved. That's just unavoidable.

Dr. Derrick Hull:

Right? Ethics is not about reducing risk to zero, which is impossible. It's about reducing risk as much as you can. Yeah. And I think that there's a world in which an AI therapist could have less risk than, you know, other kinds of therapists who have a variety of incentives and motives that are complicated.

Daniel Reid Cahn:

Yeah. Yeah. I mean, I think that point of like, what if AI therapy were safer than human therapy on average, rather than saying perfect. I mean, I'm curious, when you think about the risks, what do you think are the big risks we should be thinking about, worried about?

Dr. Derrick Hull:

I think one risk with AI therapy is: is it gonna be handling situations in a way that no reasonable clinician would want to handle them? And are we not going to catch it? I think that's the big issue. Like, if the model makes a mistake and we can catch it and correct it, great. I mean, that's just like taking a therapist through training.

Dr. Derrick Hull:

You know, therapists are gonna make mistakes, you catch it, you correct it. A potential problem is, are we gonna be able to monitor the models such that there isn't some way in which they're mishandling the same kind of situation over and over and over and causing harm? To me, that's the biggest concern.

Daniel Reid Cahn:

And when you say that, you mean, like you said, something that no reasonable therapist would do. So that's like, someone asks, what should I do in my relationship? And the AI is like, you should totally break up. They're toxic.

Dr. Derrick Hull:

Exactly.

Daniel Reid Cahn:

That kind of thing.

Dr. Derrick Hull:

Yeah. I think that kind of thing. Well, I think there's two levels. You know, if we set aside things like crisis and risk for just a second, there is this level of, could the AI therapist just be giving bad advice in general? You know, generally speaking, in therapy, you try not to weigh in on somebody's decision. Your goal is to help them understand the basis on which they're making their decisions, not to make decisions for them.

Dr. Derrick Hull:

And I think getting an AI model to do that well, that feels reasonable and doable. And it feels like something we would want to do to avoid this kind of like, oh, well, who cares if you have twenty years sunk into that marriage? Just off you go, you know.

Daniel Reid Cahn:

But I think that's a really important point, which is, I think it is a real concern people would have, which is, how do we avoid an AI therapist giving bad advice? And I think it is a really serious and legitimate response to say, a good therapist wouldn't have been giving advice, so don't be giving advice, basically.

Dr. Derrick Hull:

Yeah. Or certainly not I mean, it depends what we mean by advice, but certainly not the kind of advice of like, I think you should do this. Yeah. You know? I think you should empty your bank account and invest in Bitcoin.

Dr. Derrick Hull:

I think you should quit your job and Those

Daniel Reid Cahn:

Yeah. I mean, I think the funny ones that we've hit, we have had situations of people talking about their relationship with a celebrity. This is like a real thing that we've encountered. Mhmm. Where the celebrity, you know, it's a scam, basically.

Daniel Reid Cahn:

Lost for sure. And the person comes back and talks about how great their relationship is. Then Ash says like, oh, did you meet up in person? No. We've never met in person but we just text back and forth.

Daniel Reid Cahn:

And the funny thing is, just because our model is not trained on this kind of data, it doesn't at any point, like, doubt the user, and because of how it's trained to be safe and careful, you know, it's sort of shocking when later the person comes back and says, it was a scammer the whole time. And I mean, I don't know if a therapist would do a better job. Maybe they would, maybe they wouldn't, but there are these funny cases. I think the big thing that I would worry about is, what if we can't help the person? Right?

Daniel Reid Cahn:

And I think this is more relevant to what you said about, you know, the risks with new modalities of therapy, which is what if there's a case, and I think this is like the real worry that worries me, which is like someone needs help. They really do. They're really suffering. And because we're not good enough, we just can't help them with their problem right now. Not that we cause harm and tell them something terrible, but we just aren't good enough yet to help them with that.

Daniel Reid Cahn:

Yeah.

Dr. Derrick Hull:

Well, it goes back to this potential argument of, will people delay getting the kind of help they need because they think they're getting enough from the AI agent? Yeah. I mean, again, I sort of feel like any of the risks that we can attribute to AI are not risks in my mind that are unique to AI. Because human therapists again can make mistakes, break the frame, give advice. I think the one risk that feels unique to AI to me is that if there's a kind of glitch in the model that continues to trigger and we're not catching it, that would be something to be aware of.

Daniel Reid Cahn:

Well, I guess we have a lot more to talk about, but this has been awesome. Thanks so much for joining me today, and hopefully we'll have a lot more of these.

Dr. Derrick Hull:

I hope so. I think it's a really interesting area to think about. It's very exciting.

Creators and Guests

Daniel Reid Cahn
Host, Founder @ Slingshot - AI for all, not just Goliath