Does Philosophy Make Progress? Chatting with Every's Dan Shipper
So let's just get started. Alright. Thanks so much for joining me.
Dan Shipper:Thanks for having me.
Daniel Cahn:So Dan Shipper is the cofounder and CEO of Every. It's a media and software company on a mission to discover what's next by any means necessary, it seems, which means that Dan does quite a bit of writing as well as coding. I'm excited to have Dan here today because we seem to have a lot in common. Besides a bunch of mutual friends we discovered from the New York AI community, Dan actually also started coding in middle school.
Dan Shipper:Middle school.
Daniel Cahn:Undergrad in philosophy. Mhmm. Also went on to start an AI company
Dan Shipper:in some sense. Also both named Daniel.
Daniel Cahn:Also both named I wonder if we're secretly related. But, anyway, I'm excited to have you on, talk a little bit about AI and philosophy and, you know, wherever we go.
Dan Shipper:Excited to be here.
Daniel Cahn:I'm just curious, maybe not the fully philosophical side, but how'd you start coding?
Dan Shipper:I read a Bill Gates biography when I was in 5th grade, and I decided that I wanted to start a Microsoft competitor. And I was gonna name it MegaSoft, and I wanted to build, like, an alternative to Windows, so a different operating system. And I, like, begged my dad to take me to Barnes & Noble and buy me a programming book, which he did, but he was like, it's expensive. I'll buy it for you, but you have to, like, read it. And so I was like, okay.
Dan Shipper:And so I read it, and it was BASIC. So I didn't end up making an operating system, but I was like, oh, this is really cool. And I kept doing it because it was the only way for me to make stuff as, like, a middle schooler, and I loved that feeling and just kept doing that.
Daniel Cahn:Yeah. I feel like there's some magic in in terms of, like, when you build things in the real world, it's so slow because, like, you'd draw, like, a blueprint of the building you wanna build, and you have to build it brick by brick. Yeah. You know, in in computer science, the blueprint is the product.
Dan Shipper:So you
Daniel Cahn:you described it very carefully, and you're done. There's no next step. Yeah. Totally. So nice.
Daniel Cahn:And then you didn't study computer science. You went into philosophy?
Dan Shipper:I did not. I went into philosophy.
Daniel Cahn:Or at least you said you didn't.
Dan Shipper:I mean, I'm still I'm still in it. I'm still in philosophy. I've somehow made that, I guess, my career in some in some ways. But, yes, I studied it in undergrad.
Daniel Cahn:I'm I'm just curious why why study philosophy?
Dan Shipper:I have always been interested in questions about, like, how to live and, like, what a good life is. I started getting, like, really interested in that in a way that I could, like, talk about when I was in high school. And when I got to college, those were the classes. Like, the philosophy classes, like, really stuck out to me. And I knew that I would be able to get a job after I graduated because I could program, which now feels like such a different time where, like, being an engineer felt like it was so much more valuable, like, 10 years ago.
Dan Shipper:But, anyway, there's a lot there. I still think it's valuable. It's just a different thing. But I realized that I could get a job, and so I was like, why don't I try to get, out of college, like, whatever would be most pleasurable or most interesting to me?
Daniel Cahn:You mentioned the good life. Do you feel like you figured out what the good life is?
Dan Shipper:No. I think, first of all, what philosophy afforded me was the chance to read a lot of books, which is really cool, and to, like, have that experience where you read a philosopher and you're like, wow. Like, Descartes, like, really got it right.
Daniel Cahn:Really?
Dan Shipper:And then the next week, you, like, read, like, I don't know, Locke, and you're like, wow. Like, Locke got it. Descartes sucks.
Daniel Cahn:I was gonna say, Descartes is an interesting choice.
Dan Shipper:Yeah. And then, like, every week, you have that same experience, and then you start to be like, wait. I don't know if any of these people have it right. You get to have a little bit more of a critical perspective on those kinds of frameworks, which is, I think, a philosophical way of thinking that's been really helpful. I do think, like, in general, my perspective on most academic philosophy is that it actually has departed very far from its, like, Socratic, Platonic, good life roots.
Dan Shipper:And so I think after college, I got much more into psychology, because I think psychology is, like, maybe a little bit closer, depending on which branch of psychology you're talking about, to thinking about the questions of the good life. And I think I've found some answers for myself, but those answers are not final, and they're not general.
Daniel Cahn:I guess that was the question I was wondering, which is, like, do you feel like, you know, people often criticize philosophy from a perspective of, like, oh, there are no answers, there are just questions. Yeah. Do you feel like you've found answers?
Dan Shipper:I definitely have, like, perspectives on some of the, like, traditional philosophical questions or philosophical dichotomies. Mhmm. Yeah, I think there's this, like, general feeling that philosophy doesn't make progress. Right? But I think if you study the history of philosophy, you do, like, see, like, a thinker will think something, and then someone else will respond to them and find holes in their argument and then posit their own, like, idea of whatever they're debating.
Dan Shipper:And over, like, thousands of years, you can track, like, really big moments in the history of philosophy, like Descartes or Kant or whatever. And I think pretty much every branch of knowledge works like that if you actually, like, look at the details of it.
Daniel Cahn:That there is progress.
Dan Shipper:Or that the progress looks a lot like the progress in philosophy, where, like, a classic way to talk about it would be, like, Kuhnian paradigm shift type things, where progress doesn't necessarily look linear. And depending on how you look at it, I think progress doesn't build on itself in the way that, like, we tend to believe it does. I'm being, like, a little bit wishy washy on that.
Daniel Cahn:But I mean, yeah. Sticking with philosophy for a minute, I guess there is that interesting point of view that's really easy to hit, which is, like, philosophy makes no progress. Yeah. People are saying the same stuff now as they did then. Why don't we know the answer between, you know, deontology or consequentialism or whatever frameworks?
Daniel Cahn:And then there's some interesting path you talk about in your own undergrad of, like, you discover a framework. It's magical and answers every question until you reach that next step and someone creates this hugely problematic
Dan Shipper:Yeah.
Daniel Cahn:Thought experiment. And suddenly you're, like, the framework falls, but the new framework stands.
Dan Shipper:Yeah.
Daniel Cahn:And this happens to you as a person enough times over the course of, like, 3 years
Dan Shipper:Yeah.
Daniel Cahn:That you realize, like, okay. I'm skeptical that any of these frameworks are gonna stand.
Dan Shipper:Yeah. Definitely. I think you start to, and I think a lot of philosophers like Wittgenstein or, like, a lot of the American pragmatists or even Kant are like this, you start to think about what the limits of inquiry are and why we can't, like, come to answers on those questions. And I think that's something that I've thought about a lot. And those are the kinds of thinkers that I really like for that reason, and it's also why I've been, like, really interested in machine learning and AI, because I think the same shift has sort of happened in machine learning and AI where
Daniel Cahn:Like, the wishy washiness?
Dan Shipper:Or I don't know about wishy washiness. I think that the, like, the traditional AI paradigm, symbolic AI, which is trying to, like, represent intelligence in the basic units of logic, symbols, and their relationships, didn't work. And it didn't work in the same way that I think a lot of philosophy didn't. Yeah. The same kind of approach to philosophy didn't work, which is basically, like, being able to spell out answers to ultimate questions, like, what does it mean to be good or whatever, in a, like, really explicit, rule based way.
Daniel Cahn:Mhmm.
Dan Shipper:We haven't really been able to do that in philosophy, and we haven't really been able to do that in AI. Mhmm. And what ended up working in AI was this shift to neural networks, where those networks work on a sub symbolic level. They don't represent symbols, like, in each node or in each connection. Those representations are distributed.
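The distributed-representation point above can be sketched as a toy (this is an illustration, not how any real network stores concepts; the dimensions and vectors here are made up): a "concept" lives as a pattern spread across many units, so knocking out any single unit barely changes the readout.

```python
import random

random.seed(0)
DIM = 256  # number of "units" in the toy layer

def random_vector():
    # a concept stored as a pattern spread across all units
    return [random.gauss(0, 1) for _ in range(DIM)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

cat = random_vector()

# Zero out a single unit: the readout barely changes,
# because no single unit "is" the concept.
damaged_cat = list(cat)
damaged_cat[0] = 0.0

full = dot(cat, cat)
after = dot(damaged_cat, cat)
print(round(after / full, 3))  # stays close to 1.0
```

By contrast, deleting one symbol from a symbolic rule base removes that fact entirely, which is one way to picture the sub-symbolic versus symbolic contrast being described here.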
Dan Shipper:And I think that, like, has some parallels to the limits of, like, philosophical inquiry. Like, a lot of philosophy is thinking for yourself, like, okay. Why is my intuition x y z, but I have another intuition over here, and how do they, like Yeah. How do they connect? And I'm trying to make them, like, consistent and have one principle that, like, underlies all of them.
Daniel Cahn:And they seem both super strongly held and yet super inconsistent.
Dan Shipper:Yeah.
Daniel Cahn:And suddenly you see a neural net, like, specifically a language model, produce 2 opinions that directly contradict each other. I don't know if you've done this, but I love with ChatGPT, occasionally you can ask it questions where it will answer confidently with an answer in a factual kind of way. Yeah. And you can lead it down a path where it does contradict itself. And then you say, does that sound like a contradiction? It says, yes, you're right.
Daniel Cahn:I did contradict myself. Yeah. And, you know, yeah, there's there's some interesting still, like, resistance, I guess Yeah. To the idea that one can hold multiple contradictory opinions.
Dan Shipper:And I think for me, like, I'm like, actually, that's fine, because the contradiction only happens at the, like, conceptual level, which is where the symbolic, like, contradiction exists, but there's no contradiction at the subsymbolic level.
Daniel Cahn:Subsymbolic as in, like, in terms of how people behave or decisions that get made?
Dan Shipper:Sort of. So in, like, a personal situation, it's like we have, like, a vast amount of intuitive knowledge and intuitive awareness that we cannot, and should not, like, it's not helpful to, try to make explicit in every single possible scenario. And the fact that that intuitive awareness and intuitive decision making has contradictions, to me, doesn't really matter. I think the contradictions are important in specific cases.
Dan Shipper:Like, there might be cases where you're having a problem in your life and, like, you're holding 2 contradictory beliefs, and you wanna, like, try to figure out, like, you wanna update the weights in that, like, that little, like, subnet, basically. But the point is not to, like, explore all of the weights and make them all explicit. It's to, like, find specific ones. It's sort of like case law. You wanna, like, have a judge kinda figure out, in a particular circumstance, like, how should this law apply, rather than, like, have someone spend, like, all their time trying to figure out how every law applies in every single circumstance. Like, that doesn't really work.
Daniel Cahn:So it's interesting. It's like AI reflecting philosophy or, like, the way we think, same way as, like, a legal system. Instead of having a top down, sorry, a bottom up system that's just like, this is what is ethical. Like, you know that that's ethical. You know that that's not. Every once in a while, you will hit a contradiction.
Daniel Cahn:And then you do need to fix the contradiction, but you don't have to do it by changing all of your values.
Dan Shipper:Yeah. Totally. I think the, like, the project that we embarked upon, like, 2,500 years ago with Plato and Socrates is, like, knowledge is only what you can define and explicitly lay out. And I think what we've found is that, like, trying to do that, to some degree, is really helpful. Right?
Dan Shipper:Like, we've made a lot of progress, but trying to do that in an ultimate, general way where there's no contradictions doesn't work so well. And I think you can see this, for example, in ethical frameworks. It seems pretty obvious to me. It's like, if you look at the, like, the EA utilitarian crowd, like, some of it's really good. Right?
Dan Shipper:But, like, then it also leads you into all these, like, weird paradoxical, like
Daniel Cahn:Like, the non-identity problem.
Dan Shipper:Yeah. Or, like, you know, you're like, well, if there's a 0.001% chance that, like, AGI is gonna kill us all, then I need to, like, I need
Daniel Cahn:to focus on that.
Dan Shipper:I need to know where to go on that. And so, yeah. And what it is is, like, you've kind of decided, okay, I'm not gonna listen to my intuitions about what is moral or what is ethical. And instead, I'm only gonna make rational calculations.
Dan Shipper:And then you kind of end up rationalizing your intuitions.
Daniel Cahn:Well, I'm curious on that, just because you say rationalizing your intuition. So you mentioned, like, Kant and Descartes. I'm a big Hume guy. I don't know what you think.
Dan Shipper:He was great. Yeah.
Daniel Cahn:I was gonna say, because you
Dan Shipper:I think he's a proto-pragmatist type thinker.
Daniel Cahn:Yeah.
Dan Shipper:And I'm a sort of pragmatist stan. So, I like Hume.
Daniel Cahn:He's you know, one of his most famous quotes was, like, reason is, and ought only to be, the slave of the passions Mhmm. Where he argues that we actually only ever use reason to justify passions, which is his word for, like, desires.
Dan Shipper:Yeah.
Daniel Cahn:I guess what I found fascinating when I first encountered Hume was just this idea of psychologically inescapable concepts, and the idea that, kind of, like, if there are things that you can't doubt, if there are things you can't, you know, think differently about, perhaps we should, like, acknowledge that, keep it as part of our framework, not try to fix it and say Totally. that wall that looks white, actually, stop thinking it looks white. Yeah. It's like Yeah. Why?
Dan Shipper:Pascal has this really good quote, which is, the heart has its reasons, which reason cannot know. And I really like that. I think that makes a lot of sense. And, yeah, I think the sort of Humean emphasis, I guess, on knowing the limits, or allowing yourself to, like, have unexplainable passions, is, like, fine.
Daniel Cahn:Yeah.
Dan Shipper:The problem is that, like, we feel like we can't admit them, and we sort of try to hide them, and then that causes all sorts of, like Yes. Issues.
Daniel Cahn:Yeah. When you're kind of like, no. Actually, my whole ethical system stands, and all of my opinions are fair. And I don't contradict myself when I say I would, you know, pull the lever, but I wouldn't push the Yeah. fat man.
Daniel Cahn:Like Yeah. That's not a contradiction. No. It is a contradiction, and maybe it's a fine one, and maybe it's one to be aware of, to draw lines in your psychology. I think one challenge here that's kinda hard to escape is there's something nice about symbolic reasoning when it comes to AI.
Daniel Cahn:It gives, like, a predictability. Like, if we could create an ethical framework that just answered all of ethics Yeah. We would be in a position where at least we could all agree it's, like, fair. Yeah. At least we can all agree, yeah.
Daniel Cahn:That's the one we do.
Dan Shipper:Totally.
Daniel Cahn:Do you worry when you think about, you know, this projection? If ethics can't be represented with, like, a fixed system, does that lead to problems?
Dan Shipper:I think it would lead to problems if it could. Like, so, well, the thing that you're thinking about is Leibniz's original dream of, like, the universal calculus, where he was basically like, hey. It would be really great if we could just, like, write everything down in this symbolic language where you couldn't lie and you couldn't make any sort of contradictions. And so we'd be able to turn moral questions into, like, mathematical questions, which is sort of like this direct descendant from Socrates, who wanted to turn all moral questions into explicit, like, quantifiable questions of knowledge. And my feeling about that is, I guess, in, like, some way, it would be nice to be able to be like, yeah.
Dan Shipper:You're like, universally, yes. This is how things go. But also, like, there's sort of this, that sort of feels like death to me.
Daniel Cahn:Like, if we've answered all the questions of ethics Yeah.
Dan Shipper:The whole day? Yeah. Yeah. Yeah. Yeah.
Daniel Cahn:I think there's, you know, there's definitely a truth to that. Yeah. The ways in which it would suck to have all the answers. But there's also the other side of it. Are you familiar with, like, long termism? Yeah. I mean, I think one thing that does strike me about long termism, that, like, area of study, not just the ethical claims: long termism is just the idea that, like, we should be really focused on the long term because there's a lot of time and people in the future.
Daniel Cahn:But I think specifically the idea of, like, there are systems of, like, ethics that last a really long time. Mhmm. Right? Like, the Bible is a thing that's, like, 2000 years old Yeah. And it's stuck and, like, the things people believe haven't changed.
Daniel Cahn:And we could be in a moment where things solidify really strongly. Yeah. So I think there's something more I would wonder about, which is, like, even if we don't move into the rigid perspective of here's some rules in the universal calculus, will we not still inevitably enter a world of, like, rigid not rules, but, you know, a rigid neural net that just decides
Dan Shipper:something. Yeah. I do think about that. I think that there are these phases in the history of, like, moral thinking where you get, like, a wave of, like, anti authoritarian thinkers who, like, try to find a new system.
Dan Shipper:So, like, the Protestant Reformation is, like, a really good example. Mhmm. And those people, like, you know, they're like, okay. We wanna have a personal connection to God. Like, we can, like, get rid of the, like, general, like, Catholic doctrine and, like, all the, like, church hierarchy that's been around for thousands of years.
Dan Shipper:But now, like, it's different. Right? Because, like, the people who are Protestants, and I have nothing against Protestantism, but, like, they grew up in it. They didn't choose it. And so a lot of it is sort of traditional.
Daniel Cahn:And they're around other Protestants. They're not responding to anything. They're just living.
Dan Shipper:Exactly.
Daniel Cahn:And, similarly, the US, you know, revolution happens in the late 1700s and leads to a constitution that responds to something specific, like a declaration of independence. We will not be like them. We will not follow those laws that weren't fair.
Dan Shipper:Yeah.
Daniel Cahn:And that, you know, hundreds of years later, is now inspiring a huge number of constitutions and rule systems that aren't responding to anything in particular.
Dan Shipper:Exactly. And then you have periods of time where people are like, well, the conditions have changed sufficiently that the old systems don't work. We need to come up with new ones. And, like, it's possible that we're in one of those periods right now because of technological change. But I think the appropriate way to ultimately think about new ethical systems, like, let's say, long termism, is, rather than, like, people in the future are all that matter, to say, like, it would probably be better if we shaded it a little bit more in that direction.
Dan Shipper:There's, like, all of these different ethical and moral considerations, and we can't actually, like, prioritize one over everything else, and that wouldn't be good. But we might wanna lean in that direction a little bit for x y z reason. An example might be, like, our neurobiology is, like, pretty wired to be really, really short term focused.
Daniel Cahn:Yeah.
Dan Shipper:And we're building technology that is, like, gonna affect the, like, lives of generations to come, and it'd be really good if we, like, were a little bit more sensitive to that. And that's
Daniel Cahn:I guess what I mean here is not so much, like, should we focus more on the long term, but will our opinions on ethics, and, I mean, like, I don't know, our opinions on clothing
Dan Shipper:Yeah.
Daniel Cahn:You know, the more culturally oriented moral opinions that we have, will they become solidified if we're developing systems whose job it is to understand our society, to understand how we reason, to approximate it, to predict kind of the moral judgments the average person would make, but it's the average judgment people would make today. Mhmm. Will we end up solidifying within machines, not, like, a universal calculus from first principles, but, you know, I think the thing I'd wonder about is, will we end up with a universal calculus that still does exist?
Daniel Cahn:It just happens to be fuzzy instead of, you know, non fuzzy.
Dan Shipper:So are you asking, like, if we try to, like, quantify, like, moral judgments in an AI, is that a sort of, like, universal calculus that gets solidified for, you know, generations to come?
Daniel Cahn:Yeah. And I'm somewhat wondering if that's what we're doing or about to do. Will GPT-4 have certain opinions that, you know, define how we decide to design GPT-5 and 6 and 7 and Anthropic's next model?
Dan Shipper:Definitely. Like, there are always these sort of, like, positive feedback loops where, like, we have QWERTY keyboards because of, like, you know, it's like so, yes, a 100%.
Daniel Cahn:Because someone decided QWERTY was optimal and then it took off enough that it became the default.
Dan Shipper:Yeah.
Daniel Cahn:And whether or not it is optimal, it's not gonna change.
Dan Shipper:Yeah. I agree. It won't change until we no longer use keyboards anymore, which might happen in the next 100 years or whatever.
Dan Shipper:I wouldn't be surprised.
Daniel Cahn:Like, I would mention, you know, you mentioned the Protestant revolution. We mentioned the American revolution. And yet, going back to the Bible, going back to Socrates and Plato, we still maintain those opinions. Like, Socrates still matters today He does. because he happened to live during this critical period when writing was kind of really a thing, and he had the freedom to talk about philosophy and whatever.
Daniel Cahn:You know, people thereafter read works inspired by him. Totally. But, like, there obviously might have been just as smart a Socrates that lived 10,000 years ago but wasn't at the right moment in history for those writings to affect the future. Totally. Yeah.
Daniel Cahn:And we have, you know, terms from that. We have ideas from that. I guess what I'm really wondering here is, going back to the idea that we should be scared of a system that says we're gonna symbolically represent ethics
Dan Shipper:Mhmm.
Daniel Cahn:And answer every question with a number that's gonna say, like, that moral decision is a 4.27. Yeah. I wonder, like, do you have a vision for what it would look like? Do you think that we should have a vision for AI doing ethical reasoning independently and at scale?
Dan Shipper:I think that's such a good question. I actually, in 2016, like, spent a while writing a novel about self driving car ethics and, like Really? what, like, self driving cars should do in those, like, sort of scenarios. And I really wanna know, like, under the hood. Like, I'm sure there's no explicit, like, rule system that's, like, if you're about to hit the mayor, like, don't, you know, like, hit someone else.
Dan Shipper:Yeah.
Daniel Cahn:But city council
Dan Shipper:Yeah. Yeah. But for sure, like, there's some sort of implicit ethical system in these systems that we don't know about, and I think that's really important and interesting. And
Daniel Cahn:Especially if they're not consistent. Like, there's something even weirder there. Like, the realistic direction we're going, I feel like, is if you're using GPT-4 and you make multiple inference requests, you end up with multiple responses. There is a chance that we end up with these nondeterministic systems that are like, well, the mayors
Dan Shipper:I guess. I think, like, the history of philosophy, in some ways, and you can apply this to science too, but it's a little bit different than science. But, like, the history of philosophy, you can think about it as a quest for certainty, for absolute certainty. With the idea that, like, if we can ground what we know, our knowledge, in this, like, certain, transcendent truth, like, that will fix everything. Right?
Dan Shipper:And I think one of the, one of the things that I've learned about life and I think is more compatible with, like, a more pragmatic, probably, like, business y mindset is, like, embracing the uncertainty and the complexity of life. And I think, like, in a lot of ways, that's what language models do because they do model, like, all these different ethical positions, and they shade one way or another, but, like, you can prompt them to do whatever you want.
Daniel Cahn:Mhmm.
Dan Shipper:And I think what we'll probably end up with is a diversity of models from a diversity of viewpoints, where each particular model maybe has, like, some leanings, but their actual ethical viewpoint is, like, something you can't actually write out. Like, it's not an explicit code. It's implicit. And the nice thing about that is we get to see how those things play out, and we get to choose which ones we wanna run our society with, which ones we wanna run our lives with. And I think that's actually really, really cool.
Dan Shipper:Like, if you think about, for example, a lot of our society runs on moral judgments. So, like, the court system. Yeah. But those are the moral judgments of, like, 12 random people. Yep. And I think there's a lot of situations in which I'd probably want a language model judging me Yeah.
Dan Shipper:Because, like, what's interesting about language models is they're frozen. Right? So you can get a consistent decider for lots and lots of different cases.
Daniel Cahn:And so the consistency would yield some sense of fairness.
Dan Shipper:Yeah. You can prove fairness, basically.
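The "frozen model as a consistent decider" idea comes down to decoding: with temperature-zero (greedy) decoding, the same frozen weights map the same case to the same verdict every time, while sampled decoding can diverge across requests. A toy sketch, with a made-up three-word "verdict" vocabulary and made-up logits standing in for a real model's scores:

```python
import math
import random

VOCAB = ["spare", "swerve", "brake"]   # hypothetical verdict tokens
LOGITS = [2.0, 1.9, 0.5]               # made-up model scores for one case

def decode(logits, temperature, rng):
    if temperature == 0:
        # greedy: always pick the highest-scoring token
        return VOCAB[max(range(len(logits)), key=lambda i: logits[i])]
    # otherwise sample from the temperature-scaled softmax
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return rng.choices(VOCAB, weights=[e / total for e in exps], k=1)[0]

rng = random.Random(42)
greedy = [decode(LOGITS, 0, rng) for _ in range(5)]     # identical every call
sampled = [decode(LOGITS, 1.0, rng) for _ in range(50)] # can vary call to call
print(greedy, set(sampled))
```

This is why the nondeterminism Daniel raises earlier is a decoding choice rather than a fixed property of the model: the frozen weights are the same in both cases.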
Daniel Cahn:Have a 100 jurors with a 100 perspectives, let them vote Yeah. Choose whatever rules you want. Totally. So that's a really interesting point of view. If you were to say, we're actually gonna sample a 100 random jurors in the population, none of them are human, and they are gonna have different perspectives.
Daniel Cahn:So is this court utilitarian? That's probably, statistically, I don't know, 8, you know, 13% utilitarian. Yes.
Dan Shipper:Definitely. And I think that rejection of any one particular organizing moral framework, in favor of, like, we use the framework that fits the situation best, and then you have to ask, well, what fits the situation best? And the answer is, like, there's no actual philosophical answer to that question, but that's exactly what neural networks do. They figure out what the situation is, and then they apply, like, thousands of rules partially to, like, fit the situation best. Mhmm. And that's what human intuition does too.
Dan Shipper:And being okay with that being how we operate morally is actually, I think, the only possible and the best way to do things.
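The "apply the framework that fits the situation" idea can be caricatured in code. This is only an illustration, not anything a real network does explicitly; the frameworks, scores, and relevance weights are all invented. Several rule-like scorers get blended with context-dependent weights, so the same action is judged differently in different situations:

```python
import math

# Invented scorers standing in for "partial rules"
def utilitarian(action):
    return action["lives_saved"] - action["lives_lost"]

def deontological(action):
    return -10.0 if action["breaks_promise"] else 1.0

def virtue(action):
    return 2.0 if action["honest"] else -2.0

FRAMEWORKS = [utilitarian, deontological, virtue]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def judge(action, relevance):
    # relevance: how strongly each framework "fits" this situation;
    # in a real system this would itself be inferred from the situation
    weights = softmax(relevance)
    return sum(w * f(action) for w, f in zip(weights, FRAMEWORKS))

action = {"lives_saved": 5, "lives_lost": 1,
          "breaks_promise": True, "honest": True}

emergency = judge(action, [3.0, 0.0, 0.0])  # context leans utilitarian
everyday = judge(action, [0.0, 3.0, 0.0])   # context leans on promise-keeping
print(emergency, everyday)
```

No single framework "wins" globally; the blend shifts with context, which is the point being made about both intuition and networks.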
Daniel Cahn:I find this, like, such a fascinating proposal of, like, you know, rather than you have a set of rules for justice where we just decide these are the laws, very rigid, very clear, we move in sort of the opposite direction of, like, no. We're gonna allow for flexibility. We're gonna move more towards, like, a juror type system Yep. Where, you know, you could actually have I mean, like, one thing I'd wonder about, just a random thing to throw out there, jury nullification.
Dan Shipper:Yep.
Daniel Cahn:You're familiar with the idea? It's just like where a jury says, I don't care about the law. The law is not just. We're not putting this person in jail. And you end up with this really interesting I mean, just something to think through.
Daniel Cahn:Would we explicitly say to a language model, you cannot do that? Right? You must follow the laws. Or I guess you can go even further. Even if you did, there's no guarantee the language model would obey that.
Dan Shipper:Yep.
Daniel Cahn:You know?
Dan Shipper:Maybe. I think this actually connects really deeply to, like, technology and society and history in this interesting way, which is, like, one of the things that we're describing is something that's a little bit more like a direct democracy, like in Athens. Yeah. In Athens, where Socrates and Plato come from, everyone's a juror, a judge. They can bring suits.
Dan Shipper:There are statesmen, there are warriors. They do everything. Right? And, obviously, Athens had a lot of problems too, but, like, if you're a citizen, life's pretty good. And so being uncivilized. Exactly.
Dan Shipper:But citizens, like, you're expected to be an excellent generalist, basically. Yeah. And that starts to fall apart in Athens as Athens becomes an empire, because empires require a lot of complex bureaucracy, and no person can be an expert at everything. And so you start to get specialization, which then, like, basically, you go from a polytheistic generalist culture to a monotheistic specialized culture, where you have the rule of law and abstract markets that allow people who don't know each other and are not generally excellent to collaborate. And I think that might start to shift back in certain cases, because a generalist with a language model is, like, super dangerous, because a generalist has a thousand specialists in their pocket.
Daniel Cahn:I love this framing of kind of what opportunities we open up in society if we shut down the specialists. Yeah. I mean, I I've been reading a book called Weapons of Mass Instruction Mhmm. About a few friends. Phenomenal book against education.
Daniel Cahn:Mhmm. I gotta talk about it on the podcast at some point. But there's quite a bit of thought around, like, how specialized is our education? Should it be way less specialized? Yeah.
Daniel Cahn:I feel like there is this opportunity with AI for, like, what if we just don't need specialists all that much? What doors do we open up if everyone could be a judge? Because you don't need special knowledge of the law.
Dan Shipper:Totally. I think that's really cool, and I also think it's a better way to live.
Daniel Cahn:Yeah. It just sounds I mean, also, just think about your life. Yeah. Like, I I I don't know. I hated the idea of being a specialist.
Daniel Cahn:At all points in my life, people are like, oh, you're this. People love putting you in a bucket. Yeah. And I feel like, I don't know about you, I always felt like, ah, stop putting me in a bucket. I'm just a human.
Dan Shipper:Totally. And, I mean, I maybe that's why we've started companies because, like,
Daniel Cahn:that's Yeah.
Dan Shipper:That's a good way to be a generalist. And what I've found with AI stuff is that it's been the biggest accelerant to my generalist career possible, because now, you know, I sit down and I code a little app, or I make a little movie, or, you know, I can do whatever I want without lots of money or lots of people. And I think that's super special, and I think it's a good way to live. And the way that I think about how language models allow us to coordinate is this: we've put a lot of emphasis on concrete, explicit rules and explanations in law, in religion, in science for many, many generations, and that's because it's much easier to coordinate people that way. A person can write down a set of instructions and then give it to another person, and they can follow it if they believe in God, or they believe in the rule of law, or whatever. Or in science.
Dan Shipper:They can, like, follow the experiment to verify it for themselves. But what I think we all know is that most forms of valuable knowledge are necessarily inexplicit and intuitive. A really good example is, like, you can ask a famous investor how they make decisions, and they might be able to give you a couple of sentences. But, really, it's inexplicit, intuitive pattern matching over thousands and thousands and thousands of hours of experimentation.
Daniel Cahn:And that does seem to be also one of those curves where, you know, the inexperienced investor has a very clear system. Yeah. There's also that. I think it's only those, like, top investors that are willing to just say to you, like, I'm good at my job.
Dan Shipper:Exactly. There's also that. But, basically, I think we've devalued a lot of the inexplicit intuitions because it's easier to coordinate over explicit, rule-based systems. But what's interesting about AI is you can now take an inexplicit, intuitive process and put it into a tool that can be transferred between people. And so it allows for collaboration between human beings on intuitive, inexplicit tasks or bodies of knowledge.
Dan Shipper:So, like, a really good example: you care a lot about psychology. Different therapists and psychiatrists, different clinicians, have vastly different levels of skill and are fitted for vastly different types of patients. And in general, the biggest predictor of therapeutic success is the therapeutic alliance rather than any particular school or, you know, amount of education or whatever, which means that how good a therapist is is highly, highly contextual.
Daniel Cahn:Although, I feel like even when you do look at therapist bios, by and large, they won't narrow it down. They're like, I'm good at everything.
Dan Shipper:I know. It's horrible. It's the we I can talk about that for hours.
Daniel Cahn:We should sometimes. This is what I do for a living.
Dan Shipper:But, but what's really interesting is, like, maybe there's, like, one therapist who's, like, I have OCD. So, like, maybe there's there's one therapist who's, like, 10 x better at OCD.
Daniel Cahn:Mhmm.
Dan Shipper:But, like, that person has a limited amount of time. Yeah. And maybe they train other people, but, like, that takes forever, and it takes away from them seeing clients and whatever. You can embody that therapist's decision making in a model and then send it across the world, and, like, everyone has access to the weights, and it can start, like Yeah. Helping people everywhere.
Dan Shipper:And it can do that without any, like, therapists having to write down an explicit set of rules or procedures or whatever. Like, I think therapy
Daniel Cahn:Or pretend that such a thing exists.
Dan Shipper:Like, therapy was harmed so much by this need to scale therapy Yeah. Which created this need to manualize it, which Mhmm. Like, manualized therapy can help you build skills. There are some good things about it, but, like, clearly, manualized therapy is
Daniel Cahn:Very low engagement, if nothing else? Yeah.
Dan Shipper:And it's clearly less effective than, like, a highly trained therapist who's fitting their treatment to the patient that they have.
Daniel Cahn:It's interesting to just follow the trend. I mean, I feel like there's a few areas where you have this interesting point of view of, like, fuzziness is awesome. Yeah. And if we're not limited by the fact that we have to create, like, these strict systems, then we can actually, like, scale stuff out a lot Yeah. Especially when it comes to expertise, and then we could leave people as generalists.
Daniel Cahn:Yeah.
Dan Shipper:That's a really good summary. Thank you. Of course.
Daniel Cahn:I I still wonder. I think, like, one big aspect of this theory is there's still some solidification. Like, there's this interesting you know, people people thought I I was driven by this argument in AI, like, 10 years ago. Yeah. I got into AI from that Tim Urban article in Wait But Why Yeah.
Daniel Cahn:Where he basically writes, like, I have no idea when AI is gonna really work. It could be a 100 years. Yeah. But at some point, computers will grow so much
Dan Shipper:Yeah.
Daniel Cahn:That we can take human brains, we can model them in computers Yeah. Just upload your brain to a computer, and then there we are. We have AI.
Dan Shipper:Yeah.
Daniel Cahn:And he's like, it doesn't have to happen that way. But if everything else fails, eventually, we'll get to that backup of just, like, model brains in computers.
Dan Shipper:Yeah.
Daniel Cahn:There's something here where, like, are we actually starting to whether it's, like, the exact same mechanism or not, are we modeling brains with computers? Are we taking this CBT therapist, uploading them to a computer? And if we do, is there that point where what I'm really asking is, like, are you gonna be out of a job?
Dan Shipper:Me?
Daniel Cahn:If we can model your writing
Dan Shipper:We'll do our best.
Daniel Cahn:No. If we can model
Dan Shipper:Oh. Oh.
Daniel Cahn:You you you spend your time thinking. Like, that's literally, like, what you do for a living. Right?
Dan Shipper:Yeah. But, like, I think what's more likely to happen is we will develop tools that do that, but they model what I would do in situations that I don't care to think about. Like, I'm asked about stuff all the time. I don't know. I run a company. Like, people ping me every day, many, many, many times a day, and I'd rather be writing.
Dan Shipper:And the things I'm saying to them are often repetitive. And I would love a little simulation to me to, like, say the repetitive thing that I would normally say and, like, leave the 2% of cases where I am actually needed to me. I think there's, you know, maybe there's, like, a larger question about what if in this scenario, we could just have an instantiation of you who's on Discord and is talking to everyone and no one knows it's you and and it could just, like, basically be you and write as you and whatever. And I think, like
Daniel Cahn:You know, technical feasibility aside.
Dan Shipper:I mean, I think that's so far off, mostly because already, like, o1 or o3 or whatever are vastly better than me at, like, math, for example, or coding. But I think what we'll find is that there's a lot of complexity, even once you get to AGI, that sits between something being able to figure out a problem or do math, and something being able to take in the vast context of what's in my brain and simulate it second by second to come to the conclusions that I would come to over a long period of time. Yeah. I think very likely it will be able to do it to some percentage of accuracy over the next, let's say, 10 minutes. Like, what I might say in the next 10 minutes, based on previous podcasts I've done and everything else that I've been thinking about. But there will be some error bar.
Dan Shipper:And the longer the time scale, the more the error bar increases and the more different I will be as a person.
Daniel Cahn:I mean, I see the technical feasibility thing, and I see the argument of, like, you know, there are things that we've gotten really good at Yeah. And there are things we still suck at. I feel like there's another side, though, which is, like, if I could have you as an AI podcast guest, I, like, would never wanna do that. That would suck.
Dan Shipper:Yeah. I think I also yeah. I think, like
Daniel Cahn:And similarly, I mean, I run a company that's building a foundation model for psychology.
Dan Shipper:Yeah.
Daniel Cahn:And I had a meeting with my coach, like, a couple hours ago Mhmm. Who's a person. Mhmm. And there are things she's not remotely capable of that the AI that we develop is.
Dan Shipper:Yeah. But
Daniel Cahn:it's a very different interaction. Yeah. And I so I I feel like, at least for me, I'm just pausing this to get your take. Mhmm. I actually could imagine a very different universe, which is I wouldn't wanna talk to an AI version of you.
Daniel Cahn:Mhmm. I would wanna talk to an AI, but I wouldn't wanna talk to an AI version of you, because the latter, to me, has some lack of authenticity. I'm imagining a world when you talk about generalists and specialists. Mhmm. When you talk about someone talking to an AI version of you, you're saying that insofar as you're a specialist.
Daniel Cahn:Yep. And in my head, I'm like, well, then cut that out.
Dan Shipper:Yeah.
Daniel Cahn:Like, I don't wanna talk to the AI version of you that's a specialist in something. I want an AI that can give me the right answer or move quickly. Yeah. If I wanna talk to the person in you and connect with you and gain what I have to gain by talking to you as a full person, then I kinda wanna talk to you as a person.
Dan Shipper:I think that's true, but I think it's very contextual. Like, AI is a new technology. Right? You know, you've read my writing. Plato, like, didn't like writing, because a conversation is a much better way to get to know what someone thinks than writing is.
Dan Shipper:But now we think of writing as being, like, a main way to, like, get to know what someone thinks. It's not the same as getting to know them, but it is a legitimate way of, like, forming some kind of relationship with them
Daniel Cahn:That's a really fair point.
Dan Shipper:For particular contexts. And I think people get, like, bent out of shape about this maximalist, like, well, what if it replaces everything in all contexts? And it's like, it probably won't. Actually, people like hanging out with people. Like, that's probably not gonna change.
Dan Shipper:But now there's this, like, new medium, this new technology that makes me available in contexts that I would not normally be available in, and that would be valuable. Right? Like, you're probably not gonna wanna interview me on a podcast, because it's like, why would you wanna do that? That doesn't make any sense. There might be some podcast formats that figure that out.
Dan Shipper:I don't know. The current
Daniel Cahn:I wonder
Dan Shipper:The question is just in what contexts it would make sense, and there probably are those contexts, but that doesn't mean it will make sense in all contexts.
Daniel Cahn:I mean, I think the analogy is really strong, which is, like, if you are listening to someone having a conversation, it is a way of talking to that person Yeah. You know, at a distance, and perhaps we can move in that direction. And I do appreciate that, like, literally what you do is try to understand the future by being a bit measured. I was thinking before about, I think it was, your post on Sora, where you kind of said, like, when you see Sora, there's the moment of, like, oh my god. Woah.
Daniel Cahn:Everything is gonna change, and you have to remember, okay. Like, take it step by step. Yeah. We'll get there, but not immediately. I I do appreciate that when thinking about the near future, especially.
Daniel Cahn:I do wanna go to the utopian point of view, though, because, you know, you're a philosopher. Come on. Like, you have to admire those days of Plato and Socrates and Aristotle, who were not thinking in the same career terms that we are now. And I feel like, from that utopian point of view, correct me if I'm wrong, but I highly doubt that you would change your job if money stopped existing. Like, if you didn't need money, would you really do much very different than what you do now?
Daniel Cahn:No.
Dan Shipper:I love what I do. Yeah.
Daniel Cahn:I mean, me too. I like
Dan Shipper:my team. Lucky that I get to do that, but, yeah, I this feels, like, pretty close to my ideal state.
Daniel Cahn:Yeah. But I mean that within the context of being a generalist. Like, I think, you know, we have to end the ways that society runs where people overly specialize, where we have this idea of living on a global scale. Like, I just wonder, you know, what's interesting about talking to people is, like, they are kind of substitutable in some way. Like, the kids you go to school with can be your best friends, and you only know them because you happen to go to school with them.
Daniel Cahn:But that doesn't remotely take away from the fact that they're your friends.
Dan Shipper:Mhmm.
Daniel Cahn:It's more like the world in the last 100 years or whatever has moved towards this more globalist point of view, where people specialize, where the focus is constantly on pushing the world forward. Is there something in the world we're talking about, about not needing specialization as much, giving more power to the generalists? Is there a next step there that is kind of like: stop worrying about money? You know, stop worrying about pushing the world forward. Be a generalist.
Daniel Cahn:Be a person.
Dan Shipper:Maybe. It would be nice. I do see these technologies as having serious issues and potential drawbacks, and also having lots and lots of benefits, one of which is the potential to create a flowering of human flourishing that I think is quite underweighted in the current discourse, and is what I'm trying to bring about, in a small way, with Every. Do I think it gets rid of money? I think money is
Daniel Cahn:It doesn't have to be tomorrow. Yeah. Yeah.
Dan Shipper:I was thinking about this the other day, because, like, money is this interesting thing where, in a sense, it's pretty analogous to concrete explanations or symbolic AI, in the sense that it reduces everything to a specific number.
Daniel Cahn:Mhmm.
Dan Shipper:Like, all the complexity of a company is reduced to just, like, how much money did you make? Same with people.
Daniel Cahn:Mhmm.
Dan Shipper:And that's really necessary in a society where you need to be able to coordinate among Uh-huh. Mostly anonymous people. So
Daniel Cahn:As opposed to, I would give you 3 of those for 2 of those Yeah. Which might remain true even if we abolished money, and then you might as well use money to do it anonymously.
Dan Shipper:I don't know how this works, but maybe there's some interesting crossover between AI's ability to allow us to optimize along many, many different dimensions all at once, without having to reduce to a single number, and, like, meme coins in crypto, basically, where you have many, many different forms of currency that all represent many different facets of life or reality. And you may be able to generalize away from national currencies to meme coins. First of all, I hate meme coins, but, like, this is just a very vague thought in my head.
Daniel Cahn:I do get the thought that it's hard to escape currency. There's the Wilt Chamberlain idea, and I'm trying to remember who wrote this, but there's this idea in political philosophy, I think, like, sixties or seventies: if we redistributed money perfectly evenly, but then a basketball player says, you know, I'll play basketball, but only if you pay me to play Yeah. Then suddenly he has more money than anyone else.
Dan Shipper:And that's the really funny thing to me about crypto: the Internet is this beautiful place where scarcity does not exist. Yeah. And then we invented it. With crypto. Yeah. Because humans like status
Daniel Cahn:You know? I do find it fascinating, you know, off to the side, but there's this idea of how often people are like, now you can start paying for things on the Internet. And I'm like, isn't what's great that everything is so free? You know, people don't post on Reddit and then charge you to read their posts. And then suddenly you're like, well, you could give them money for posting.
Daniel Cahn:And you're like
Dan Shipper:I'm not sure I'm
Daniel Cahn:doing that. Yeah. But I agree. I think part of the beauty is the abundance. And that's kind of why, you know, I think on these lines of specialism: with abundance, shouldn't we celebrate the fact that, like, you know, you wonder, like, should kids learn Shakespeare?
Daniel Cahn:Yeah. You know? Well, is it useful for a job? I wonder, if specialism starts to disappear, what happens to the value of studying poetry, philosophy, or whatever? Like, personally, I asked you why you studied philosophy.
Daniel Cahn:I studied philosophy for the intrinsic motivation. I loved it. It just was awesome. Mhmm. And it served no purpose.
Daniel Cahn:There was no next step. It sounds like you have a very similar kind of point of view. Mhmm. For some reason, our society doesn't push very hard on those ideas. We have this, you know, oh, but what does it serve you?
Daniel Cahn:Will it help you find a job? And I think both of us had the comfort of, like, I'm okay. I'll find a job. I don't have to think about that right now. Yeah.
Daniel Cahn:Wouldn't it be a beautiful thought to imagine that for everyone?
Dan Shipper:That would be beautiful. May it be so.
Daniel Cahn:May it be so. I think the idea we talked about much earlier, I'm still processing, because, whether or not we answer it, you know, we're talking about philosophy: do you come to answers? I feel like there are some philosophical questions where I've come to a stable enough state that it helps keep me sane for a little while. But I do wonder, in this world of fuzzy systems, in this world of uploading your consciousness to Discord to talk to a lot of people, should we be a lot more worried about the way in which we upload our opinions, that it could stabilize our society too much?
Daniel Cahn:Stabilize? Stabilize. Like, have our legal opinions, like, case law was the example you gave. Yeah. If we uploaded all of our case law to machines, would that mean that we can't create new, contradictory case law?
Daniel Cahn:That, you know, the current case law decides future cases, you know, in a slightly fuzzy way, but it's still sort of like, okay, this case is 60% similar to that one, 40% similar to that one. Therefore, here's the ruling. Where, like, part of the Yeah. Beauty of case law is that new cases can
Dan Shipper:Yeah.
Daniel Cahn:Change case law.
Dan Shipper:Well, I think you're pointing to an as yet, as far as I know, unsolved AI issue, which is: you have these training runs, and you create this underlying foundation model, and then it goes out into the world and interacts with people, but doesn't remember any of the interactions. And then the interactions, over many years, get filtered back into the dataset and put into the foundation model, which is a little bit different from how humans work, where humans learn at the edges. Like Yeah. Foundation models don't learn at the edges right now, except in, like, weird RAG stuff, but that's not actually the same kind of updating of the weights that I'm talking about. And that's still sort of unsolved.
Dan Shipper:And if you think about the way that, like, case law gets updated or science gets updated, it's like
Daniel Cahn:It's at the edge.
Dan Shipper:It's at the edge, by new entrants who learn the old ways of doing things, but are also naive enough and, like, hormonal enough to be like, fuck the old stuff. I wanna figure something new out. And I think we'll probably end up having to figure out the architecture by which existing models at the edge either learn new things or become sensitized to new ways of seeing the world and new ways of trying to organize the world or make decisions, which I think is very far away from the current AI research paradigm, because right now we're obsessed with trying to solve problems with verifiable, step-by-step solutions. And
Daniel Cahn:I do just wanna highlight what you said. You mentioned the hormonal thing Yeah. Sort of as a joke. But, I mean, I wonder if that actually is literally valuable, this element of, like, progress is made by young people. There's the classic: Einstein lamented late in his life that, from his perspective, he achieved nothing after the age of 25, when he challenged the status quo.
Daniel Cahn:He reinterpreted the evidence, did no experiments. He's like, you all saw it wrong. Here's the right answer. And then right afterward, quantum mechanics shows up, and he's like, no, no, no. You guys are pushing too far.
Daniel Cahn:Yeah. Like, is there genuinely something to the idea of adolescence, or, like, being a young adult Definitely. This idea that we are kinda stupid and willing to challenge things? That seems a little separate from learning at the edge, right? This idea of shaking the system in a way that's good.
Dan Shipper:Maybe. I think I can connect them, but I do think they're separate things. And I do think that's a current limitation of AI: within the current paradigm, the current architecture, no matter how much data, no matter how much compute, they're trained to be people pleasers
Daniel Cahn:Yeah.
Dan Shipper:And to reflect back what they are told. And if you want to get something that can help someone see the world in a totally new way, I think you probably have to get things that have a little bit more of a sense of agency and a little bit more of a sense of independent perspective
Daniel Cahn:Mhmm.
Dan Shipper:Which is, like, also being able to have opinions that are not proven, which is completely outside of the current research paradigm, because we're obsessed with, like, engineering problems that have step-by-step, proven solutions. And I think we will start to see those limitations sooner rather than later, and I think we'll probably figure them out, but it's currently way outside of a language model's capacity.
Daniel Cahn:Although I you know, we're talking about it as if it's a technological problem. Is it a design problem?
Dan Shipper:Yeah. It's both. I mean, I think it's a design problem and an engineering problem.
Daniel Cahn:I mean, I mentioned this as design because you mentioned earlier also, like, hopefully we have different models that are different, or just, like, systems with different prompts that express different opinions and points of view. That could be true. But, you know, I think about OpenAI versus Anthropic as an example here. Anthropic has not built a system called ChatGPT. It built a system called Claude, with a name, a name that has a certain personality that's supposed to be kind of curious, and it's supposed to express certain kinds of opinions in certain kinds of ways.
Daniel Cahn:There was recently an observation about Nigerian English being present in ChatGPT, the use of certain words that are common in Nigerian English and not common in American or British or Australian English Mhmm. Because of all the annotators That's it. In Nigeria expressing certain opinions. ChatGPT, you know, I had someone at OpenAI refer to the persona of ChatGPT to me as just the bureaucrat, or some other phrase I'm looking for. Yeah.
Daniel Cahn:But there's, like, a certain persona that I think is intentionally embedded in the machine in order to, you know, communicate a certain way, fulfill a certain role as a product. Yeah. Like, I wonder, are we underplaying the degree to which that lack of curiosity that you're talking about, that lack of shaking world views, is by design and might continue to be by design for a long time?
Dan Shipper:It is by design. Yeah. And that's 100% what I'm saying, and it will continue to be by design because, like, if we're talking about the shift between uncertainty and certainty, big businesses care about certainty and predictability. And so all of these companies that have really, really high expectations and tons and tons of capital and need to sell a lot of AI are gonna wanna make the predictable AI.
Dan Shipper:That is people pleasing Yeah.
Daniel Cahn:Answers your question.
Dan Shipper:Yeah. Which leaves a lot of room for startups to make creative AI that is a little bit harder to deal with, harder to wrestle with. But, like, then you have to figure out, what's the training loop to get an AI to learn how to make good contrarian predictions? That's crazy. Right?
Dan Shipper:Because a training loop like that does exist. There is some learned set of subsymbolic connections between neurons that represents that. And we've never figured that out, and we're not currently trying, but I think we should.
Daniel Cahn:I love this idea that there's essentially, like, a smaller opportunity compared to building this AGI personal assistant Yeah. That's not a personal assistant Yeah. But probably has the same architecture, that's higher risk, that's dangerous for a company like Microsoft to label as their own, or for Facebook to label as their own. Yeah. But one that a startup can go after. I mean, I think this relates closely to what we're working on.
Daniel Cahn:Right? Like, we're training a model that is not a helpful assistant. Yeah. And people ask, you know, what's the difference with ChatGPT? It's like, ChatGPT is a helpful assistant.
Daniel Cahn:It is intentionally trained to not take these kinds of risks.
Dan Shipper:Yeah.
Daniel Cahn:Yeah. I guess there's a really interesting view in here of, like, startups should try to build this dangerous thing. Yeah. Basically, all these attempts to not just be people pleasing by design. Or, you know, you do run the risk that OpenAI and Anthropic, etcetera, just solidify this more boring form of AI.
Dan Shipper:Yeah, and I don't think that'll happen, because I think, as people get more used to these models and they become more, like, table stakes, there will just be market opportunities for people to build what we're talking about, and people will buy it. It might not be everyone. Like, people at huge companies, like huge health insurance companies that need predictability, may not be able to use it, but other people will. And I think that'll be great.
Daniel Cahn:I agree. I agree. I think I think there is, though, on on the note earlier on specialism, there's something interesting here about, like, the commercial use cases, like you said, are generally those more deterministic, more predictable ones. And what's kind of cool about that is that that means the direction of AI is to replace the most boring, predictable stuff
Dan Shipper:Yeah.
Daniel Cahn:Which is probably good for all the rest of us.
Dan Shipper:Yeah. I think it's good, because, generally, like, humans would prefer not to have to do the same thing every day, unless, like, I think there are some situations where it turns into a real craft. Like, you think about, I don't know, I'm thinking about, like, a Japanese sword maker.
Dan Shipper:Like, that person probably loves making a sword the same way all the time. Yeah. Like, that's I think those are much fewer and further between and having systems that, like, take the, like, highly specialized sort of, like, factory worker, but of the knowledge era
Daniel Cahn:Mhmm.
Dan Shipper:Type tasks away, I think will end up being good even if it's like we need to figure out how to transition people to other ways of thinking and other ways of working and all that kind of stuff.
Daniel Cahn:I love that vision. Yeah. Awesome. Well, thanks so much for joining. Can I ask, given that you write, is there any piece of writing of yours, or of anyone else, that you'd most recommend someone read to follow up on these topics?
Dan Shipper:Everything on Every, every.to. I have a piece on, like, how I sort of see ChatGPT, and AI in general, interacting with humanity and the human mind and culture, on Every, called ChatGPT and the Future of the Human Mind, that I think reflects a lot of these topics. So if you're interested in that, check that out.
Daniel Cahn:Awesome. Dan Shipper, thanks so much. Thanks for having me.