The AI Fundamentalists

Metaphysics and modern AI: What is Reasoning and Thinking?

Dr. Andrew Clark & Dr. Sid Mangalik


In this episode, we conclude our series on metaphysics and modern AI by exploring the definitions of consciousness, reasoning, and thinking to understand whether AI possesses these traits. From examining legal accountability and the concept of personhood to analyzing human cognitive frameworks, we map out the differences between genuine contemplative problem-solving and probabilistic pattern recognition. The episode covers:

  • Defining consciousness, reasoning, and what it means to be a "thinking thing"
  • The Turing Test as a low bar and why natural language capabilities create the illusion of intelligence
  • Accountability and agency: Why AI models like Claude are not legally recognized as persons
  • Daniel Kahneman’s System 1 (fast heuristics) vs. System 2 (contemplative reasoning) thinking
  • Why LLMs function primarily as System 1 pattern recognizers rather than true reasoners
  • Complex systems, Descartes' dualism, and whether thinking is an emergent property requiring a physical body
  • How chatbots use psychological mirroring, filler words, and pauses to trick human biases
  • The dangers of anthropomorphizing AI driven by fear of change or financial incentives

This is the final episode in our metaphysics and AI series. You can find the previous episodes here:

What did you think? Let us know.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

  • LinkedIn - Episode summaries, shares of cited articles, and more.
  • YouTube - Was it something that we said? Good. Share your favorite quotes.
  • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.

Intro And Why Thinking Matters

SPEAKER_01

Welcome to the AI Fundamentalists, a podcast about the fundamentals of the AI that impacts our lives and businesses.

SPEAKER_02

On today's episode, we'll be discussing the question: what is thinking? For our faithful listeners, this is the final episode in our mini-series on metaphysics, where we discuss reasoning, reality, space-time, and causality. We hope that our deep dive has been as fun for you to listen to as it's been for us to write. With that, Andrew, Sid?

AI Consciousness In The Headlines

SPEAKER_00

Yeah, I think this is a great time to close out our series. We're kind of ending where we started: we began by talking about thinking and consciousness, and now we're going to move past our little preview from last time and really close out the mini-series. To start us off, we have two articles that came out pretty recently that we've been reading, and I'd love to get your thoughts on these, Andrew. The first one is from the Wall Street Journal, which asks the question: Is AI conscious? That depends what consciousness is.

SPEAKER_03

Yeah, wow. Just looking back on where we were when we started the series, I think it was in September, to where we are now, the space is changing fast, especially since January; it feels like we've hit an extra gear on the speed of developments. So lots going on. It's also very interesting that, as you mentioned, consciousness is more of a popular conversation now, along with whether our systems think or reason, and the big AI labs are really talking about reasoning models. My summary of these two articles, my takeaway, is that it really comes down to what the definition is, which is what we've been following here. Sid's done a great job prepping a lot of the literature and humanities background for this, but these articles argue that our definition of reasoning has just been too lightweight. The Turing test essentially says that if something can mimic a chatbot-style interaction, it must be thinking, because historically, any time something can communicate in language, humans tend to think: oh, it's like me, it's human. So there's been a misattribution of what it means to be human, or thinking, or reasoning. I'm really excited to dig into this episode, and it aligns very well with what these articles were saying: I think we need to keep refining our definition.

Just because something can talk in natural language doesn't mean it isn't still pattern matching; communicating in a common language is different from reasoning. As AI evolves, we have to figure out what the difference between humans and machines really is, and get more rigorous about it: just because something can copy a text conversation, that's a pretty low bar for calling it human. This is our fourth episode on this now, and we've done lots of background reading, so for a general listener, these are two good articles for a high-level synopsis of what we've covered, and we can link them in the show notes. Sid, what are your thoughts on them?

Turing Test And Misattribution

SPEAKER_00

Yeah, I think that gets at what we're getting at today, which is that the flavor of reasoning and thinking we're being sold, from the Anthropics and the OpenAIs of the world, might not match up with what you and I really think thinking and consciousness are. It might match a very narrow definition of these concepts. But if we challenge ourselves to think a little more deeply about what this might mean, we might not actually assign these traits to these models. So, to hop into the meat of this, the real question in my mind, and maybe in the minds of the listeners, is: what is a working definition of consciousness, of reasoning, of thinking that is well suited for the conversation of, does AI have it? Does AI do it? If the only barrier to these models being conscious or thinking things is complexity, are we treading a path that will lead us to real thinking behavior?

SPEAKER_03

Yeah, and I think you've mentioned a great analogy: if it's just about complexity, then consider how nobody thinks a calculator is intelligent or reasoning, right? But calculators are very powerful. Historically you might wonder, how can it crunch numbers so efficiently and fast, doing things nobody could possibly do in their head? And we can buy these things at the dollar store now. No one thinks that's thinking; it's just calculation, it's how these things work. It's the same with an LLM: it's all probabilistic, it's not some sentient being. I'm simplifying grossly, but at the barest bones it's essentially a spreadsheet of numbers that are multiplied together to make predictions via embeddings and the like. It's a bit more probabilistic than a calculator, but it's the same sort of notion. As we expand our definitions and see things through a different lens, here's a somewhat off-the-wall example: look at Star Wars, Star Trek, and other science fiction that's been around a long time. Even in those worlds, where you could argue there are thinking robots, there's still a very big difference between C-3PO and Luke Skywalker, right? So we have to think a little differently. What's really exciting about AI is that it's a huge agency enabler, unlocking a lot of potential growth; it's very much an inflection point, and it allows humans to focus on what humans do best. So I think it's about figuring out the definitions.

And I think we've had too narrow definitions, because a lot of them were made back when, in the 1500s and so on, we didn't even have calculators. So there's a mindset shift needed. And as more rote tasks can be performed by AI, some of these humanities-type disciplines become ever more important for figuring these things out. I think we have some really interesting points to give as a framing for today.

Accountability Law And Personhood

SPEAKER_00

Right. And if we think about how we would expand that definition to include more of these things, we might include something like: a thinking thing must have self-awareness. It's very unclear that an LLM, which has no subjective experience of life, can think the same way as a human, which could mean there's a fundamental roadblock. AI can't generate consciousness or thinking if it's not just a matter of computational power, if it really has to understand its own life and its own lived experiences. If that's a missing element, we may need a different paradigm than simply an LLM with a large amount of memory or cache. And to build off that directly: to no small extent, we've reserved thinking and reasoning as terms for human beings, and there's no better evidence of this than our legal and political systems. No one has tried Claude in a court of law yet, because Claude is not the thinking thing. We attribute responsibility to the people who develop Claude; you go to Anthropic if you want to level a case against them. We have not elevated the status of these AI systems to be thinking things. Part of this is that law takes a long time to catch up and policy can take a long time to draft. But I think we have some intuitive sense that Claude didn't hit the market and leave us saying, oh, this thing is a fully autonomous, independent thinking being.

SPEAKER_03

I fully agree with all that. And just to play devil's advocate: you hear, oh, we have agents making all these autonomous decisions, or the fear tactics of "it broke out of the box and blackmailed me," and I'm not even sure how much of that is real versus clickbait. But in any case, for all of those systems, somebody starts them. A human starts the process and decides what it's being asked to do, even at a high level. The whole premise of agentic AI is that you can give longer-term, lighter-weight commands and the systems know how to keep themselves going, but somebody still pushes the boulder down the hill, so to speak. So in this analogy, somebody at some point pushes Claude, and the person who pushes Claude is the one who starts the chain reaction. Currently, and it's still untested in the courts, that's who would be responsible. Is it the developer or is it the user? Either way, somebody still pushed the boulder.

System One System Two Explained

SPEAKER_00

That's right. And so when we follow this chain of accountability, it feels like at the end of the day we have to assign the blame for thinking, for taking real, primary agentic action, to a person, right? We take for granted that all human beings are persons. That's why the person who puts in that Claude prompt would be assigned the blame for whatever it does on the other side. We can think about the opposite case: what would a non-human person look like? What would it take for us to be satisfied that a non-human could have this kind of person status and be put into this accountability position? The philosopher John Locke describes this in his Essay Concerning Human Understanding, and he claims that even a sufficiently intelligent animal could qualify as a person. If you want examples from media, think Planet of the Apes, Ratatouille, Stuart Little, Air Bud. These are animals who have achieved person-like status in their worlds because they demonstrate sufficient accountability, agency, independence, and self-awareness, and take actions that show higher-level cognitive thinking. Now, to talk a little more deeply about what we mean when we say thinking, because we've said "thinking" a lot in this episode, I want to bring up one of my favorite books, one I read many years ago: Thinking, Fast and Slow by Daniel Kahneman, the renowned behavioral economist and psychology researcher, who describes two different modes of thinking. We want to consider which of these modes we feel is more aligned with a thinking thing, a conscious thing, a person. So I'll start with a puzzle for you two, and hopefully you didn't pre-read it. Then we'll dig into what these modes mean.

A baseball bat and a baseball cost a dollar and ten cents in total. The bat costs one dollar more than the ball. How much does the ball cost?

SPEAKER_03

I will admit I pre-read this, and I will also say that on my initial read, I was wrong. Then when I thought about it some more, I did get it right. But this is a great mind puzzle, and exactly the kind of thing I've seen AI systems get wrong all the time. So do you want to walk us through this one?

SPEAKER_00

I think Mike's ready to walk us through this one. I see a lot of excitement in his eyes.

SPEAKER_03

All right, go for it, Mike.

SPEAKER_02

I mean, this is the marketer in me; you hit one of the tomes among books about thinking fast and thinking slow. And yeah, I cheated too, so I know the answer. But I would say there's SAT brain and normal thinking brain. Going through quickly, you automatically think it's got to be 10 cents, because you've got a dollar and ten cents; it seems to work. But as Sid's about to reveal with the actual answer, it doesn't actually work like that. So let's explore our two brains.

SPEAKER_00

So, what Kahneman describes in this book is that we have two modes of thinking: a System 1 and a System 2. System 1 is how we operate most of our lives. It's a very heuristic-based way of thinking: instinct-driven, pattern-recognizing. It's the thing that says, oh, a dollar and ten cents; if the bat costs a dollar more, then it's a dollar for the bat and ten cents for the ball. That sounds great. But you'll quickly realize that's not correct, because that would mean the bat is only 90 cents more than the ball; we've lost 10 cents in the process. We'd then look to something like a System 2 brain. A System 2 brain is contemplative; it's problem-solving; it dissects problems into components and reasons about how to answer questions. That kind of reasoning process would lead you to say, hey, the ball must actually cost five cents, because that leaves a dollar and five cents for the bat. I think you can pretty clearly see there's a strong analogy here to LLMs. If LLMs are this heuristic, pattern-recognizing, instinct-driven type of thinking, we would distinguish that from System 2 thinking, which is dissecting, contemplative, problem-solving. We have these reasoning models out there, but we have quite a bit of evidence, and we've even talked on these episodes about an Apple paper which found that these models aren't really reasoning, they're just getting more context. When you give them more context, more patterns to recognize, and more instincts to draw on in their probabilistic modeling, we generally see more correct answers, but probabilistically correct answers. So I'm going to ask you two: why don't we just always use System 2?
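For readers who want to check the puzzle themselves, the System 1 guess and the System 2 algebra can both be verified in a few lines of code (a sketch we've added; the variable names are ours, not from the episode):

```python
# Work in integer cents to avoid floating-point surprises.
TOTAL = 110       # bat + ball together cost $1.10
DIFFERENCE = 100  # the bat costs $1.00 more than the ball

# System 1 guess: bat = 100 cents, ball = 10 cents.
# It matches the total, but the bat is then only 90 cents more, not 100:
assert 100 + 10 == TOTAL
assert 100 - 10 != DIFFERENCE

# System 2: solve the pair of equations.
#   bat + ball = TOTAL
#   bat - ball = DIFFERENCE
# Adding them gives 2 * bat = TOTAL + DIFFERENCE, so:
bat = (TOTAL + DIFFERENCE) // 2   # 105 cents
ball = TOTAL - bat                # 5 cents
print(f"ball = {ball} cents, bat = {bat} cents")
```

The intuitive answer passes the first check and fails the second, which is exactly the System 1 trap the puzzle is built around.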

SPEAKER_03

We should, but it's the mental heuristics, right? I think one of the reasons is the quick and dirty: what's the fastest way you can do something? And, you know, algebra is not fun. It's the additional steps and thinking things through. As humans, and we've talked about this on the podcast, it's always: what's the shortcut? What's the path of least resistance? And that doesn't always produce the right answers. When I initially skimmed this, that's exactly what I did too: quick mental math versus actually going through and solving it.

SPEAKER_02

Yeah. I'd add on top of that that you have to pick and choose, as a human, how much energy you want to spend in a day on a given question, on a given answer. If you're always doing the long, correct thing, you're using up a lot of energy. Particularly when you're asked to make quick, impactful decisions, that can be detrimental, and you just need to make a gut call based on heuristics, based on what you have.

SPEAKER_00

So I think this is absolutely correct, right? In an emergency, you can't pull out System 2. It's not going to help you, and it might make you make a slower decision, or even a worse one. So this complicates things a little. It means that humans are both deep, complex, contemplative thinkers and heuristic machines. We could then say that maybe there are levels of consciousness, right? We have other living organisms, like roundworms, that certainly have to do some amount of thinking, but what they might be doing is System 1 thinking. When we're talking about higher-order, logical, deductive reasoning, we're really talking about System 2 reasoning. Some animals have more or less capacity for this; we've seen examples like crows that need to get a snack out of a beaker of water, so they throw stones in and the snack rises to the top. They're doing some type of reasoning. So we want to make clear at this point that when we talk about thinking, what you and I likely mean is this System 2 behavior, distinct from System 1 behavior, which is what we would expect to get out of a computer that has learned patterns and is able to regurgitate those patterns in very convincing ways.

SPEAKER_03

And I think one of the other unique parts is that humans can modulate between the two, right? You could even build calculators and systems optimized for something like System 2, but the modulation is the part that I think is unique. And System 2 is still where LLMs do the worst: the kind of math equations and problems where they're only really good if the problem has appeared on the internet several times. As I said about the Apple paper, and there's another recent one backing up a lot of those findings, it's pattern recognition: if the model has seen the Tower of Hanoi done a couple of times, it can kind of fake it, but basically it sticks to System 1 and never gets to System 2. Humans will all sometimes go with the gut, System 1, quick heuristics, the path of least resistance. But there are so many brilliant humans who can do extensive System 2 thinking, even if they don't always do it. If you're at the grocery store, you might not be doing System 2, even if you're the most capable mathematician on the face of the earth, right? So that modulation, the ability to switch between the two, is also unique.

Can Machines Grow Consciousness

SPEAKER_00

That's right. And that modulation also includes an understanding of when it's okay to use heuristics and when it's necessary to do real thinking, which maybe, and I think very likely, is missing from these LLMs: an understanding of, oh, I should not use a heuristic here because I will get the wrong answer. So I want to pose the next big question. Given that we've talked about System 2 thinking and personhood and maybe consciousness to some level: can we make a brain? Can we make a thinking thing? And I'll say, of course you can. In fact, we do it about 300,000 times a day, every single time a human child is born. But maybe the question you're thinking of is, can we do this artificially? Could we make an electronic thinking thing? And what kind of qualities would we expect that kind of brain to have, in light of what we've talked about today?

SPEAKER_03

Yeah, this is where, I mean, people have attempted the simulation; that's what a neural network technically is, an abstraction of the brain. But we haven't gotten anywhere near that level of complexity; we're not even scratching the surface of how complex it is. You could theoretically say that with scaling you eventually get there, but that's a TBD question. The bigger thing, and we talked about this in one of the earlier episodes, and it gets to the next part when we go more into the philosophy, really comes down to something we've discussed several times on the podcast: complex systems theory and chaos theory. Do you believe in emergence? And I very much do. There's the mechanistic approach of, sure, we can replicate human brain neurons; maybe we even make it 3D, and it's sitting on the desk here firing neurons, and okay, cool, it's doing that thing. But the human experience is different; it's like two plus two equals 11,000. There are emergent properties. This is even the foundation of medicine: no matter how much of a quote-unquote science medicine is, how come we haven't solved cancer or longevity? There's so much complexity in the human body that nobody fully knows how it works or how to change it. Yet somehow we think, oh, we can replicate the most complicated part of the human body, the brain, and it's just going to magically work once we set it up. We've had medical dummies and model skeletons for centuries; everybody has skeletons in their front yard for Halloween now.

Sure, we've been able to make those for centuries, but that's very different, and the analogous claim here is: hey, electronic neurons are going to do this thing. How come all the skeletons you buy for Halloween don't start moving around and talking? There's something, some emergence, that makes a human a human, a thinking being, a person.

SPEAKER_00

That's right. And I think this lines up with the conversation we had a long time ago on this podcast about dualism and a lot of René Descartes' thinking on how thinking happens and how it happens in a physical world. There's this almost-paradox: if the mind is a thinking thing and the body is a physical thing, thinking is not material in the world; there's nothing you can touch about thinking. How can those two things interact? How can a hypothetical, ethereal mind-thing act on a body? And does thinking come from the body? If you have a sufficiently complex body and you build enough loops into the system, put enough neurons in, do these types of properties then emerge out of that system? Is thinking then a byproduct of a material system? We are not going to answer this question today, but I think this is the open question, because there is a back-and-forth, right? The body will feel hunger, and the mind has to decide to do something with that. Or the mind will decide it really wants a slice of cake, and now the body has to eat it, even if it's not hungry. There is this type of interaction where thinking and materialistic, mechanistic existence are potentially intertwined in a way that's difficult to disentangle. And so if we were to make an electronic brain, we would have to build a body, a physical system, from within which this type of thinking stuff would have to arise.

SPEAKER_03

So even on the most cynical view, which I don't agree with, that maybe we can eventually fully replicate humans, we're not there yet, because we haven't been able to do exactly that, right? In any case, the TLDR takeaway is that the current generation of LLM reasoning models can't do that. It's an open question whether we can ever do it, but we can't do it today, when you think about that relationship.

SPEAKER_00

That's right. And I guess this begs the question of, what is the current relationship? What does AI do right now? How is the mechanistic body of AI constructed, and what could emerge from that? What we have right now are neural networks, which sound more biological than they are. A neural network is ultimately just a graph with weighted edges and nodes, which is very classic computer science, and it's only an approximation of the biological brain's architecture, with some smoothing and some imperfections. Attempts to make neural networks more like human brains have generally not made them better; they usually make them more complex without really solving any problems. So what we have are nonlinear graph networks with a lot of GPUs behind them, working hard to learn functions that solve practical problems in the world, but maybe not creating the kinds of structures we would find in the human brain that are necessary for creating an ego or a self. Even with the modern introduction of the attention network, which extends this to give us more complex representations of the world and of information, where information doesn't have to be A, then B, then C, then D, and you can give it a long chain and it will learn patterns in the chain openly in the context, we're still not seeing this elevate itself to developing these types of emergent properties. We've seen that they can do math now and handle certain sets of problems that are present in their training sets, but we haven't yet seen an AI develop a real subjective experience from all of this modeling and patterning.
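To make concrete the point that a neural network is "just a graph with weighted edges and nodes," here is a toy two-layer forward pass in plain Python. The weights are arbitrary made-up numbers, not a trained model; the point is only that the whole mechanism is deterministic arithmetic:

```python
import math

def forward(x, w1, w2):
    """A tiny two-layer network: multiply-accumulate, apply a nonlinearity, repeat."""
    # Each hidden unit is a weighted sum of the inputs squashed through tanh.
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    # The output is a weighted sum of the hidden units.
    return sum(w * h for w, h in zip(w2, hidden))

x = [0.5, -1.0]                   # input features (invented)
w1 = [[0.1, 0.4], [-0.3, 0.8]]    # first-layer weights: two hidden units
w2 = [0.7, -0.2]                  # second-layer weights
y = forward(x, w1, w2)            # one deterministic number comes out
```

Same inputs and weights, same output, every time; nothing in the mechanism resembles a self or an ego, which is the episode's point.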

SPEAKER_03

I really love that line: very few people would argue logistic regression understands something, so why would an LLM? I think it's just a great analogy. A little System 1 thinking heuristic.

SPEAKER_00

Yeah, so let's pose the question here, right? The statement is that very few people would argue that a logistic regression understands what it is doing. So why do some people argue that an LLM does?

How LLMs Mimic Understanding

SPEAKER_03

I mean, it comes back to the Turing test thing: because it can communicate in natural language. That's the thing where traditionally humans go, whoa, it can actually have a conversation with me; this must be smart. Even a rule-based system that communicates in natural language triggers that System 1 part of the brain where quality of language skills translates into thinking someone is intelligent, that you're having a reasoning conversation. It comes back to why the Turing test was even built that way, why it was created at all: because that reaction is an innate human tendency. And this is part of one of the articles we talked about at the start: when people build chatbots, they deliberately build in things that make you think it's human, those types of psychological responses. So chatbots exploit these human biases and tendencies, the assumption that if you're communicating in natural language, there's intelligence there. As we've described, behind the scenes it's really a very complex logistic regression, but it operates in a way that our innate human biases read as intelligence.

SPEAKER_00

That's right. And I think you've hit the nail on the head: there is this type of back and forth happening, right? We're not just developing super smart models and sending them out into the world. We're going into the model and saying, hey, before you answer the question, say something nice about the person who wrote the question. Make sure you put some ums and ahs in there, some dramatic pauses, and make this experience feel more human, right? This is not emerging from the corpus it's reading; this is not emerging from the statistical properties of the model. We are tweaking and fine-tuning these models to trick the part of the human brain that will then accept this as natural language a human could have produced. There are also versions of this in the research: LLMs do a lot of what in psychology we call mirroring, the idea that if I'm very aggressive and confrontational with you, you will respond in a very aggressive and confrontational way with me. The models are primed to respond in kind, and this has caused issues where people are very hostile with LLMs and the LLMs get very hostile back, and there have been some scandals around that. That's something that had to be corrected, because these models aren't thinking, "oh, I should not be mean because I'm a customer service bot." They have no self, no consciousness, no sense of that; it has to be built in explicitly.

SPEAKER_03

It also comes down to how these systems work: the sycophantic tendencies, saying what you want to hear, the paraphrasing, the mirroring. They're built to do that, and it falls out of the foundations of how these systems work. You're basically taking the input somebody gives and paraphrasing it back in a way that seems agreeable, and that lands well because of psychology. If you actually boil down how all these things work, it makes a lot of sense, and again, the Turing test can be passed with just a rule-based text bot that mimics.

SPEAKER_00

That's right. And taking all this together, I will take the formal stance that what we're seeing when these models try to reason with us in natural language is this: if you look at Claude or ChatGPT or any of these models nowadays, when you ask a question, it does not just give you the answer. Every answer is a hundred answers in a trench coat. It "thinks" by retrieving more and more context from its memory. You can think of this like an automatic RAG system, where it goes into its own system memory and its own model weights to generate a lot of context, and then it gives you a response. This feels a little unlike how humans think, right? If you ask me a question, I don't have to go aside, read a book for a week, and then come back with a response. There seems to be something fundamentally different about how these models have to use context, memory, and complex retrieval systems, which, if you mapped it onto human time, would be like days of thinking to retrieve answers.
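The "automatic RAG" loop described here, retrieve context first and only then generate a response, can be sketched roughly like this. Both `retrieve` and `generate` are hypothetical stand-ins we've invented for a real vector store and a real model call:

```python
def retrieve(question, memory):
    """Toy retrieval: return every stored note sharing a word with the question."""
    words = set(question.lower().split())
    return [note for note in memory if words & set(note.lower().split())]

def generate(question, context):
    """Stand-in for a model call: just report what context was pulled in."""
    return f"Q: {question!r} answered using {len(context)} retrieved notes"

memory = [
    "the bat costs a dollar more than the ball",
    "the ball costs five cents",
]
context = retrieve("how much does the ball cost", memory)
answer = generate("how much does the ball cost", context)
```

The point of the sketch is only the shape of the loop: context is fetched mechanically before any "answer" exists, which is quite unlike a human answering from lived experience.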

Closing And Follow Button

SPEAKER_03

I mean, I've personally really enjoyed working through this with you, Sid. It's been months of working through these different topics and reading different books, and I'm very happy with the culmination here. This is definitely an open, emerging field, but as we've talked about, with more rote tasks getting automated by AI, we have to figure some of this stuff out, and I think it's a critical conversation. I know there's paranoia about jobs right now, but I think we've sometimes gotten a little carried away with attributing intelligence to things because we're scared of change. There are a lot of mixed emotions going on, and also: follow the money. There are a lot of incentives for certain companies to make claims about what their systems can do, to help with fundraising rounds at unprecedented levels, right? It makes sense, but as we intelligently and responsibly use AI systems and try to figure out how to better humankind with them, I think this is a critical topic for everyone to reflect on. And I think we've given some jumping-off points for thinking it through.

SPEAKER_01

Thanks for tuning in to another episode of the AI Fundamentalists. Make sure to smash that follow button on iTunes, Spotify, or your favorite streaming service so you don't miss out on new episodes. Until next time, keep thinking, questioning, and learning about AI.
