The AI Fundamentalists

What is consciousness, and does AI have it?

February 12, 2024 · Season 1, Episode 14
Dr. Andrew Clark & Sid Mangalik

We're taking a slight detour from modeling best practices to explore questions about AI and consciousness. 

With special guest Michael Herman, co-founder of Monitaur and TestDriven.io, the team discusses different philosophical perspectives on consciousness and how these apply to AI. They also discuss the potential dangers of AI in its current state and why starting fresh instead of iterating can make all the difference in achieving characteristics of AI that might resemble consciousness. 

Show notes

Why consciousness for this episode?

  • Enough listeners have randomly asked the hosts if Skynet is on the horizon
  • Does modern or future AI have the wherewithal to take over the world, and is it even conscious or intelligent? 
  • Do we even have a good definition of consciousness?

Introducing Michael Herman as guest speaker

  • Co-founder of Monitaur, engineer extraordinaire, and creator of TestDriven.io, a training company that focuses on educating and upskilling mid-level to senior-level web developers.
  • Degree and studies in philosophy and technology

Establishing the philosophical foundation of consciousness

  • Consciousness is around us everywhere. It can mean different things to different people.
  • Most discussion about the subject bypasses the Mind-Body Problem and a few key theories:
    • Dualism - the mind and body are distinct
    • Materialism - matter is king and consciousness arises in complex material systems
    • Panpsychism - consciousness is king. It underlies everything at the quantum level

The potential dangers of achieving consciousness in AI

  • While there is potential for AI to reach consciousness, we're far from that point. 
  • Dangers are more related to manipulation and misinformation, rather than the risk of conscious machines turning against humanity.

The need for a new approach to developing AI systems

  • There's a need to start from scratch if the goal is to achieve consciousness in AI systems.
  • Current modeling techniques might not lead to AI achieving consciousness. A new paradigm might be required.
  • There's a need to define what consciousness in AI means and to develop a test for it. 

Final thoughts and wrap-up

  • If consciousness is truly the goal, the case for starting from scratch allows for fairness and ethics to be established foundationally
  • AI systems should be built with human values in mind

What did you think? Let us know.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

  • LinkedIn - Episode summaries, shares of cited articles, and more.
  • YouTube - Was it something that we said? Good. Share your favorite quotes.
  • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.

Transcript

Speaker 1:

The AI Fundamentalists, a podcast about the fundamentals of safe and resilient modeling systems behind the AI that impacts our lives and our businesses. Here are your hosts, Andrew Clark and Sid Mangalik.

Speaker 2:

Hey everyone, welcome back to the AI Fundamentalists. It's going to be a little bit of a different episode today. We're going to be talking about a little bit more of a philosophical topic. This is in response to a lot of questions that Susan, myself, and Andrew get, which is basically: is Skynet here? Is it on the horizon? Is it our fault if we get taken over by robots? We want to talk about that today. We want to talk about: does modern or future AI have the wherewithal to take over the world? Is it even conscious? Is it intelligent? What is consciousness?

Speaker 2:

We're going to be zooming out a little bit and talking about this from a slightly more qualitative standpoint, and a little bit less of the technical angle we usually take. I think this is becoming a more and more useful conversation to have. Today we have a guest appearance from Michael Herman. He is one of the co-founders of Monitaur. He is an extraordinary engineer. He is the creator of TestDriven.io, a training company that focuses on educating and upskilling mid-level to senior-level web developers. Here's Michael. I'll let him introduce himself a little bit and then we'll hop in.

Speaker 3:

Awesome. Thanks for that intro. I'm excited to be here. I don't think I really have anything else to add to that intro. I will say that I have a bit of imposter syndrome coming into this podcast, but my ego is holding on to the minor in philosophy that I got way back in undergrad. So, yeah, thanks again for having me.

Speaker 2:

Yeah, it's great to have you. We'll start here with a little bit of a plug. We actually went ahead and asked a bunch of our coworkers what they think about the question. The question was just: what is consciousness? They were allowed to respond freely, and here are some of their responses.

Speaker 4:

To me, it's a combination of self-awareness and metacognition: not only our ability to understand that we are distinct, but also the ability to think about why we're thinking that and question how our thoughts and actions affect the things around us. Consciousness, in my opinion, is the state of awareness about one's existence and one's being.

Speaker 2:

Consciousness to me is being present, aware and alert, having your feelings or sensory perception still intact.

Speaker 4:

Consciousness is a feeling. It's this awareness I am what I am.

Speaker 5:

It's this thing that's happening outside of how my brain works. It's like my presence.

Speaker 2:

What is consciousness? I have only questions. I think that was really interesting, that was really great. I think that will give us something to chew on, and hopefully people in the audience got something out of that too, just to hear the diversity of opinions on this topic. For today, let's start by doing what we do at the AI Fundamentalists, and that's going back to the fundamentals. If we want to talk about consciousness and AI, let's go back to the fundamentals of what philosophy has had to say about this. I'll let Michael start off the conversation about the mind-body problem, which maybe really motivates this question of: is AI conscious, and what is consciousness?

Speaker 3:

I think, before jumping into the artificial intelligence side of things, it's important to look at ourselves and ask: what is our own relationship to our own consciousness? We can all agree that humans have a consciousness, or a mind. It's where we think, it's where we sense, it's where we feel. It's non-physical, but how does it relate to our physical body? That's really the crux of the mind-body problem. Philosophers and cognitive scientists have been debating this very question since the early Greek philosophers. I'll just provide a few examples. Plato posited that we have both a body and a mind and that the mind is trapped in the body until death. Then, upon the death of the physical body, the mind is released. You can see similar beliefs in Christian theology.

Speaker 3:

While Plato thought that both the body and mind are distinct and foundational in the universe, materialism, on the other hand, states that the only real things are physical things: things that can be touched and measured and observed and whatnot. Then dualism, which came from Descartes, a French philosopher in the Enlightenment era, built on top of Plato's beliefs, stating that the mind and body are both real but separate. The problem here is: how does something physical arise from something non-physical, and vice versa? Then, counter to dualism and materialism, you have panpsychism, which came from Spinoza, a Dutch philosopher. In that theory, the mind, or consciousness, is just as real as the body or physical things. In that world, consciousness is king and it really underlies everything in the universe.

Speaker 3:

From an AI engineering perspective, I'm guessing that most AI engineers probably embrace more of a materialist approach. This is the 21st century; they believe that matter is king and that, given a complicated enough AI system, consciousness will arise naturally from it as a side effect. Just in summary: in dualism, the mind and body are separate; that's the ghost in the machine. With materialism, matter is king and consciousness is a side effect of very complex material systems. And then in panpsychism, consciousness is king. I find that last one interesting because if you're a panpsychist AI engineer, you probably already believe that your AI models are conscious, because everything in the universe is indirectly conscious to some degree: bees, trees, butterflies, that sort of thing.

Speaker 2:

Let's try and make that connection now, right? I'll ask you the question: basically, in an AI system or in a computer, what constitutes the mind and what might constitute the body?

Speaker 3:

Yeah, that's a good question. So yeah, I think the mind would be the program itself, whereas the body would be the computational components, the physical aspects of it, and the input/output.

Speaker 5:

Yeah, that's interesting. I thought of it as well that for some of these systems the body could potentially be robots, right? In a lot of different areas in manufacturing and things like that, there's a lot of emphasis now on making robotics and trying to automate human processes that way, and then it turns out a lot of times the mind really isn't there. So we've almost been working in two separate areas: working on the mind with algorithms, and working on the body with robotics. There aren't that many systems, actually, that I think have really tried to combine the two well.

Speaker 2:

Yeah, I think that's a good question, right: are these things distinct, and how do we put them together into one unified system? So I want to go back to what Michael was saying and wrap this back into the maybe three mindsets that we can look at this problem through. In the materialist viewpoint, if a human brain exists in the real world, a materialist would tell you: well then, clearly, a sufficiently complex simulation could one-to-one recreate the human brain. So it would be possible, from a mechanistic viewpoint, to recreate everything that's in the human brain. In fact, this has already been done for smaller, less complex animals, very famously the C. elegans nematode. It basically looks like a little worm, and a one-to-one recreation of its brain was created. So I'll throw it back to you guys: what are your thoughts on that? What about this purely mechanical view of the brain? We've already done it for smaller animals; could we do it for human brains?

Speaker 3:

Yeah, I think it again kind of goes back to your take on the special sauce. How does that arise? I agree that, from a materialist standpoint, with a complex enough system, consciousness could potentially arise as a side effect. But how do you know, for example, if something is conscious? There's no real test for that. You can kind of debate the Turing test and whatnot, but that's not really it; I think we could probably talk about the Turing test another time. But just because you recreate something doesn't mean the special sauce is inherently going to be there. And even if it is, how do you know?

Speaker 5:

Yeah, I definitely think there needs to be one. It's kind of interesting that, I think on purpose, computer science has kind of steered away from having a Turing test for consciousness, because the results wouldn't be very favorable to the current marketing spin. So I think an irreverent computer scientist and a philosopher or somebody should definitely create a consciousness test. Because if you started applying that notion of what is intelligence, what is consciousness, to the current systems, such as LLMs, it would definitely go against the narrative about how wonderful these things are. That would be something that would work well with this program and something that would be very interesting to see. The drive-by media likes to call it consciousness; journalists are trying to make everybody scared that these are all smart, cognitive, conscious systems. They're not. The whole goal of them is to probabilistically optimize for a loss function and predict the next word or forecast the next thing. They're not doing thinking. So it would be really interesting to have a test, and then we'd have to explore different architectures for how we're building these systems, because the current way that we're approaching artificial intelligence and how to build these systems is not optimizing for what would be consciousness.
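
To ground that last point, a language model's training signal really is just a loss over next-token predictions. Here is a minimal, hypothetical Python sketch of what "probabilistically optimize for a loss function and predict the next word" means; the toy vocabulary, context, and random weights are invented for illustration and stand in for a real model.

```python
import numpy as np

# Toy vocabulary and a single training example; every value here is invented
# purely for illustration and is not from any real model.
vocab = ["the", "cat", "sat", "on", "mat"]
context = ["the", "cat", "sat", "on", "the"]   # input tokens
true_next = "mat"                              # the token the model should predict

# Stand-in "model": a bag-of-words context vector times a weight matrix
# produces one score (logit) per vocabulary word.
rng = np.random.default_rng(0)
W = rng.normal(size=(len(vocab), len(vocab)))  # the weights training would adjust

x = np.zeros(len(vocab))
for tok in context:
    x[vocab.index(tok)] += 1.0                 # crude context representation

logits = W @ x
probs = np.exp(logits - logits.max())
probs /= probs.sum()                           # softmax: a probability for each possible next token

# Cross-entropy loss: how surprised the model is by the actual next token.
loss = -np.log(probs[vocab.index(true_next)])
print(f"predicted: {vocab[int(np.argmax(probs))]}, loss: {loss:.3f}")

# Training is nothing more than nudging W to make this loss smaller over
# enormous numbers of such examples; no notion of "thinking" is involved.
```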

Speaker 5:

And Michael brought up another great point about dynamical systems and, you know, emergent properties and the butterfly effect and stuff. That'd be another great podcast: talking about what complex systems and dynamical systems are and what their components are, and then where those fall short of consciousness, because those aren't really even supposed to be consciousness, just complex effects. How do we model those and understand how something that happens here has a nonlinear effect over there? But even for those kinds of systems, there's a major gap in the literature and in just understanding what this consciousness means and where we currently are with the technology. I can't believe Michael's the only one asking these questions. I think it's kind of been kept fuzzy on purpose, to not expose where we currently are.

Speaker 2:

Yeah, and that gets us to almost a question of: should we even consider consciousness as a binary, is it or is it not conscious? Or should we instead look at it as a larger spectrum of consciousness, where on one side we have the C. elegans nematode and on the other side we have an adult human being? Would we argue that those have the same level of consciousness? Does a child have the same level of consciousness as an adult? So do we want to look at this mind stuff, this dualistic mind stuff, as being a single type of thing, or is it a wide spectrum of things that we can then evaluate and test and see interact with the real world? Any thoughts on what the levels of consciousness might look like? What is a less conscious system, what is a more conscious system, and what makes them different?

Speaker 5:

I guess we really need to figure out what consciousness is, and what that specialness is, you know, that makes people human, and what that exactly means for computers. Because, I mean, anything is possible theoretically; I think you can make conscious systems. But I don't see anything we've been doing in the last 20 years in computing that's moving anything towards more consciousness. You're just moving to be better at whatever loss function you've defined. Is that making words look human? Is it trying to have a conversation? Is it trying to win the game Go? There are multiple ways you can do this, you know.

Speaker 5:

Supervised learning, reinforcement learning; reinforcement learning is probably one of the better methods for the approaches we're doing here. But the underlying architectures, like neural networks and brain synapses, that's not a great way of getting there. We've tried to reduce it down to almost the body. Essentially, we're trying to optimize body things and seeing if, when we just make these computers fast enough and do enough fancy algorithms, somehow we're going to make consciousness; it's just going to happen and we're going to have a Skynet effect. But we're not actually working towards what consciousness even means, what consciousness in a system would mean, and taking a critical view of whether the ways we're approaching it are correct. Because, you know, Yann LeCun from Facebook, or Meta, definitely has a lot of things to say about the current architectural paradigms, and Geoffrey Hinton is very focused on keeping the current structure, for obvious reasons. But it doesn't seem like we've spent a lot of time on it. We talk about cognitive computing and consciousness, but the systems we're building don't seem to be moving us in that direction at all.

Speaker 2:

Yeah, I think that's totally right, and I guess this builds me up to my next piece of the conversation, which is for all of us. Even if we're trying to do this mechanistic recreation of the human brain, we talk about these neural network systems, but ultimately what we have is basically weighted graph networks with some non-linearity built into them. When we look at the real human brain, we see much more complex networks. We see the creation and loss of entire layers of networks. It's not simple zero-one activation or non-activation; it's rhythms of activation. And while there has been some work to try and make neural networks a closer and closer analogy to the human brain, we find that those changes don't actually make the models better. The closer we get to the human brain, we're not actually getting better models. So while scale is important, and the human brain's scale is enormous, the architecture is not something you can just one-to-one recreate and get thinking out of.
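
As a rough illustration of that description, strip away the scale and a neural-network layer really is a weighted graph plus a fixed non-linearity. The values below are arbitrary; this is a sketch of the idea, not any particular architecture.

```python
import numpy as np

def layer(inputs: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """One neural-network 'layer': a weighted sum over graph edges passed
    through a fixed non-linearity (here, ReLU)."""
    return np.maximum(0.0, weights @ inputs + bias)

# Arbitrary example values: 3 input "neurons" feeding 2 output "neurons".
x = np.array([0.5, -1.2, 3.0])
W = np.array([[0.1, -0.4, 0.7],
              [0.9,  0.2, -0.3]])
b = np.array([0.0, 0.1])

print(layer(x, W, b))
# Each activation is an instantaneous number: no spike timing, no rhythms,
# and the graph itself never grows or loses connections during inference.
```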

Speaker 5:

That's a great point, and it's one of those things where people will say even the ChatGPTs of the world are creative, but, as we're realizing, they're actually not. You know, we're big fans of Gary Marcus and his work. He's been putting a lot of things on his blog lately showing all the plagiarism and the things these models come up with, and how companies are trying to suppress it: taking, you know, Star Wars and other influences and Mario and stuff. You ask for an Italian guy standing near a pipe in outer space, and you're going to get Star Wars and Mario popped together, and somehow that's supposed to be creating new knowledge. It's not; it's just regurgitating other stuff. So what really seems to be the thing that's still human is this creation of new things. By just iterating on the architecture, we're missing the boat here.

Speaker 5:

One area that I think is interesting, you know, I'm an economist by trade and background, is how economists look at utility. There's really been a move away from the rational actor, because, well, people aren't rational, and away from the representative actor, where you'd have a lot of people averaged up into one person. People are starting to realize in economics that you want heterogeneous actors, with different preferences and things. Oftentimes these are randomized, but you have utility functions, and you can have different ways that certain people value things differently. There's a whole lot of work around trying to somehow capture the individual and the uniqueness of people, and there might be a way to take some of that if we want to start moving towards consciousness and that ability of thinking. It's not group think, or just trying to represent the body essentially with computers and hoping you get consciousness magically; that's not it. There are definitely some other areas to explore.
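
A toy sketch of the heterogeneous-actor idea Andrew describes: instead of one representative agent, each agent carries its own utility function and parameters. The Cobb-Douglas form and all numbers below are assumptions made for illustration, not a model from the episode.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Agent:
    """An agent with its own randomized preference between two goods."""
    alpha: float  # weight on good A versus good B

    def utility(self, a: float, b: float) -> float:
        # Cobb-Douglas-style utility; each agent values the same bundle differently.
        return (a ** self.alpha) * (b ** (1.0 - self.alpha))

rng = np.random.default_rng(42)
agents = [Agent(alpha=rng.uniform(0.2, 0.8)) for _ in range(5)]

bundle = (3.0, 2.0)  # the same bundle of goods offered to every agent
for i, agent in enumerate(agents):
    print(f"agent {i}: alpha={agent.alpha:.2f}, utility={agent.utility(*bundle):.2f}")

# A representative-agent model would collapse all of this into one averaged
# number, losing exactly the individual variation being described here.
```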

Speaker 2:

Yeah, and I want to throw this next question to Michael, which is in the same vein: if you build this system, which is basically just doing pattern detection, right, mimicking human behavior, is that enough? Is that an approximation, or is that just the wrong direction?

Speaker 3:

So this is more from the materialist standpoint. It's kind of like: can you teach a machine to care? I still go back to how do you test for this stuff. Just because something exhibits a certain behavior doesn't mean much; it could be more like correlation versus causation. So how do you go back and test for this, whether something understands its purpose and whether it actually cares about whether it is attaining that purpose?

Speaker 2:

Yeah, I think that's a great question. There's seemingly, hopefully, some distance and some difference between acting the right way and cognitively thinking about acting in a certain way, in some self-reflective way, which is maybe missing in a simple mechanistic view. And so here's a question for everyone: Is AI dangerous? How do we feel about it? Is AI really going to wipe out humanity? Are the systems we have now fundamentally dangerous and, if so, in what way are they dangerous?

Speaker 5:

I definitely think they're very dangerous. They're almost more dangerous than they would be if they were conscious. And you know, we've talked about how consciousness could even come down to better metrics, and there's lots of different stuff we could do; there's a whole mini podcast we could do on consciousness. But I think the danger a lot of times right now is, first off, the misinformation and the manipulation of people that you can do with these systems. I think that's huge, and the fact that they look accurate when they're not. So they're in a weird spot: they're useful enough that people start relying on them, but they're dumb enough that they're a high risk. Then they get into the wrong hands and they can do some real damage; cybersecurity is a big area, along with impersonating people and deepfakes and all the different stuff that's happening. They're very dangerous, and I almost think if they had consciousness they would be less dangerous. So we're kind of in a really sketchy spot with it at the moment.

Speaker 1:

Yeah, I agree they're definitely going to be dangerous. But I think, for all the reasons we heard in some of the answers we got, it's all subjective as to what people think drives consciousness. So even when you think about it that way, it's all subjective as to who thinks a machine has achieved, or can achieve, consciousness. I loved the question that you guys asked earlier: can you teach a machine to care? Who decides when the machine cares? So I think that's the inherent danger in trying to predict that these will take us over. And I really think that takeover comes down to how lazy you get in building them. Build them lazily and, well, yeah, sure, they'll take over.

Speaker 3:

Yeah, I definitely agree with some of Andrew's points. I do think the dangers of artificial intelligence are more akin to the dangers of social media, whether or not you think 2016 was the end of the world. That's where I believe a lot of the dangers with artificial intelligence are. But if you listen to people, a lot of them are scared about, you know, these robots taking over the world, Terminator-type stuff.

Speaker 5:

But that's not how they're being used, because even if you look at the unfortunate situation in Ukraine and such, you aren't having artificial intelligence running things anywhere. Even in military uses, it's still drones being controlled by humans. If the technology was out there, people would be using it right now. It's not smart enough to do that kind of stuff.

Speaker 5:

It's the manipulation of people, and the faking and the hacking done by a nefarious person. AI right now is essentially like a microphone for a nefarious person trying to do bad things, but by itself it does not have a brain, it has no consciousness, it's not doing things on its own. So the fact that people feel threatened and in danger from AI systems, I 100% agree with; I just think it's misplaced and misdirected fear. I have no fear whatsoever that in the next 10 years, or longer than that, but let's just say 10 years, someone's going to somehow magically create this conscious system that's going to go around and terminate everybody. I am not concerned about that. I'm a lot more concerned about bad actors using this stuff to manipulate and destroy other people. I'm concerned about that, not about it developing a brain by itself. So I think we've just misplaced our fear; it's not that fear itself isn't valid.

Speaker 2:

Yeah, that's a very fair point. I like what everyone's talking about here, where we need to think about these problems as they exist right now, and there are problems that these AI systems do pose to us today in their current state, with the likely lack of consciousness that they have. So I guess, looking out into the future, as Andrew's saying, in that 10 to 15, maybe even 20 year range, what do we feel about AI achieving consciousness? Are the current modeling techniques we have going to get us there, or is a totally new paradigm going to be required to make that move towards something that could pass a consciousness Turing test?

Speaker 5:

I think fundamentally we have to burn the ships and start over if we want to go in that direction. We're optimizing over loss functions that are trying to do performance; we talked about performance metrics in that other podcast. But what are we optimizing for? It's one of the criticisms in a lot of disciplines.

Speaker 5:

If the math is neat and it works, you optimize for that. There's a lot of stuff in academia that's so divorced from reality because, oh well, this just reduces nicely, and I can use this certain multiplier, and this algebra works, so people fall into these modeling systems, and by model I mean a mathematical model, because it's convenient, it's nice to represent. Economics is really bad with this. We have our pet models, and just because the math works and we achieve equilibrium, we're going to keep riffing on it. Computer science is the same way; there are these pet models. We're all rocking and rolling on neural networks and everything's great, so we're just going to keep optimizing on them.

Speaker 5:

But how many times can you make a new loss function to optimize performance? It's not going to happen that way. We have to completely reset. First off, do we even want to be trying to achieve consciousness in these systems? Let's decide that. Second, make a test, like Michael is saying, like a Turing test: what is consciousness? We need to decide what that is, and then, once we know what we're striving for, reset. We're chasing our tails a little bit if that's what we're going for right now. Honestly, as we've talked about a lot, there's a major limit to what these systems do, and oftentimes responsibly using a simpler system for specific use cases is better than trying to make these general things. But if we do want to make something general that's conscious, we've got to restart, understand what the idea is, and start from scratch, and not use our pet paradigms and keep riffing on them, because we're just going to keep chasing our tails and there's going to be another AI winter, in my opinion, if we keep going down the path we're going now and expecting different results.

Speaker 2:

Yeah, I mean, if I had to put a horse in the race, I would probably agree with Andrew. Some very famous thinkers have all said some flavor of: it's like we're trying to go to the moon by climbing the highest tree that we can find. We're getting there, we're making progress every year, but are we solving the right problem? And to Andrew's point, are we even solving the problem that we want to solve? Is consciousness something that we care about? Do we want a conscious AI, or do we want an AI that does what we tell it to do? What's more useful for us, and what do we really want to be building? And is it worth doing that work, to burn everything down, to start over, to build something that we don't even want?

Speaker 5:

So I agree that we need to burn the ships, because I think what we're doing right now is a dead end. As we've talked about throughout this podcast, simpler models used with domain knowledge will perform; I've done an example of a GLM outperforming a neural network. If you know what you're doing, or what you're solving for, in the current state of pricing and forecasting and such, having an interpretable model that you can understand and apply domain knowledge to is oftentimes better than some crazy deep neural network. And I would agree with Sid that maybe what we want to strive for is having personal assistants, or whatever we're looking for, even that type of thing. What is the goal? Do we want these systems to be human augmenters that make us more productive, or whatever else we want them to do? What do we want these systems to do?
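
Andrew mentions having seen a GLM outperform a neural network when domain knowledge is used. A hedged sketch of how such a comparison might be set up (scikit-learn is assumed, and the synthetic dataset is a placeholder, so the printed numbers are not results from the episode):

```python
import numpy as np
from sklearn.datasets import make_regression          # placeholder for a real, domain-specific dataset
from sklearn.model_selection import train_test_split
from sklearn.linear_model import PoissonRegressor     # a GLM, e.g. for pricing or claim counts
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

# Stand-in data; in practice this is where domain knowledge enters, through
# carefully engineered, interpretable features.
X, y = make_regression(n_samples=2000, n_features=8, noise=10.0, random_state=0)
y = np.exp((y - y.mean()) / y.std())                   # make the target positive, Poisson-ish

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

glm = PoissonRegressor(max_iter=1000).fit(X_tr, y_tr)
nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("GLM MAE:", mean_absolute_error(y_te, glm.predict(X_te)))
print("NN  MAE:", mean_absolute_error(y_te, nn.predict(X_te)))
# Which model wins depends entirely on the problem; the GLM's advantage is that
# its coefficients can be read and sanity-checked against domain knowledge.
```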

Speaker 5:

Let's take a step back, and then, most likely, burning the ships is still the best approach to get rid of the little dogmas and biases we have in these systems. Because if you look at almost any endeavor that humans have done successfully, yes, you'll get some inspiration from other things. How do you get a wheel? Well, it looks kind of like a rock; I rolled a rock and then I thought of a wheel, stuff like that. But just continually doing small tweaks on things doesn't produce revolutionary results. We've kind of tricked ourselves into this over-optimization culture of: if we keep optimizing something, it's eventually going to go zero to one, and that's not really how it works. Look at all of the zero-to-one things.

Speaker 5:

How did any tech company start? What did they accomplish? They did something completely novel. They didn't just keep tweaking something that's already been done just so they could get tenure at their university. That's not how the world works. It's creative destruction; you have to do something completely different.

Speaker 5:

So from a theoretical machine learning and research perspective, no matter what we decide, personally I don't think we should be going for conscious systems. I think we should be going for systems that do all the things humans don't want to do, to make us more productive. Great, let's get some great car washers, laundry machines, whatever we want them to do. Let's focus on making some really good AI systems that do those things. In either case, we've got to start from scratch, but we do need to figure out what it is we want to do. And I implore the research community: let's take a step back, make the Turing test for consciousness, decide whether we even want that, or what we do want, and let's restart from scratch with interdisciplinary inspiration. Let's burn the ships and not use our pet model projects, and if we restart, I think we'll be a lot more successful.

Speaker 2:

A wonderful call to action.

Speaker 1:

This is also a time where AI ethics can shine, because in those same questions and a lot of the points that Andrew just raised, we've got to make sure that, for the problems being solved, we ask what the greater impact is of building that system, or of wiping the slate clean and starting to do it another way.

Speaker 5:

Susan, thank you for calling that out. That definitely should have been part of my rant. I fully agree, and, as we've talked about in this podcast before, it's one of those things that people don't like to do, but it's very possible. We've talked about it multiple times, and we actually should do an explicit podcast on this, but with multi-objective optimization you can provably make your models both performant and fair to whatever metrics you want them to have. Most people don't want to do that because it's a lot of work, but you can build it, especially if we started with a paradigm from scratch.
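
The multi-objective idea can be sketched as a single scalarized objective: a performance loss plus a weighted fairness penalty. This is a simplified, hypothetical illustration of the general technique, not Monitaur's method; the demographic-parity-style penalty and all data below are assumptions.

```python
import numpy as np

def combined_loss(y_true, y_prob, group, fairness_weight=1.0):
    """Scalarized multi-objective loss: log-loss (performance) plus a
    demographic-parity-style gap (fairness) between two groups."""
    eps = 1e-9
    log_loss = -np.mean(y_true * np.log(y_prob + eps)
                        + (1 - y_true) * np.log(1 - y_prob + eps))
    # Fairness term: difference in average predicted positive rate by group.
    gap = abs(y_prob[group == 0].mean() - y_prob[group == 1].mean())
    return log_loss + fairness_weight * gap

# Invented example data: labels, model probabilities, and a protected attribute.
rng = np.random.default_rng(7)
y = rng.integers(0, 2, size=200)
p = np.clip(rng.normal(0.5, 0.2, size=200), 0.01, 0.99)
g = rng.integers(0, 2, size=200)

print(combined_loss(y, p, g, fairness_weight=2.0))
# Training a model against an objective like this, rather than bolting a
# fairness check on afterwards, is one way to make fairness a first-class
# citizen of the modeling process.
```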

Speaker 5:

One of the pillars of building a system properly is encoding fairness, or ethics, and all the things you want, and it's a lot easier to engineer a system from scratch with those attributes than to try and bolt them on later. Because right now it's kind of bolted on for PR, honestly; that's where we are right now: just do enough that you're okay, but don't fundamentally change what you've done.

Speaker 5:

And that's kind of a frustration a lot of us have: there are people who only give lip service to wanting to do ethics. Now, there are definitely people out there who want to do the right thing and do it properly, so please don't misunderstand that. But there are a lot of other companies and people that are just trying to do enough that they don't get regulatory attention or bad media; they just kind of bolt it on. And other people and companies are completely transforming from one to the other, so I'm not saying it's everyone. But if we burned the ships and started our paradigms from scratch, knowing that fairness is a first-class citizen in the modeling constructs and that ethics is a key consideration in our systems, we wouldn't be having these conversations about fairness in AI models, because they would be inherently fair.

Speaker 2:

Yeah, I think that's awesome, and I think that might be the outlook we have to take. If we really want these AI systems, or want more powerful AI systems in our lives, we need to build them very intentionally and build them for the types of things that humans care about. This is the alignment problem: making models that are aligned with human values from the get-go. And so, I guess, we're coming up on time here; I'd love to just hear some final thoughts and final notes, and then maybe we'll wrap.

Speaker 5:

I think I've shared enough thoughts for the group, so I will be quiet.

Speaker 1:

Michael, I'd love to hear what you think. And, wrapped in that, what do you think of your first podcast with us?

Speaker 3:

Yeah, this has been fun.

Speaker 3:

Thanks so much for having me.

Speaker 3:

I think, yeah, I mean, I've spent most of my life as a materialist, and so I want to be optimistic and hope that we could reach some level of consciousness with machines.

Speaker 3:

But I don't know, it seems like we're pretty far from that. I also think that we've taken materialism, or reductionism, pretty far, and with some of the problems we're facing right now, I think we need to shift our perspective and shift how we're solving them. We not only need to solve where they're going, but we need to kind of change who we are, and so I don't necessarily think that just a straight materialist approach is going to get us there. So, thinking a little bit more about panpsychism and maybe some of the Eastern religions and philosophy: how can we actually use that to tackle this sort of problem? So, yeah, I think I am optimistic that we'll be able to get there at some point in terms of AI consciousness, but I think that we definitely need to shift our approaches.

Speaker 2:

Great. Well, thank you everyone for joining us on a very different episode of the AI Fundamentalists. We tried to bring some fundamentals into this and really think about AI and what consciousness is from scratch, rather than hopping straight into "is ChatGPT conscious?" You know: what would it mean to be there, how would we evaluate that, and what does it mean?

Speaker 1:

Well said. And, on that note, special thanks to Michael Herman for joining us today. If you have questions about this episode or any of our past episodes, leave us a note; there's a link at the bottom of the episode page. Until next time, this is the AI Fundamentalists.

Chapter markers

  • Exploring Consciousness and AI
  • The Potential Dangers of Artificial Intelligence
  • Limits of AI, Need for Fresh Start
  • AI and Consciousness With Michael Herman
