The AI Fundamentalists

Mechanism design: Building smarter AI agents from the fundamentals, Part 1

Dr. Andrew Clark & Sid Mangalik Season 1 Episode 32

What if we've been approaching AI agents all wrong? While the tech world obsesses over large language models (LLMs) and prompt engineering, there's a foundational approach that could revolutionize how we build trustworthy AI systems: mechanism design.

This episode kicks off an exciting series where we're building AI agents "the hard way"—using principles from game theory and microeconomics to create systems with predictable, governable behavior. Rather than hoping an LLM can magically handle complex multi-step processes like booking travel, Sid and Andrew explore how to design the rules of the game so that even self-interested agents produce optimal outcomes.

Drawing from our conversation with Dr. Michael Zargum in the previous episode, we break down why LLM-based agents struggle with transparency and governance. The "surface area" for errors expands dramatically when you can't explain how decisions are made across multiple steps. Instead, mechanism design creates clear states with defined optimization parameters at each stage, making the entire system more reliable and accountable.

We explore the famous Prisoner's Dilemma to illustrate how individual incentives can work against collective benefits without proper system design. Then we introduce the Vickrey-Clarke-Groves mechanism, which ensures AI agents truthfully reveal preferences and actively participate in multi-step processes—critical properties for enterprise applications.

Beyond technical advantages, this approach offers something profound: a way to preserve humanity in increasingly automated systems. By explicitly designing for values, fairness, and social welfare, we're not just building better agents—we're ensuring AI serves human needs rather than replacing human thought.

Subscribe now to follow our journey as we build an agentic travel system from first principles, applying these concepts to real business challenges. Have questions about mechanism design for AI? Send them our way for future episodes!

What did you think? Let us know.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

  • LinkedIn - Episode summaries, shares of cited articles, and more.
  • YouTube - Was it something that we said? Good. Share your favorite quotes.
  • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
Speaker 1:

The AI Fundamentalists, a podcast about the fundamentals of safe and resilient modeling systems behind the AI that impacts our lives and our businesses. Here are your hosts, Andrew Clark and Sid Mangalik. Welcome to today's episode of the AI Fundamentalists. Today we're going to talk about the real fundamentals of agentic AI, and this is inspired by our last episode with Dr Zargum. We're still coming down from that high and all of the expertise that he shared.

Speaker 2:

And so, before we hop right back in with a new episode here: it's been a while since we've done an episode with just the three of us.

Speaker 3:

We've been doing a lot of guests. Yeah, excited to just get back to it.

Speaker 2:

I know, I know. Last time we met, we were talking to Dr Zargum, and we were covering topics on the nature of what an agent is, what an agent isn't, and who would be responsible for these agents. You might remember that we talked about agents having principals, meaning that they act on someone's behalf and achieve some overall goal. When they work on these operable and understandable goals, you have the ability to constrain them, to govern them, to understand how they behave, and you don't allow them to work in these magical, non-deterministic ways, like with LLMs, where you don't understand the underlying guiding principles and thus they become very hard to manage. Outside of your traditional things like access controls and guardrails, how can we really get at the underlying mechanism of how these agents work? That's what I'm going to dig into a little bit today: understanding the mechanisms, and the mechanism design, that we can use to build good agents.

Speaker 3:

Yeah, I think this is a great setup. This is a very interesting topic that's near and dear to my heart; I've learned a lot from Dr Zargum over the years, as a mentor as well. It's a very fascinating field that I don't think is very commonly used. It's an interdisciplinary field, really coming out of microeconomics as one basis, and with an economics background it's definitely an area I've spent some time with. It's really: how do you design a system, design the rules of the game, to get a desired outcome? As Sid mentioned, with LLMs, no one really has explainability. As a talk track, you've kind of seen explainability of AI fall off, because no one really knows how to do it well with LLMs, at least at a broad scale, so it seems like the market is de-emphasizing it so you can keep the LLMs in the loop. But in any case, today we're going to focus on the overarching question: what is mechanism design, and how could we potentially be utilizing it to build responsible, governable agentic systems that actually have the performance being promised in the market? As we've talked about previously, I think agentic AI, the idea of multi-step systems being used to help automate tasks in business, is a great concept, and I think there's a lot of future there. It's just the insistence that you have to have a pesky LLM in the middle, as the focus, where I'm not as convinced that's the way forward. However, an LLM could still be a component of it. The larger gap that we have, though, is understanding the states of the system. As we talked about previously with our agentic travel example, it's a multi-step system, and there are specific optimizations and steps you want to be going through. If you're trying to have a tool book your travel for you, there are questions like: what are your preferences? Are you a United Airlines member, a Delta member? Hilton or Marriott? What dates do you want to go? Where do you like to stay? And then the tool has to interact with providers, and there are multiple steps that need to be done accurately.

Speaker 3:

Thankfully, as we've talked about briefly before, agents are multi-step systems, and we've had several podcasts around those. We want to start a series on going from the fundamentals and actually building up that example we talked about of an agentic travel agent: looking at the first principles and taking a mechanism design, do-it-the-hard-way approach to how you would actually design the system to achieve the goals you want it to have. Then we can maybe even compare and contrast that against the state-of-the-art LLM agentic systems, which are trying to take all of these multi-step processes (the kind we used to handle with AlphaGo and systems like that, which actually worked through these processes) and condense them into: I'm going to give it a prompt and want it to get to the answer. So, on this mechanism design approach: it's a field that's been around a long time and draws heavily from microeconomics, optimization, social theory, and game theory. It's been around since, I think, the 1940s, with some of the original references, and it built up from there. It's a way of doing marketplace design and auctions, like how eBay works, and a lot of other mechanisms: organ donation programs in hospitals, as well as placement of doctors in residency programs.

Speaker 3:

All of these things are built on these complex mechanisms, and the concepts of mechanism design are rooted in a lot of these auction-type networks that we've just become accustomed to as part of our daily life. There's a lot of this type of design around how you deal with agents' preferences under asymmetric information, that principal-agent problem Dr Zargum talked a lot about in the last podcast. You have that principal-agent problem with asymmetric information, meaning individuals have private preferences that they don't always reveal, but they're trying to maximize their utility, and that's how they're operating: rational, self-interested agents, if you look at the Adam Smith type of methodology. So this is a whole field of how you build the rules of the game to achieve a desired outcome. Think about how central banking works: we've had Bretton Woods, and we've had all these different accords on how the whole macroeconomic, interconnected world economy operates, which is being shaken up right now by how the US is changing some trade tariffs and things like that. The rules of the game are changing for how international trade is done.

Speaker 3:

The whole world economy is a mechanism in itself. After World War I, there were some issues where things were kind of asymmetrical, with the repayments by the Germans to the French and things like that, and that helped cause World War II. John Maynard Keynes had a bunch of recommendations, some of which weren't followed. But then we had the Bretton Woods Accords in 1944, which set up the post-war system of currencies pegged to the dollar, which was in turn, quote unquote, tied to gold. That drove international monetary doctrine until gold convertibility ended, and in many ways until recently.

Speaker 3:

Those are all rules of the game at a macro scale. So if we're thinking about building a marketplace of agents that are trying to book travel, this is kind of how Booking.com, Expedia, and these other things operate. I don't know if they're thinking through these principles, but it's the same idea: you have these different agents with private information and preferences, all trying to maximize their individual utility (revenue, cost minimization, best vacation ever), being rational, self-interested individuals. And you create a mechanism where they are supposed to be truthful, not "I'm going to lie to you to get you to board my plane and then jack up the price," that kind of thing. How do you make everybody be truthful while optimizing for their own self-interest, and build that system? In these more complex AI systems we're starting to see today, there are a lot of parallels in how we think about creating system-wide optimization problems with multiple agents, and you can really see it in how enterprises are trying to use these tools.

Speaker 3:

If you understand how to create the rules of the game in the system, then you can have smaller utility functions and smaller optimization problems that are tractable, which you can then put together in state-based systems. We talked about state-based dynamical systems a little bit in the past, in a podcast with a guest.

Speaker 3:

When you have these different states, you're optimizing for the next state. You do keep the end goal in mind, and the maximization for the whole network of everybody involved, but you're really focusing on optimizing one state at a time. You're not just putting your preferences in an LLM prompt and hoping it all works out in the end; you're optimizing each state based on the individuals interacting in that state's functions, and that reduces the surface area and makes it a more constrained problem.
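
To make that concrete, here is a minimal Python sketch of optimizing one state at a time. The states, options, costs, and utility weights below are invented for illustration; they are not from the episode or any real booking system.

```python
# A sketch of state-by-state optimization: each stage is a small,
# tractable optimization over an explicit set of admissible choices,
# rather than one opaque end-to-end prompt. All values are hypothetical.

TRAVEL_STATES = ["select_flight", "select_hotel", "confirm_booking"]

# Hypothetical options per state: (name, cost_usd, preference_score)
OPTIONS = {
    "select_flight":   [("UA 101", 450, 0.9), ("DL 202", 380, 0.6)],
    "select_hotel":    [("Hilton", 220, 0.8), ("Marriott", 190, 0.7)],
    "confirm_booking": [("confirm", 0, 1.0)],
}

def utility(cost: float, pref: float, cost_weight: float = 0.001) -> float:
    """Toy per-state utility: preference minus a weighted cost penalty."""
    return pref - cost_weight * cost

def plan_trip() -> list:
    """Optimize each state independently; every step stays auditable."""
    itinerary = []
    for state in TRAVEL_STATES:
        best = max(OPTIONS[state], key=lambda opt: utility(opt[1], opt[2]))
        itinerary.append((state, best[0]))
    return itinerary

for state, choice in plan_trip():
    print(f"{state}: {choice}")
```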

Speaker 3:

So that was a long-winded introduction to the concept, but it's something I'm personally very interested in. I think we could go on a journey together on this podcast, really dissecting how you would build this type of thing in just the agentic travel agent example. There's a lot we could dig into with some of these underlying techniques, basically showing how to build that marketplace, and then also talking, whenever possible, about how that's generalizable to individual companies building some of those marketplace components.

Speaker 2:

Great. So Andrew brought up a lot of great points, and I think we covered a lot of ground. Let's break it down a little bit, see the steps of it, and then put it all together, right? As mentioned earlier, this is going to be part of a longer series where we're building up agents the fundamentalist way. In future episodes we'll cover things like utility functions, Bellman equations, and control theory, and put it all together. But today we're going to focus specifically on the mechanism design part.

Speaker 2:

This goes all the way back to von Neumann and Morgenstern working on the theory of games. We want to think about agents in the world the way we think about humans interacting in the world, and that's the basis of this research: to understand how you can put a bunch of mice or people or robots in a room, and how they will operate in that environment to optimize for what they care about. They have some, we'll say, rational self-interest they're optimizing for: I want to get the cheese, I want to get the promotion, I want to get my client the best price for the best vacation they could possibly have. We're concerned with creating a game, an environment that these agents or people or robots operate in, such that every agent acts optimally and generates strong and ideal outcomes for every participant. We may not be able to make it so that everyone gets the best outcome, and we may not be able to make it so that the average outcome is the best, but we want to make a game that is optimized for our end-goal use case. So, in the case of a travel planner, we want to create agents that can reasonably optimize for what the client wants out of their model.

Speaker 2:

This type of mechanism design extends beyond marketplaces, where it has traditionally been used to understand how agents and traders buy and sell commodities and services. It also expands to answering any number of large questions. This could be, like we're saying, travel and vacation plans; it could also be things like automating insurance risk management. These agents will then have to engage in a lot of tasks: booking flights, looking at prices, creating pre-processing code, triaging problems. There's an enormous scope into which this kind of work fits, and so, while we're going to be talking about a slightly narrow scope, we want to make it clear that these are general problems with general solutions.

Speaker 3:

Yeah, great definitions there, Sid, on taking the next step with it. I talk a lot about marketplaces; with an economics background, I think a lot in marketplaces. But agentic AI, as it's being utilized today, is aimed at exactly those kinds of narrow tasks. Where mechanism design is super exciting as a field to utilize here is when we have agents interacting in a system, trying to achieve a desired result: mechanism design helps you design that system. That's what we'll walk through in this series of episodes, building our agentic travel agent and breaking down how it works. But, as you mentioned, we can use other tangible examples as thought exercises as well: triaging claims and other insurance-specific use cases, different things like that. It's really about whatever mechanism, system, or objective you're trying to achieve when you have a multi-step process with multiple agents, or multiple components of a complex nature, interacting. So it's more than a predictive task (just predict the number of Pop-Tarts you need for specific stores in Florida, that kind of thing) or a generative task (hey, let's generate a blog about parrots and what they like to eat). Those are the narrow tasks. Mechanism design really comes into its own in multi-step tasks like the travel agent. But you can also think of some of the stuff Salesforce is doing: searching for different customers you can engage with, taking the first step, analyzing some information. Remember all the PR out there right now about horizontal agents that don't get tired and things like that. You have all these autonomous agents performing tasks, and if you break down each of those problems, it's multi-step and it's interacting with different actors and different signals. So while we're walking through the travel agent, we'll try to keep it from being only that, because the travel agent does seem kind of marketplace-y, which it is, and some of these use cases are. Whenever possible we'll also bring in the Salesforce-type BDR development work and things like that as other use cases, contrasting and showing how those methodologies still work.

Speaker 3:

We're going to get into game theory and some of the roots of these systems, the interaction between mechanism design and game theory, in the next little bit. But one key thing, as we walk through this stuff sequentially and once we start showing the different states (we'll define the states of this system and walk through it), is that it really comes down to: how do you govern it, and how do you do it responsibly?

Speaker 3:

That's one of the things we really focus on here, so we'll make sure to cover it at every component. This also illustrates something we talked about in our agentic podcast a few months back: we're very concerned about the governance surface area of traditional agentic LLM systems, because you're already not quite sure what they're doing; now you're adding multiple steps, and you just have the prompt and the final outcome, and it gets a little nuts on the governance surface area and all the different things. If we define each state and the set of admissible actions per state, it's easier for us to keep that governance surface small, and it's more explainable what's happening. So we'll always keep bringing it back to how you would do responsible governance and documentation.
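
As a rough illustration of "define each state and the set of admissible actions per state," here is a small Python sketch. The state names, action names, and guard logic are hypothetical, not from the episode:

```python
# Reject anything outside the declared action set for the current state,
# so every step an agent takes is checkable and loggable. This is what
# keeps the governance surface small: the admissible set is the spec.

ADMISSIBLE_ACTIONS = {
    "quote":   {"offer_price", "decline"},
    "booking": {"confirm", "cancel"},
}

audit_log = []

def apply_action(state: str, action: str) -> None:
    """Permit an action only if it is admissible in the given state."""
    if action not in ADMISSIBLE_ACTIONS.get(state, set()):
        raise ValueError(f"{action!r} is not admissible in state {state!r}")
    audit_log.append((state, action))  # every permitted step is recorded

apply_action("quote", "offer_price")
# apply_action("quote", "confirm")  # would raise: out-of-state action
print(audit_log)
```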

Speaker 2:

Yeah, so let's tie this together a little bit. Again, we're focused today on mechanism design, and mechanism design asks us to design games for our agents to play in that result in efficient systems aligned with the desired outcome. So what is a game, how do you design a game, and how do you define the rules of the game? A game is typically described as a mathematical model of the strategic interaction of at least two agents. A game could be something like chess, but a game could also be something like buy low, sell high. These games can happen in a turn-based fashion, where one agent takes a turn, then another agent takes a turn. They can happen all at the same time, where a bunch of agents are operating in real time. Or it could be a dynamic system, where agents make moves directly based on the timing of other agents, like, say, a waiting game. That brings us to game theory, a way of analyzing games. More specifically, it lets us create definitions of how cooperative and non-cooperative games work. For most of this podcast, we're interested in cooperative games: games where agents need to work together to solve problems in an environment. There are, of course, games where you have agents operating alone in difficult terrain (think Mars rovers), but we're interested in agents that are working, if not cooperatively, at least in an environment with other agents.
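
For readers who want the definition in code: a (normal-form) game is just players, strategy sets, and a payoff for each joint choice of strategies. This toy coordination game is invented for illustration:

```python
# A two-player simultaneous game as plain data. Both agents do well
# only if they coordinate on the same option; the strategy names and
# payoff numbers are made up.

STRATEGIES = ["morning_flight", "evening_flight"]

# PAYOFFS[(a_choice, b_choice)] = (payoff_to_A, payoff_to_B)
PAYOFFS = {
    ("morning_flight", "morning_flight"): (2, 2),
    ("morning_flight", "evening_flight"): (0, 0),
    ("evening_flight", "morning_flight"): (0, 0),
    ("evening_flight", "evening_flight"): (1, 1),
}

# One simultaneous "move": each agent picks a strategy and the payoffs
# are read off the table.
print(PAYOFFS[("morning_flight", "morning_flight")])  # (2, 2)
```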

Speaker 2:

The stated goal of a lot of this game theory work is to solve for the idea of a Nash equilibrium. This comes from a very famous mathematician, John Nash, and what the Nash equilibrium is basically about is: what is the optimal strategy for all of our players to use, such that if every player used it, they could all optimize for their self-interest, assuming they were acting rationally and that was all they wanted to optimize for. So let's assume we have a bunch of travel agents in a room together and that they're generally truthful, and by truthful I mean that their goals are explicitly aligned with what they do.

Speaker 2:

If they buy a ticket, they're actually buying a ticket; they're not just buying a ticket to sell it later at a higher price to buy another ticket. They act in a way such that their actions and their goals show strong alignment. But even if we have agents that are generally truthful, they might not always be accurate, i.e., they can make mistakes, they can hallucinate. This leads to an asymmetry of information, where what the agent does and how it behaves creates a loss of information. It's not that the model has some unrevealed preference or motive; it's that because the model makes mistakes along the way, because it's not perfect, because it's not able to perfectly optimize for its goals, we have loss of information in the system. That can then translate into loss of desired outcomes, weaker performance, or, basically, states in which the game is not able to resolve itself in a positive way.

Speaker 3:

Great primer on game theory. Game theory is really the analysis of games, and mechanism design, in how they relate together, is really reverse game theory: creating the game for the desired outcome. They're kind of inverses of each other, the analysis versus the creation, but they have a really interesting symbiotic relationship. And game theory itself is a very fun rabbit hole to go down, same with the Nash equilibrium and the prisoner's dilemma. That's always a fun thing to see.

Speaker 3:

It gets a little nuanced and confusing at times (this is a tangent, by the way): a Nash equilibrium is the optimal strategy for an agent given what they think the others will do.

Speaker 3:

That's different from what some of the mechanisms we're designing are trying to achieve, which is a dominant strategy: the best strategy for you no matter what, without considering the other agents. If you Venn-diagram them, a dominant strategy can sometimes coincide with the Nash equilibrium, and sometimes not. So when we're designing mechanisms, we often aren't even trying to get to the Nash equilibrium; we're trying to get a dominant strategy, "it's in my best interest to do this, I don't care what anybody else does." The Nash equilibrium asks: given what everybody else is doing, what's my optimal decision? In the famous prisoner's dilemma, the dominant strategy and the Nash equilibrium actually coincide, but they differ from the socially optimal outcome. It's very fun to just go down all of the sequential and non-sequential games, all that kind of thing.

Speaker 2:

Yeah, let's quickly do the prisoner's dilemma. It's familiar to us, obviously, but let's go for it. A very classic thought experiment that we see in psychology, and also in game theory, which is going to help us illustrate this difference between an optimal strategy and a dominant strategy, is the prisoner's dilemma, which many of you may have already heard of but which is very valuable to go over. In the prisoner's dilemma we have two prisoners, Prisoner A and Prisoner B. They've both been captured by the police and are both in detainment. They have an option: they can either confess and put their fellow prisoner on the line as an accomplice, or they can choose to stay silent. If Prisoner A decides to betray their fellow prisoner, the fellow prisoner will have to spend three years in jail, but the tattletale gets to go free, and that sounds like a pretty ideal outcome. But here's the catch: if they both decide to betray each other, they will both serve two years in jail. So they may be thinking, okay, well, that sounds pretty reasonable; if I can guarantee getting only two years in jail, that's reasonable. There is, however, a last option: both prisoners cooperate by staying silent. In this case, there will not be sufficient evidence, and they'll both serve only one year in jail.

Speaker 2:

If we're talking about the strictly ideal solution, just running the numbers, the best option is for them both to stay silent, both accept a one-year sentence, and go home after serving one year. However, we find in experiments and in mathematical simulations that that's not what happens. They don't end up choosing the cooperating solution. What they'll often end up doing is optimize for the chance that their partner does the cooperative thing, betraying so the partner goes to jail for three years while they get out free. So what happens, in this interplay of "do I do the cooperative thing or the self-interested thing," is that the dominant strategy becomes: always betray your fellow partner, while the ideal outcome was for both to stay silent. And so we have this tension, this disconnect, between the two possible outcomes. These are the kinds of things we think about in game theory, and these are the kinds of problems we need to work through and solve.
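
For anyone who wants to check the numbers, here is a small brute-force sketch in Python using the jail terms from the walkthrough (written as negative years, so higher is better). It confirms that "betray" is the dominant strategy, that mutual betrayal is the Nash equilibrium, and that mutual silence is the socially optimal profile:

```python
from itertools import product

STRATEGIES = ["silent", "betray"]

# PAYOFFS[(a, b)] = (payoff_to_A, payoff_to_B), as negated jail years
PAYOFFS = {
    ("silent", "silent"): (-1, -1),
    ("silent", "betray"): (-3,  0),
    ("betray", "silent"): ( 0, -3),
    ("betray", "betray"): (-2, -2),
}

def best_payoff(player: int, other_strategy: str) -> int:
    """Best payoff `player` can get against a fixed opponent strategy."""
    def profile(mine):
        return (mine, other_strategy) if player == 0 else (other_strategy, mine)
    return max(PAYOFFS[profile(s)][player] for s in STRATEGIES)

def is_dominant(player: int, strategy: str) -> bool:
    """A dominant strategy is a best response to every opponent choice."""
    def profile(mine, other):
        return (mine, other) if player == 0 else (other, mine)
    return all(
        PAYOFFS[profile(strategy, other)][player] == best_payoff(player, other)
        for other in STRATEGIES
    )

def nash_equilibria() -> list:
    """Profiles where each strategy is a best response to the other's."""
    return [
        (a, b)
        for a, b in product(STRATEGIES, STRATEGIES)
        if PAYOFFS[(a, b)][0] == best_payoff(0, b)
        and PAYOFFS[(a, b)][1] == best_payoff(1, a)
    ]

print("Dominant for A:", [s for s in STRATEGIES if is_dominant(0, s)])
print("Nash equilibria:", nash_equilibria())  # [('betray', 'betray')]
print("Socially optimal:", max(PAYOFFS, key=lambda p: sum(PAYOFFS[p])))
```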

Speaker 3:

Excellent, great definition, Sid. That was a great callout to walk through here. This is very fascinating, and it gets very complex, which is the siren song of why people use an LLM for some of this stuff: you don't have to do the thinking. But when you figure out how to properly design the game, analyze the game, create the mechanism, and work through those steps, and after you've done it a couple of times it becomes more intuitive, it really provides these higher-performance systems with less compute required and a much lower governance surface area. For high-risk and consequential systems (think how maps work, how auction systems work), these types of processes are still what's used.

Speaker 3:

Our thesis is that for these high-consequence systems within enterprises, where they're trying to accomplish specific tasks, they're spending millions and at times billions of dollars on AI and not seeing results. Rather than just building out larger LLMs to throw at the problem, it might be time to start evaluating some of these harder techniques that may take a little more time to build initially. As you know, we're all massive fans of systems engineering and the Apollo program; look what you can accomplish when you do things the hard way. So I think it's definitely an interesting area to pursue, and that's why I'm excited to do this podcast series, walking through it together. One specific mechanism to highlight, which I think is a good way for us to start this journey of designing this agentic travel agent mechanism, is the Vickrey-Clarke-Groves mechanism. Sid, would you like to take us through that?

Speaker 2:

Of course. So Vickrey-Clarke-Groves (we'll just call it VCG to save some time) is a very common mechanism for ensuring that the dominant strategy these selfish agents take is one where they always act. You will often find situations where, in order to optimize their outcomes, agents may choose not to make moves, or to just wait and respond to others. This mechanism gives us some guarantees that agents are always participating in our marketplace, in this case the one for travel agents: the agents are incentivized to report their private preferences truthfully, and they should have no reason not to participate. We'll be using this going forward for the agentic travel agent we're building up over the next few episodes.

Speaker 2:

Most importantly, the VCG mechanism satisfies four major properties. One, agents are expected to tell the truth, and this becomes the dominant strategy; note, as we said earlier, that the dominant strategy and the Nash equilibrium strategy are not always the same, but here the dominant strategy will be to tell the truth. Two, individual agents' incentives are aligned with the system objectives. Three, agents will voluntarily participate at all times and not opt out of making moves. And four, it applies budget constraints to the agents, ensuring that the mechanism doesn't incur an overall loss, i.e., agents won't get rewards for choosing not to act or participate in the system.
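
The full VCG machinery is more than fits in a few lines, but its simplest instance, the sealed-bid second-price (Vickrey) auction, shows the core payment idea: the winner pays the externality it imposes on the others, which is what makes truthful bidding the dominant strategy. The bidder names and values below are invented for illustration:

```python
# A minimal single-item VCG (i.e., Vickrey second-price) sketch. The
# winner pays the welfare the others lose by its presence, which for
# one item collapses to the second-highest bid.

def vcg_single_item(bids: dict) -> tuple:
    """Return (winner, payment) under the VCG/second-price rule."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    # Others' best welfare without the winner (the second-highest value)
    # minus their welfare when the winner takes the item (zero).
    payment = bids[ranked[1]] if len(ranked) > 1 else 0
    return winner, payment

bids = {"agent_a": 120, "agent_b": 95, "agent_c": 80}
winner, price = vcg_single_item(bids)
print(winner, "wins and pays", price)  # agent_a wins and pays 95
```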

Speaker 3:

Excellent. And some of the individuals behind this mechanism, I think, got Nobel Prizes for it; it was a major turning point in game theory and mechanism design. There are several other mechanisms as well, but this is one of the most common, and we'll use it as the basis, since its assumptions and properties work pretty well for building an agentic travel agent, which again can be generalized to any type of agentic enterprise use case, and we'll keep bringing those in. It's used very commonly: if you dig into all the different places, it shows up in communications networks and different parts of engineering.

Speaker 3:

This mechanism is embedded in a lot of places in our daily life. That's why we'll stay with the most common one, but there are definitely others as well, some Bayesian ones that deal with priors and things like that. It's a very fascinating field, and I personally really enjoyed digging back into this stuff to prep for this episode. Over the next couple of episodes we'll talk about utility functions, state-based systems, some optimization theory, and control theory, and then wire these all together. We'll have some accompanying blog posts as well, really walking through the math of how these different components interact given the mechanism, along with some Python code showing how we can build a system that fulfills a common use case I've heard referenced in conversations around generative LLM agents.

Speaker 1:

It's hard when you're talking about agents, especially when you were walking through the prisoner's dilemma earlier, not to let your mind go a little dystopian over the possibilities people are hyping up: agents maybe bringing more structure and more predictability with game theory, making things a little more predictable where human nuance may not have been the best thing to have in some processes, some markets, some other processes that people depend on electronically. I've got to believe some of the listeners are thinking this too about agents. What's your response to the question: if agents are going to introduce more structure, operate with this type of predictability, or maybe make nuance less of an issue, what does that mean for some of the things we depend on as humans? It's a really abstract question, but I think it's out there.

Speaker 3:

I think that's a great one, and this is where I would argue that the more structured approach we're proposing actually helps with what you're concerned about, more than the LLMs people are just trying to deploy everywhere, as in "we're just going to find a way to replace a human; we don't care if it's accurate or not." The latest studies, like the LangChain ones, show agentic AI is not very performant right now, and there's that kind of loss of what's true or what's real. The trend back towards expert systems, and we're seeing a little more of this right now with small LLMs that are super fine-tuned on a solution, is a good trend back in the expert direction, so I'm happy to see it. I still think they're overused; LLMs need to get relegated to a tool in your toolbox versus the tool. But I do think we'll start seeing that trend.

Speaker 3:

However, on the question of what makes a human human and the humanity of it (understanding what's good, what's bad, how to optimize for social welfare, making sure humans can be humans and everybody is improving in their welfare), I actually think this approach is more helpful in that regard, not harmful. Why? Because you're building the rules of the game. If you're building mechanisms that people interact with, then remember, by the definition of mechanism design, selfish agents can interact in these systems, optimizing their own self-interest, and the system will still optimize the social welfare of the whole. That is the premise behind mechanism design with social welfare theory in the middle of it. And as we've illustrated here, this stuff is complex and requires a lot of designing. So to your point, Susan, this is a fantastic topic; it's a whole podcast tangent in itself, I think.

Speaker 3:

I actually think that taking a step back and thinking about what we're trying to accomplish, what the goals of the system are, how we make a system that respects the individuals in it and is rational, and how we make sure it's optimized for the system's wellbeing as well as the individuals', means first having to define that, design that, and implement that, and build a system where selfish people can interact without hurting the whole group through their selfish actions.

Speaker 3:

That means we're actually preserving that humanity and thinking about it, rather than just saying "I don't want to think at all, I'm going to let an LLM run the whole show." And as a society, we're seeing employers now think they're in the driver's seat. I think the Shopify CEO said every manager has to prove an AI can't do their job before they hire someone. There's a lot of panic in the workforce right now because of those sorts of things, but that's taking the humans out of it and not thinking about it; it's just a different paradigm versus how we're thinking about it here. So I would actually argue this is the solution to some of those fears. But I think this is a great discussion.

Speaker 1:

Yeah, and we'll definitely get into this in episodes to come. I do think that the humanities and liberal arts, near and dear to my heart as I watch my daughter go through college and work in this world, are definitely getting, I want to say, a new swing at life, because they have to be integrated. The same way we're saying it about game theory and mechanism design, the human elements, practices, and theories need to be woven in there as well.

Speaker 3:

Everyone says "learn AI, learn AI" right now. I actually agree, Susan. I think the biggest gap right now is that people need to be reading the classics and understanding the human side of things. We're obviously biased on governance, but I really think the most fundamental issue is responsible, objective thinking about how you're using a system, why, and how you're mitigating the risk. That's the thing people are missing: thinking that abstractly and thinking about how those pieces fit together. It's not just another code ninja who can use an LLM until the LLM replaces their coding. No, it's: how do you responsibly use these systems and fit them together? At its core, that's what mechanism design is.

Speaker 3:

Mechanism design gets very mathy very quickly. We'll define it rigorously, but try not to go too mathy.

Speaker 3:

But at the fundamental level, Susan, it's what you're saying: all of the humanities, all of the knowledge we've developed over all the years, and then we try to just throw it in an LLM and forget it all.

Speaker 3:

How do you think? How do you process? Actually write your term papers; don't just pop it into ChatGPT and take the easy button. Taking the hard road, really figuring out how to learn and use your brain, that journey of a thousand steps, I think that's where the real magic is, and this approach helps that way. I am concerned that this self-eating society of everybody just using LLMs and not thinking is kind of a disastrous path we're on.

Speaker 2:

Absolutely. And I think this lends itself to what is basically the future state: if we really want to model what our future is going to be and how these agents are going to play in our world. Are they going to play nicely? Are they going to take over core functions? What's going to be left for us to do? A lot of this mechanism design is going to inform how we understand how we and these AI agents will play together in the real world.

Speaker 3:

One hundred percent, and that's really the takeaway. We're going to try not to get too mathy; we're still going to be rigorous. But exactly what you said, Sid, and I think you said it very eloquently: the concepts we're trying to get across here on mechanism design and game theory help inform how you work with agents, along with the humanities you were just talking about, Susan. All of that is really the focus of what we're trying to do: doing things responsibly and thinking about how you're using the tooling. I think that's what's so important right now in this broader AI revolution.

Speaker 1:

Awesome. Well, this is a great foundation, and I can't wait to get into the next episodes, where we cover more and go deeper on these topics. Sid or Andrew, anything you want to say before we close out?

Speaker 3:

I think I've talked too much this episode, so I will not add a follow-up ending point.

Speaker 1:

All good thoughts. Sid?

Speaker 2:

I think this is going to be an incredibly interesting series of episodes. On the surface, we'll be talking about building an agentic travel agent, but I think that through this process we're going to learn a lot about how agents are built, how they do what we want them to do, and how we can build smarter agents and still work with them. We tend to assume that these agents work with us basically at a human level, and up until now we wouldn't know how to handle that; we barely know how to govern humans, with laws and enforcement. Imagine having millions of them in an AI space and operating with them. We'll be covering a lot of ground, but hopefully we'll learn a little about how we can build reliable, usable agents and then deploy them in the real world.

Speaker 1:

Excellent point, Sid. Andrew, thank you as always. And for our listeners: if you have a question about this topic, please ask, because it's going to be a really interesting ride over the next few episodes as we dig deeper into the topic. Until next time.


Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

  • The Shifting Privacy Left Podcast, hosted by Debra J. Farber (Shifting Privacy Left)
  • The Audit Podcast, hosted by Trent Russell