The AI Fundamentalists

The future of AI: Exploring modeling paradigms

Dr. Andrew Clark & Sid Mangalik Season 1 Episode 28

Unlock the secrets to AI's modeling paradigms. We emphasize the importance of modeling practices, how they interact, and how they should be considered in relation to each other before you act. Using the right tool for the right job is key. We hope you enjoy these examples of where the greatest AI and machine learning techniques exist in your routine today.

More AI agent disruptors (0:56)

AI Paris Summit - What's next for regulation? (4:40)

Modeling paradigms explained (10:33)

  • As companies look for an edge in high-stakes computations, we’ve seen best-in-class teams rediscovering expert-system-based techniques and, with modern computing power, breathing new life into them. 
    • Paradigm 1: Agents (11:23)
    • Paradigm 2: Generative (14:26)
    • Paradigm 3: Mathematical optimization (regression) (18:33)
    • Paradigm 4: Predictive (classification) (23:19)
    • Paradigm 5: Control theory (24:37)

The right modeling paradigm for the job? (28:05)


What did you think? Let us know.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

  • LinkedIn - Episode summaries, shares of cited articles, and more.
  • YouTube - Was it something that we said? Good. Share your favorite quotes.
  • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
Speaker 1:

The AI Fundamentalists, a podcast about the fundamentals of safe and resilient modeling systems behind the AI that impacts our lives and our businesses. Here are your hosts, Andrew Clark and Sid Mangalik. Welcome to today's episode of the AI Fundamentalists. Today we're going to discuss modeling paradigms. We're actually revisiting this topic since our last chat about modeling with Christoph Molnar, when we talked about the modeling mindset.

Speaker 2:

Yeah, and so today we'll be taking a deeper dive into some of the types of modeling paradigms that are out there, and we're looking at this with a specific lens: you've probably been hearing about these and you might be thinking they're brand new, but a lot of what we're seeing is rooted in modeling history and literature. So we want to do a quick refresher on the types of paradigms to expect, so that you're caught up, they don't feel brand new, and you're ahead of the curve. Before we hop into that, I think there's some interesting news we should talk about.

Speaker 2:

First, tying into our last episode: there's a London-based startup called Convergence AI, and they launched a model called Proxy, which is agentic AI exactly as we laid out last episode. This agent is meant to be deployed in your browser to do tasks for you, and I don't imagine OpenAI is happy about this, because they got their lunch eaten again. This extension is free, unlike OpenAI's Operator, and even if you do use the paid tier, it's about 10 times cheaper.

Speaker 3:

Oof. Well, it probably uses DeepSeek under the hood, is my guess. Cost savings right there. The thing is, though, and this is what we talked about too with what agentic AI is: first off, there's no need to have an LLM in there at all for the concept, and second, depending on the usage, someone might've even used something very narrow, maybe not even DeepSeek-sized. So I'm very interested to follow the space. Again, don't think these autonomous agents are going to be that productive for you unless it's very, very simple tasks, or unless you're really building an agent-based system. But it's very interesting to see this kind of race to the bottom in prices, really showing OpenAI doesn't have a huge moat right now.

Speaker 1:

Can I ask a question, because this might come up later in the episode as well: what constitutes an agent? We've got, I'll say it, Salesforce Agentforce calling things agents. And now we've got Convergence AI. And then there's the whole school of thought that we don't really have these out in production yet. What should people be looking for?

Speaker 2:

It's a really good question, and to really distill what we talked about before: I would say that an agent is basically a piece of code that's meant to run autonomously and interact in an environment the same way that you or I would interact with the world. We take in inputs, we think about them, we process them, and then we produce some output. So they're supposed to represent this kind of dynamic interaction with the world.

Speaker 3:

Think of a Boston Dynamics robot; that would be an agent in the real world. Yeah, I think really what the extension is: traditionally with machine learning, it's very much take data in, make a prediction, it's one thing. Agents, on the other hand, are multi-step.

Speaker 3:

There's a multi-step process: grab an input, do the machine learning or whatever calculation, and then act on it.

Speaker 3:

So, like the blog post we put out talking about an agent of the travel-agent type: you just put in your information and it will go search for the best deals for you. You already have these components today, but chaining them together is what I would call unique, and that's why I like the idea and think there's a lot here. The only thing we're really calling foul on is saying you must put an LLM in the middle, or that an LLM is the reasoning component, because we know they don't reason; they predict the next word to sound human. So it's not a good idea to put that in as your reasoning. However, the concept of having these multi-step systems isn't revolutionary; it's just kind of the next stage, I would say, because you already have computer programs that do this kind of thing, you have RPA. It's a new way of approaching this problem, and I do like that you maybe make them more useful by doing multi-step.
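The multi-step loop described here, grab an input, run a transparent calculation instead of an LLM "reasoning" step, then act on the result, can be sketched in a few lines. This is a hypothetical illustration in the spirit of the travel-deal example; all function names and numbers are made up.

```python
# A minimal sketch of a multi-step "agent": perceive -> decide -> act.
# The "reasoning" step is a plain, auditable rule, deliberately not an LLM.

def perceive(listings):
    """Step 1: grab the input -- raw flight listings, dropping corrupted rows."""
    return [l for l in listings if l["price"] > 0]

def decide(listings, budget):
    """Step 2: the decision step -- a transparent rule, not an LLM."""
    affordable = [l for l in listings if l["price"] <= budget]
    return min(affordable, key=lambda l: l["price"]) if affordable else None

def act(choice):
    """Step 3: act on the decision -- here, just format a booking request."""
    if choice is None:
        return "no action: nothing within budget"
    return f"book {choice['airline']} at ${choice['price']}"

listings = [
    {"airline": "A", "price": 420},
    {"airline": "B", "price": 380},
    {"airline": "C", "price": -1},   # corrupted row, filtered in perceive()
]
result = act(decide(perceive(listings), budget=400))
print(result)  # -> book B at $380
```

The point is the chaining: each stage is a small, testable component, which is exactly what makes a non-LLM agent easy to govern.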

Speaker 1:

You know, we're also hot on the trail of the Paris AI Summit and a lot of the fallout, outcomes, and commentary from that. Yeah, well, this was a fun one, in a figure of speech anyway.

Speaker 3:

So JD Vance, the new Vice President of the United States, went a little wild at the Paris AI Summit, basically saying that any sort of AI safety is authoritarian control. This is after NIST last year, of course, put out information saying one of the things we need to make sure of is that people aren't learning how to make nuclear weapons and weapons of mass destruction and things like that from your LLM. So wanting to put some safeguards and monitoring on top of people doing really bad things, like learning how to make bombs, and calling that authoritarian control, that's a tenuous relationship at best.

Speaker 3:

However, I think the main thrust was: if we're going to keep up with China, you can't have too much regulation, and there is a point to that; I'm not sure we've necessarily reached that point yet, but that's definitely a thing. We have seen some fallout, like the United Kingdom changed the name of its institute from safety to security. But again, as we talked about, some of the major issues with these things, like the weapons-of-mass-destruction stuff, are still security. So it was a very interesting, and probably a little aggressive, stance, to put it mildly. But even though the United States at the federal level is obviously not going to be pushing any regulations in the near future, you still have multiple states. If you want to play in Colorado, if you want to play in California, you have to be complying with these general AI regulations and things like that.

Speaker 3:

So for a lot of companies, and as we all know from the whole premise of this podcast, fundamentals matter: doing the hard yards, making good systems, having peer review and testing and validation makes your modeling systems better. So, despite that there might be lessening pressure at the federal level, to be clear, the United States has never even passed a federal AI bill, so it's not like there's any real lessening going on; it's just signaling there's not going to be anything new. And the EU AI Act hasn't even fully gone into effect yet; it's just starting for those prohibited systems. We want to make sure no one's overcorrecting just because there was a very aggressive speech. Out there, things don't change. But also keep in mind the balance of making sure you're not doing process for the sake of process, but doing it to make a system more secure and safe.

Speaker 2:

No, absolutely, and I would interpret a lot of what the EU is doing as reacting to how the US is doing things and saying, well, we want to speed up. But when the EU talks about speeding up, they're not talking about dropping safety and security entirely. They're talking about trying to reduce some of the bureaucratic laws that are in the way. So they're trying to find ways to enable innovators, but I would not see them as trying to pull off regulatory pressure.

Speaker 3:

Fully agree.

Speaker 1:

Yeah, and when do we get to a point where a certain type of governance or a certain type of regulation is part and parcel of the innovation? Innovation breeds innovation: cars got faster, so seatbelts were invented. Cars got faster than that, so now you need shoulder straps in all of the seats. There's some layer of, hey, we're trying to make sure that this innovation can carry on, and that's why you put safety measures in place; that's why I use the seatbelt analogy. When do we get to that point?

Speaker 3:

I think that's a great point. That's what, when we had Anthony on the podcast last year, he was talking about: that's our general stance as well. Safety, some sort of regulation, raises the bar for everybody instead of making it a race to the bottom. And that's where I think it was too much of a fire-and-brimstone, slash-and-burn speech, and I definitely don't agree with a lot of what was said, by any means. I think the only positive spin you can put on it is making sure that you're making regulations for the sake of safety, like seatbelts. You're not dictating the color of the seatbelt, or saying every dog in your car now needs a seatbelt, just random stuff like that. You have a seatbelt to keep you safe, but you stop there.

Speaker 3:

I think that's the key thing with some of these large language systems: the difference between what was said in that speech versus what you need. You do need some sort of monitoring for safety; that's your seatbelt, I love that analogy, versus saying, nah, YOLO, we're never going to wear seatbelts. That's dangerous, and saying that any sort of safety or security or testing on a system is bad, I 100% don't agree with. But I actually think the tweaks that Europe has made based on that speech are just, hey, make sure we focus on the seatbelts and not on regulating the color of the seatbelt. If that's where we land, that actually lands us in an okay spot.

Speaker 1:

For sure. And, side note, we could get into an entire tangent about the shift from safety to security, but that's an entirely different podcast, and there are probably a couple out there already.

Speaker 1:

That could actually be a good new podcast. But honestly, I don't necessarily care what they call it on the surface. It sounds like that was brought up too, and, like I said, clearly a tangent. We'll keep looking at it, whether it's just a new way to market or it really is an emotional tone of, we don't want to say safety, we want to say security. To be continued. Anyway, shall we get to it? Let's get to it. Yeah, I think we had a whole podcast we could do right there.

Speaker 2:

But let's dive into our modeling paradigms. Let's talk a little bit about the types of models that are out there in the world, why we use them, and a little bit of history about them. I think covering all this ground is going to be really useful because, as you see LLMs and these big foundational models being used and applied on these tasks, you don't want to be left without context. You want to know what's going on historically and how we're building upon these older systems. So we'll be talking specifically about agents, generative modeling, mathematical optimization, prediction, and control theory. That's by no means all the paradigms, but these are the ones where I anticipate seeing our current landscape moving and trying to push the boundaries.

Speaker 2:

So let's start from the top, with agents. Picking up where we left off last time, agent-based modeling is used to gain a deeper understanding of complex systems. If you think back to our previous episode with Rachel Lissacco doing cosmology research: we had a bunch of simulated stars and black holes in space, we wrote predefined code that defines how they work, put them into the universe, and watched how they interact, and then we can learn about the universe from how these individual agents interact. We've seen simulations of people, animals, banks, investors, where we just put them in a market and see how they interact with each other.
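The core of agent-based modeling, simple predefined rules producing emergent behavior through interaction, fits in a few lines. This is a toy, hypothetical wealth-exchange model, not any of the simulations mentioned above: every agent follows one rule (hand one unit of wealth to a random other agent), and a skewed distribution emerges from the interactions alone.

```python
# Toy agent-based model: 100 agents, one rule each, emergent inequality.
import random

random.seed(0)
N, STEPS = 100, 10_000
wealth = [10] * N  # everyone starts equal

for _ in range(STEPS):
    giver = random.randrange(N)
    if wealth[giver] > 0:            # agents can't go below zero
        receiver = random.randrange(N)
        wealth[giver] -= 1
        wealth[receiver] += 1

wealth.sort()
print("poorest:", wealth[0], "richest:", wealth[-1])
```

Nothing in the rule favors any agent, yet the distribution spreads out; that kind of emergent property is exactly what agent-based modelers study.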

Speaker 3:

Yeah, and this is where, I love that episode we did in the past, we got some economics and other takes on using it in the field too. These are just part and parcel of academic and scientific professions, using agent-based models. And, as we talked about last time with agentic: you build agents of, say, a planet, with all the atmosphere information and all of the how-does-it-get-to-the-next-decision. Same if you're building rational agents for an economic simulation or something; it's the same thing. You're building the individual utility.

Speaker 3:

What motivates Sid or myself or Susan, what motivates you to do the next step, how do we go through the world and do that? This is where we think a lot of those techniques are still applicable. So when we're seeing agentic AI, it's really agent-based modeling at the highest level that's going to be more useful to end consumers, is kind of how I see it. And there's so much rich stuff there that I'm like, you're spoiling it when you say you've got to put an LLM in there. The concept is great, it's being used in high-caliber research, it's the foundation of a lot of things we do today, and a lot of our scientific knowledge comes from using agents.

Speaker 2:

So that's why I'm very excited about the general concept, if we can tweak the execution of it by a lot of companies. Yeah, I think that's totally fair, and it leans into this idea that an AI agent is not just, you know, Devin or Proxy on your computer. An agent should be a whole ecosystem of these types of bots interacting with each other and creating some emergent properties. There's been some interesting research recently showing, for example, that you can create a bunch of LLM agents, put them in a room together, and they create a video game together: one LLM becomes the product manager, one becomes the coder. This, I think, is going to be the real crux of modern agentic modeling, and I wouldn't say it's so much what we're seeing now, where the personal assistant is not really interacting with you in a strongly agentic way. And this comes out of previous work from von Neumann in the 50s, where he tried to model a cell in a body: all these little pieces have fixed little things they do, and then they interact together to play something like John Conway's Game of Life. So, as we look at the agent paradigm, think about how we can build smarter, simpler agents that work together to help us either describe specific issues or solve bigger problems. And now I think we can hop into generative.

Speaker 2:

We probably all know generative from talking to ChatGPT: you put language in, it generates language out. This goes back really, really far, but most famously to Weizenbaum in the mid-1960s with the ELIZA chatbot. For anyone that doesn't know ELIZA, this was a mock psychotherapist where you would tell it how you're feeling and it would, quote unquote, intelligently give your statement back to you as a question, and it would generate language in this way. So you'd say, oh, I'm really worried about my goldfish, and it would respond with, what about your goldfish is concerning you? This is what generative was all about back in the day.

Speaker 2:

And, you know, with RNNs and LSTMs in the NLP domain, we saw incremental improvements to this, where we're now improving the way that we probabilistically search for the next best word to generate.

Speaker 2:

And if you repeat this process over and over and use more powerful models, you get to stuff like GPT. That's over here in the language domain, but let's not forget that this is happening in the image domain too. It goes back decently far, but it really only gets good once we get generative adversarial networks, or GANs, which are largely forgotten now, but I think they're really wonderful, and we're still seeing a little bit of their inspiration in stable diffusion. The idea is that generative is a paradigm where we take in inputs and want to generate brand-new outputs that have never been seen before, outside of training and testing. Usually this basically means you make an amalgamation of what was input, "Why are you worried about your goldfish?", and not something totally brand new.
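The ELIZA-style exchange above can be sketched with a couple of patterns. This is a minimal, hypothetical sketch, nowhere near Weizenbaum's actual program, but it shows the core trick: match the user's statement, swap the pronouns, and hand it back as a question.

```python
# A tiny ELIZA-style generative sketch: pattern-match, reflect pronouns,
# return the statement as a question.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    """Swap first-person words for second-person ones ('my' -> 'your')."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def eliza(statement):
    m = re.match(r"i'?m (?:really )?worried about (.+)", statement, re.I)
    if m:
        return f"What about {reflect(m.group(1))} is concerning you?"
    m = re.match(r"i feel (.+)", statement, re.I)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    return "Please tell me more."

print(eliza("I'm really worried about my goldfish"))
# -> What about your goldfish is concerning you?
```

No probabilities, no learning; the "generation" is an amalgamation of the input, which is exactly the point being made above.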

Speaker 3:

Yeah, I think that was a great synopsis of generative. And in a lot of the use cases we're seeing enterprises adopt, you could actually use an ELIZA for some of the aspects, like summarization: taking data points and making summaries. Or basic things like spellcheck. When you're actually trying to make applications, sometimes it's, do you actually need an LLM? Are you trying to do a generative-type thing where you could even randomize some nouns and verbs in there to make it sound unique, so you're not always just templatizing text? But really, interacting with text is what everybody is excited about, and there are other techniques that go back, as I mentioned; this is not a new problem, and there are some good techniques out there.

Speaker 1:

Yeah, do you think that some of what we're seeing, even in generative now, is probably more of this than LLMs the way they're being marketed right now?

Speaker 2:

That's a really good question. I think that when we see these really, really solid products that do the job well, say you have an email and you want to rewrite it, the really good products aren't really OpenAI, because they create really flat and simple language, and we've done some research that shows they're really psychologically homogeneous. The really good systems are built by teams like Grammarly, who have actual in-depth expert knowledge of grammar and semantics and sentence structures and make informed decisions about how you can make language changes. So a lot of these expert systems aren't just an LLM under the hood; they are powered by LLMs, but they're not just ChatGPT.

Speaker 3:

And that's where, apologies Susan, the thing that is really interesting is that a lot of enterprises think they can just grab a ChatGPT and they're good to go for their use cases, but the actual applications still have to have an expert in there, like Sid mentioned.

Speaker 2:

And then I think a paradigm that's being forgotten, but which is kind of just fundamental machine learning and statistical learning, is this mathematical optimization idea. You might think of this as regression. This goes back to World War II, where we were solving linear equations, doing integer programming, and optimizing for things like supply and demand. This is, by and large, the bread and butter of how these modeling techniques were built, and it's what's underneath all these large language models. So I would anticipate that we're going to start to see foundation models potentially being used to try and solve these problems, and I think that'll be most relevant in situations where you don't have enough information and want to generate new information or new inputs.

Speaker 3:

Yeah, but for those mission-critical cases, and this is where I think research will definitely start going in that direction as well. Google Maps is an example: it's just basic mixed-integer programming behind the scenes, and these are often very low-computation and really just solving a specific problem, if you can define it. This is where the allure of LLMs and such is that you don't have to have a defined problem. But if you really boil it down, route planning, supply-chain logistics, and so on, this is major stuff, as we all learned with COVID when the supply chain got messed up, or even airplanes queuing up and landing. This stuff is super complex, and you very much know when it goes wrong, but we're all so accustomed to it just working correctly. It's all based on these integer mathematical optimizations where you define the variables and solve for a specific objective, and this, as I mentioned, is a basis of machine learning and things like that.

Speaker 3:

But it's a technique that operations research focuses on a lot, while the computer science community has kind of passed it over as too simplistic, and I'd love to see a bit of a turn back. As Sid's mentioning, maybe some of the generative folks will look at that, or maybe just in general, now that we have more computer chips, we can take a new look at other problems.

Speaker 3:

But this is a silent hero. If you think of the iceberg of modeling, everybody talks about the LLMs, but the actual modeling being used today at large companies, Walmart, anybody: how do you do the SKUs, how do you supply-chain everything, the eggs and all of that? It's all based on optimization and this integer programming stuff. And you could argue that your iPhone's GPS using Google Maps is an agent, so it's an agent using mathematical optimization. These same techniques are what you'd actually use in those agents for the planetary physics and things like that too. So it's kind of an unsung hero that I'd love to see a little more focus back on from the wider modeling community.
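The shape of these integer-optimization problems, decision variables, constraints, an objective, can be shown with a pocket-sized, made-up logistics example. Real systems use mixed-integer programming solvers; this brute-force sketch just illustrates the structure: pick how many trucks to send on each route so demand is met at minimum cost.

```python
# Toy integer optimization by brute force: minimize shipping cost
# subject to a delivery constraint. All numbers are illustrative.
from itertools import product

routes = {"north": 120, "south": 90, "express": 200}  # cost per truck
capacity = 10          # units each truck carries
demand = 55            # units that must be delivered

best = None
for counts in product(range(7), repeat=len(routes)):   # trucks per route
    delivered = sum(c * capacity for c in counts)
    if delivered < demand:                             # constraint
        continue
    cost = sum(c * price for c, price in zip(counts, routes.values()))
    if best is None or cost < best[0]:                 # objective
        best = (cost, dict(zip(routes, counts)))

print(best)  # -> (540, {'north': 0, 'south': 6, 'express': 0})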

Speaker 2:

Yeah, absolutely. I mean, maybe we can all remember that even in the last 10 years there was a moment where everything was prediction: AI predicts obesity based on how you tweet, AI predicts cancer based on radiology scans. That was kind of a heyday where AI was predicting and solving all these problems that we couldn't solve before. So let's not forget that this paradigm exists. This paradigm is, I'm going to say, 95% of how AI actually affects your life. It's Walmart deciding what goes in your shopping cart, and it's what decides if you get asked to come in for an interview. It's these types of prediction and mathematical optimization models.

Speaker 3:

Yeah, and that's where you hit on a really good point. This is where, with actuaries and some people in computer science, you get this correlation-versus-causality conversation. And this is where, and we'd love to get Christoph Molnar back on to talk about some causal modeling, it doesn't actually matter what technique you use as much as the approach. We've tried to democratize data science a lot, everybody's building models and things, but it's these correlations that can get you in trouble.

Speaker 3:

When you start talking about the cancer predictions and things like that, it gets very dangerous when some large provider starts making decisions based off of predictions without really understanding correlation versus causality. And this is where, no matter how you slice it, you can't really cut an expert out of it. You can get better paradigms, but even for the most effective generative applications, as I mentioned, there are linguistics folks behind the scenes helping with that. Same with causal modeling: understanding the relationships is very key.

Speaker 1:

When we were talking about mathematical optimization, we also just kind of flowed right into prediction and classification as well. What separates the two?

Speaker 2:

Yeah, that's a really good question. So when we think about regression, which I'll use as shorthand for mathematical optimization and prediction, we basically want to think about what type of outcome we want, and all these paradigms are basically just about generating different types of outcomes. A generative model wants to create totally new output, and a regression model wants to create a continuous numerical prediction, say between zero and one hundred, of what the right answer is: how much does a house in Boston cost based on how many bedrooms are in there?

Speaker 2:

The prediction paradigm is focused on creating outcomes that are discrete and finite and are exactly one thing. If I go into my sock drawer, what color sock is coming out? Hard predictions about the specific thing that's going to happen. And prediction is where a lot of that stuff we were talking about earlier, like detecting cancer, comes from. It comes from using these neural networks, which are really well designed to approximate basically any real-world function and predict some outcome.
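The distinction above in miniature: a regression model returns a continuous number, while a classifier returns exactly one discrete label. Both sketches below are 1-nearest-neighbor on toy, made-up data, chosen only to make the output-type difference visible.

```python
# Regression vs. classification: same lookup logic, different outcome types.

def nearest(train, x):
    """Return the label of the training pair whose input is closest to x."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# Regression: bedrooms -> price (a continuous number on a scale)
price_data = [(1, 250_000), (2, 340_000), (3, 410_000), (4, 520_000)]
print(nearest(price_data, 3))   # -> 410000

# Classification: drawer position -> sock color (exactly one category)
sock_data = [(0, "black"), (1, "black"), (2, "white"), (3, "blue")]
print(nearest(sock_data, 2))    # -> white
```

The model is identical; what changes is whether the outcome space is a number line or a finite set of labels.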

Speaker 2:

And so that brings us to our final paradigm of the day, which is control theory. I put this last because it sits at the intersection of all of them, and it really is the bridge between a standard modeling paradigm and a safe paradigm. So how can you think about control theory? A really easy example: imagine driving a car and putting on cruise control. Your car knows how fast it's going, how much power the engine is using, what speed you set it to, and, in some cars, even what angle your car is driving at. Its job is to find the amount of power to put into the engine to minimize the difference between the speed you requested and the current speed. So the control module is what sits between the inputs and the desired output.
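The cruise-control example can be sketched with a proportional controller: at each step, pick engine power in proportion to the gap between the set speed and the current speed. Gains and "physics" here are made up for illustration.

```python
# A minimal proportional cruise-control sketch: power is proportional
# to the error between requested and current speed.

def simulate(setpoint, speed, steps=50, kp=0.5, drag=0.05):
    for _ in range(steps):
        error = setpoint - speed        # what you asked for vs. what you have
        power = kp * error              # controller output
        speed += power - drag * speed   # toy car dynamics with drag
    return speed

final = simulate(setpoint=65.0, speed=40.0)
print(round(final, 1))  # -> 59.1, settling short of 65
```

Note the steady-state offset: under drag, a pure proportional controller settles below the setpoint, which is one reason real cruise controls add an integral term (the "I" in PID) to drive the remaining error to zero.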

Speaker 3:

Oh yeah, definitely. Control theory has been around for a long time, and it's what we even used in Apollo: the navigation was really a Kalman filter, which is essentially a way of taking a measurement of where you are, predicting where you're going to be next, and then truing that up, so you have an internal state that can help you guide and do navigation. So we even went to the moon with something like this. It's very low-computation, and that's a common thread here: these are old methodologies, we've talked about most of them today, and they're very effective for what they are.
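The predict-then-true-up cycle just described fits in a few lines for the one-dimensional case. The noise parameters and readings below are illustrative, not Apollo's: predict where you'll be (uncertainty grows), take a noisy measurement, then blend the two according to their uncertainties.

```python
# A one-dimensional Kalman filter sketch: predict, measure, true up.

def kalman_1d(measurements, q=0.01, r=1.0):
    x, p = 0.0, 100.0               # state estimate and a vague initial variance
    for z in measurements:
        p += q                      # predict: uncertainty grows between steps
        k = p / (p + r)             # Kalman gain: how much to trust the measurement
        x += k * (z - x)            # update: correct the estimate toward z
        p *= (1 - k)                # update: uncertainty shrinks after measuring
    return x

readings = [1.1, 0.9, 1.05, 0.95, 1.0]   # noisy readings of a true value near 1.0
print(round(kalman_1d(readings), 2))     # close to 1.0
```

Each step is a handful of arithmetic operations, which is why this was feasible on 1960s flight hardware and is still used where compute is scarce.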

Speaker 3:

I don't love how the trend within computer science has gone from these very targeted expert methods to, let's just get these large generic things that are going to solve everything and then put a big marketing engine behind it, which is kind of what it seems like. There's so much processing power now that we can be a little sloppier on these things. But what we'd love from this reemergence of modeling paradigms is: what if some of these amazing brains working on making the next LLM stopped focusing on this same paradigm? There are other ways we could do it, energy-based methods and a lot of different things. I personally think the current path we're going down with large language models is kind of a dead end with this exact paradigm. Even DeepSeek changed it up a little bit, which is why it got as much press as it did; it was taking a slightly different approach. And what if, instead of these things being siloed? Kalman filters are being used some in cybersecurity and things like that now, but it's really aviation-type work; it's kind of been deprecated.

Speaker 3:

And then you have the optimization work that's very much done in operations research. What if we brought those a little more into computer science research, same with real agent modeling, instead of these little scientific fiefdoms each having the techniques they use, and the computer science folks just trying to make bigger, larger generative applications? How can we cross-pollinate some of these different methodologies and really put some of the brains and economic horsepower behind, hey, is there something here? Or, as Sid mentioned, there is already some research going on in this space. Behind a lot of generative AI is some sort of mathematical optimization; what if we looked at the latest and greatest in how operations research does it, and what if that gets married with an LLM or something generative, for us to then build these agents? I think there's a lot of magic here if we can cross-pollinate a little better.

Speaker 1:

Yeah, and it goes back to a point that we talked about a long time ago. There are all of these different modeling paradigms, but this brings into light that we're headed toward a future of complex modeling systems: models on models on models. So, when you consider that there are several tools that are each great for certain jobs, where do we go from here?

Speaker 2:

What I'd love to see happen is that we think about all these paradigms together. Right, we think about a model that needs to interact like an agent, and instead of just putting that agent into the world, behind that agent is some control theory, and behind that control theory are some of our predictions and our regressions. So we have robust, simple models at the bottom, which feed up into a control system that calibrates those outputs against a desired outcome, and then that whole package interacts with the world as an agent. So we build up from fundamentals, but we get the type of outcomes that we want, while making sure the model is more performant, interpretable, reliable, and affordable, and ultimately it requires less governance.
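The layering Sid describes, simple predictive models at the bottom, a control layer calibrating them toward a target, and an agent interface on top, can be sketched in a few lines. This is a minimal toy illustration: all function names, coefficients, and the proportional-control gain are made up for the example, not from the episode.

```python
def predict_demand(features):
    """Bottom layer: a simple, interpretable regression (hypothetical coefficients)."""
    return 2.0 * features["price"] + 5.0 * features["season"]

def control_adjust(prediction, setpoint, gain=0.5):
    """Middle layer: a proportional controller nudging the output toward a desired setpoint."""
    error = setpoint - prediction
    return prediction + gain * error

def agent_act(features, setpoint):
    """Top layer: the 'agent' interface the outside world interacts with."""
    raw = predict_demand(features)
    calibrated = control_adjust(raw, setpoint)
    return "increase_supply" if calibrated > setpoint else "hold"

# The agent's behavior is traceable down through each layer:
print(agent_act({"price": 10, "season": 1}, setpoint=20))  # increase_supply
print(agent_act({"price": 5, "season": 1}, setpoint=20))   # hold
```

Because each layer is a small, inspectable function rather than a black box, the whole stack stays interpretable, which is exactly the governance argument being made here.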

Speaker 3:

I think that's a big point: using the right tool for the job. If all you have is a hammer, everything looks like a nail, right? We're proposing that you have all of these different methodologies, know how to use them, and maybe even put them together in the same large applications. But most importantly, as Sid just mentioned, governance.

Speaker 3:

These large, generic, unwieldy, monolithic, we-do-everything models are harder to govern, especially if you think of these multi-state agentic things where you're having quote-unquote reasoning done by large language models. A multi-state system is going to have factorial consequences for monitoring, access control, and the like, as we outlined last time in our blog post. But if you have a very understandable, explainable, repeatable mathematical optimization as the brains of the operation, that makes the governance burden a lot less. So companies should really be looking at the whole picture: some of these methodologies are actually a lot cheaper, and I would argue most of the time there's no performance trade-off, there's a performance increase. But even if there were, for the sake of argument, a 4% decrease in performance, is that worth the increased compliance costs? I don't think so, and that's what bothers me: no one is really talking about the increased compliance burden that comes with agentic tech.
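The point about multi-state systems exploding the monitoring burden can be shown with a back-of-the-envelope count. The numbers below are invented for illustration: even a small pipeline of stages, each with a handful of possible states, produces a joint state space that grows exponentially, and every joint state is something a monitor may need to reason about.

```python
from itertools import product

# Hypothetical pipeline: 4 agentic stages, each in one of 3 states
# (e.g. "ok", "degraded", "failed") -- numbers chosen for illustration.
stages = 4
states_per_stage = 3

# Every combination of per-stage states is a distinct joint state to monitor.
joint_states = list(product(range(states_per_stage), repeat=stages))

print(len(joint_states))  # 81 joint states from only 4 small components
```

Adding one more stage multiplies the count by another factor of 3, which is why a single understandable optimization core is so much cheaper to govern than a chain of opaque reasoning steps.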

Speaker 2:

Yeah, absolutely, and I think that's where we land here. Right, you need to think about these paradigms as building blocks and put them together to build strong, robust models, because if you take just one of these paradigms and go all in without considering the others, you might create a model that's effectively ungovernable.

Speaker 1:

Right. The way you guys describe this makes me think of opening up an old watch that has a lot of cogs and wheels, different-sized wheels doing different operations to keep the watch precise and on time. I think of each of those wheels, different sizes, different spokes, as the different models integrating. One can carry a bigger load than another, but they're all working at the same time toward a certain kind of precision. But unlike the watch, where you can see which wheel might break or which might be stopping up the entire system, a modeling system left ungoverned won't have that same visibility to measure performance and precision. And I think that's really what we're trying to get to in that future.

Speaker 2:

Yeah, that's exactly right. It gets at this idea that if you put a bunch of black boxes in a watch and turn it on, you no longer know how the watch works, and you don't know whether the pieces of the watch are helping each other achieve the same goal.

Speaker 3:

Well, this is a massive topic and we've barely scratched the surface. There are so many more paradigms, and I don't think we did any one of these full justice; it was very much a drive-by of concepts. So I'd love to hear feedback from the audience, because we could go more in depth on all of these. But the main takeaway, if I had to pick one, and I'd love to hear both of yours as well, is: choose the right tool for the job. No matter what pop culture is saying or whatever the buzz is on Reddit, don't just assume LLMs are the hammer for everything. They're not. They're effective for certain things, but they're very ineffective for other things, and the whole industry needs to start realizing what the best tool for the job is, instead of trying to fit a square peg into a round hole just because they've put a lot of investment into it.

Speaker 3:

So be very smart about tackling what you're trying to accomplish, and that also means defining your model development lifecycle: what does good governance look like, what are you trying to do, boxing that in, and then choosing the best tool for the job. As Christoph Molnar had it in his book that we talked about last time, his advice was to be a T-shaped modeler: be very good at one or two things, because nobody can be an expert at everything, but have a broad horizontal understanding of what else is out there. Sid and I have different expertise as well, but I'll know enough to know when to call Sid, right? You want that ability to know what's out there, versus the myopic deep-neural-network-only view that a lot of people are being trained toward.

Speaker 2:

Yeah, I'd second that. It's Molnar's idea: if you're only practicing one type of paradigm, explore the other paradigms, get to know them, understand them, and you'll be more prepared and ready when you need to work with them.

Speaker 1:

And with that note, we thank you for joining us today on this episode. If you have any questions about the paradigms shared here today, or any of our other episodes, please reach out to us at aifundamentalists@monitaur.ai. Thank you.
