The AI Fundamentalists
A podcast about the fundamentals of safe and resilient modeling systems behind the AI that impacts our lives and our businesses.
Modeling with Christoph Molnar
Episode 4. The AI Fundamentalists welcome Christoph Molnar to discuss the characteristics of a modeling mindset in a rapidly innovating world. He is the author of multiple data science books including Modeling Mindsets, Interpretable Machine Learning, and his latest book Introduction to Conformal Prediction with Python. We hope you enjoy this enlightening discussion from a model builder's point of view.
To keep in touch with Christoph's work, subscribe to his newsletter Mindful Modeler - "Better machine learning by thinking like a statistician. About model interpretation, paying attention to data, and always staying critical."
Summary
- Introduction. 0:03
- Introduction to the AI Fundamentalists podcast.
- Welcome, Christoph Molnar
- What is machine learning? How do you look at it? 1:03
- AI systems and machine learning systems.
- Separating machine learning from classical statistical modeling.
- What’s the best machine learning approach? 3:41
- Confusion in the space between statistical learning and machine learning.
- The importance of modeling mindsets.
- Different approaches to using interpretability in machine learning.
- Holistic AI in systems engineering.
- Modeling is the most fun part but also the beginning. 8:19
- Modeling is the most fun part of machine learning.
- How to get lost in modeling.
- How can we use the techniques in interpretable ML to create a system that we can explain to stakeholders that are non-technical? 10:36
- How to interpret at the non-technical level.
- Reproducibility is a big part of explainability.
- Conformal prediction vs. interpretability tools. 12:51
- Explainability to a data scientist vs. a regulator.
- Interpretability is not a panacea.
- Conformal prediction with Python.
- Roadblocks to conformal prediction being used in the industry.
- What’s the best technique for a job in data science? 17:20
- The bandwagon effect of Netflix and machine learning.
- The mindset difference between data science and other professions.
- Machine learning is always catching up with the best practices in the industry. 19:21
- The machine learning industry is catching up with best practices.
- Synthetic data to fill in gaps.
- The barrier to entry in machine learning.
- How to learn from new models.
- How to train your mindset before you model.
Good AI Needs Great Governance: Define, manage, and automate your AI model governance lifecycle from policy to proof.
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.
Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:
- LinkedIn - Episode summaries, shares of cited articles, and more.
- YouTube - Was it something that we said? Good. Share your favorite quotes.
- Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
The AI Fundamentalists, a podcast about the fundamentals of safe and resilient modeling systems behind the AI that impacts our lives and our businesses. Here are your hosts, Andrew Clark and Sid Mangalik.
Susan Peich: Hello, everybody. Welcome to the AI Fundamentalists. I'm Susan Peich, and together with Sid and Andrew, our fundamentalists, we're welcoming our first guest of the podcast: Christoph Molnar. He is the author of multiple easy-to-read data science books such as Modeling Mindsets, Interpretable Machine Learning, and his latest book, Introduction to Conformal Prediction with Python. He is also the curator of the Mindful Modeler newsletter, which you should subscribe to as soon as you listen to this episode. So welcome, Christoph.
Christoph Molnar: Hi, thanks for having me.
Sid Mangalik: All right, Christoph, Sid here. Let's start off with something for the listeners, basically to ground them in the world that you come from. When you think about AI and system modeling, how do you look, at a high level, at what AI systems and machine learning systems are?
Christoph Molnar: So I very much prefer the term machine learning, I would say. And at the fundamental level, machine learning is about solving a task with the help of data. So you learn from data how to solve the task. If it's supervised machine learning, the task is usually prediction. If it's unsupervised machine learning, the task is clustering, and so on. But that's, I think, my favorite definition: that you solve a task.
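For listeners who want to see this definition in code, here is a minimal sketch (ours, not Christoph's), assuming scikit-learn: the supervised path learns to predict labels from data, while the unsupervised path clusters the same data without any labels.

```python
# Minimal illustration of the two task types named above, assuming scikit-learn.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Supervised: learn from (X, y) how to solve a prediction task.
clf = LogisticRegression(max_iter=1_000).fit(X, y)
print(clf.predict(X[:3]))

# Unsupervised: only X is given; the task is clustering.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(clusters[:10])
```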
Sid Mangalik: Yeah, that's good. And then, leaning on your book here a little bit, how would you separate machine learning from something more like classical statistical modeling, as was done in the old stats days?
Christoph Molnar: Yeah. If you ask other people, you will, I think, get many different answers. And my answer would be that there's quite a difference. If you start with the classic statistical modeling approach, you might even end up with the same model in the end, but you approach the problem, or the task, very differently from what I would call machine learning. In classic statistical modeling, you start by thinking about the data: how it was produced, what would be the right distribution to model the data, the whole data generating process. And it's always very important to think about the interpretability of the model. And if you're a Bayesian, you might even think of a Bayesian model, and so on. But with machine learning, you approach it from the other direction. You first define how you would evaluate your model's performance in the end, say accuracy or something. And going from that, you start a kind of contest between models, which one is the best model, if you're in a supervised machine learning setup, and then pick the model which just worked best on your data, given some constraints. You might end up with a linear regression model in both cases. But the reasoning with which you arrived at the model is very different, and so are the assumptions that you put in and, eventually, what you're allowed to do with the model.
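To make the contrast concrete, here is a minimal sketch (our illustration, not from the episode), assuming statsmodels and scikit-learn: both paths can end at linear regression, but one starts from a data generating process and the other from an evaluation metric and a model contest.

```python
# Two mindsets, one possible final model. Assumes statsmodels and scikit-learn.
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=200)

# Statistical mindset: posit a data-generating process, fit it, inspect
# coefficients and their uncertainty.
ols = sm.OLS(y, sm.add_constant(X)).fit()
print(ols.summary())

# Machine learning mindset: fix an evaluation metric first, then run a
# contest between candidate models and keep the best performer.
candidates = {"linear": LinearRegression(),
              "forest": RandomForestRegressor(random_state=0)}
for name, model in candidates.items():
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(name, mse)
```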
Andrew Clark: I think that's a great point. I really liked that. There's so much confusion right now in the space; people are conflating statistical learning with machine learning, and everybody gets all bent around the axle: if you're using deep neural networks, that's machine learning, versus if you're using linear regression, that's statistical learning. But I love how you just summarized it, very much in line with your great new book, Modeling Mindsets: it's really the mindset of how you approach it. You even ended by saying linear regression could be statistical, or it could be machine learning. It's how you're approaching the problem, the mindset you're using, not the tooling. And I think that's very murky right now for a lot of people.
Christoph Molnar: I've seen statisticians argue that machine learning is just statistics, and on a technical level, it might be true. I mean, if you're a trained statistician, you can quite easily understand many concepts of machine learning. But the way you approach a problem, a real-world task, is very different coming from these two mindsets. My personal background is that I was trained as a statistician, then later turned into more of a machine learner, kind of self-taught through Kaggle competitions and so on. So I experienced this clash of mindsets myself, these very different ways to approach modeling.
Sid Mangalik: And I noticed that through line you mentioned, where through interpreting and understanding the model, we can get some sense of how it is performing and what it is performing on. And if we find that it's, say, single-feature modeling, then we're going to end up with a linear regression model, and it's like, oh, why are we applying this statistical learning mindset to a problem in that traditional machine learning setting? So digging in on the interpretable machine learning piece a little bit: what do you feel has been the most successful in helping people understand the performance of their models? What technologies are people actually using? What can people really do today?
Christoph Molnar: There are different approaches for that. I would say there are even two schools. One, which is a bit similar to the statistical modeling approach, says we just use interpretable models from the start. That's, of course, very restrictive, and it's also very difficult to say what actually is an interpretable model. It could mean that you just use linear regression models, decision trees, and so on to solve your problem, so that in the end you can still understand how your model makes predictions. But it's very restrictive, and if you do this optimization process of trying out different models, your best model might not be in the set of models that are interpretable. So there will be some trade-off, because you're restricted in the model set you look at. The other approach is to start with whatever model is best for solving your task, whichever model makes the best predictions, and then leverage tools that make your model more interpretable. There's a whole range of tools now available. One very famous one is SHAP, which is a method to explain individual predictions, and which can also be aggregated across the data to give you insights about the model overall, like the feature importance, how individual features affected your predictions, and so on. There are many of these so-called model-agnostic tools that you can apply to any model, and I think those are also the most versatile and famous ones. They naturally fit with this supervised machine learning mindset, where you say, I don't know which model will come out as the winner at the end of this process of hyperparameter tuning and model selection. So I would say this set of model-agnostic methods is good for interpretability.
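As an illustration of that second school, here is a hedged sketch (ours, not Christoph's code), assuming the `shap` package and scikit-learn are installed. The point is that the same model-agnostic calls work no matter which model wins the tuning contest.

```python
# Model-agnostic interpretability: local SHAP values plus a global
# permutation importance. Assumes `pip install shap scikit-learn`.
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any winning model could be swapped in here without changing the rest.
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Local explanation: per-feature contributions to individual predictions.
explainer = shap.Explainer(model.predict, X_train.sample(100, random_state=0))
shap_values = explainer(X_test.iloc[:5])
print(shap_values.values)

# Global view: permutation feature importance on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
print(dict(zip(X.columns, result.importances_mean.round(3))))
```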
Susan Peich: That's really interesting, the interpretability part. Can we switch gears just a little bit? When you think about going beyond simple modeling, and instead thinking about holistic AI in systems engineering, what comes to mind for you?
Christoph Molnar: Yeah, so I think as a data scientist it's easy to focus on modeling, and it's the most fun part. But modeling always has a beginning, and it doesn't last long. If you do a Kaggle competition, you only see this modeling part, the fun part where you optimize your model. And in fact, a lot of things already had to be decided beforehand. If you see the whole lifecycle as a holistic process, there are many questions you have to answer before you even start modeling. What's your goal in modeling? Is machine learning even the best approach to solve your problem? Maybe it is, maybe it isn't; maybe some manager just decided deep learning is cool and wants you to solve this task with deep learning, but it might not be the best approach. And also the way you frame your problem, the thing that you want to predict, for example, makes or breaks your model. You could have the best model, but if it answers the wrong question, it will be a really bad model, of course. And if you figure out together with the stakeholders what would be the best way to frame the problem, how the model would be helpful in the overall picture, then even a mediocre model would outperform the best model for the wrong question. And after you're done with modeling, there's also this long chain of things: the model has to be deployed somewhere, the model might have to be monitored, there might be a distribution shift and you have to account for that somehow. You might have to look out for not only predictive performance but other factors, like latency, how fast the model is, and so on. So I think it's easy to get lost in the fun parts of modeling.
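One of the post-deployment concerns Christoph lists, distribution shift, can be monitored with a simple statistical check. The sketch below is our illustration, assuming SciPy; the `drift_alerts` helper and the age feature are hypothetical.

```python
# A toy monitor for distribution shift between training data and live
# data, using a two-sample Kolmogorov-Smirnov test from SciPy.
import numpy as np
from scipy.stats import ks_2samp

def drift_alerts(train_col: np.ndarray, live_col: np.ndarray,
                 alpha: float = 0.01) -> bool:
    """Return True if the live feature distribution differs from training."""
    stat, p_value = ks_2samp(train_col, live_col)
    return p_value < alpha

rng = np.random.default_rng(1)
train_age = rng.normal(45, 10, size=5_000)
live_age = rng.normal(52, 10, size=1_000)   # the population has shifted
print(drift_alerts(train_age, live_age))    # True: investigate before trusting predictions
```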
Unknown: Yeah, I think we definitely feel that as data scientists, where we're so excited to find the model, tune the model, and get these hyperparameters set. And then we have a notebook at the end, and we say, okay, engineering team, do the rest of the work. And we get lost in the systems engineering piece, which is basically making models available and usable to people. So let's say you're at the end of the day, and you have your machine learning system, and you've deployed it, and you're generally happy with it. But now a regulator comes to you and says, this is great, can you explain to me how this model works? How can we use the techniques in interpretable ML to create a system that we can explain to stakeholders that are non-technical? They might struggle to look at a SHAP plot and understand it. So how can we interpret at the non-technical level?
Christoph Molnar: Yeah. So for a few months I actually worked at a company in the regulatory space; then I figured out that I wanted to write books, so it was a really short stay. This was for medical devices, and what I noticed is that the regulatory part was still lagging far behind, still trying to figure out what should be reported and how it should be reported. And my impression was also that interpretability was just a smaller part of it. There's a lot of documentation: what data did you use, and how did you use it? Can you show that your model is the best-performing one, and things like that. I also want to make a distinction between this audit scenario, when a regulator comes in, and the other stakeholders. They might both be non-technical, but even for those two you already need different kinds of interpretability of your model. If someone has to use your model, then you need ways to make the model's predictions explainable so that they can act on a prediction. Whereas a regulator might want to see more like feature importance and things like that, overall information about your model.
Andrew Clark: That's a great point. One of the things we've really noticed is that for non-technical users, reproducibility is one of the key parts when they're saying explainability. Reproducibility is a big part of it: hey, if I take a cohort of users across, say, this medical device, you might have people of different age ranges and maybe different medical backgrounds; I don't know your company specifically, I'm just generalizing here. So let's take two users from each of these demographics, run them through this tool, the system, from the very top level down to the bottom, and see: does that make sense? Is it performing as expected? Do we see any differences based on factors that shouldn't be causing a difference? I've even seen feature importance throw people off sometimes; specifically, is this local, is this global, all that kind of stuff. It's really that reproducibility, of can I understand how we got from A to B, that sometimes solves it even more. It's interesting, because explainability to a data scientist versus a regulator is two very different areas.
Christoph Molnar: Yeah, I would say they're completely different use cases. Because as a developer, you can use interpretability to debug your model, or to communicate with your boss, having more means to talk about the model, not just saying, hey, that's the performance, but look, here are the most important factors. It kind of removes the distance between the data and the data scientist, because with interpretability you get these insights into the model. So you can also make decisions, like whether to use the model or not, and others can make the decision whether to trust the model or not, and so on.
Andrew Clark: Which is also an interesting point. Oftentimes I see interpretability billed as, oh, this makes everybody understand your model. I think you just hit it there: SHAP and some of these tools really help you, as a developer, build a better model. But it's not this panacea of, I just plug it in and now everybody knows what I'm talking about. I love how you highlight the difference.
Christoph Molnar: Yeah. I would say all these interpretability tools are just descriptors of your model. They show you some insights, but they never fully show you what the model does, because you might not need machine learning if it were that easy.
Susan Peich: One thing I want to talk about with you: when we, the fundamentalists, were reading your latest book, Introduction to Conformal Prediction with Python, we were really intrigued by an idea that you mention in it. I'm going to read it, just so we get everyone on the same page. The passage goes: "Uncertainty quantification can improve fraud detection in insurance claims by providing context to case workers evaluating potentially fraudulent claims. This is especially important when a machine learning model used to detect fraud is uncertain in its predictions. In such cases, the case workers can use the uncertainty estimates to prioritize the review of the claim and intervene if necessary." What roadblocks do you foresee with conformal prediction being used in industry?
Christoph Molnar: Yeah, so this was a hypothetical case. But I think conformal prediction is a bit like this model-agnostic interpretability, in that you can do it after you've built your model, so you don't have to integrate it in a complex way. In that sense, I think it's actually quite easy to deploy, because it's a post-hoc method that you can just add on to your machine learning model. It doesn't come at the cost of switching out your entire stack or anything. My impression is that it's just not that well known yet, though there's a growing interest in conformal prediction. The book was a way for me, because I had just discovered the topic, to learn about conformal prediction as well, and I was quite surprised that it's not more popular. Maybe the topic can seem a bit arcane, and that was one motivation for me to write the book. Because the concept, as you apply it, is actually quite easy; the math behind it is not necessarily easy. It still lives mostly in the scientific space, and with the book I tried to put it more on the side of application as well. So I think it just hasn't made that leap yet, and it's just about to make it.
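For readers curious what "post-hoc" means in practice, here is a from-scratch sketch of split conformal prediction for regression (our code, in the spirit of the book rather than taken from it), assuming scikit-learn and NumPy 1.22+. Any fitted model can be wrapped this way, at the cost of holding out a calibration set.

```python
# Split conformal prediction: wrap an already-fitted model with
# prediction intervals that have a finite-sample coverage guarantee.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2_000, n_features=5, noise=10.0,
                       random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest,
                                                test_size=0.5, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Nonconformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - model.predict(X_cal))
alpha = 0.1  # target 90% coverage
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction intervals around the point predictions.
preds = model.predict(X_test)
lower, upper = preds - q, preds + q
coverage = np.mean((y_test >= lower) & (y_test <= upper))
print(f"empirical coverage: {coverage:.3f}")  # roughly 0.9
```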
Andrew Clark: I thought it was a fantastic book; I highly recommend everybody go and read it. And I think we will see an uptick. But that's one of the things that's been interesting in data science versus other professions: there's often just a bandwagon effect of, what's Netflix doing? I know you're smiling, and this is audio, but Christoph was smiling when I said that, versus, what is the actual best technique for the job? A lot of data science is very computational, computer-science based, versus taking that modeling approach. And some of these other things: conformal is new, but variations on this exist in control theory and aerospace engineering; there are a lot of techniques that have been around a long time and are being used in these other fields, and they're being rediscovered in computer science sometimes. It's really that mindset difference of, how can we actually quantify a problem, or get confidence intervals, or actually understand what we're doing, versus, let's just throw a bunch of data at it and optimize it. I think a lot of it comes down to that mindset: a lot of practicing data scientists are in the ML mindset, versus, let me actually model, and understand that a model is a subset of reality, and how do I best quantify that?
Christoph Molnar: Yeah, I would say that if you do bare-bones machine learning, it's pretty dumb, kind of, because you just get dumb optimization in a way. But there are all these tools that you can build on top of your machine learning model. Some you have to integrate into your modeling process, like putting in domain knowledge and things like that, but you can also add interpretability afterwards, things like conformal prediction, so you can enrich your model in the end. And I think it just takes some time to accumulate all these tools, and to reach a state where we don't just throw a neural network at our problem, but have all these add-ons that make the machine learning model much richer, with interpretation and uncertainty quantification and all these things.
Sid Mangalik:Yeah, I think that's spot on. Right. It's like, you know, the machine learning industry is always catching up with all these practices, right? With the best practices with the interpretability. With the understanding the model with quantifying the uncertainty in their models. We're always modeling first and then figuring out how to fix it later.
Christoph Molnar: Yeah, a little bit, maybe. That's again because it's fun to model things. It's fun to have this modeling process where you have the pipeline and the benchmarking where you compare models, and this challenge to increase model performance, with feature engineering maybe, or finding a better neural network architecture. But integrating all these things around it can take a lot of time. And you have to gain that knowledge; you have to learn about conformal prediction before you can use it, of course.
Sid Mangalik: So with a lot of these techniques, we already fight a little bit with business stakeholders; it's like, oh, we have a training set of 5000, and a test-train split is already a hard sell. And then it's, can we have a calibration set? Can we have an extra evaluation set? That's a struggle; we often have to fight for business buy-in to reserve some data for these use cases. Is there a use case for something like synthetic data to try and fill in these gaps?
Christoph Molnar: Well, that really depends on the problem, I would say. I mean, maybe, if you have a good grasp on how to generate synthetic data that looks somewhat similar to what you would expect. But that's a real challenge; I think many settings are not structured in this way. Even if you have the perfect data, you still have data drift, for example, distribution shifts. And synthesizing data, or simulating data, is a challenge in itself. I come from academia, where you simulate data for simple things, to show that some method doesn't work in some edge case, and even that comes with so many decisions about how to simulate the data. So synthetic data is difficult to do, I would say.
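To illustrate how many decisions even a toy simulation forces, here is a small sketch (ours, hypothetical throughout): every line of the generator encodes an assumption that real synthetic data would have to get right.

```python
# A toy data generator in the academic style Christoph describes.
# Each commented "decision" is an assumption baked into the synthetic data.
import numpy as np

def simulate_claims(n: int, drift: float = 0.0, seed: int = 0) -> np.ndarray:
    """Simulate a hypothetical 'claim amount' feature; `drift` shifts the mean."""
    rng = np.random.default_rng(seed)
    age = rng.uniform(18, 80, size=n)                  # decision: age distribution
    base = rng.lognormal(mean=7.0, sigma=0.5, size=n)  # decision: skewed amounts
    return base * (1 + 0.01 * (age - 40)) + drift      # decision: age interaction

train = simulate_claims(5_000)
shifted = simulate_claims(1_000, drift=500.0)  # deliberate distribution shift
print(train.mean(), shifted.mean())
```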
Sid Mangalik: Yeah, it has definitely posed a big problem for us to work with, and to really assure that we're happy with the data: that it fits all the bounds, that it fits all the expectations, and that it matches the interactions of the original data. I want to close with one question, back to your first book, which we love, and I think this is just good for the listeners. One thing you talk about is basically how we can open our minds and hearts to new types of modeling and take different mindsets when approaching these problems. So maybe, can you give an example of how someone who's currently entrenched in the machine learning world can learn from a more frequentist approach, a more causal modeling approach, or a more Bayesian approach, and really improve their modeling?
Christoph Molnar: Yeah, the problem is that it's difficult to know in depth about these other mindsets, because they use different language; there's some barrier to entry. That's why I wrote Modeling Mindsets, to have a really short introduction to each of these mindsets. But there are many different things you can do. Well, you could read my book, obviously. But you could also do an online course on causal inference, just to get a little bit of input on how you would think if you want to do more causal learning, or try to read a book on frequentist inference, or go to a meetup where, like, Bayesians meet. These things are not easy, because there's often this language barrier and a different way to think. That was the difficult entry point for me as well. I learned statistics, and then when I did my first Kaggle challenge, I failed miserably, because I didn't understand the mindset: that you have to evaluate your models, that you have to benchmark different models, and so on, because I just fit a simple statistical model. So this process, I think, just takes time. And the best thing is to have an overview of what's out there, so that if you realize you're hitting some kind of barrier, some limitation of your mindset, of your approach, you at least know where to look.
Susan Peich: And Christoph, I took Andrew's advice from our earlier discussion, and I tried to skim through Modeling Mindsets. I totally appreciated the scenarios you set up, like when a Bayesian walks into a bar, to really illustrate your points about the premises of the different modeling mindsets. It also reminded me of interviews, in business and in a social context as well as in modeling, and you were kind of going there with it: I like interviews that are very intentional about simplifying two different paradigms, where the research can be super deep, but then you start to simplify it to a place where the two contexts can align. And that sounds like what you were doing with the different mindsets you were illustrating in that book. For this topic of how we can train our mindset and figure out the purpose of the model before we go into it, I thought that was very intriguing, and I would highly encourage that book, too, for anybody who's trying to start with that.
Christoph Molnar: Thank you for the positive feedback.
Susan Peich: Good. Yeah, that's my long way of saying: good enough for the layperson who did, like, a whole semester of linear regression. That was it.
Christoph Molnar: My goal was to keep out the math and details and just focus on the high-level view, which, even if you're deep into the math and the methods, is sometimes hard to see, right? When I wrote the book, it was mostly things that I already knew. But with the frequentist mindset, for example, I realized again how weird it is to interpret confidence intervals and things like that. So you also get reminded: do I actually accept all these assumptions when I model in a certain way, in a certain mindset?
Susan Peich:Absolutely. Well, first off, on behalf of me, Sid, and Andrew, it has been a pleasure talking to you. Any final thoughts before we close out?
Christoph Molnar: I can just say that I was very happy that you had me on the podcast, and it was a lot of fun talking about my books and all the questions you had. So thanks for having me.
Susan Peich: Sure. And for our listeners, please check out Introduction to Conformal Prediction with Python by Christoph Molnar. We also mentioned Modeling Mindsets here; for anybody who is really trying to get a basic foundational understanding, has some data or computational background, and wants to start at the beginning, I highly suggest that. Also subscribe to his newsletter, and, of course, subscribe to our podcast for more conversations like this.