The AI Fundamentalists

Fundamentals of systems engineering

August 23, 2023 · Season 1, Episode 6 · Dr. Andrew Clark & Sid Mangalik

Episode 6. What does systems engineering have to do with AI fundamentals? In this episode, the team discusses what data and computer science as professions can learn from systems engineering, and how the methods and mindset of the latter can boost the quality of AI-based innovations.

Show notes

  • News and episode commentary 0:03
    • ChatGPT usage is down for the second straight month.
    • The importance of understanding the data and how it affects the quality of synthetic data for non-tabular use cases like text. (Episode 5, Synthetic data)
    • Business decisions. The 2012 case of Target using algorithms in their advertising. (CIO, June 2023)
  • Systems engineering thinking. 3:45
  • Learning the hard way. 9:25
  • What is a safer model to build? 14:26
    • What is a safer model, and how is systems engineering going to fit in with this world?
    • The data science hacker culture can run counter to this approach.
    • For example, actuaries have a professional code of ethics and a set way that they learn.
  • Step back and review your model. 18:26
    • Have peers review your model to see if they can break it, and stress-test it. Build monitoring around the known fault points, and also talk to business leaders.
    • Be careful about the other impacts it can have on the business, or externally on the people who start using it.
    • Market this type of engineering as model robustness: identifying what the model is good at and what it's bad at can itself be a selling point.
    • Systems thinking gives you a chance to create lasting models.

What did you think? Let us know.

Good AI Needs Great Governance
Define, manage, and automate your AI model governance lifecycle from policy to proof.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

  • LinkedIn - Episode summaries, shares of cited articles, and more.
  • YouTube - Was it something that we said? Good. Share your favorite quotes.
  • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.

Transcript

Susan Peich:

The AI Fundamentalists, a podcast about the fundamentals of safe and resilient modeling systems behind the AI that impacts our lives and our businesses. Here are your hosts, Andrew Clark and Sid Mangalik. Hello, everyone. Welcome to this episode of The AI Fundamentalists, where today's topic will be systems engineering. I'm here with Andrew Clark and Sid Mangalik. How are you guys doing today? There's been a lot of chatter in the news, and some things from our previous podcasts have been reinforcing our points.

Andrew Clark:

Yeah, it's been great. We've actually seen a lot of articles coming out recently from Gary Marcus, a big thought leader, saying some of the same sentiments that we've had here, which has been really great to see. It seems ChatGPT usage is down for the second straight month, so we're starting to see people realizing, hey, maybe this is not quite the panacea we may have thought it was.

Susan Peich:

Yeah, and one of the things he went into, he really did start driving home a point about ChatGPT's kind of cannibalization of itself, because of the data coming back out, generated by ChatGPT and posted to the internet by people who have been really trying to work with this thing, to their credit. And now we've really got to understand the

Andrew Clark:

data. You can't really even use more synthetic data, which is, guess what, generated from a generative AI model; that's how you generate synthetic data for non-tabular use cases like text. So the quality just goes down, and you don't know what the truth is. People thought the misinformation in previous years with Twitter and such was bad; just wait until everything's AI-written.

Susan Peich:

Of course. And Gary, if you're listening, we just wanted to fan out on you a little bit. As the AI Fundamentalists, these things are not just one person speaking; data scientists and people who have worked with algorithm-led decision making for years understand that this is not something that just happens, it's not magic. Speaking of algorithms having been around for a while, I was also reminded, as we were looking up material on systems engineering, of a CIO article on the case from 2012 where Target was using algorithms in their advertising. Because the algorithm figured out that someone's activity was giving indications that she might be pregnant, she got sent coupons for prenatal vitamins. And it was a teenager who really did not want her parents finding out that way that she was pregnant, which led to a whole discussion. The interesting part of this, which we'll probably dig into later from a more systems engineering point of view, is that the business had to make a decision. Target had to decide how they were going to handle it, and they didn't necessarily stop the targeting; they just inserted other agnostic ads to try to break it up. So it brings into question: you've got the systems that help enforce these decisions, but then the business really has to act on what's ethical and what's right. And, to our point a couple of podcasts ago, there is still a responsibility on the person or the business using them.

Sid Mangalik:

Yeah, that's exactly right. And this really highlights the difference between just building algorithms and models versus building systems. With this holistic system building, when these types of problems occur in your model, you actually have chances to remediate them, not just ways to gloss over them, paint over them, or put new wallpaper over them. You can fundamentally change how the system works, because you understand it deeply, and you understand the dynamics of the system well enough to actually solve these problems.

Susan Peich:

That's a great summary, Sid, and it really sets us up to explore systems engineering as a mindset as well.

Sid Mangalik:

Do you want to give us a quick summary intro? Then we'll hop into some deeper thoughts on it.

Andrew Clark:

Sounds good. Yeah. So systems engineering, like automated decision making, is also not new. The term really came out of Bell Labs in the 1940s, working on large projects for the government, and it came into its own with the NASA Apollo program: how do you undertake the massive, interdisciplinary effort of putting a man or woman on the moon? How would you go about that? How do you test it? There's safety engineering, there's reliability. What are the requirements? How do you scope that out? How do you do the simulations? How do you think of the whole system, and how do you break it down into its constituent parts? A subtopic we can get into later is systems thinking. But if you take it back to where systems engineering even came from, you can trace it to questions like, how were the pyramids made? How were the Roman aqueducts, or the Great Wall of China? How do you build a really, really complex system, and not end up with the thing you've all seen in the memes, where somebody asks for a car and gets a truck with bicycle wheels on it? How do you get the requirements correct, know what to build the whole time, and also come in on time and on budget? It's a massive problem that's really now become more of a discipline.

Sid Mangalik:

Yeah, so I'll just feed you the next obvious question: what defines these problems that makes them good systems engineering problems, like building aqueducts or sending someone to the moon? What type of problem is well suited to systems engineering and systems thinking?

Andrew Clark:

Anytime you need to think of a large thing as a system. A system is a way of looking at the world as more than just individual parts added together: there's emergent behavior, there are complex interactions and interrelationships, kind of like we were talking about last time with synthetic data. You can't just generate individual columns of data and think they're the same as having those interrelationships and that connectivity. There's that extra emergent behavior, butterfly effects, and things like that, which happen when you look at these complex systems. There are very nonlinear relationships in the data, and the problem is too large to really think about in a finite fashion.

Sid Mangalik:

Yeah, I think that's spot on. It's this idea of the complex system, where the design, the development, the implementation, and maybe even the decommissioning of the system are all very interrelated. You can't change the design without having to change the implementation, and you can't change the implementation without having to change the development cycle. There's a deeply interrelated set of bodies of work that have to be done, either in parallel or planned out ahead of time.

Susan Peich:

So where, fundamentally, do people learn about systems engineering?

Andrew Clark:

Well, it's very much embedded; there are even a couple of institutions now that have a systems engineering degree. It's really like product management on steroids. But it's really aerospace engineering and civil engineering, those are the disciplines that go more into this. MIT has several classes on it. It's big in the Navy as well: I have a friend who helps the Navy build ships, and it's something they teach there for any of these really large, complex systems. It's been used in the defense industry and aerospace for a while, really where the level of criticality is higher. And we're starting to see that now, bringing it back to AI systems: the criticality is a lot higher, and so are the reliability and safety questions, like how do we know this thing is going to be doing what it should be doing? Whenever you have that low margin for error is when you really want to start doing these things. But it's not the agile approach that's kind of popular these days.

Susan Peich:

Right. So maybe I misspoke: not just college, but certain programs rooted in engineering. Then who might not be on that list, who might not be as inclined toward systems engineering?

Sid Mangalik:

Yeah, so when we think about who learns these topics, we think about aerospace engineers, electrical engineers, mechanical engineers. But two groups that don't make that list, and maybe should, are computer scientists and statisticians. We live in a very separate world, and we haven't really engaged with systems engineering, even in the way that software engineers have to think about it and fight with it. So we're entering a world where AI systems and ML systems have been a little bit separate from the expertise of the people who do systems engineering.

Andrew Clark:

And so, as with a lot of the topics we've been bringing up, this comes back to doing things the hard way and learning from the actual physical sciences, from how mathematics and how things work. Computer scientists have the best marketing engine in the world, so at some point in a couple of years computer science is going to come out with its own version of systems engineering, and it's going to be presented like it's the first thing that's ever happened in the history of the world; that's the trend with a lot of these things. But these lessons were learned the hard way. Sadly, Apollo 1 caught fire on the launch pad. Well, guess what, they learned a lot from that and applied it in the systems engineering paradigm. So then you had a successful Apollo 11, and even Apollo 13, you could argue, the one that had the issues in space: they got everybody back safely. It was maybe even a more successful mission, because they were able to manage that complexity. They had tested the weird things that could go wrong, they knew the tolerances they had, and they were able to bring the astronauts home safely, because they had studied the system so thoroughly, knew its intricacies, and had done all of the different simulations.

Susan Peich:

Let's keep going with that. What else can we learn from NASA and the Apollo program? Because that kind of crystallized it for me. Tell us a little bit more.

Andrew Clark:

They even have a sub-discipline called requirements engineering. Systems engineering as a whole is massive; there are all these different components to it, and what does systems engineering even mean? We're not going to get into all of that today. There's a famous V-model of systems engineering, where you have stakeholder analysis, requirements definition, systems modeling, lifecycle management, architecture, prototyping, manufacturing; there are all these sub-disciplines that people focus on. But let's look at the principles we can take. Obviously, if you're a data scientist in industry, your CEO's attention span might be two months, so you can't do a full systems engineering project. But you can get really good at defining the requirements: What do we need to know? How do I validate? What's the criticality of the system? Risk management is another component of systems engineering. The criticality of the system determines how much testing and validation you do, and you almost determine what you want to test and validate prior to even going into building your system.

Sid Mangalik:

Yeah, this is something we run into a lot as data scientists: oh, we're going to build this wonderful system, it's going to be AI, it's going to be end to end, it's going to really solve people's problems. And then you look at the time budget of the project, and it's two or three months. So this is really about aligning the importance of your project, or the criticality, as Andrew was saying, with the necessity for good requirements, good reliability, good logistics, good evaluation, and good maintenance of the project. It's building systems, which is a bigger ask than just building models.

Andrew Clark:

Yeah, and I think systems engineering has a marketing problem. It's oftentimes now, I think, associated incorrectly with waterfall in software engineering, where waterfall is basically a junior varsity version of systems engineering: it missed a lot of the parts, maybe played hooky during class, but caught a couple of things. Let's define everything, wait three years, and something comes out at the end that's going to be great. Well, that's lost a lot of credibility in the industry, and everyone has moved to agile, because of a lot of bad project management and waterfall approaches, or from working in companies or industries where maybe you didn't actually have the requirements, or you didn't need the whole systems engineering approach. But for AI systems, in places where you want that whole system, where you have criticality, you need it to be safe, and you need it to be reliable, we need to start taking these principles and understandings, and also the alignment on really defining with the stakeholders what it is they're actually looking for. Let's define what that is, and then lay it out: hey, I can build you this system, but, CEO, you need to sign off that you want to deploy a POC that does not have any of these things in place, and I can do that for you in two months. Or we can take four, five, six months, and it will have that safety and validation. CEO, are you going to go on record and sign off on this not being safe? Most likely not. But we as engineers have not done the best job of raising these concerns or illustrating to our executives why they need to be concerned about these things. It often comes across as, well, it'd be better if I didn't have to work as hard and could take longer. That's not what we're saying, but that's what they're hearing, because we haven't done a good job. That's where it's really good to come back, and MIT has a bunch of open resources on this, and understand some of the nuances of systems engineering. We're not NASA, we don't have that large a team, but taking some of these things helps us bubble up the concerns we may have for our systems.

Susan Peich:

We've been talking about systems engineering, and we've touched a little bit on how applicable it is in AI and machine learning. How do systems engineers start working on making better and safer models?

Sid Mangalik:

Yeah, so I'll just start by talking about what a safer model is, because to answer that question, and how systems engineering is going to fit into this world, you should think about what it can do for us. It can give you things like models that are fully reproducible. If you're building from systems, if you're building from the ground up, if you're building with very intentional design, then even if you change the model every three months and you had some wacky decision that you're not happy with, or a customer complaint about it, you can go back, rebuild the system as you had it at the time, and determine where the faults came from. So when we talk about safety, a lot of it is about the control and understanding of your model that you gain from taking a systems approach. If you want any hope of assuring your model or governing your model, you should be building it with these types of systems ideas in mind. Otherwise, you have basically no hope of managing these complex underlying systems, and you'll just be at the whims of, well, we lost our artifact for this, it's in an S3 bucket somewhere, I don't know what it's called. And then you just have to gloss over it and hope that no one complains about it later.
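As a minimal sketch of the reproducibility Sid describes, assuming a scikit-learn style model and hypothetical file and column names, the idea is simply to pin the random seed and store enough metadata next to the artifact to rebuild the system as it was at the time:

```python
import hashlib
import json
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

SEED = 42  # pinned so training is repeatable

# Hypothetical training snapshot; in practice this would be a versioned dataset.
df = pd.read_csv("training_snapshot.csv")
X, y = df.drop(columns=["label"]), df["label"]

model = RandomForestClassifier(n_estimators=200, random_state=SEED)
model.fit(X, y)

# Record everything needed to rebuild this exact system later.
metadata = {
    "seed": SEED,
    "model_params": model.get_params(),
    "feature_columns": list(X.columns),
    "data_sha256": hashlib.sha256(df.to_csv(index=False).encode()).hexdigest(),
}

joblib.dump(model, "model.joblib")
with open("model_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2, default=str)
```

With the seed, parameters, and data fingerprint stored alongside the artifact, the "rebuild it as you had it at the time" step becomes a mechanical check rather than archaeology in an S3 bucket.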

Andrew Clark:

So I think the data science hacker culture, as Sid mentioned with "it's in an S3 bucket somewhere," has actually been really detrimental to this approach. You'll hear statisticians and traditional modelers gripe about the data science world. Actuaries very much do this as well: actuaries have a set way that they learn, they're very particular, very precise, they have a code of ethics, they have all of these things in place, and they're very precise in their modeling. And then, in insurance, you'll have data scientists coming in, and it's very swashbuckling hacker culture: I just pip install scikit-learn and make awesome things happen, my model outperforms yours, and I'm optimizing performance thanks to Kaggle. You miss a lot of these other nuances. And because of the marketing engine of computer science and data science, that attitude has kind of taken over: that's fine, that's all you need, we're all going for performance, more performance is better. That's the whole purpose of this podcast: doing things the hard way, doing things the correct way, trying to bring us back when we're talking about mission-critical systems. Even with some of the things going on with the White House's new let's-look-into-AI-again push: the government has already been doing this for years, so why are we starting from scratch again, and getting the same computer science folks in the room to ask them what we should be doing about how to build systems? Why are we not going back in history and using all of the great knowledge within the United States government that's just not being utilized on how to build safe, resilient systems? And guess what, the Apollo program was using models back then to make decisions about trajectory, about where you're going in space. There's so much of this institutional knowledge out there. How do we start bringing that into building critical systems?

Sid Mangalik:

Yeah, that's exactly right. And as much as we love Kaggle and data science, we have to remember that with these types of coding challenges and accuracy hacking, you'll generate a lot of data scientists who are really great at making really powerful models, but you're not going to create data scientists who know how to follow best practices, or who have even seen a scikit-learn pipeline, which is not in itself systems engineering, but it's a part of it. You're not taught how to document code, you're not taught how to write code in an enterprise setting. So there's this really big disconnect between how we used to do engineering and how data scientists have gotten away with coming right out of the hacker culture.
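For anyone who hasn't seen one, here is a minimal sketch of the kind of scikit-learn pipeline Sid mentions, with hypothetical column names; it isn't systems engineering by itself, but it does make the preprocessing and modeling steps explicit, repeatable, and versionable as a single unit:

```python
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical feature groups, purely for illustration.
numeric_features = ["age", "income"]
categorical_features = ["region", "plan_type"]

preprocess = ColumnTransformer([
    ("numeric", Pipeline([
        ("impute", SimpleImputer(strategy="median")),
        ("scale", StandardScaler()),
    ]), numeric_features),
    ("categorical", OneHotEncoder(handle_unknown="ignore"), categorical_features),
])

# One object captures the whole flow from raw columns to predictions,
# so it can be fit, validated, and versioned as one unit.
model = Pipeline([
    ("preprocess", preprocess),
    ("classifier", LogisticRegression(max_iter=1000)),
])

# Typical usage: model.fit(X_train, y_train); model.predict(X_new)
```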

Andrew Clark:

And part of that is reliability and safety engineering, which are actual disciplines within engineering. We don't need to go to that extent, and we can do another podcast at some point on validations; we touched on Monte Carlo simulation some last time with synthetic data. But very much: okay, you've optimized for performance, great. Now take a step back and objectively have a friend or somebody else review your model, see if they can break it, stress-test it, and see what the safe operating bounds are. Where does my model perform well? Where does it not perform well? Build monitoring around knowing where those fault points are, but also go to your business leaders: hey, this is the model, it breaks down here, do you still want to do it? Because the other thing is, we kind of pushed on CEOs earlier for wanting everything yesterday, and we want them to actually own it. But to their credit, they're not engineers, they're not software engineers, they don't understand the trade-offs; they just see, oh, models can do all these awesome things. They don't understand that when they say go faster, go harder, I want it sooner, thinking they're just being an aggressive business leader, they're implicitly telling you, yeah, I couldn't care less if this is going to be a racist algorithm. They're not saying that. Now, I'm sure there are some bad apples out there, but 90 percent or more of them, I would say 95 to 98 percent, don't know that they're implicitly saying, I don't care about these things. They're just assuming you're doing it anyway, but they don't understand the trade-offs. That's why we need to get better at adopting the requirements analysis and requirements documentation of systems engineering, plus the validation: hey, C-suite, this is where my model breaks down. We as data scientists have failed at giving our leaders the knowledge and the parameters they need to actually decide whether these things should be deployed or not.
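A minimal sketch of the stress-testing and fault-point mapping Andrew describes, assuming a fitted `model`, a holdout set, and a hypothetical segmenting variable; the output is the kind of "this is where the model breaks down" report that can go to business leaders and seed production monitoring:

```python
import pandas as pd
from sklearn.metrics import accuracy_score

def performance_by_segment(model, X, y, segments, floor=0.85):
    """Score a fitted model separately on each segment and flag weak operating regions.

    `segments` is a pandas Series aligned with X and y (e.g. region or customer tier);
    `floor` is a hypothetical minimum acceptable accuracy."""
    rows = []
    for segment in segments.unique():
        mask = (segments == segment).values
        acc = accuracy_score(y[mask], model.predict(X[mask]))
        rows.append({"segment": segment, "n": int(mask.sum()),
                     "accuracy": acc, "below_floor": acc < floor})
    return pd.DataFrame(rows).sort_values("accuracy")

# Hypothetical usage, where region_holdout is a Series aligned with the holdout set.
# The flagged segments become the fault points to monitor in production and the
# talking points for business sign-off.
# report = performance_by_segment(model, X_holdout, y_holdout, segments=region_holdout)
# print(report[report["below_floor"]])
```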

Susan Peich:

Exactly, because you brought this up earlier when you were talking about systems engineering and even introducing actuaries into the process: these professions, by the nature of hiring them, have their own inherent code of ethics, in some cases and in some industries, that they have to follow. And so, to your most recent point about talking to the C-suite, or to the leaders who are sponsoring the development of the technologies you're building with these models, or the decisions you're making from models: it's really important to be able to vocalize not just, hey, this is where the model breaks down, but also, this is where we have to be careful, these are the other impacts it can have on the business, or externally on the people who start using it or are on the receiving end of our decisions.

Sid Mangalik:

Yeah, and it's really a chance for us to market this type of engineering as robustness of the model: hey, we know that our model is robust in 99.9% of circumstances, and we know that because we did the proper evaluations, the proper testing, and the proper flight-simming of our model, really identifying what it's good at and what it's bad at. And that in itself can be a piece of selling: we can verifiably say things about our model that we couldn't say before, about goodness beyond accuracy, goodness in terms of robustness and reliability.
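A minimal sketch of one way to back a claim like "robust in 99.9% of circumstances," assuming a fitted `model` and purely numeric features; this is a simple Monte Carlo perturbation test, not the full flight-simulation style evaluation Sid alludes to:

```python
import numpy as np

def perturbation_robustness(model, X, noise_scale=0.01, n_trials=100, seed=0):
    """Estimate how often predictions stay unchanged under small random input noise.

    `noise_scale` is relative to each feature's standard deviation and should be
    tuned to realistic measurement noise for the use case."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    baseline = model.predict(X)
    feature_std = X.std(axis=0)
    stable, total = 0, 0
    for _ in range(n_trials):
        noisy = X + rng.normal(0.0, noise_scale, size=X.shape) * feature_std
        stable += int(np.sum(model.predict(noisy) == baseline))
        total += len(baseline)
    return stable / total

# Hypothetical usage:
# print(f"Predictions stable under perturbation: {perturbation_robustness(model, X_test):.3%}")
```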

Andrew Clark:

I think this could honestly be a selling point that we've been missing with our leaders as well: hey, do you want this thing in two months, or do you want it in four months, and then you can say all these things? I was doing some research yesterday; one of my friends has a new cybersecurity firm, and it says "military-grade cybersecurity" because they have a lot of ex-analysts from the military. It's like, whoa. Honestly, I don't know if it is or not, and that's beside the point, but if you can back up that your stuff is really more secure, or very robust, you can advertise that. Susan, correct me if I'm wrong, but I would think in marketing you could use that data point, and it might actually be helpful. Okay, we're slightly slower to market, but it doesn't always matter who's first, it's who's best. Being able to advertise that your version of ChatGPT is bulletproof is a lot better than "I was just first."

Susan Peich:

Exactly. That integrity, especially in algorithm-led decisions, is huge and is going to be bigger. Case in point: at the top of the hour, when we talked about Target, the use case there, and how the business chose to address it, versus an inherent system failure where you really didn't do the testing or give it the integrity it deserves to be the strongest and most trusted in the market.

Sid Mangalik:

Yeah, and this is something systems thinking really gives you: a chance to create lasting models and lasting systems, not just models that make a big splash, are great, and then everyone forgets about in two months. If you want a truly enduring product, system, or analysis tool, it does involve building it robustly and correctly the first time.

Susan Peich:

We've covered a lot of ground, and I think we could definitely keep going with this. But I do want to get us to a place where we can close out with some key takeaways, because I think we had some really important ones in here. Andrew, why don't we start with you?

Andrew Clark:

Ah, it's tough to summarize the topic. Start thinking about your system: you're not just making a model to optimize accuracy. How can you think of it as a system? What are you trying to accomplish? Take a step back before you build anything, before you import scikit-learn: what are you trying to accomplish here, and what is the best tool for the job? Hint, hint: as Christoph Molnar told us, it may not always be machine learning. Then go make a proposal: exactly what are you trying to accomplish? Tell your decision makers, these are the pros and cons, this is what we're trying to do, and then propose the validations on all these parts as part of your process. Make it a little more explicit: this is the system I'm trying to build, these are the steps I need to have, and this is what you get when I'm done. That planning process seems to really be lacking in data science. So just take a step back and think it through; forget the systems engineering methodology per se. Just think through what you're doing and why, document that, and share it with your leaders.

Sid Mangalik:

Yeah, just dovetailing with what Andrew is saying: think holistically about the problem. Modeling is more than just the multilayer perceptron. It's the data collection, it's the data cleaning, it's the data handling, it's the outputs of the model, it's how you present the outputs of the model. Think of this as a holistic process, rather than thinking of AI and ML as just the fancy model you got a PhD engineer to build for you; think of it as the whole process. And then really think about this process through the lens of creating a consistent model. We're working with black-box models, and we sometimes don't understand exactly how they work, but building these types of guardrails around them, building this structure and understanding around them, gives you a chance of creating a truly consistent, reliable model.
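As a rough sketch of the guardrails Sid mentions around a black-box model, assuming numeric features and a hypothetical fallback decision, one option is a wrapper that refuses to silently predict outside the envelope it saw in training:

```python
import numpy as np

class GuardedModel:
    """Wrap a fitted black-box model with simple input guardrails."""

    def __init__(self, model, X_train, fallback=0):
        self.model = model
        self.fallback = fallback  # hypothetical safe default decision
        X_train = np.asarray(X_train, dtype=float)
        # Record the operating envelope observed in the training data.
        self.lower = X_train.min(axis=0)
        self.upper = X_train.max(axis=0)

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        preds = self.model.predict(X)
        # Rows outside the training envelope get the fallback instead of a
        # silent extrapolation, and the flag is returned so it can be logged.
        out_of_bounds = ((X < self.lower) | (X > self.upper)).any(axis=1)
        return np.where(out_of_bounds, self.fallback, preds), out_of_bounds

# Hypothetical usage:
# guarded = GuardedModel(model, X_train, fallback=0)
# preds, flagged = guarded.predict(X_new)
```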

Susan Peich:

Yeah, and I'll tie both of those up with a bow by making sure people understand that that traceability, that understanding of your system, really is your secret weapon. You understand what it's inherently made up of, which tools from the box were used at which time, tools being models, and from that you have a good understanding of your product and the impact it's going to have on either your customers or the decisions your business makes on behalf of your customers. Anything else before we close out today?

Andrew Clark:

We'd definitely love to hear your feedback. Let us know questions or things you'd like addressed in future episodes, or we can do follow-on blog posts; we're trying to get better with that. With systems engineering we just barely scratched the surface; this is something we could do a whole series on, diving a little deeper, as there's a lot of complexity here. We want to start being a little more interactive with the audience, so what are your burning questions? Where would you like to see this go next? We have, as you've probably gathered, more than enough to talk about, but we don't want to just be talking into a vacuum. We want to be able to have that conversation with you, so definitely, please give us your thoughts.

Susan Peich:

Yep, we've enjoyed your questions so far, and the response to our earlier episodes. Andrew, you set us up a new inbox, you made us a new email address. Let us know what it is.

Andrew Clark:

Yes, it is the AI... so please just drop us a line there. All three of us are on that email distro, so we'll read those, circle internally, and make blog posts, answer them on our next podcast, or, depending on the length of the question or answer, reply privately.

Sid Mangalik:

Perfect. Do you want to spell out Monitaur for our audio listeners, to give them a fair chance?

Andrew Clark:

Oh, that's a good one. M-O-N-I-T-A-U-R. A lot of times people spell it like "monitor" or "monitoring," but it's Monitaur. And thankfully you do have a cheat sheet: it's spelled correctly on the podcast page you're listening to, so you have that as well.

Susan Peich:

Yeah, we're not picking on autocorrect or spellcheck at all with that spelling.

Andrew Clark:

I mean, I hope at some point Google will learn it. We've shown it enough.

Susan Peich:

You'd think it would learn it for a day, and then it lets it go. Another problem to solve; the bots aren't perfect. All right, guys, thank you so much for joining us. Until next time.
