March 26, 2024
Dr. Andrew Clark & Sid Mangalik
Season 1
Episode 16

Information theory and the complexities of AI model monitoring

The AI Fundamentalists

In this episode, we explore information theory, the not-so-obvious shortcomings of its popular metrics for model monitoring, and where non-parametric statistical methods can serve as a better option.

Introduction and latest news 0:03

- Gary Marcus has written an article questioning the hype around generative AI, suggesting it may not be as transformative as previously thought.
- This stands in contrast to announcements out of the NVIDIA conference during the same week.

Information theory and its applications in AI. 3:45

- The importance of information theory in computer science, with applications in cryptography and communication.
- The basics of information theory, including the concept of entropy, which measures the uncertainty of a random variable (a short worked example follows this list).
- Information theory as a foundational discipline in computer science, and how it has been applied in recent years, particularly in machine learning.
- The speakers clarify the difference between a metric and a divergence, which is crucial to understanding how information theory is misapplied in some cases.
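
As a companion to the entropy discussion above, here is a minimal sketch (not from the episode) of Shannon entropy for a discrete distribution; the function name and probability values are illustrative.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum(p_i * log2(p_i)) in bits; zero-probability bins are skipped."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# A fair coin is maximally uncertain (1 bit); a heavily biased coin carries less uncertainty.
print(shannon_entropy([0.5, 0.5]))  # 1.0
print(shannon_entropy([0.9, 0.1]))  # ~0.47
```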

Information theory metrics and their limitations. 7:05

- Divergences are measures that don't obey the rules of a true distance metric (such as symmetry); they have some useful properties but can be troublesome in certain use cases.
- KL divergence is a popular choice for monitoring changes in data distributions, but it is not symmetric and can lead to incorrect comparisons (see the sketch after this list).
- Sid explains that KL divergence measures the surprisal, or entropy difference, incurred when moving from one data distribution to another, and that it is not the same as the Kolmogorov-Smirnov (KS) test.
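
To make the asymmetry concrete, here is a minimal sketch (not from the episode); the probability values are made up, and numpy is assumed to be available (scipy.stats.entropy(p, q) computes the same quantity).

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(P || Q) = sum(p_i * log(p_i / q_i)); undefined where q is zero but p is not."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

p = [0.1, 0.4, 0.5]    # e.g., binned reference distribution (illustrative values)
q = [0.8, 0.15, 0.05]  # e.g., binned production distribution (illustrative values)

print(kl_divergence(p, q))  # ~1.34
print(kl_divergence(q, p))  # ~1.40 -- a different number: KL divergence is not symmetric
```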

Metrics for monitoring AI model changes. 10:41

- The limitations of KL divergence and its alternatives, including Jensen-Shannon divergence and the population stability index (PSI).
- They highlight the issues with KL divergence, such as asymmetry and the handling of zeros; the advantages of Jensen-Shannon divergence, which addresses both issues; and the population stability index, which provides a quantitative measure of change in model distributions (see the sketch after this list).
- The popularity of information theory metrics in AI and ML is largely due to legacy practice and a lack of understanding of the underlying concepts.
- Information theory metrics may not be the best choice for quantifying changes in risk in the AI and ML space, but they are commonly used because of familiarity and ease of use.
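
A minimal sketch (not from the episode) contrasting the two alternatives above; the bin proportions are made up, and the PSI thresholds in the comment are a common industry rule of thumb rather than something stated in the episode.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

p = np.array([0.1, 0.4, 0.5])    # expected (e.g., training) bin proportions, illustrative
q = np.array([0.8, 0.15, 0.05])  # actual (e.g., production) bin proportions, illustrative

# Jensen-Shannon distance (square root of the JS divergence): symmetric, bounded,
# and still defined when one distribution has zero-probability bins.
print(jensenshannon(p, q), jensenshannon(q, p))  # identical in both directions

def psi(expected, actual, eps=1e-6):
    """Population stability index: sum over bins of (actual - expected) * ln(actual / expected)."""
    expected = np.clip(np.asarray(expected, dtype=float), eps, None)
    actual = np.clip(np.asarray(actual, dtype=float), eps, None)
    return np.sum((actual - expected) * np.log(actual / expected))

# Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift.
print(psi(p, q))
```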

Using nonparametric statistics in modeling systems. 15:09

- Information theory divergences are not useful for monitoring production model performance, according to the speakers.
- Andrew Clark highlights the advantages of using nonparametric statistics in machine learning, including distribution agnosticism and the ability to test for significance without knowing the underlying distribution (see the sketch after this list).
- Sid Mangalik
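
In the spirit of this discussion, here is a minimal sketch (not from the episode) of drift detection with a two-sample Kolmogorov-Smirnov test; the synthetic data and the amount of drift are made up for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # a feature as seen at training time (synthetic)
production = rng.normal(loc=0.3, scale=1.0, size=5_000)  # the same feature observed later (synthetic drift)

# Two-sample KS test: compares empirical CDFs with no assumption about the underlying distribution.
statistic, p_value = ks_2samp(reference, production)
print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.3g}")
# A small p-value is evidence that the production distribution has drifted from the reference.
```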

What did you think? Let us know.

Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

- LinkedIn - Episode summaries, shares of cited articles, and more.
- YouTube - Was it something that we said? Good. Share your favorite quotes.
- Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
