Long Talk (25 min)
FR
Bias in AI-based products: risks and mitigation
Description
Creating a product that genuinely embeds AI is a bit like making a batch of organic fruit yogurt: you’re working with something “alive,” which raises critical quality concerns. There’s life in the input data, which is always shifting, polluted, and shaped by societal trends. There’s life in the upstream acquisition and processing workflows. There’s life, too, in how decision-makers perceive unwanted side effects, since norms are constantly evolving. And finally, there’s life in user feedback (even in B2B), as people grow accustomed to the product’s “taste.”
We’re all aware that a strange bacterium thrives in the recesses of AI algorithms: bias. How much “algorithmic salmonella” are we willing to accept in a consumer product? We will share five years of experience auditing production algorithms and build our argument around four key points:
- Bias is not exceptional collateral damage of AI algorithms; it is inherent to the computational mechanics of machine learning.
- Bias is rarely global or visible at first glance; it emerges in usage contexts, affecting certain subpopulations or behaviors (see the sketch after this list).
- Marketing personas cannot capture product biases. We need new categories, because algorithms no longer operate on fixed rules.
- Finally, while bias can be corrected, it challenges data scientists along dimensions that are often overlooked.
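To make the subgroup point concrete, here is a minimal Python sketch on synthetic data: a classifier whose global accuracy looks acceptable while a 10% subpopulation is served markedly worse. All names, group sizes, and error rates are illustrative assumptions, not figures from our audits.

```python
# Minimal sketch on synthetic data (all numbers are illustrative): a model
# that is ~93% accurate overall, yet only ~70% accurate for a 10%
# subpopulation; exactly the kind of bias a single global metric hides.
import numpy as np

rng = np.random.default_rng(seed=0)
n = 10_000

# Hypothetical subgroup flag: 10% of users belong to a minority segment.
minority = rng.random(n) < 0.10
y_true = rng.integers(0, 2, size=n)

# Simulate predictions: 5% error rate for the majority, 30% for the minority.
error = np.where(minority, rng.random(n) < 0.30, rng.random(n) < 0.05)
y_pred = np.where(error, 1 - y_true, y_true)

def accuracy(mask):
    """Accuracy restricted to the rows selected by a boolean mask."""
    return float(np.mean(y_true[mask] == y_pred[mask]))

print(f"global accuracy:   {np.mean(y_true == y_pred):.3f}")  # ~0.93
print(f"majority accuracy: {accuracy(~minority):.3f}")        # ~0.95
print(f"minority accuracy: {accuracy(minority):.3f}")         # ~0.70
```

A dashboard reporting only the global 0.93 would pass most acceptance criteria; the 25-point gap only appears once metrics are sliced by subpopulation.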
AI laws and regulations require control, or at least transparency, regarding the biases of deployed algorithms. We will explore how to guard against these issues and propose new metrics and monitoring processes to limit an organization’s risk exposure.
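As a flavor of what such monitoring can look like, here is a hedged sketch: a per-deployment-window check of the accuracy gap between a flagged subgroup and everyone else, with an alert threshold. The names `MAX_GAP`, `subgroup_gap`, and `check_window`, and the 0.05 threshold, are hypothetical choices for illustration, not an existing library API or the metrics proposed in the talk.

```python
# Hedged monitoring sketch: track the subgroup accuracy gap per deployment
# window of logged predictions and raise an alert when it exceeds a threshold.
import numpy as np

MAX_GAP = 0.05  # assumed risk-exposure threshold, to be set with governance

def subgroup_gap(y_true, y_pred, minority):
    """Absolute accuracy gap between the flagged subgroup and everyone else."""
    acc = lambda mask: float(np.mean(y_true[mask] == y_pred[mask]))
    return abs(acc(minority) - acc(~minority))

def check_window(y_true, y_pred, minority, window_id):
    """One monitoring tick: compute the gap for one batch of logged predictions."""
    gap = subgroup_gap(np.asarray(y_true), np.asarray(y_pred),
                       np.asarray(minority, dtype=bool))
    status = "ALERT" if gap > MAX_GAP else "ok"
    print(f"window {window_id}: subgroup gap = {gap:.3f} [{status}]")
    return gap
```

Run per release or per week on production logs, a check like this produces the audit trail that transparency requirements tend to ask for.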