Increase AI Trustworthiness with Multi-Model Adversarial Analysis

“How can LLM users reduce AI sycophancy and bias?” 

This simple question at a GenAI conference led to the creation of a potentially game-changing tool that does just that.

Watch this On the Record/Off the Record (OTR/OTR) conversation to learn how to:

  • Use AI prompts to conduct adversarial analyses that reduce sycophantic and biased AI responses.

  • Leverage a multi-model approach to improve AI robustness and fairness (see the sketch below this list).
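
The episode does not walk through Jeff's tool step by step, so the following is only a rough illustration of what a multi-model adversarial pass could look like: several models answer the same question independently, each answer is critiqued by the other models, and one model synthesizes a revised answer from the critiques. The `ask` helper, the placeholder model names, and the prompt wording are all assumptions for the sketch, not the actual tool.

```python
# Minimal sketch of a multi-model adversarial analysis loop.
# ask() is a hypothetical wrapper around whatever LLM client you use
# (OpenAI, Anthropic, Gemini, a local model, etc.); plug in your own.

def ask(model: str, prompt: str) -> str:
    """Hypothetical helper: send `prompt` to `model`, return its text reply."""
    raise NotImplementedError("Wire this up to your own LLM provider.")

MODELS = ["model-a", "model-b", "model-c"]  # placeholder model names

CRITIQUE_PROMPT = (
    "You are an adversarial reviewer. Critique the answer below for "
    "sycophancy, unstated assumptions, and bias. Be specific and blunt.\n\n"
    "Question:\n{question}\n\nAnswer from another model:\n{answer}"
)

def adversarial_analysis(question: str) -> dict:
    # 1. Collect independent answers from each model.
    answers = {m: ask(m, question) for m in MODELS}

    # 2. Have every other model critique each answer.
    critiques = {
        m: [
            ask(other, CRITIQUE_PROMPT.format(question=question, answer=ans))
            for other in MODELS
            if other != m
        ]
        for m, ans in answers.items()
    }

    # 3. Ask one model to write a revised answer from all answers + critiques.
    synthesis_prompt = (
        f"Question: {question}\n\n"
        + "\n\n".join(
            f"Answer from {m}:\n{a}\n\nCritiques:\n" + "\n".join(critiques[m])
            for m, a in answers.items()
        )
        + "\n\nWrite a revised answer that addresses the critiques and "
        "avoids sycophancy and bias."
    )
    revised = ask(MODELS[0], synthesis_prompt)
    return {"answers": answers, "critiques": critiques, "revised": revised}
```

The underlying idea, as framed in the episode, is that independent models are less likely to share the same blind spots, so cross-critique surfaces sycophancy and bias that a single model would let slide.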

In this OTR/OTR, Jeffrey Lee-Chan (Snap, Inc.), Josh McDermott (StepHabits), and Mary Beth Snodgrass (Product Advisory Collective) share how they connected at a conference through discussions on the startup-VC ecosystem and AI bias.

  • Jeff shares why and how he created a new multi-model tool, which he now uses every day as an engineer to increase his own productivity.

  • Josh shares his observations of gender bias in the startup-VC ecosystem and reactions to how that bias can be interpreted and explained.

  • Mary Beth shares how multi-model adversarial testing helped her not only reduce sycophancy and bias in AI responses but also identify a ‘shadow strategy’ pattern, which she ultimately used to have a strategic, productive conversation with a VC about bias against women founders.

Who will find this discussion insightful? This episode will particularly resonate with:

  • investors seeking to optimize strategy or reduce blind spots,

  • entrepreneurs engaging and influencing decision-makers, or

  • technologists seeking to improve productivity with AI in day-to-day work.

Full video below


