A professor has been running an unusual experiment looking for signs of racial and gender bias in AI chatbots. He also has an idea for new guardrails that could detect such bias and remove it from responses before they reach users.
See show notes and links here: https://www.edsurge.com/news/2024-09-03-ai-chatbots-reflect-cultural-biases-can-they-become-tools-to-alleviate-them