Summary
In this episode of the AI Engineering Podcast, Jim Olson, CTO of ModelOp, talks about the governance of generative AI models and applications. Jim shares his extensive experience in software engineering and machine learning and highlights the importance of governance in high-risk applications like healthcare. He explains that governance is less about the models themselves than about the use cases they serve, emphasizing the need for a proper inventory of models and ongoing monitoring to ensure compliance and mitigate risk. The conversation covers the challenges organizations face in implementing AI governance policies, the technical controls required for data governance, and the monitoring and baselines needed to detect issues such as PII disclosure and model drift. Jim also discusses the balance between innovation and regulation, particularly in light of evolving rules like those in the EU, and offers his perspective on the current state of AI governance and the need for robust model lifecycle management.
Announcements
Interview
Introduction
How did you get involved in machine learning?
Can you describe what governance means in the context of generative AI models? (e.g. governing the models, their applications, their outputs, etc.)
Governance is typically a hybrid endeavor of technical and organizational policy creation and enforcement. From the organizational perspective, what are some of the difficulties that teams are facing in understanding what those policies need to encompass?
How much familiarity with the capabilities and limitations of the models is necessary to engage productively with policy debates?
The regulatory landscape around AI is still very nascent. Can you give an overview of the current state of legal burden related to AI?
What are some of the regulations that you consider necessary but as-of-yet absent?
Data governance as a practice typically relates to controls over who can access what information and how it can be used. The controls for those policies are generally available in the data warehouse, business intelligence tools, etc. What are the different dimensions of technical controls that are needed in the application of generative AI systems? (A minimal sketch of one such control appears after this question list.)
How many of the controls that are present for governance of analytical systems are applicable to the generative AI arena?
What are the elements of risk that change when considering internal vs. consumer-facing applications of generative AI?
How do the modalities of the AI models impact the types of risk that are involved? (e.g. language vs. vision vs. audio)
What are some of the technical aspects of the AI tools ecosystem that are in greatest need of investment to ease the burden of risk and validation of model use?
What are the most interesting, innovative, or unexpected ways that you have seen AI governance implemented?
What are the most interesting, unexpected, or challenging lessons that you have learned while working on AI governance?
What are the technical, social, and organizational trends of AI risk and governance that you are monitoring?
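To make the idea of a technical control concrete, below is a minimal sketch (not from the episode, and not a ModelOp API) of what an automated output check might look like: it scans generated text for common PII patterns and compares the current disclosure rate against a recorded baseline to flag potential drift. The patterns, thresholds, and function names are illustrative assumptions; production systems would use far more robust detectors.

```python
import re

# Illustrative PII patterns (assumptions; real deployments typically use
# NER models or dedicated PII-scanning services rather than bare regexes).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scan_for_pii(text: str) -> dict[str, int]:
    """Count matches for each PII pattern in a single model output."""
    return {name: len(p.findall(text)) for name, p in PII_PATTERNS.items()}

def pii_rate(outputs: list[str]) -> float:
    """Fraction of outputs containing at least one PII match."""
    flagged = sum(1 for o in outputs if any(scan_for_pii(o).values()))
    return flagged / len(outputs) if outputs else 0.0

def exceeds_baseline(outputs: list[str],
                     baseline_rate: float,
                     tolerance: float = 0.01) -> bool:
    """Return True if the current PII-disclosure rate has drifted above
    the recorded baseline by more than the allowed tolerance."""
    return pii_rate(outputs) > baseline_rate + tolerance

# Example: a batch of recent model outputs checked against a 0.5% baseline.
recent = [
    "The weather is sunny today.",
    "Contact me at jane.doe@example.com for details.",
]
if exceeds_baseline(recent, baseline_rate=0.005):
    print("ALERT: PII disclosure rate exceeds baseline; review outputs.")
```

The baseline-plus-tolerance comparison here stands in for the kind of ongoing monitoring discussed in the episode: the point is not the specific regexes, but that a recorded baseline gives governance teams something objective to alert against.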
Contact Info
Parting Question
Closing Announcements
Links
GDPR
The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0