AI Development and Guardrails

Open at Intel - A podcast by open.intel - Wednesdays

Ezequiel Lanza and Katherine Druckman from Intel's Open Ecosystem team chat with Daniel Whitenack, founder and CEO of Prediction Guard. They discuss the importance and implementation of guardrails for securing generative AI platforms and cover the operational challenges and security considerations of running AI models, the concept of responsible AI, and practical advice for integrating guardrails into AI workflows. The conversation also touches on multi-model integrations, open source contributions, and the significance of vendor-neutral frameworks in achieving a secure and efficient AI ecosystem.

00:00 Introduction
01:28 What is Prediction Guard?
03:31 Understanding Guardrails in AI
06:49 Security Risks and Responsible AI
13:30 Open Source and Model Security
19:00 Open Platform for Enterprise AI
20:26 Contributing to Open Source Projects
27:12 Final Thoughts

Guest: Daniel Whitenack (aka Data Dan) is a Ph.D.-trained data scientist and founder of Prediction Guard. He has more than ten years of experience developing and deploying machine learning models at scale, and he has built data teams at two startups and at an international NGO with 4,000+ staff. Daniel co-hosts the Practical AI podcast, has spoken at conferences around the world (ODSC, Applied Machine Learning Days, O'Reilly AI, QCon AI, GopherCon, KubeCon, and more), and occasionally teaches data science/analytics at Purdue University.
