Generative AI and its many capabilities are all the rage lately. We’ve heard boasts that it can do anything from writing your content to writing your code — faster, more securely, and for less pay than most developers. But can it really be trusted? Does the data it was trained on come into play, and if so, where and to what degree? If you asked an AI whether your code is secure, would you double-check the result, or just click ‘accept’ like you would a phone keyboard suggestion? As a potential defensive tool, how easy is it to ‘fool’ AI into ignoring real problems or, worse, introducing new ones in the mere act of checking? In this webinar, Bogdan Kortnov, Co-founder & CTO of Illustria, discusses these questions with Barak, Scribe’s DevRel.