Code Quality in the AI Era
Ship AI-generated code you can actually trust — with testing, security, and review practices that scale.
Who This Is For
You can ask your AI coding tool to assess its own quality, and most of the time it'll pass with flying colors. So how do we still end up with stories of developers shipping API keys in public GitHub repos, or frontends that can be easily exploited?
What do we think — and do — about test coverage, interface contracts, refactoring, and general clarity of code? Does reuse still matter? And what about security, particularly as more companies have non-engineers generating code?
Human code review becomes unsustainable at this scale. So how do we ensure security, reliability, and sustainability when the volume of code keeps growing and the people writing it aren't all engineers? This workshop works through practical strategies and hands-on exercises to answer exactly that.
Key Concepts
- Why AI-generated code passes its own quality checks — and why that is dangerous
- Test coverage strategies when code volume outpaces human review
- Security risks unique to AI-generated code: secrets exposure, injection, and trust boundaries
- Sustainable code review practices at scale
- Interface contracts, refactoring, and when reuse still matters
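One of the risks above, injection, is easy to see in miniature. The sketch below contrasts the string-interpolated SQL that AI tools sometimes generate with a parameterized query; the table, column, and row values are invented for illustration.

```python
import sqlite3

# In-memory database with a hypothetical users table (names are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "' OR '1'='1"  # a classic injection payload

# Vulnerable: string interpolation lets the payload rewrite the query.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: a parameterized query treats the payload as a literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # every row comes back: the WHERE clause was bypassed
print(safe)        # no rows: no user is literally named "' OR '1'='1"
```

Both queries look plausible in a diff, which is exactly why this class of bug slips past a quick human skim of generated code.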
What You'll Take Away
- A practical quality checklist for AI-generated code
- Automated security scanning workflows you can deploy immediately
- Strategies for reviewing code written by non-engineers
- A team policy template for AI code generation
- Hands-on experience catching real vulnerabilities in AI-written code
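As one concrete starting point for the scanning workflows above, here is a minimal sketch of a regex-based secrets check of the kind that can run as a pre-commit hook or CI step. It covers only two common key formats and will miss plenty; production workflows should use a dedicated scanner (gitleaks, trufflehog, and similar tools). The function and pattern names are illustrative.

```python
import re

# A deliberately tiny rule set; real scanners ship hundreds of patterns.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key[\"']?\s*[:=]\s*[\"']([A-Za-z0-9]{20,})[\"']"
    ),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_value) pairs for likely secrets in text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

# A snippet of the kind an AI tool might generate with a key inlined.
snippet = 'config = {"api_key": "abcd1234abcd1234abcd1234"}'
print(find_secrets(snippet))  # → [('generic_api_key', 'abcd1234abcd1234abcd1234')]
```

Wiring a check like this into CI, so the build fails before a key ever lands on the default branch, is the kind of automation the workshop builds out in full.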
Morning
Afternoon
Instructor
What You Need
- A laptop running macOS or Windows (tablets are not sufficient)
- A modern web browser (Chrome or Firefox recommended)
- Claude access will be provided — setup instructions sent before the workshop
Ready to join?
Questions? Get in touch →