Code Quality in the AI Era

Ship AI-generated code you can actually trust — with testing, security, and review practices that scale.

Online / Zoom
May 11, 2026
Request Preview
Online / Zoom
May 20, 2026
Register $499

Who This Is For

→ Engineering leads responsible for code quality and security standards
→ Senior developers reviewing AI-generated code from their teams
→ CTOs and VPs of Engineering navigating AI adoption policies

You can ask your AI coding tool to assess its own quality, and most of the time it will pass with flying colors. So how do we still end up with stories of teams shipping API keys in public GitHub repos, or frontends that are trivially exploitable?

What do we think — and do — about test coverage, interface contracts, refactoring, and general clarity of code? Does reuse still matter? And what about security, particularly as more companies have non-engineers generating code?

Human code review alone becomes unsustainable at scale. So how do we ensure security, reliability, and sustainability when the volume of code keeps growing and the people writing it aren't all engineers? This workshop works through practical strategies and hands-on exercises to answer exactly that.
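One concrete example of the kind of automated guardrail the workshop builds toward: a pre-commit secret scan. The sketch below is illustrative only — real scanners such as gitleaks or trufflehog use far larger rule sets — but it shows the basic shape of catching leaked credentials before they reach a repo:

```python
import re

# Illustrative patterns, not an exhaustive rule set.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) for every suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings
```

Wired into a pre-commit hook or CI step, a check like this fails the build on any match — exactly the kind of automation that has to replace line-by-line human review as code volume grows.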

Learning Goals

Key Concepts

  • Why AI-generated code passes its own quality checks — and why that is dangerous
  • Test coverage strategies when code volume outpaces human review
  • Security risks unique to AI-generated code: secrets exposure, injection, and trust boundaries
  • Sustainable code review practices at scale
  • Interface contracts, refactoring, and when reuse still matters

What You'll Take Away

  • A practical quality checklist for AI-generated code
  • Automated security scanning workflows you can deploy immediately
  • Strategies for reviewing code written by non-engineers
  • A team policy template for AI code generation
  • Hands-on experience catching real vulnerabilities in AI-written code

Agenda

Morning

8:30 — Setup & introductions
9:00 — Why AI-generated code passes its own quality checks
10:00 — Security scanning workflows for AI code
11:00 — Catching real vulnerabilities hands-on

Afternoon

12:00 — Lunch
1:00 — Sustainable code review practices at scale
2:00 — Building a team AI code generation policy
3:00 — Quality checklist and review workflow design

Instructor

Jeff Casimir
Jeff Casimir has spent two decades building technical education programs, including founding the Turing School of Software & Design. He brings deep experience in hands-on, project-based learning to every session.

What You Need

  • A laptop running macOS or Windows (tablets are not sufficient)
  • A modern web browser (Chrome or Firefox recommended)
  • Claude access will be provided — setup instructions sent before the workshop

Ready to join?

May 11 Online / Zoom Request Preview
May 20 Online / Zoom $499 Register

Questions? Get in touch →