Brooks McMillin

AI Security Research Blog

February 26, 2026

Does Your System Prompt Actually Stop Prompt Injection? We Tested 10,000 Times to Find Out

An empirical study of 10,080 prompt injection attempts across 8 models, 6 defense strategies, and 7 attack types. The results challenge common assumptions about prompt-level defenses.

#security #AI #LLM #prompt-injection #ai-security #benchmark
January 28, 2026

Defense in Depth for AI-Assisted Development: Pre-commit Hooks, Review Agents, and CI That Catch LLM Mistakes

Practical strategies for safer AI-assisted development: automated review agents, layered security checks, and context management practices that catch LLM mistakes before they become catastrophic.

#security #AI #LLM #ci-cd #pre-commit #code-review #MCP
September 7, 2025

The Call Is Coming from Inside the House: When Your Agentic Coder Writes Dangerous Code

An introduction to the flaws in security testing for AI-generated code.

#security #AI #LLM #vibe-coding #ai-security

© 2026 Brooks McMillin