part 2: why read code if an llm can just read it for me?
if you already know a certain primitive can be used dangerously, feed the files that use that primitive to an llm alongside clear criteria. once you get structured output highlighting potential misuse, grep through it for leads; the quality of those leads will hinge on how precise your criteria are. this is brute force, but it works: batch the files, confirm the dangerous pattern actually exists, and skip the manual digging where possible. for simpler checks, the llm can directly verify whether a pattern is present or absent in the code. a rough sketch of that batching loop is below.
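to make this concrete, here's a minimal sketch of what the loop might look like. it assumes the openai python sdk and an api key in the environment; the model name, the `unserialize` primitive, the file glob, and the criteria string are all illustrative placeholders you'd swap for your own target, not anything prescribed here.

```python
# sketch: batch files that use a known-dangerous primitive through an llm
# and collect structured findings you can grep later. assumes `pip install openai`
# and OPENAI_API_KEY set; model, criteria, and schema are placeholders.
import glob
import json
from openai import OpenAI

client = OpenAI()

CRITERIA = """flag any call to `unserialize` on data that could originate from
a request parameter. answer as json: {"findings": [{"file": "...", "line": 0,
"reason": "..."}]} and nothing else."""  # the more precise this is, the better the leads

def audit_batch(paths):
    # concatenate a handful of files into one prompt; keep batches small enough
    # to fit the context window and keep the model focused
    blob = "\n\n".join(f"// file: {p}\n{open(p).read()}" for p in paths)
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": CRITERIA},
            {"role": "user", "content": blob},
        ],
    )
    return json.loads(resp.choices[0].message.content)

# pre-filter with plain string matching so the llm only sees files that even
# contain the primitive, then dump findings to a file you can grep for leads
candidates = [p for p in glob.glob("src/**/*.php", recursive=True)
              if "unserialize" in open(p).read()]
with open("findings.jsonl", "w") as out:
    for i in range(0, len(candidates), 5):
        for f in audit_batch(candidates[i:i + 5]).get("findings", []):
            out.write(json.dumps(f) + "\n")
```

the pre-filter matters: it keeps the llm's token bill down and keeps each batch on topic, so the structured output stays easy to grep afterwards.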
i'll demonstrate this soon, with a focus on open-source targets like gitlab. the difference from typical sast is that llms read code more like a human reviewer: no wrestling with rigid vulnerability models, no hoping your tools model every data/control flow path. llms can interpret code at an abstract level, letting you process hundreds or thousands of files with near-human comprehension. this isn't about building a fully autonomous auditing system; it's about using llms for batch matching once you've found a lucrative area of code and manual verification is the bottleneck. less time reading tedious code, more time finding real bugs (and making money).