Code Review Strategies
As @sonofaaa mentioned, the authors of "The Art of Software Security Assessment" discuss code-auditing strategies in Chapter 4 (the end of Part I).
In particular, external flow sensitivity (data flow and control flow) and tracing direction (forward or backward slicing) are discussed, along with many neutral methods of review. These and other topics are covered in great detail; it's the best-written material on the subject of secure code review.
I should also mention "Secure Programming with Static Analysis" -- a book by Brian Chess and Jacob West of Fortify Software. They cover the internals and use of security-focused static analysis tools and compare them to other approaches and tools in the secure code review world.
If you want to check out modern security-focused static analyzers, I suggest that you first get involved with some open-source or free tools, such as CAT.NET for .NET (typically C#, VB.NET, and/or ASP.NET), find-sec-bugs (or the older LAPSE+) for Java Enterprise and JSP, and RIPS Scanner for PHP. You won't commonly find security-focused static analyzers that support dynamic languages, because those languages lack the static type system such analyzers rely on, but let me know if you are interested in support for Python, Ruby, or another dynamic language (or any other language) and I'll try to point you in the right direction. For starters, try Bandit (an OpenStack project for Python code) and Brakeman Pro for Ruby.
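To give a feel for what these tools report, here is a minimal, hypothetical Python snippet containing the kind of issues Bandit is designed to flag (the file and function names are made up for illustration):

```python
# vulnerable_example.py -- hypothetical code with issues Bandit flags
import subprocess

password = "s3cret"  # hardcoded credential (Bandit check B105)

def list_directory(user_supplied_path):
    # Building a shell command from user input allows command injection
    # (Bandit check B602: subprocess call with shell=True)
    return subprocess.check_output("ls " + user_supplied_path, shell=True)
```

Running `bandit -r .` over the project reports such findings with severity and confidence ratings, which makes a useful first pass before any manual review.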
Commercial security-focused static analyzers are meant for highly trained, specialized application-security developers. Their cost assumes that someone will be running and analyzing these tools daily, as a full-time job, all year round. If you are interested in seeing quick results, check out HPFOD; if you want to integrate these tools into a long-term, at-risk application portfolio for a large installation, check out Cigital ESP. There are also many application security boutiques and consulting shops that run and tune these tools for their clients. Depending on your locale and strategic direction, I would choose to partner with one regardless of anything else I mentioned, as they can be invaluable to the success of an appsec program. Searching LinkedIn for "application security consulting" should work if you don't know where to go next.
Contrary to your statement, I believe that (security) code review should not be a mostly ad-hoc activity. There are some pretty strong methodologies for doing an efficient code review. For best results, this should be done incrementally and iteratively.
Here is a sample of a high-level outline of such a methodology, with some guiding principles:
- Understand your system (architecture, design, ...). Use a prepared question list...
- Decide on clear objectives
- scope
- constraints
- goals
- non-goals!
- types of security issues
- time limit
- Analyze threats (e.g. using threat modeling) to help focus on high-risk areas
- Preliminary scan using automated tools
- Review complex / fragile sections
  - Complex code (e.g. high cyclomatic complexity; see the complexity-ranking sketch after this list)
- Areas with high number of historical bugs
  - Even areas with many “false positives” from the automated scan
- Identify data validation for all input (see the validation/encoding sketch after this list)
- Account for trust issues
- Data output in web pages
- Specifically review all security mechanisms in depth, e.g. authentication, crypto, etc. (see the token-comparison example after this list)
- “Interesting” junctures, e.g.
- Creating processes
- Threads and synchronization (especially in static methods and constructors)
  - Resource access (e.g. data access, file system, etc.)
- Default code
- Elevated privileges
- Unauthenticated access
- Networking
- Language-specific issues,
- e.g. for C/C++: buffer overflows (stack/heap), integer overflows, format strings, dynamic LoadLibrary()s, "banned" APIs, etc.
- or for .NET: InterOp, reflection, dynamic Assembly.Load(), CAS/Full/Partial trust, unsafe code, etc.
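To make the "complex / fragile sections" step above more concrete: one quick way to decide where to spend manual review time is to rank functions by an approximation of cyclomatic complexity. The sketch below is a rough, hypothetical prioritizer using Python's `ast` module (dedicated tools such as radon do this properly); it simply counts decision points per function.

```python
import ast
import sys

# Rough cyclomatic-complexity estimate: 1 + number of decision points.
# Only an approximation, meant to rank functions for manual review.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.IfExp, ast.Assert)

def complexity(func_node):
    score = 1
    for node in ast.walk(func_node):
        if isinstance(node, DECISION_NODES):
            score += 1
        elif isinstance(node, ast.BoolOp):
            # each extra 'and'/'or' operand adds a branch
            score += len(node.values) - 1
    return score

def rank_functions(source_path):
    with open(source_path) as f:
        tree = ast.parse(f.read(), filename=source_path)
    funcs = [n for n in ast.walk(tree)
             if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
    return sorted(((complexity(f), f.name) for f in funcs), reverse=True)

if __name__ == "__main__":
    for score, name in rank_functions(sys.argv[1]):
        print(f"{score:3d}  {name}")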
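For the input-validation and output-encoding items, the pattern the reviewer should look for is an allowlist check as data enters a trust boundary and context-appropriate encoding as it leaves. A minimal, hypothetical Python illustration (the username rule and function names are assumptions for the example):

```python
import html
import re

# Allowlist validation at the trust boundary: accept only what is expected.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_username(raw):
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

def render_greeting(raw_username):
    name = validate_username(raw_username)
    # Encode for the output context (HTML here) even after validation,
    # so data that bypassed validation cannot break out of the markup.
    return "<p>Hello, {}!</p>".format(html.escape(name))
```

During review, any path where untrusted data reaches output without passing through both steps is worth flagging.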
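And as an example of why security mechanisms deserve line-by-line attention rather than a quick scan: the two token checks below behave identically in functional tests, but the first compares with `==` and can leak information through response timing, while the second uses a constant-time comparison. This is a hypothetical Python sketch of the kind of subtle defect a focused review is meant to catch.

```python
import hmac

def check_token_naive(supplied, expected):
    # Short-circuits on the first differing byte; response time can hint
    # at how much of the token an attacker has guessed correctly.
    return supplied == expected

def check_token_constant_time(supplied, expected):
    # hmac.compare_digest compares the full length regardless of mismatches.
    return hmac.compare_digest(supplied, expected)
```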
I usually start with a checklist, say the OWASP Top 10 or the CERT C coding guidelines (or the section in my own book!). I get the team to evaluate the code against the checklist, and also to do undirected testing and review. "Popular" issues from the open review get added to the checklist, replacing non-issues that were on there originally.
In addition, static analysis tools are used where available.
The benefit of this approach is also its biggest drawback: issues from the checklist are easily spotted, but they are often the only problems found, since creative bug-hunting is not easy.