We do code review at ZenRobotics too. It mostly works, though perhaps not extremely well.
I see code review not working (in general, not just at ZenRobotics) as a symptom: it indicates a problem elsewhere. I've seen at least the following underlying issues reflected in code review:
- Lack of tests: if you don't have good tests, you become very reluctant to change code that you think works. That means you resist the changes others propose, since you fear they'll break your code.
- Code that is difficult to understand (including convoluted designs or architecture): if the code is hard to understand, reviewers will either demand a lot of comments (which may be seen as counterproductive), demand extensive rewriting (without specific guidance on how to rewrite, since they didn't understand your purpose), or give less-than-useful reviews that only look at the surface.
- Style guide violations: both reviewers and authors quickly get annoyed at stylistic comments. At Google the idea was that you internalize the style guide within your first couple of weeks of using a language, and after that you don't need reminding. Code arriving in review with style violations may also mean you lack the tool support to make style checks easy, or that your team-mates don't actually agree on a coding style, which makes the style guide pretty useless.
- Not knowing the components others are working on, or not having time to understand them when reviewing: if no other member of your team understands the stuff you work on, you have a very low bus factor. It could even mean people are starting to create fiefdoms, or that code reuse is very low.
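On the tool-support point above: it can take surprisingly little code to automate the stylistic nitpicks out of review. A toy sketch of what such a check might look like (any real team would reach for an established linter like flake8 or clang-format; the checks and names here are purely illustrative):

```python
# Toy style checker: flags a few mechanical style violations so humans
# don't have to comment on them in review. Purely illustrative; use a
# real linter in practice.
MAX_LEN = 79

def style_violations(source: str, max_len: int = MAX_LEN):
    """Return (line_number, message) pairs for simple style violations."""
    issues = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if len(line) > max_len:
            issues.append((lineno, f"line too long ({len(line)} > {max_len})"))
        if line != line.rstrip():
            issues.append((lineno, "trailing whitespace"))
        if "\t" in line:
            issues.append((lineno, "tab character (use spaces)"))
    return issues

if __name__ == "__main__":
    sample = "def f():\t\n    return 1  \n"
    for lineno, msg in style_violations(sample):
        print(f"line {lineno}: {msg}")
```

Wired into a pre-commit hook or CI gate, something this size already removes a whole class of review comments, which leaves reviewers free to talk about design instead.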
(This doesn't mean I'm certain that code reviews suit every situation. What if you want to quickly iterate on your MVP? What if you want to quickly prototype an idea or a design? Anybody else have an idea?)