AI Reviewer is a tool that performs fully automated code reviews of large C++ projects by detecting violations of the well-known S.O.L.I.D. principles of object-oriented design, formulated by Robert Martin in the early 2000s.
Our goal in putting together AI Reviewer’s analysis mix was to cover the SOLID principles as completely as possible. You’ll therefore find that some of the analyses correspond more or less precisely to well-known code smells (e.g. God Class, Refused Bequest, Feature Envy), while others are defined in a top-down fashion, with a specific design principle in mind.
In addition to detecting violations of the SOLID design principles, AI Reviewer can also compute and report dozens of code metrics. These cover all of the important aspects of object-oriented design (size, complexity, coupling, cohesion, inheritance) and apply at a wide range of granularities (method/function, type/class, file, folder, project).
A versatile rule syntax allows you to fine-tune the scope of the analyses, as well as to filter the generated output based on the measurement results themselves.
Finally, AI Reviewer exports its analysis findings into an XML-based format, and its measurement results into XML or CSV. This allows for easy import, processing, and reporting through further tools.
How does it work?
AI Reviewer combines static code analysis with heuristic techniques to perform its investigation and establish its findings. Below is a high-level overview of AI Reviewer’s architecture:
Core Analysis Model (CAM): a detailed, abstract representation of the program source code being analyzed by AI Reviewer. The CAM is language-neutral to a large extent and can accommodate both object-oriented and procedural constructs. It represents, and relates to one another, all kinds of program entities, from libraries, classes and packages down to abstract representations of individual statements, declarations and references. All metrics and analyses are implemented on top of the CAM.
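To make the idea of such a model concrete, here is a minimal sketch of a CAM-like entity tree. All class and field names are illustrative assumptions, not AI Reviewer’s actual API; the point is the uniform, language-neutral hierarchy from coarse entities down to fine-grained ones:

```python
from dataclasses import dataclass, field

@dataclass
class CamEntity:
    """Base node of a hypothetical CAM-like model (illustrative only)."""
    name: str
    children: list = field(default_factory=list)
    results: dict = field(default_factory=dict)  # metric/analysis annotations

    def add(self, child):
        """Attach a child entity and return it, for fluent model building."""
        self.children.append(child)
        return child

# Language-neutral specializations, from coarse to fine granularity.
class Project(CamEntity): pass
class Package(CamEntity): pass
class Class(CamEntity): pass
class Method(CamEntity): pass

# Build a tiny model: one project containing one class with two methods.
project = Project("demo")
pkg = project.add(Package("core"))
cls = pkg.add(Class("Engine"))
cls.add(Method("start"))
cls.add(Method("stop"))
```

Because every entity shares the same base shape, metrics and analyses can be written once against `CamEntity` and applied at any granularity.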
Analysis Engine (AE): the component that traverses the CAM and invokes the appropriate analyses and metrics based on the type of model entity it encounters. The AE keeps a registry of all available analysis and metric implementations, and annotates/enriches the CAM with the computed analysis and measurement results.
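The registry-plus-traversal idea described above can be sketched as follows. The node shape and registration interface are assumptions made for illustration, not AI Reviewer’s real engine:

```python
class Node:
    """Minimal stand-in for a CAM entity (illustrative assumption)."""
    def __init__(self, kind, name, children=()):
        self.kind, self.name = kind, name
        self.children = list(children)
        self.results = {}  # filled in by the engine

class AnalysisEngine:
    def __init__(self):
        self._registry = {}  # entity kind -> list of (analysis name, fn)

    def register(self, kind, name, fn):
        """Register an analysis/metric for a given kind of model entity."""
        self._registry.setdefault(kind, []).append((name, fn))

    def run(self, node):
        """Traverse the model, enriching each node with computed results."""
        for name, fn in self._registry.get(node.kind, []):
            node.results[name] = fn(node)
        for child in node.children:
            self.run(child)

engine = AnalysisEngine()
# A trivial metric: NOM (number of methods), applied only to class nodes.
engine.register("class", "NOM", lambda c: len(c.children))

model = Node("class", "Engine", [Node("method", "start"), Node("method", "stop")])
engine.run(model)
# model.results["NOM"] == 2
```

Dispatching on entity type keeps each metric or analysis small: it only ever sees the kind of node it was registered for.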
Model Extractors: the components responsible for parsing source code in a particular language (such as C++), mapping source entities to CAM abstractions, and constructing the CAM.
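As a toy illustration of the extraction step, the sketch below maps declarations in a C++ snippet to (kind, name) pairs that could seed a model. Real extraction requires a full C++ front end; the regexes here are a deliberate oversimplification for demonstration only:

```python
import re

# A tiny C++ fragment standing in for real project source.
source = """
class Engine {
public:
    void start();
    void stop();
};
"""

# Map class and method declarations to language-neutral (kind, name) entities.
# NOTE: naive regexes, nowhere near a real C++ parser.
entities = [("class", name) for name in re.findall(r"\bclass\s+(\w+)", source)]
entities += [("method", name) for name in re.findall(r"\bvoid\s+(\w+)\s*\(", source)]
# entities == [("class", "Engine"), ("method", "start"), ("method", "stop")]
```

Once source entities are expressed in this neutral form, everything downstream (metrics, analyses, exporters) no longer needs to know the input language.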
Metrics Suite: a collection of software metrics implemented on top of the CAM. Since the CAM is largely language-neutral, most metric implementations are also language-neutral and directly reusable for any object-oriented language.
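A language-neutral metric over such a model can be a plain function of the model’s shape. The sketch below computes WMC (Weighted Methods per Class, the sum of per-method cyclomatic complexities); the node classes are illustrative assumptions:

```python
class Method:
    """Model method carrying a precomputed cyclomatic complexity."""
    def __init__(self, name, cyclomatic):
        self.name, self.cyclomatic = name, cyclomatic

class Class:
    """Model class owning a list of methods."""
    def __init__(self, name, methods):
        self.name, self.methods = name, methods

def wmc(cls):
    """WMC: sum of the cyclomatic complexities of a class's methods."""
    return sum(m.cyclomatic for m in cls.methods)

engine = Class("Engine", [Method("start", 3), Method("stop", 1), Method("tick", 5)])
# wmc(engine) == 9
```

Nothing in the metric refers to C++ specifics, which is what makes it reusable across any object-oriented language the extractors support.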
Analysis Suite: a collection of heuristics-based detection rules that identify various object-oriented code smells and violations of object-oriented design principles and rules. They are implemented on top of the CAM and are therefore highly reusable across languages.
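Such detection rules typically combine several metrics with thresholds. The sketch below is in the spirit of Marinescu’s well-known God Class detection strategy (high complexity, heavy access to foreign data, low cohesion); the exact thresholds and metric names are assumptions, not AI Reviewer’s actual rule:

```python
def is_god_class(metrics):
    """Heuristic God Class check over a class's computed metrics.

    WMC  - Weighted Methods per Class (functional complexity)
    ATFD - Access To Foreign Data (data members used from other classes)
    TCC  - Tight Class Cohesion (0..1, higher is more cohesive)
    """
    return (metrics["WMC"] >= 47      # very high complexity
            and metrics["ATFD"] > 5   # grabs many foreign data members
            and metrics["TCC"] < 0.33)  # low cohesion

suspect = {"WMC": 60, "ATFD": 9, "TCC": 0.10}
benign = {"WMC": 12, "ATFD": 1, "TCC": 0.80}
```

Because the rule consumes only metric values annotated onto the model, the same predicate works unchanged for any language the CAM can represent.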
Exporters: components that export information from the CAM (primarily findings and measurement results) into various formats, such as XML, CSV, or a database.
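An exporter is essentially a flattening of annotated model data into a serial format. The sketch below writes hypothetical findings to CSV; the field names are illustrative assumptions and not AI Reviewer’s actual export schema:

```python
import csv
import io

# Findings as they might be gathered from an annotated model (hypothetical).
findings = [
    {"entity": "core::Engine", "rule": "GodClass", "severity": "high"},
    {"entity": "core::Engine::tick", "rule": "LongMethod", "severity": "medium"},
]

def export_csv(findings, stream):
    """Write findings to the given text stream as CSV with a header row."""
    writer = csv.DictWriter(stream, fieldnames=["entity", "rule", "severity"])
    writer.writeheader()
    writer.writerows(findings)

buf = io.StringIO()
export_csv(findings, buf)
```

A flat, self-describing format like this is what makes the downstream import, processing, and reporting mentioned earlier straightforward.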
Integration components: software components responsible for integrating AI Reviewer with third-party systems, such as Jenkins. They usually consume the reports generated by exporters, but a more direct integration (e.g. with the CAM) is also possible.