VectorCAST/Analytics provides an easy to understand web-based dashboard view of software code quality and test completeness metrics, enabling users to identify trends in a single codebase or compare metrics between multiple codebases.
- Real-time access to quality and testing completeness metrics
- Built-in connectors for all VectorCAST-produced data
- User-defined connectors for third-party data
- Fully customizable dashboard based on organization’s goals
Real-Time Code Quality Metrics
Provides quantifiable data on tests run vs. tests needed, release readiness, risk areas, and hot spot identification.
Technical Debt Identification
Surfaces the key components of technical debt, such as code complexity, comment density, and testing completeness.
Test Case Quality
Reports on the quality of test cases with metrics such as tests with expected values, tests with expected values but no linked requirements, and the number of requirements tested.
Customizable Metrics and Dashboards
Allows end users to customize calculated metrics and to tailor data presentation using a variety of built-in graphs and tables.
Extendable Data Connectors
Includes built-in data connectors for all VectorCAST tools and is easily extended to support any third-party data sources.
How it Works
VectorCAST/Analytics provides user-configurable data connectors that allow key metrics such as static analysis errors, code complexity, code coverage, and testing completeness to be captured from VectorCAST or third-party tools. These base metrics can be combined into calculated metrics that identify hot spots in the code, such as functions with high complexity and low coverage. Displaying this information in a heat map view, where code coverage controls the box color and code complexity controls the box size, lets users see at a glance where to invest testing and refactoring resources for the best return on investment. Big red boxes indicate highly complex functions that are poorly tested.
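The hot-spot idea above can be sketched in a few lines: combine per-function complexity and coverage into a single score, then rank functions by it. This is an illustrative sketch only; the function names, sample data, and scoring formula are assumptions, not the actual VectorCAST/Analytics calculation or API.

```python
def hotspot_score(complexity, coverage):
    """Illustrative score: rises with cyclomatic complexity,
    falls as statement coverage (0.0-1.0) improves."""
    return complexity * (1.0 - coverage)

# Hypothetical per-function metrics, as a connector might capture them.
functions = [
    {"name": "parse_frame",  "complexity": 25, "coverage": 0.30},
    {"name": "init_driver",  "complexity": 4,  "coverage": 0.95},
    {"name": "route_packet", "complexity": 18, "coverage": 0.50},
]

# Rank functions: the highest scores correspond to the "big red boxes"
# in the heat map -- complex code with little test coverage.
ranked = sorted(
    functions,
    key=lambda f: hotspot_score(f["complexity"], f["coverage"]),
    reverse=True,
)

for f in ranked:
    score = hotspot_score(f["complexity"], f["coverage"])
    print(f'{f["name"]}: score={score:.1f}')
```

Here `parse_frame` (complexity 25, 30% coverage) would rank first, marking it as the most valuable target for added tests or refactoring.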