System testing alone is not enough for organizations to achieve the quality customers demand and to meet market opportunity windows.
- Requirements Specification
- System Design
- Unit Testing
- Integration Testing
- System Testing
VectorCAST/Ada is a dynamic software test solution that automates Ada unit and integration testing, which is necessary for validating safety- and mission-critical embedded systems.
- Moves testing from a manual process to a rigorous engineering discipline
- Can be used on new or legacy applications
- Makes developer testing a streamlined and repeatable technology
- Proven reduction in cost by automating the creation of frameworks to isolate newly developed code
- Enables attention to quality without derailing development time
- Supports Ada 83, Ada 95, and Ada 2005
Ada Unit and Integration Testing
Traditionally, Ada unit and integration testing is performed by developers as the code is built. As individual software components are created, test code is generated to take the place of the external interfaces of the code under test. This test code, typically called a test harness, is made up of drivers to stimulate the functions of the code being tested, and stubs to take the place of dependent functions that are called by the code being tested. Our software automates the creation of stubs and drivers as part of the creation of the test harness, giving developers time to focus on building quality and thorough test cases.
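As a sketch of what the generated harness replaces, the fragment below shows a hand-written driver and stub for a hypothetical unit `Limit_Check` that depends on a package `Sensor_IO`. All names here are illustrative assumptions, not VectorCAST output; the tool generates the equivalent stub and driver code automatically.

```ada
--  Hypothetical dependency of the unit under test.
package Sensor_IO is
   function Read_Raw return Integer;
   procedure Set_Stub_Value (V : Integer);  --  test-only control point
end Sensor_IO;

--  Stub body: stands in for the real sensor driver so the unit under
--  test can execute in isolation.
package body Sensor_IO is
   Stub_Value : Integer := 0;

   function Read_Raw return Integer is
   begin
      return Stub_Value;
   end Read_Raw;

   procedure Set_Stub_Value (V : Integer) is
   begin
      Stub_Value := V;
   end Set_Stub_Value;
end Sensor_IO;

--  Unit under test.
with Sensor_IO;
package Limit_Check is
   function In_Range (Low, High : Integer) return Boolean;
end Limit_Check;

package body Limit_Check is
   function In_Range (Low, High : Integer) return Boolean is
      Value : constant Integer := Sensor_IO.Read_Raw;
   begin
      return Value >= Low and then Value <= High;
   end In_Range;
end Limit_Check;

--  Driver: stimulates the unit under test with controlled inputs.
with Ada.Text_IO;
with Sensor_IO;
with Limit_Check;
procedure Test_Driver is
begin
   Sensor_IO.Set_Stub_Value (50);
   Ada.Text_IO.Put_Line
     ("In_Range (0, 100) => "
      & Boolean'Image (Limit_Check.In_Range (0, 100)));
end Test_Driver;
```

Because the stub exposes a control point (`Set_Stub_Value`), the driver can force the dependency to return any value, which is what makes testing boundary conditions practical without real hardware.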
With VectorCAST/Ada, unit testing can be done natively or on your specific target or target simulator. VectorCAST’s run-time support package, VectorCAST/RSP, is the add-on module that makes executing your test cases on target hardware automatic and seamless. Additionally, tests can be developed in a host environment and re-executed on an embedded target to validate target and cross-compiler performance.
Easy Regression Testing
One of the key benefits of unit test automation for Ada is the ability to rerun the tests as the source code changes to ensure that new errors aren’t introduced. After the test cases are generated, they can be saved in text format and easily stored in your Configuration Management tool. VectorCAST’s command line interface can be invoked from your nightly build to rerun all of the unit and integration tests, allowing you to check for errors introduced during the day. Finding issues immediately after they are introduced greatly reduces debug time later in the lifecycle.
VectorCAST/Ada features include:
- Complete test-harness construction for Ada unit and integration testing – no writing of test code
- Driver and stub code built automatically
- Test execution from GUI or scripts
- Code-coverage analysis specific to military and avionics requirements
- Integrated with IBM® Rational® Rhapsody®, AdaCore GNAT, Green Hills® AdaMULTI™, Atego™ ObjectAda®, DDC-I™ SCORE®, IBM® Rational® DOORS® and other tools
- On-target and simulator test execution
- Code-complexity analysis highlights high-risk code
- Automatic test case generation based on decision paths
- Test execution playback to assist in debugging
- Leverage existing tests to automate regression testing
How to Automate Ada Embedded Software Testing with VectorCAST/Ada
VectorCAST/Ada parses your source code and invokes code generators to automatically create the test code (stubs and drivers) required to construct a complete, executable test harness. After the test harness is constructed, utilities can be used to build and execute test cases, display code coverage, and report static measurements. Test data is kept separate from the test harness, enabling easy automatic regression testing.
Components of the VectorCAST executable harness:
- test driver – the main program that stimulates the functions of the code under test
- source file(s) under test
- complete stubs for dependent functions
- source files for any dependent units that are not stubbed
The test harness is data-driven, meaning the harness reads test data during execution. This approach eliminates the need to compile and link a new executable harness for each new test.
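A data-driven harness can be sketched as follows: the inputs and expected outputs live in an external text file, so adding a test case means editing data, not recompiling the harness. The file name, file format, and the `Double` stand-in for a unit under test are all illustrative assumptions, not the tool's actual conventions.

```ada
--  Minimal sketch of a data-driven harness: test values are read at
--  run time, so new tests need no rebuild of the executable.
with Ada.Text_IO;
with Ada.Integer_Text_IO;

procedure Data_Driven_Harness is
   use Ada.Text_IO;
   use Ada.Integer_Text_IO;

   --  Stand-in for the unit under test (illustrative only).
   function Double (X : Integer) return Integer is
   begin
      return X * 2;
   end Double;

   Data            : File_Type;
   Input, Expected : Integer;
begin
   --  Assumed format: each line holds "input expected".
   Open (Data, In_File, "test_data.txt");
   while not End_Of_File (Data) loop
      Get (Data, Input);
      Get (Data, Expected);
      if Double (Input) = Expected then
         Put_Line ("PASS");
      else
         Put_Line ("FAIL");
      end if;
   end loop;
   Close (Data);
end Data_Driven_Harness;
```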
VectorCAST/Ada also generates code coverage metrics that indicate which areas of the code under test have not yet been exercised. The easy-to-read code coverage viewer pinpoints, down to the individual line of code, where testing remains to be done, and supports the coverage levels specified in industry standards such as DO-178B.
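To illustrate what branch-level coverage reporting surfaces, consider a small hypothetical routine. If the existing tests only exercise the negative and normal cases, a coverage viewer would flag the overspeed branch as untested; the routine and the coverage annotations below are illustrative, not tool output.

```ada
--  Hypothetical routine with three decision outcomes; the comments
--  mark which branches an assumed test suite has actually reached.
function Classify (Speed : Integer) return Character is
begin
   if Speed < 0 then         --  covered: a test passes Speed = -1
      return 'E';            --  error
   elsif Speed > 250 then    --  NOT covered: no test exceeds 250
      return 'O';            --  overspeed
   else                      --  covered: a test passes Speed = 100
      return 'N';            --  normal
   end if;
end Classify;
```

Adding a single test with `Speed = 300` would close the gap, which is exactly the kind of targeted test the coverage viewer points you toward.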