1. Introduction
1.1 Background
Transportation Management Systems (TMS) are composed of a complex, integrated blend of hardware, software, processes, and people performing a range of functions. These functions typically include data acquisition, command and control, data processing and analysis, and communications.
Developing, implementing, operating, and maintaining a TMS is challenging for a variety of reasons. Deploying a TMS requires many different skill sets, ranging from program management (finance, scheduling, human resources, etc.) to specialized software and hardware expertise (operating system device drivers, communications protocol troubleshooting, electrical/electronic engineering, etc.).
Testing is an important part of the deployment of a TMS, and its purpose is twofold. First, testing verifies that what was delivered is what was specified: that the product (system) meets the functional, performance, design, and implementation requirements identified in the procurement specifications. A good testing program therefore requires well-written requirements for both the components and the overall system; without testable requirements, there is no basis for a test program.
Second, testing is about managing risk for both the acquiring agency and the system's vendor/developer/integrator. A test program that evolves from the overarching systems engineering process, if properly structured and administered, helps manage the programmatic and technical risks and helps assure the success of the project. The testing program identifies the point at which the work has been "completed" so that the contract can be closed, the vendor paid, and the system shifted by the agency into the warranty and maintenance phase of the project. Incremental testing is often tied to intermediate milestones and allows the agency to begin using the system for the benefit of the public.
Consider the risks and outcomes of the following two systems:
- The first system required 2,700 new, custom-designed field communication units to allow the central traffic control system to communicate with existing traffic signal controllers. The risk associated with the communication units was very high because once deployed, any subsequent changes would be extremely expensive in both time and money. As in many TMS contracts, the vendor supplied the acceptance test procedure, the specifications defined the functional and performance requirements that could be tested, and the contract terms and conditions required that the test procedure verify all of the functional and performance characteristics. The execution of a rigorous test program, in conjunction with well-defined contract terms and conditions for the testing, led to a successful system that continues to provide reliable operation 15 years later.
On the other hand, deploying systems without adequate requirements can lead to disappointment.
- In the second case, the agency needed to deploy a solution quickly. The requirements were not well defined and, consequently, the installed system met some, but not all, of the acquiring agency's needs. Because the requirements were not well defined, there was no concise way to measure completion. As a result, the project was ultimately terminated, with dissatisfaction on the part of both the agency and the integrator. The agency then decided to replace the system with a new, custom application. For the replacement system, however, the agency adopted a more rigorous systems engineering process, performed an analysis of its business practices, and produced a set of testable requirements. The replacement system was more expensive and took longer to construct, but, when completed, it addressed the functional requirements that evolved from the review of the agency's business practices. Testing assured that the second system met the requirements and resulted in a minimal number of surprises.
In both cases, testing (or its absence) made a critical difference to the ultimate outcome of the program.
1.2 Objectives
The objective of this handbook is to provide direction, guidance, and recommended practices for test planning, test procedures, and test execution for the acquisition, operation, and maintenance of transportation management systems and Intelligent Transportation Systems (ITS) devices.
This handbook is intended for individuals who are responsible for or involved in the planning, design, implementation, operation, maintenance, and evaluation of a TMS for a public agency. The targeted end users are first-level managers and supervisors and technical staff, which may include transportation planners, traffic engineers and technicians, construction and maintenance engineers, and traffic management center (TMC) staff.
The handbook is an introduction to the technical discipline of testing and provides a foundation for understanding the role, terminology, and technical issues encountered while managing a test program. Because it is an introductory guide, other testing resources are referenced for additional information.
The guidance provided herein is best applied early in the project development cycle, before project and acquisition plans are prepared. Many important decisions that affect the testing program are made early in the systems engineering process. Attention to detail when writing and reviewing the requirements, or when developing the plans and budgets for testing, will yield the greatest payoff (or problems) as the project nears completion. Remember that a good testing program is a tool for both the agency and the integrator/supplier: it typically identifies the end of the development phase of the project, establishes the criteria for project acceptance, and marks the start of the warranty period.
1.3 Document Structure
This handbook is an introductory guide to TMS testing. It begins by discussing testing within the systems engineering life-cycle process and its stages in Chapter 2. That discussion introduces the systems engineering process and identifies the sources of the requirements that are the basis for testing and ultimate system acceptance. Chapter 3 provides an overview of the TMS acquisition process, starting with the development of a regional architecture for the TMS. It then discusses system procurement considerations and practices, with emphasis on how the test program affects various phases of the system's life cycle, including post-acceptance operation and maintenance. This material on testing's role in the systems engineering process and the project life cycle sets the stage for testing and puts it into context.
Chapter 4 addresses the basics of testing: testing methods, planning, test development, resources, and execution. The chapter folds an entire technical discipline into a compact presentation; it will not make you an expert, but it will introduce the terminology and many of the concepts used by system vendors. Chapter 5 focuses on planning a project test program. The broad-based technical discussion of testing fundamentals covers material that a TMS project manager must be familiar with to implement the project's testing program and fit it into the overall TMS deployment program. The success of the project depends on the test program that evolves from the systems engineering process; when properly structured and administered, the test program allows programmatic and technical risks to be managed. Understanding the role of testing, its terminology, and its technical issues is key to managing a test program for a TMS deployment project. This material also allows the project manager to discuss the program in detail with the technical experts who may be employed to assist with development and implementation.
Chapters 6 and 7 discuss hardware and software testing, respectively, and provide some real-world examples. Chapter 8 addresses testing at the subsystem and system levels. Chapter 9 provides guidance and recommendations on such topics as the meaning of "shall" and "will" in requirements statements, how to write testable requirements, and pass/fail criteria. The chapter also includes helpful information and suggestions on test reporting, testing timeframes, testing organization independence, testing relevancy and challenges, failure modes and effects, testing myths, and estimating test costs. Chapters 6 through 9 are more pragmatic discussions of testing, presenting lessons learned and offering practical guidance for planning, developing, and conducting testing.
Chapter 10 provides a list of resources for further investigation: websites that address TMS-relevant standards, organizations and associations that provide training in the testing discipline, and organizations involved in establishing standards for ITS. These resources are offered as starting points for further research on testing. The handbook also includes four appendices: Appendix A - Example Verification Cross Reference Matrix, Appendix B - Sample Test Procedure, Appendix C - Sample System Problem/Change Request Form, and Appendix D - Example Application of NTCIP Standards.
Throughout the document, the term "vendor" refers to the supplier, developer, or integrator of the system.