Development of a Multiple-Camera Tracking System for Accurate Traffic Performance Measurements at Intersections

Final Report

Prepared by:
Hua Tang
Department of Electrical and Computer Engineering
Northland Advanced Transportation Systems Research Laboratories
University of Minnesota Duluth

CTS 13-10

Technical Report Documentation Page

1. Report No.: CTS 13-10
4. Title and Subtitle: Development of a Multiple-Camera Tracking System for Accurate Traffic Performance Measurements at Intersections
5. Report Date: February 2013
7. Author(s): Hua Tang
9. Performing Organization Name and Address: Department of Electrical and Computer Engineering, University of Minnesota Duluth, 1023 University Drive, Duluth, MN 55812
10. Project/Task/Work Unit No.: CTS Project #2012016
12. Sponsoring Organization Name and Address: Intelligent Transportation Systems Institute, Center for Transportation Studies, University of Minnesota, 200 Transportation and Safety Building, 511 Washington Ave. SE, Minneapolis, Minnesota 55455
13. Type of Report and Period Covered: Final Report
15. Supplementary Notes: http://www.its.umn.edu/Publications/ResearchReports/
16. Abstract (Limit: 250 words):

Automatic traffic data collection can significantly reduce labor and cost compared to manual data collection, but it remains one of the challenges in Intelligent Transportation Systems (ITS). To be practically useful, an automatic traffic data collection system must derive traffic data with reasonable accuracy compared to a manual approach. This project presents the development of a multiple-camera tracking system for accurate traffic performance measurements at intersections. The tracking system sets up multiple cameras to record videos of an intersection. Compared to a traditional single-camera tracking system, the multiple-camera system can take advantage of the significantly overlapped views of the same traffic scene provided by the multiple cameras, so that the notorious vehicle occlusion problem is alleviated. Multiple cameras also provide more evidence of the same vehicle, which allows more robust tracking. The developed system has three main processing modules. First, the cameras are calibrated for the traffic scene of interest, using a calibration algorithm developed for multiple cameras at an intersection. Second, the system tracks vehicles across the multiple videos using image processing techniques and tracking algorithms. Finally, the resulting vehicle trajectories are analyzed to extract the traffic data of interest, such as vehicle volume, travel time, and rejected and accepted gaps. Practical tests of the developed system focus on vehicle counts, and reasonable accuracy is achieved.

17. Document Analysis/Descriptors: Intelligent transportation systems, Camera-based vision systems, Cameras, Multiple cameras, Tracking systems, Traffic data, Software, Implementation
18. Availability Statement: No restrictions. Document available from: National Technical Information Services, Alexandria, Virginia 22312
19. Security Class (this report): Unclassified
20. Security Class (this page): Unclassified
21. No. of Pages: 76
Development of a Multiple-Camera Tracking System for Accurate Traffic Performance Measurements at Intersections

Final Report

Prepared by:
Hua Tang
Department of Electrical and Computer Engineering
Northland Advanced Transportation Systems Research Laboratories
University of Minnesota Duluth

February 2013

Published by:
Intelligent Transportation Systems Institute
Center for Transportation Studies
University of Minnesota
200 Transportation and Safety Building
511 Washington Ave. S.E.
Minneapolis, Minnesota 55455

The contents of this report reflect the views of the authors, who are responsible for the facts and the accuracy of the information presented herein. This document is disseminated under the sponsorship of the Department of Transportation University Transportation Centers Program, in the interest of information exchange. The U.S. Government assumes no liability for the contents or use thereof. This report does not necessarily reflect the official views or policies of the University of Minnesota. The authors, the University of Minnesota, and the U.S. Government do not endorse products or manufacturers. Any trade or manufacturers' names that may appear herein do so solely because they are considered essential to this report.

Acknowledgments

The author wishes to acknowledge those who made this research possible. The study was funded by the Intelligent Transportation Systems (ITS) Institute, a program of the University of Minnesota's Center for Transportation Studies (CTS). Financial support was provided by the United States Department of Transportation's Research and Innovative Technologies Administration (RITA). The project was also supported by the Northland Advanced Transportation Systems Research Laboratories (NATSRL), a cooperative research program of the Minnesota Department of Transportation, the ITS Institute, and the University of Minnesota Duluth Swenson College of Science and Engineering.

The author would like to thank St. Louis County for its special support in setting up a local test laboratory at the intersections of W Arrowhead Rd and Sawyer Avenue and of W Arrowhead Rd and Arlington Avenue in Duluth, MN. The author also thanks Mr. Victor Lund for providing valuable information on the test laboratory, and Dr. Eil Kwon, director of NATSRL at the University of Minnesota Duluth, for very valuable discussions and comments.

Table of Contents

Chapter 1. Introduction ........ 1
1.1 Traffic Data Collection and Different Methods ........ 1
1.2 Camera-Based Vision System ........ 1
1.3 Overview of a Multiple-Camera Tracking System ........ 3
1.4 The Proposed Multiple-Camera Tracking System ........ 7
1.5 Organization of the Report ........ 7
Chapter 2. Camera Calibration of Multiple Cameras ........ 9
2.1 Introduction and Previous Work ........ 9
2.2 An Extended Method for Camera Calibration of Multiple Cameras ........ 15
2.3 Experiment Results ........ 22
2.4 Summary and Conclusion ........ 25
Chapter 3. Multiple-Camera-Based Vehicle Tracking System ........ 27
3.1 System Overview ........ 27
3.2 Vehicle Segmentation ........ 29
3.2.1 Existing Approaches ........ 30
3.2.2 The Mixture-of-Gaussian Approach ........ 31
3.3 Vehicle Tracking ........ 35
3.3.1 Image Processing for Object Refinement ........ 35
3.3.2 Image to World Coordinate Transformation ........ 37
3.3.3 Object Extraction ........ 39
3.3.4 Object Overlap Check ........ 40
3.3.5 Create Potential Vehicles ........ 41
3.3.6 Check Relations of Potential Vehicles with Current Vehicles ........ 43
3.3.7 Update Current Vehicles ........ 45
3.4 Vehicle Trajectories ........ 50
3.5 Summary ........ 53
Chapter 4. Traffic Data Collection ........ 55
4.1 Vehicle Count and Travel Time ........ 55
4.2 Accepted and Rejected Gaps ........ 56
4.3 Experiment Results ........ 57
4.4 Summary ........ 59
Chapter 5. Summary and Conclusions ........ 61
References ........ 63

List of Figures

Figure 1.1: Processing flow of a multiple-camera tracking system (result from each step is shown in italics). ........ 5
Figure 2.1: Side view and top view of the camera setup for roadway scenes and projection of real-world traffic lanes in the image coordinate. ........ 11
Figure 2.2: Bird's eye view of the W Arrowhead and Sawyer Avenue Intersection. ........ 15
Figure 2.3: An earth view of the W Arrowhead and Sawyer Avenue Intersection. ........ 16
Figure 2.4: Two images of an intersection traffic scene from two cameras (images are captured from local cameras). ........ 16
Figure 2.5: An enlarged view of the W Arrowhead and Sawyer Avenue Intersection from the first camera at the NE corner. ........ 17
Figure 2.6: Bird's eye view of the W Arrowhead and Sawyer Avenue Intersection. ........ 19
Figure 2.7: An enlarged view of the W Arrowhead and Sawyer Avenue Intersection from the second camera at the SW corner. ........ 20
Figure 3.1: Processing flow of the designed vehicle tracking module. ........ 29
Figure 3.2: Two captured images from two cameras and their segmented outputs. ........ 35
Figure 3.3: The transformed objects in the world coordinate. ........ 39
Figure 3.4: The extracted objects from Figure 3.2. ........ 40
Figure 3.5: The created potential vehicles from Figure 3.4 and Table 3.3 and their characterization. ........ 42
Figure 3.6: The characterization of a current vehicle. ........ 45
Figure 3.7: A sample vehicle trajectory and its states in the software. ........ 51
Figure 3.8: (a) Vehicle trajectories overlaid in both images from the two cameras; (b) five vehicle trajectories in red, blue, green, yellow, and black (the red one does not seem right). ........ 52
Figure 4.1: Illustration of all straight-through and turning vehicles. ........ 56

List of Tables

Table 1.1: Traffic output data and communication bandwidth of available sensors [3]. ........ 2
Table 1.2: Equipment cost of some detectors [4]. ........ 2
Table 2.1: A comparison of previous methods for camera calibration in ITS applications. ........ 14
Table 2.2: Comparison of estimated distances from the calibration results to ground-truth distances (coordinates of A=(168,95), B=(126,110), C=(109,90), D=(148,78) in Figure 2.5 and A=(161,125), B=(198,106), C=(228,124), D=(193,146) in Figure 2.7). ........ 24
Table 3.1: Pseudo code for the MoG algorithm for vehicle segmentation. ........ 33
Table 3.2: Pseudo code for image to world coordinate transformation. ........ 38
Table 3.3: Overlaps between objects from two cameras. ........ 41
Table 3.4: Overlap relations between potential vehicles and current vehicles (left: overlap percentage matrix; middle: strong match matrix; right: loose match matrix). ........ 44
Table 3.5: Pseudo code for the tracking algorithm. ........ 48
Table 4.1: Vehicle count for each direction of traffic for a 35-minute video. ........ 57
Executive Summary

In Intelligent Transportation Systems (ITS), traffic performance measurement, or traffic data collection, has been a long-standing challenge. The collected traffic data are necessary for traffic simulation and modeling, performance evaluation of the traffic scene, and eventually the (re)design of a traffic scene. Video-based traffic data collection has become popular in the past twenty years thanks to advancements in computing power, camera/vision technology, and image/video processing algorithms. However, traffic engineers have traditionally collected data manually, by human inspection of the recorded video, which is laborious and costly. Therefore, automatic traffic data collection has become an important research topic in ITS. Automatic traffic data collection for highways is now available, and commercial tools have been developed. However, because intersections and roundabouts have much more complicated traffic scenes and traffic behavior, automatic traffic data collection for them has lagged behind.

This project develops a multiple-camera tracking system for automatic, accurate traffic performance measurement at roundabouts and intersections. A traditional tracking system employs a single camera for traffic performance measurements. Compared to the traditional single-camera system, a multiple-camera one has two major advantages. First, multiple cameras provide overlapped coverage of the same traffic scene from different angles, which can significantly alleviate the vehicle occlusion problem. Second, multiple cameras provide multiple views of the same vehicle, giving additional evidence of the vehicle, which allows more robust vehicle tracking and hence improves tracking accuracy.

The proposed multiple-camera tracking system consists of three processing modules: camera calibration, vehicle tracking, and data mining. For calibration of multiple cameras, the main difference from the single-camera case is that a common world coordinate system must be selected for all cameras. In this project, we extend the traditional vanishing-point method for single-camera calibration to accommodate multiple cameras. The key to the extended method is the setup of a square or rectangular pattern so that parallel lines are available from which the traditional method can derive vanishing points. Distance estimates based on the calibration results are shown to be more than 90% accurate on average.

Once the cameras are calibrated, the next processing module in the developed traffic data collection system processes the videos for vehicle tracking. Compared to vehicle tracking with a single video from a single camera, vehicle tracking with multiple cameras has some similarities but also significant differences. In our vehicle tracking module, the first step is to segment objects using the Mixture-of-Gaussian (MoG) algorithm. In MoG, each pixel is modeled probabilistically with a mixture of several Gaussian distributions (typically 3 to 5), and each background sample is used to update the Gaussian distributions, so that the history of pixel variations is stored in the MoG model. Recent lighting and weather changes are reflected by updating the means and variances of the Gaussians. Hence, the sensitivity and accuracy of vehicle detection are greatly improved compared to traditional techniques.
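To make the per-pixel model concrete, the following is a minimal C sketch of a Mixture-of-Gaussians update for a single grayscale pixel. The structure names, the 2.5-sigma match test, the learning-rate handling, and the foreground decision are illustrative assumptions for this summary, not the report's actual implementation (the report's pseudo code appears in Table 3.1).

```c
/* Minimal sketch of a Mixture-of-Gaussians update for one grayscale pixel.
 * K, alpha, the 2.5-sigma match test, and the initial variance are
 * illustrative assumptions, not values from the report. */
#define K 3                      /* Gaussians per pixel (report: 3 to 5) */

typedef struct { double weight, mean, var; } Gaussian;
typedef struct { Gaussian g[K]; } PixelModel;

/* Update the model with sample x; return 1 if the pixel looks like
 * foreground (no existing Gaussian matched). */
int mog_update(PixelModel *m, double x, double alpha)
{
    int matched = -1;
    for (int k = 0; k < K; k++) {
        double d = x - m->g[k].mean;
        if (d * d < 6.25 * m->g[k].var) {  /* within 2.5 std deviations */
            matched = k;
            break;
        }
    }
    for (int k = 0; k < K; k++) {
        if (k == matched) {
            double d = x - m->g[k].mean;
            m->g[k].weight += alpha * (1.0 - m->g[k].weight);
            m->g[k].mean   += alpha * d;           /* simplified rate */
            m->g[k].var    += alpha * (d * d - m->g[k].var);
        } else {
            m->g[k].weight *= 1.0 - alpha;         /* decay the rest */
        }
    }
    if (matched < 0) {
        int w = 0;                /* replace the lowest-weight Gaussian */
        for (int k = 1; k < K; k++)
            if (m->g[k].weight < m->g[w].weight) w = k;
        m->g[w].weight = alpha;
        m->g[w].mean   = x;
        m->g[w].var    = 900.0;   /* wide initial variance */
        return 1;
    }
    /* A full MoG would rank components by weight/sigma and treat only the
     * top-ranked ones as background; this sketch treats any match as such. */
    return 0;
}
```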
Another important advantage of MoG is its robustness against camera shake, a practical problem that affects tracking accuracy. With segmented objects, we perform image processing, such as noise filtering, to refine the objects. Then all potential objects are transformed from the image coordinate to the world coordinate, followed by object extraction and validation. Subsequently, the extracted objects from the multiple cameras are checked for correspondence based on overlap, since two objects from two different cameras should ideally overlap exactly if they are the same physical object (see the overlap sketch below). This correspondence step does not arise in a single-camera system but is a very important one for a multiple-camera system.

Once objects from different cameras are corresponded, they are used to create potential vehicles for the current image frame, along with their states (which describe the vehicle's 3D details, including position and speed in the world coordinate, shape, size, etc.). The next step is to associate the potential vehicles created for the current image frame with the existing vehicles tracked up to the previous image frame. In our system design, this association is again based on overlap between potential vehicles and existing vehicles. To facilitate the association, we use Kalman filtering to predict the states of existing vehicles in the current image frame from the previous one (see the prediction sketch below). We consider mainly three types of associations between potential vehicles and existing vehicles: one potential vehicle to one existing vehicle, one potential vehicle to multiple existing vehicles, and multiple potential vehicles to one existing vehicle. Each type of association may involve multiple scenarios, and each scenario requires a different scheme for updating existing vehicles using potential vehicles, which is the last step of the vehicle tracking module. We did not consider the case of multiple potential vehicles associated with multiple existing vehicles, in order to keep the tracking logic simple. Finally, the existing vehicles that had been tracked up to the previous frame are tracked into the current image frame. The vehicle tracking module outputs vehicle trajectories for all vehicles it has detected and tracked. The accuracy of these trajectories ultimately determines the accuracy of the traffic data.

Given the vehicle trajectories from the vehicle tracking module, the next processing module of the traffic performance measurement system mines the trajectories to automatically extract the traffic data of interest, including vehicle volume, vehicle speed, acceleration/deceleration behavior, accepted gaps, rejected gaps, follow-up time, lane changes, and so on. In our system design, we focus mainly on vehicle counts for all directions of traffic at an intersection. To find a vehicle's moving direction, we detect how its x and y coordinates in the world coordinate change throughout its lifetime in the camera views (see the direction sketch below).

The developed traffic performance measurement system is currently implemented in a combination of C and MATLAB, running on personal computers. A local test laboratory is set up at the intersection of W Arrowhead Rd and Sawyer Avenue and the intersection of W Arrowhead Rd and Arlington Avenue in Duluth, MN. Two cameras are set up at each intersection, giving two videos of the same intersection from two different sites.
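The cross-camera correspondence check can be illustrated with a small C sketch: each detection is reduced to an axis-aligned bounding box on the ground plane in the common world coordinate, and two detections from different cameras are declared the same vehicle when they overlap strongly. The box representation, the intersection-over-smaller-area measure, and the 0.5 threshold are assumptions for illustration, not the report's exact criterion.

```c
/* Sketch of the cross-camera overlap test: each detection becomes an
 * axis-aligned box on the ground plane in the shared world coordinate.
 * The overlap measure and the 0.5 threshold are assumed for illustration. */
typedef struct { double xmin, ymin, xmax, ymax; } WorldBox;

static double box_area(WorldBox b)
{
    return (b.xmax - b.xmin) * (b.ymax - b.ymin);
}

/* Intersection area divided by the smaller box's area (0 if disjoint). */
static double overlap_ratio(WorldBox a, WorldBox b)
{
    double w = (a.xmax < b.xmax ? a.xmax : b.xmax)
             - (a.xmin > b.xmin ? a.xmin : b.xmin);
    double h = (a.ymax < b.ymax ? a.ymax : b.ymax)
             - (a.ymin > b.ymin ? a.ymin : b.ymin);
    if (w <= 0.0 || h <= 0.0)
        return 0.0;
    double smaller = box_area(a) < box_area(b) ? box_area(a) : box_area(b);
    return (w * h) / smaller;
}

/* Two detections from different cameras are treated as the same vehicle
 * when their world-coordinate boxes overlap strongly. */
int same_vehicle(WorldBox cam1_obj, WorldBox cam2_obj)
{
    return overlap_ratio(cam1_obj, cam2_obj) > 0.5;  /* assumed threshold */
}
```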
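The prediction step that supports the association can be sketched as the predict half of a Kalman filter under a constant-velocity model over the state (x, y, vx, vy). The state layout and the isotropic process noise q are simplifying assumptions; the report's tracker also carries shape and size in the vehicle state, and the measurement-update step is omitted here.

```c
/* Sketch of the Kalman prediction step under a constant-velocity model.
 * State layout (x, y, vx, vy) and noise q are simplifying assumptions. */
#include <string.h>

typedef struct {
    double x[4];     /* state mean: x, y, vx, vy in world coordinates */
    double P[4][4];  /* state covariance */
} KalmanState;

void kalman_predict(KalmanState *s, double dt, double q)
{
    /* Mean: positions advance by velocity; velocities stay constant. */
    s->x[0] += s->x[2] * dt;
    s->x[1] += s->x[3] * dt;

    /* Covariance: P = F P F^T + Q, with F the constant-velocity transition. */
    double F[4][4] = {{1, 0, dt, 0}, {0, 1, 0, dt}, {0, 0, 1, 0}, {0, 0, 0, 1}};
    double FP[4][4] = {{0}}, FPFt[4][4] = {{0}};
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            for (int k = 0; k < 4; k++)
                FP[i][j] += F[i][k] * s->P[k][j];
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            for (int k = 0; k < 4; k++)
                FPFt[i][j] += FP[i][k] * F[j][k];       /* times F^T */
    memcpy(s->P, FPFt, sizeof FPFt);
    for (int i = 0; i < 4; i++)
        s->P[i][i] += q;                                /* process noise */
}
```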
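Finally, the direction extraction from a finished trajectory can be sketched as follows: compare early and late world-coordinate positions to classify the entry and exit headings, from which the turning movement follows. The axis-aligned heading model and the midpoint split are illustrative assumptions; a deployed system would use the site's actual geometry.

```c
/* Sketch of direction extraction from a completed trajectory: classify the
 * entry and exit headings from net x/y displacement in world coordinates.
 * Assumes x runs east and y runs north with roughly axis-aligned legs. */
#include <math.h>

typedef struct { double x, y; } WorldPoint;
typedef enum { EASTBOUND, WESTBOUND, NORTHBOUND, SOUTHBOUND } Heading;

static Heading heading_of(WorldPoint from, WorldPoint to)
{
    double dx = to.x - from.x, dy = to.y - from.y;
    if (fabs(dx) >= fabs(dy))
        return dx >= 0.0 ? EASTBOUND : WESTBOUND;
    return dy >= 0.0 ? NORTHBOUND : SOUTHBOUND;
}

/* Tally one vehicle into a 4x4 movement matrix: entry heading from the
 * first half of the trajectory, exit heading from the second half.
 * Equal headings mean straight-through; unequal ones mean a turn
 * (e.g. counts[EASTBOUND][NORTHBOUND] accumulates left turns). */
void count_movement(const WorldPoint *traj, int n, int counts[4][4])
{
    if (n < 3)
        return;                               /* too short to classify */
    Heading entry = heading_of(traj[0], traj[n / 2]);
    Heading exit_ = heading_of(traj[n / 2], traj[n - 1]);
    counts[entry][exit_]++;
}
```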
We have performed extensive testing of the system using offline videos recorded at the intersections. The automatically extracted traffic data were compared to ground-truth data collected manually by inspecting the videos; the extracted data are reasonably accurate, with up to 90% accuracy, which in our experience is an improvement over a single-camera system. The main factors limiting accuracy are relatively poor vehicle detection in the segmentation step and the neglect of some complex types of vehicle associations in the tracking algorithm.
