May 2, 2018
Dear Faculty, Graduate, and Undergraduate Students,
You are cordially invited to my dissertation defense.
Title: Fusion for Object Detection
When: Tuesday, May 22, 2018, at 10:00 AM
Where: Simrall Hall, Room 228 (Conference Room)
Candidate: Pan Wei
Degree: Doctor of Philosophy, Electrical and Computer Engineering
Committee:
Dr. John E. Ball (Major Professor)
Dr. Nicolas H. Younan (Committee Member)
Dr. Derek T. Anderson (Committee Member)
Dr. Christopher J. Archibald (Committee Member)
Abstract:
In a three-dimensional world, to perceive the objects around us we wish not only to classify them but also to know where they are. The task of object detection combines classification and localization: from sensor data, we predict both the object category and the object's location. Because it is not known ahead of time how many objects of interest are in the sensor data or where they are, the output size of an object detector varies, which makes the object detection problem difficult.
In this dissertation, I focus on the task of object detection and use fusion techniques to improve detection accuracy and robustness. Specifically, I propose a method to calculate a measure of conflict. This method does not need external knowledge about the credibility of each source; instead, it uses information from the sources themselves to help assess each source's credibility. I apply the proposed measure of conflict to fuse independent sources of tracking information from multiple stereo cameras.

I also propose a computational intelligence system for more accurate object detection in real time. During testing, the proposed system applies online image augmentation before the detection stage and fuses the detection results afterward. The fusion method is computationally intelligent, based on a dynamic analysis of agreement among the inputs. Compared with other fusion operations such as the average, the median, and Non-Maximum Suppression, the proposed method produces more accurate results in real time.

Finally, I propose a multi-sensor fusion system that incorporates the advantages and mitigates the disadvantages of each sensor type (LiDAR and camera). In general, a camera provides richer texture and color information but cannot work in low visibility, whereas LiDAR provides accurate point positions and works at night or in moderate fog or rain. The results show that, compared with LiDAR or camera detection alone, the fused result can extend the detection range up to 40 meters with increased detection accuracy and robustness.
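To give a concrete sense of the test-time augmentation and fusion idea described in the abstract, the minimal Python sketch below detects on an image and its horizontal flip, maps the flipped boxes back, and merges overlapping boxes. The detector outputs, the flip augmentation, and the confidence-weighted averaging here are simplifying assumptions for illustration only; they are not the dissertation's actual computational-intelligence fusion operator, which weights inputs by a dynamic analysis of their agreement.

# Illustrative sketch: fuse detections from augmented views of one image.
# The detector outputs and the weighting scheme are placeholders.
import numpy as np

def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def unflip_boxes(boxes, image_width):
    """Map boxes detected on a horizontally flipped image back to the
    original coordinate frame."""
    return [[image_width - x2, y1, image_width - x1, y2]
            for x1, y1, x2, y2 in boxes]

def fuse_detections(detections, iou_thresh=0.5):
    """Group boxes from the augmented views by IoU, then replace each
    group with a confidence-weighted average box. `detections` is a
    list of (box, score) pairs already mapped back to the original
    image frame."""
    detections = sorted(detections, key=lambda d: -d[1])
    fused, used = [], [False] * len(detections)
    for i, (box_i, _) in enumerate(detections):
        if used[i]:
            continue
        group = [k for k in range(i, len(detections))
                 if not used[k] and iou(box_i, detections[k][0]) >= iou_thresh]
        for k in group:
            used[k] = True
        boxes = np.array([detections[k][0] for k in group], dtype=float)
        scores = np.array([detections[k][1] for k in group], dtype=float)
        weights = scores / scores.sum()
        fused_box = (boxes * weights[:, None]).sum(axis=0)
        fused.append((fused_box.tolist(), float(scores.mean())))
    return fused

if __name__ == "__main__":
    width = 640
    # Hypothetical detector outputs: one from the original image, one
    # from its horizontal flip (mapped back with unflip_boxes).
    original = [([100, 50, 200, 150], 0.90)]
    flipped = [(b, 0.85) for b in unflip_boxes([[442, 52, 538, 148]], width)]
    print(fuse_detections(original + flipped))

Grouping by IoU plays the role that Non-Maximum Suppression plays in a conventional pipeline, but instead of discarding all but the highest-scoring box in each group, the fused box retains position evidence from every augmented view.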