Abstract
Object recognition and camera pose estimation play an important role in many computer vision applications such as robotics, augmented reality, structure from motion, and 3D object localization. This study aims to recognize objects using two different methods (SIFT/SURF) and to estimate the camera pose from two imaging planes. The camera is calibrated to capture a source image that contains the object with known coordinates. The descriptors of the object in the source image are extracted by the SIFT/SURF algorithm and compared with the descriptors of each frame captured from a video stream. These descriptors are matched by the fast nearest-neighbors algorithm. The pairs of good matches between the source and target images are used to compute the essential matrix, which is decomposed to recover the camera pose. The experiments show that SIFT achieves better pattern-recognition performance, but it requires more processing time than SURF. SIFT is therefore suitable for applications that need higher accuracy, while SURF improves computational time for real-time applications.
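The pipeline summarized above (keypoint description with SIFT/SURF, fast nearest-neighbor matching, essential-matrix estimation, and decomposition into the camera pose) can be sketched with OpenCV. The following is a minimal illustration under assumed inputs, not the authors' implementation: the intrinsic matrix K, the file names source_object.png and video_frame.png, and the ratio-test threshold of 0.7 are placeholders.

```python
import cv2
import numpy as np

# Assumed intrinsic matrix from a prior calibration step (placeholder values).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Placeholder inputs: the source image of the object and one frame
# grabbed from the video stream.
src = cv2.imread("source_object.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("video_frame.png", cv2.IMREAD_GRAYSCALE)

# Keypoints and descriptors (SIFT shown; SURF is analogous via
# cv2.xfeatures2d.SURF_create() when the opencv-contrib nonfree modules are available).
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(src, None)
kp2, des2 = sift.detectAndCompute(frame, None)

# Fast approximate nearest-neighbor (FLANN) matching with a ratio test
# to keep only the good descriptor pairs.
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
good = []
for pair in flann.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
        good.append(pair[0])

# Pixel coordinates of the good matches in both images.
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Essential matrix from the matched points, then decomposition into the
# relative rotation R and translation direction t of the camera.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

print("Camera rotation:\n", R)
print("Camera translation (up to scale):\n", t)
```

Swapping SIFT for SURF only changes the detector line; the matching and pose-recovery steps stay the same, which is why the two methods trade accuracy against speed as reported in the abstract.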
Original language | English |
---|---|
Publication status | Published - 2017 |
Externally published | Yes |
Event | The 18th International Symposium on Advanced Intelligent Systems - Daegu, Korea, Republic of. Duration: 11 Oct 2017 → 13 Oct 2017. http://isis2017.org/wp-content/uploads/2017/10/01ISISProgram_Final.pdf |
Conference
Conference | The 18th International Symposium on Advanced Intelligent Systems |
---|---|
Country/Territory | Korea, Republic of |
City | Daegu |
Period | 11/10/17 → 13/10/17 |
Internet address | http://isis2017.org/wp-content/uploads/2017/10/01ISISProgram_Final.pdf |