Mason Archival Repository Service

3D Model-Assisted Learning for Object Detection and Pose Estimation

dc.creator Georgios Georgakis
dc.date.accessioned 2022-01-25T19:14:43Z
dc.date.available 2022-01-25T19:14:43Z
dc.date.issued 2020
dc.identifier.uri http://hdl.handle.net/1920/12381
dc.description.abstract The supervised learning paradigm for training Deep Convolutional Neural Networks (DCNNs) rests on the availability of large amounts of manually annotated images, which are necessary for training deep models with millions of parameters. In this thesis, we present novel techniques for reducing the amount of manual annotation required, by generating large object-instance datasets: textured 3D models are composited onto commonly encountered background scenes to synthesize training images. Models trained on this generated data, augmented with real-world annotations, outperform models trained only on real data. Non-textured 3D models are subsequently used for keypoint learning and matching, and for 3D object pose estimation from RGB images. The proposed methods show promising generalization on new and standard benchmark datasets. In the final part of the thesis, we investigate how these perception capabilities can be leveraged and encoded in a spatial map, in order to enable an agent to navigate successfully towards a target object.
dc.title 3D Model-Assisted Learning for Object Detection and Pose Estimation
thesis.degree.level Ph.D.
thesis.degree.discipline Computer Science
thesis.degree.grantor George Mason University
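
To make the compositing step described in the abstract concrete, the following is a minimal illustrative sketch in Python, not the thesis's actual pipeline: it alpha-composites a pre-rendered, textured 3D model (an RGBA image with transparency) onto a real background scene and records the object's bounding box, which is known by construction. The file names, the use of the Pillow library, and the annotation format are all assumptions made for illustration.

    # Illustrative sketch of synthetic training-image generation by
    # compositing: paste a rendered 3D model onto a background scene.
    # Paths and annotation format are hypothetical, not from the thesis.
    import random
    from PIL import Image

    def composite_example(render_path: str, background_path: str):
        """Composite one object render onto a background at a random spot."""
        obj = Image.open(render_path).convert("RGBA")     # rendered 3D model
        bg = Image.open(background_path).convert("RGBA")  # real scene

        # Random paste location keeping the object fully in frame
        # (assumes the render is smaller than the background).
        x = random.randint(0, bg.width - obj.width)
        y = random.randint(0, bg.height - obj.height)

        # The render's alpha channel acts as the compositing mask.
        bg.paste(obj, (x, y), mask=obj)

        # The bounding box is known by construction: no manual annotation.
        bbox = (x, y, x + obj.width, y + obj.height)
        return bg.convert("RGB"), bbox

    if __name__ == "__main__":
        image, bbox = composite_example("render.png", "scene.jpg")
        image.save("synthetic_train_image.jpg")
        print("bounding box:", bbox)

Because the paste location is chosen programmatically, the ground-truth box comes for free; repeating this over many renders and background scenes yields a large annotated object-instance dataset without manual labeling, which is the premise of the annotation-mitigation approach the abstract describes.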

