Zheng, Senjing (2022). Automatic identification of mechanical parts for robotic disassembly using deep neural network techniques. Ph.D. thesis, University of Birmingham.
Zheng2022PhD.pdf (Text, Accepted Version; All rights reserved; 3MB)
Abstract
This work addressed the automatic visual identification of mechanical objects from 3D camera scans; it is part of a wider project on automatic disassembly for remanufacturing. The main challenge of the task was the intrinsic uncertainty in the state of end-of-life products, which required a highly robust identification system. The use of point cloud models also entailed significant computational overheads.
The state-of-the-art PointNet deep neural network was chosen as the classifier, owing to its learning capabilities, its suitability for processing 3D models, and its ability to recognise objects irrespective of their pose. To obviate the need to collect a large set of training models, PointNet was trained on examples generated from 3D CAD models and then used on scans of real objects. Different tests were carried out to assess PointNet's ability to deal with imprecise sensor readings and partial views. Owing to pandemic-related access restrictions, it was not possible to collect a sufficiently systematic set of scans of physical objects in the lab. Various tests were therefore carried out using combinations of CAD models of mechanical and everyday objects, primitive geometric shapes, and real scans of everyday objects from popular machine vision benchmarks. The investigation confirmed PointNet's ability to recognise complex mechanical objects and irregular everyday shapes with good accuracy, generalising the results of learning from geometric shapes and CAD models. PointNet's performance was not significantly affected by the use of partial views of the objects, a very common case in industrial applications. PointNet showed some limitations when tasked with recognising noisy scenes, and a practical solution was suggested to minimise this problem.
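The abstract does not give implementation details; the following is a minimal sketch, assuming CAD meshes in a standard exchange format (e.g. STL) and the trimesh library, of how CAD-derived training point clouds with simulated sensor noise and partial views might be produced. The function names and parameter values here are illustrative, not taken from the thesis.

```python
# Hypothetical sketch: generating PointNet-style training data from CAD models.
import numpy as np
import trimesh

def sample_point_cloud(mesh_path, n_points=1024):
    """Sample a fixed-size point cloud from a CAD mesh surface."""
    mesh = trimesh.load(mesh_path)
    points, _ = trimesh.sample.sample_surface(mesh, n_points)
    # Centre and scale to the unit sphere, as PointNet pipelines typically do.
    points = points - points.mean(axis=0)
    points = points / np.linalg.norm(points, axis=1).max()
    return points.astype(np.float32)

def add_sensor_noise(points, sigma=0.01):
    """Perturb points with Gaussian noise to mimic imprecise 3D scans."""
    return points + np.random.normal(0.0, sigma, points.shape).astype(np.float32)

def partial_view(points, keep=0.5):
    """Keep only the points facing a random viewpoint (crude occlusion model)."""
    direction = np.random.randn(3)
    direction /= np.linalg.norm(direction)
    scores = points @ direction
    cutoff = np.quantile(scores, 1.0 - keep)
    return points[scores >= cutoff]
```

Because PointNet consumes unordered point sets, clouds produced this way can be fed to the network directly, and the noise and occlusion transforms above give a simple way to probe the robustness properties the abstract describes.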
To reduce the computational complexity of training a deep architecture on large data sets of 3D scenes, a predator-prey coevolutionary scheme was devised. The proposed algorithm evolves subsets of the training set, selecting for these subsets the most difficult examples. The remaining training samples are discarded by the evolutionary procedure, which thus reduces the number of examples presented to the classifier. The experimental results showed that this economy of training samples reduces the execution time of the learning procedure without affecting the neural network's recognition accuracy. This simplification of the learning procedure is of general relevance to the deep learning field, since practical implementations are often hindered by the cost of the training process.
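The thesis's exact algorithm is not reproduced in the abstract; below is a minimal, hypothetical sketch of a predator-prey style loop in which candidate training subsets (the "predators") are ranked by how difficult the classifier finds their examples, and the classifier (the "prey") trains only on the hardest surviving subset. The `model.fit`/`model.loss` interface is an assumed placeholder, and `train_x`/`train_y` are assumed to be NumPy arrays.

```python
# Hypothetical sketch of a predator-prey coevolutionary training loop.
import numpy as np

def coevolve(train_x, train_y, model, subset_size=256,
             n_subsets=8, generations=50, seed=0):
    rng = np.random.default_rng(seed)
    n = len(train_x)
    # Initial predator population: random index subsets of the training set.
    population = [rng.choice(n, subset_size, replace=False)
                  for _ in range(n_subsets)]
    for _ in range(generations):
        # Predator fitness: mean classifier loss on the subset, so harder
        # examples make a fitter predator.
        fitness = [model.loss(train_x[idx], train_y[idx]) for idx in population]
        order = np.argsort(fitness)[::-1]                 # hardest first
        survivors = [population[i] for i in order[: n_subsets // 2]]
        # Train the prey (the classifier) only on the hardest subset; the
        # rest of the training set is effectively discarded this generation.
        model.fit(train_x[survivors[0]], train_y[survivors[0]])
        # Refill the population by mutating survivors: swap ~10% of each
        # subset's indices for fresh examples (duplicates are possible in
        # this simplified mutation).
        children = []
        for idx in survivors:
            child = idx.copy()
            swap = rng.choice(subset_size, subset_size // 10, replace=False)
            child[swap] = rng.choice(n, len(swap), replace=False)
            children.append(child)
        population = survivors + children
    return model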
Type of Work: Thesis (Doctorates > Ph.D.)
Award Type: Doctorates > Ph.D.
Supervisor(s):
Licence: All rights reserved
College/Faculty: Colleges (2008 onwards) > College of Engineering & Physical Sciences
School or Department: School of Engineering, Department of Mechanical Engineering
Funders: Engineering and Physical Sciences Research Council
Subjects: T Technology > TJ Mechanical engineering and machinery; T Technology > TS Manufactures
URI: http://etheses.bham.ac.uk/id/eprint/12464