Learning-based generative representations for automotive design optimization

Saha, Sneha (2023). Learning-based generative representations for automotive design optimization. University of Birmingham. Ph.D.

Full text: Saha2023PhD.pdf (64MB)

Abstract

In automotive design optimization, engineers intuitively look for representations of CAE models that can be reused across different optimization problems. A suitable compact representation of 3D CAE models enables faster search and optimization of 3D designs. Therefore, to support novice designers in the automotive design process, we envision a cooperative design system (CDS) that learns from the experience embedded in past optimization data and assists the designer in performing an engineering design optimization task. The research in this thesis addresses different aspects that can be combined to form a CDS framework.

First, based on a survey of deep learning techniques, a point cloud variational autoencoder (PC-VAE) is adapted from the literature, extended and evaluated as a shape generative model for design optimization. The performance of the PC-VAE is verified against state-of-the-art architectures. The PC-VAE is capable of generating a continuous, low-dimensional search space for 3D designs, which supports the generation of novel, realistic 3D designs through interpolation and sampling in the latent space. In general, when designing a 3D car, engineers need to consider multiple structural and functional performance criteria. Hence, in a second step, the latent representations of the PC-VAE are evaluated for generating novel designs that satisfy multiple criteria and user preferences. A seeding method is proposed to provide a warm start to the optimization process and improve convergence time. Further, to replace expensive simulations for performance estimation in an optimization task, surrogate models are trained to map the latent representation of each input 3D design to its respective geometric and functional performance measures. However, the performance of the PC-VAE is less consistent due to the additional regularization of the latent space.
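To illustrate how such a latent space can be used (a minimal sketch with stand-in networks, not the thesis implementation), two latent codes of existing designs can be interpolated to propose intermediate shapes, and a small surrogate network can map latent codes directly to performance estimates. The encoder, decoder and surrogate below are placeholder MLPs, and the point-cloud size of 2048 points is an assumption.

# Illustrative sketch only: latent-space interpolation between two designs and a
# surrogate that estimates performance from a latent code, replacing a CAE simulation.
import torch
import torch.nn as nn

LATENT_DIM, POINTS = 128, 2048  # assumed dimensions, not taken from the thesis

# Stand-in encoder/decoder with the interface a trained PC-VAE would expose.
encoder = nn.Sequential(nn.Linear(3 * POINTS, 256), nn.ReLU(), nn.Linear(256, LATENT_DIM))
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(), nn.Linear(256, 3 * POINTS))

def interpolate(z_a: torch.Tensor, z_b: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Linear interpolation in latent space; each row decodes to an intermediate design."""
    alphas = torch.linspace(0.0, 1.0, steps).unsqueeze(1)
    return (1 - alphas) * z_a + alphas * z_b

# Surrogate: maps a latent code to a scalar performance estimate (e.g. a drag proxy).
surrogate = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

# Two flattened point clouds (POINTS x 3 coordinates) stand in for encoded car shapes.
cloud_a, cloud_b = torch.randn(1, 3 * POINTS), torch.randn(1, 3 * POINTS)
z_path = interpolate(encoder(cloud_a), encoder(cloud_b))
new_designs = decoder(z_path)   # decoded intermediate shapes
estimates = surrogate(z_path)   # cheap performance estimates per intermediate design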

Third, to better understand which distinct region of an input 3D design is captured by a particular latent variable of the PC-VAE, a new deep generative model, the Split-AE, is proposed as an extension of the existing autoencoder architecture. The Split-AE learns representations of input 3D point clouds and generates two sets of latent variables for each 3D design. The first set, referred to as content, represents the overall underlying structure of the 3D shape and discriminates it from other semantic shape categories. The second set, referred to as style, represents the shape parts unique to the input 3D shape and allows shapes to be grouped into shape classes. The reconstruction and latent-variable disentanglement properties of the Split-AE are compared with other state-of-the-art architectures. In a series of experiments, it is shown that, for given input shapes, the Split-AE generates content and style variables that provide the flexibility to transfer and combine style features between different shapes. Thus, the Split-AE is able to disentangle features with minimal supervision and helps in generating novel shapes that are modified versions of existing designs.
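A minimal sketch of the content/style idea, assuming stand-in encoders and a joint decoder (none of the names or dimensions below come from the thesis): each shape is encoded into a content code and a style code, and decoding a source shape's content together with another shape's style yields a style-transferred design.

# Illustrative sketch only: two latent groups per shape, a "content" code and a
# "style" code, with stand-in networks so the example runs end to end.
import torch
import torch.nn as nn

CONTENT_DIM, STYLE_DIM, POINTS = 64, 32, 2048  # assumed dimensions

content_enc = nn.Sequential(nn.Linear(3 * POINTS, 256), nn.ReLU(), nn.Linear(256, CONTENT_DIM))
style_enc = nn.Sequential(nn.Linear(3 * POINTS, 256), nn.ReLU(), nn.Linear(256, STYLE_DIM))
decoder = nn.Sequential(nn.Linear(CONTENT_DIM + STYLE_DIM, 256), nn.ReLU(),
                        nn.Linear(256, 3 * POINTS))

def style_transfer(shape_src: torch.Tensor, shape_style: torch.Tensor) -> torch.Tensor:
    """Keep the content of shape_src, borrow the style of shape_style."""
    content = content_enc(shape_src)
    style = style_enc(shape_style)
    return decoder(torch.cat([content, style], dim=-1))

sedan, suv = torch.randn(1, 3 * POINTS), torch.randn(1, 3 * POINTS)  # dummy point clouds
hybrid = style_transfer(sedan, suv)  # sedan structure combined with SUV-like style features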

Lastly, to demonstrate the application of the initially envisioned CDS, two interactive systems were developed to assist designers in exploring design ideas. In the first CDS framework, the latent variables of the PC-VAE are integrated into a graphical user interface, which enables the designer to explore designs while taking into account data-driven knowledge and different performance measures of 3D designs. The second interactive system aims to guide designers towards their design targets: past human experience of performing 3D design modifications is captured and learned by a machine learning model. The trained model then guides (novice) engineers and designers by predicting the next design modification step based on the changes applied so far.
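The guidance idea of the second system can be sketched as a next-step classifier (an illustrative assumption, not the thesis model): a small network consumes the designer's recent modification steps and proposes the next one. The step vocabulary and model below are hypothetical.

# Illustrative sketch only: predict the next design modification from the last few steps.
import torch
import torch.nn as nn

STEP_VOCAB = ["lower_roof", "widen_body", "extend_hood", "raise_trunk"]  # hypothetical actions
HISTORY_LEN, EMBED_DIM = 4, 16

embed = nn.Embedding(len(STEP_VOCAB), EMBED_DIM)
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(HISTORY_LEN * EMBED_DIM, 64), nn.ReLU(),
    nn.Linear(64, len(STEP_VOCAB)),          # logits over possible next steps
)

history = torch.tensor([[0, 2, 1, 1]])       # indices of the last four modifications
logits = model(embed(history))
suggested = STEP_VOCAB[int(logits.argmax())] # next step proposed to the (novice) designer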

Type of Work: Thesis (Doctorates > Ph.D.)
Award Type: Doctorates > Ph.D.
Supervisor(s):
Menzel, Stefan
Minku, Leandro L
Sendhoff, Bernhard
Yao, Xin
Licence: All rights reserved
College/Faculty: Colleges (2008 onwards) > College of Engineering & Physical Sciences
School or Department: School of Computer Science
Funders: European Commission
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
URI: http://etheses.bham.ac.uk/id/eprint/13339
