A PhD thesis @ NALAG


This is a PhD thesis prepared by a member of the NALAG group or prepared with a (co)promotor from NALAG.

TW 2012_02

Yao Yue
The use of model order reduction in design optimization algorithms
September 6, 2012

Advisor(s): Karl Meerbergen


Abstract

It is nowadays common practice to use CAD (Computer-Aided Design) or EDA (Electronic Design Automation) software tools to simulate and study mechanical vibrations and integrated circuits before they are actually built. This has become possible through a combination of better computer technology, mathematical models and numerical methods, and it greatly increases design efficiency and allows designers to tune the design parameters more carefully. In many fields, such as analogue circuit design, engineers still mostly rely on experience to choose the parameters, and they may need many iterations before they arrive at a satisfactory result. A step forward is (automated) design optimization, i.e., formulating the design objective and the constraints mathematically and using optimization software to locate the optimal values of the design parameters. This thesis focuses not on how to formulate the mathematical models, but on how to solve the optimization problem efficiently when the models are represented by discretized PDEs. In this thesis, we study discretized PDEs in the frequency domain. Their discretization gives rise to large-scale linear systems and therefore to high computational costs, and this cost is multiplied further because the systems have to be solved for many different choices of the design parameters.
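To fix ideas, the type of frequency-domain design problem considered can be written in the following generic form; the second-order (vibration-type) structure, the single output vector and the max-over-frequency objective are assumptions made for this sketch and need not match the thesis's exact formulation.

```latex
\min_{p \in \mathcal{P}} \; f(p) \;=\; \max_{\omega \in [\omega_{\min},\, \omega_{\max}]}
      \bigl| c^{\mathsf{T}} x(\omega, p) \bigr|
\quad \text{subject to} \quad
\bigl( K(p) + \mathrm{i}\omega\, C(p) - \omega^{2} M(p) \bigr)\, x(\omega, p) = b ,
```

where K(p), C(p) and M(p) are large sparse stiffness, damping and mass matrices that depend on the design parameters p. Every evaluation of f then requires solving a large linear system for many frequencies, which is exactly the cost that model order reduction targets.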

One approach to reducing the computational cost is the use of (parametric) model order reduction ((P)MOR) techniques. The idea of (P)MOR is to generate a low-order reduced model whose input-output behavior is similar to that of the high-order original system. It has been successfully applied in many different fields, such as circuit simulation, (vibro-)acoustics and MEMS design.
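As a generic illustration of the projection idea behind (P)MOR (not the specific Krylov-Padé algorithms developed in the thesis), a reduced model can be obtained by projecting the large system matrices onto a low-dimensional subspace. The toy matrices and the simple expansion-point basis below are assumptions made for this sketch.

```python
import numpy as np

# Illustration of projection-based MOR for a frequency-domain system
#   (K + 1j*omega*C - omega**2*M) x = b,   y(omega) = c^H x.
rng = np.random.default_rng(0)
n, k = 200, 10                              # original and reduced dimensions
K = np.diag(np.linspace(1.0, 10.0, n))      # stiffness (toy)
C = 1e-2 * K                                # light proportional damping
M = np.eye(n)                               # mass
b = rng.standard_normal(n)                  # input vector
c = rng.standard_normal(n)                  # output vector

# Basis spanning Krylov-like directions around an expansion frequency s0
s0 = 2.0
A0 = K + 1j * s0 * C - s0**2 * M
vecs = [np.linalg.solve(A0, b)]
for _ in range(k - 1):
    w = np.linalg.solve(A0, M @ vecs[-1])
    vecs.append(w / np.linalg.norm(w))
V, _ = np.linalg.qr(np.column_stack(vecs))  # orthonormal n x k basis

# Galerkin projection: k x k reduced matrices and vectors
Kr, Cr, Mr = (V.conj().T @ X @ V for X in (K, C, M))
br, cr = V.conj().T @ b, V.conj().T @ c

def output(omega, K, C, M, b, c):
    """Transfer-function value y(omega) = c^H (K + i*omega*C - omega^2*M)^{-1} b."""
    return c.conj() @ np.linalg.solve(K + 1j * omega * C - omega**2 * M, b)

# At the expansion point the reduced output reproduces the full one (up to roundoff);
# accuracy gradually degrades away from s0.
print(abs(output(s0, K, C, M, b, c) - output(s0, Kr, Cr, Mr, br, cr)))
```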

The goal of this thesis is to use Krylov-Padé-type model order reduction (MOR) techniques to accelerate design optimization. We show that if the design parameters only have a low-rank contribution to the system matrix, we can use the block Arnoldi process to obtain a parametric reduced model that is valid on the entire parameter space. In general, however, a reduction for all parameters is not practical, especially when there are many design parameters, since their impact on the system matrix is then usually not of low rank. Therefore, it is reasonable to fix a selection of parameters and then to generate a reduced model on a subset of the remaining parameters. We show that the gradient with respect to the design parameters is well approximated by two-sided Krylov-Padé-type reduced models. This means that we can use gradient-based methods, such as quasi-Newton methods, to achieve a superlinear convergence rate. Based on this observation, we develop two frameworks: the MOR Framework and the PMOR Framework. The MOR Framework fixes all design parameters and only reduces with respect to the frequency, while the PMOR Framework takes one additional variable, corresponding to the search direction, into account. Both frameworks embed (P)MOR into existing gradient-based optimization algorithms in order to reduce the cost of the function and gradient evaluations.
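The key observation, that a two-sided (Petrov-Galerkin) reduced model reproduces not only the function value but also the gradient at its expansion point, can be illustrated on a small single-parameter toy problem. The affine parameter dependence A(p) = A0 + p E, the names, and the toy data below are assumptions for this sketch, not the thesis's algorithm.

```python
import numpy as np

# For y(p) = c^T A(p)^{-1} b, the adjoint formula gives
#   dy/dp = -lambda^T (dA/dp) x   with   A x = b,  A^T lambda = c.
rng = np.random.default_rng(1)
n, k = 100, 12
A0 = np.diag(np.linspace(1.0, 5.0, n)) + 0.01 * rng.standard_normal((n, n))
E = 1e-2 * rng.standard_normal((n, n))       # dA/dp (toy low-magnitude perturbation)
b, c = rng.standard_normal(n), rng.standard_normal(n)

def gradient(A0, E, b, c, p):
    """dy/dp of y(p) = c^T (A0 + p*E)^{-1} b via the adjoint formula."""
    A = A0 + p * E
    x = np.linalg.solve(A, b)                # state
    lam = np.linalg.solve(A.T, c)            # adjoint
    return -lam @ (E @ x)

def krylov_basis(apply_op, v0, k):
    """Orthonormal basis of span{v0, Op v0, ..., Op^{k-1} v0}."""
    V = np.empty((v0.size, k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(1, k):
        w = apply_op(V[:, j - 1])
        w -= V[:, :j] @ (V[:, :j].T @ w)     # Gram-Schmidt against previous columns
        V[:, j] = w / np.linalg.norm(w)
    return V

# Two-sided bases: V from the input side (b), W from the output/adjoint side (c)
V = krylov_basis(lambda v: np.linalg.solve(A0, v), np.linalg.solve(A0, b), k)
W = krylov_basis(lambda v: np.linalg.solve(A0.T, v), np.linalg.solve(A0.T, c), k)

p = 0.0                                      # expansion point of the reduced model
g_full = gradient(A0, E, b, c, p)
g_red = gradient(W.T @ A0 @ V, W.T @ E @ V, W.T @ b, V.T @ c, p)
print(g_full, g_red)                         # the reduced gradient matches the full one
```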

Our next step is to develop optimization algorithms that take better advantage of the reduced models that have been built. First, we use the basis obtained by MOR to build an interpolatory reduced model, i.e., a model defined on the entire parameter space based on the basis computed by MOR for a single interpolation point. Since both the function value and the gradient are well approximated at the interpolation point, we expect the interpolatory reduced model to approximate the original model well in a neighborhood of this point when the objective function is smooth. A heuristic error bound for the reduced model is employed to assess its accuracy. The classical theory for optimization problems assumes the first-order condition to hold, which requires exact function and gradient evaluations. However, such accurate surrogate models are difficult to build in practice for design optimization problems due to the high computational cost of the original model. We therefore introduce a relaxed first-order condition that only requires information from the reduced models. We prove that when all iterates satisfy what we call the error-aware sufficient decrease condition, convergence is guaranteed. This leads to a practical algorithm with guaranteed convergence. We develop two variations of this algorithm: the Error-based Trust Region method (ETR) and the Error-based Penalty method (EP). ETR uses the error bound to define a trust region and exploits the reduced model inside this trust region, while EP uses the error bound to construct a penalty term.
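The flavour of the error-aware strategy can be conveyed by a schematic one-dimensional trust-region loop. The toy objective, the heuristic error bound, and all constants below are illustrative assumptions, not the exact ETR or EP algorithms of the thesis.

```python
import numpy as np

# Schematic error-based trust-region (ETR-style) loop: the cheap surrogate m(p)
# is only trusted where its heuristic error bound is small, and a step is
# accepted when it yields an error-aware sufficient decrease.
def f_full(p):                                   # expensive original objective (toy stand-in)
    return (p - 1.3) ** 2 + 0.05 * np.sin(8 * p)

def build_reduced(p0):
    f0 = f_full(p0)
    g0 = 2 * (p0 - 1.3) + 0.4 * np.cos(8 * p0)   # exact gradient of the toy objective
    model = lambda p: f0 + g0 * (p - p0) + (p - p0) ** 2    # cheap surrogate around p0
    err_bound = lambda p: 0.5 * np.abs(p - p0) ** 2         # heuristic error bound
    return model, err_bound

p, delta = 0.0, 0.5
for _ in range(20):
    model, err_bound = build_reduced(p)
    cand = p + np.linspace(-delta, delta, 201)              # candidates in the trust region
    cand = cand[err_bound(cand) < 1e-2]                     # keep only points where the model is trusted
    p_new = cand[np.argmin(model(cand))]                    # minimize the surrogate there
    predicted = model(p) - model(p_new)
    actual = f_full(p) - f_full(p_new)                      # one expensive evaluation per iteration
    if predicted > 0 and actual >= 0.1 * predicted:         # error-aware sufficient decrease
        p, delta = p_new, min(2 * delta, 1.0)               # accept; possibly enlarge the region
    else:
        delta *= 0.5                                        # reject; shrink the region
print(p)                                                    # approaches a minimizer near p = 1.3
```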

Our numerical tests show that all the proposed methods are efficient in reducing the computational cost.

