PhD Theses
Older PhD Theses
Geert Uytterhoeven
Wavelets: Software and Applications
Date: 28 April 1999
Promotors: A. Bultheel and D. Roose
Abstract: We worked out the details of integer wavelet transforms, based on the lifting scheme, for a class of biorthogonal wavelets (Cohen-Daubechies-Feauveau). Based on this we designed and implemented a software library called WAILI (Wavelets with Integer Lifting) that provides wavelet transforms and wavelet-based image processing operations on two-dimensional images. Later we added support for very large images and block-based processing. We created a new kind of second-generation wavelets on a rectangular grid, based on a red-black blocking scheme. These Red-Black wavelets are less anisotropic than tensor-product wavelets. We reduced the complexity of the Proper Orthogonal Decomposition (POD) by biorthogonal wavelet packet compression and evaluated the resulting Approximate POD by analyzing a large-scale dynamical system described by partial differential equations.
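The key property of integer lifting is that every predict/update step rounds to integers yet remains exactly invertible. The following is a minimal sketch, assuming one level of the well-known CDF(2,2) ("5/3") lifting steps; the function names are illustrative and are not WAILI's actual API.

```python
def forward_lift(x):
    """One level of an integer CDF(2,2) lifting transform.

    x must have even length; returns (approximation, detail) lists.
    """
    even, odd = x[0::2], x[1::2]
    n = len(odd)
    # Predict: detail = odd sample minus rounded average of neighbouring evens
    d = [odd[i] - ((even[i] + even[min(i + 1, n - 1)]) >> 1) for i in range(n)]
    # Update: approximation = even sample plus rounded average of neighbouring details
    s = [even[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    return s, d

def inverse_lift(s, d):
    """Undo forward_lift exactly: run the same steps in reverse order."""
    n = len(d)
    even = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    odd = [d[i] + ((even[i] + even[min(i + 1, n - 1)]) >> 1) for i in range(n)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

signal = [12, 14, 13, 90, 11, 10, 9, 8]
s, d = forward_lift(signal)
assert inverse_lift(s, d) == signal  # integer lifting is exactly reversible
```

Because each step only adds a rounded function of the other half of the samples, the rounding can be subtracted back out exactly, which is what makes lossless integer transforms possible.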
Peter Kravanja
On Computing Zeros of Analytic Functions and Related Problems in Structured Numerical Linear Algebra
Date: 9 March 1999
Promotors: A. Haegemans and M. Van Barel
Abstract: This thesis is a blend of computational complex analysis and numerical linear algebra. We study the problem of computing all the zeros of an analytic function that lie inside a Jordan curve. The algorithm that we present computes not only approximations for the zeros but also their respective multiplicities. It does not require initial approximations for the zeros, and we have found that it gives accurate results. A Fortran 90 implementation is available (the package ZEAL). Our approach is based on numerical integration and the theory of formal orthogonal polynomials. We show how it can be used to locate clusters of zeros of analytic functions. In this context we also present an alternative approach, based on rational interpolation at roots of unity. Next we consider the related problem of computing all the zeros and poles of a meromorphic function that lie inside a Jordan curve, and that of computing all the zeros of an analytic mapping (in other words, all the roots of a system of analytic equations) that lie in a polydisk. We also consider analytic functions whose zeros are known to be simple, in particular Bessel functions (the package ZEBEC) and certain combinations of Bessel functions. Next we propose a modification of Newton's method for computing multiple zeros of analytic mappings. Under mild assumptions our iteration converges quadratically. It involves certain constants whose product is a lower bound for the multiplicity of the zero. As these constants are usually not known in advance, we devise an iteration in which not only an approximation for the zero is refined, but also approximations for these constants. In the last part of this thesis we develop stabilized fast and superfast algorithms for rational interpolation at roots of unity. These algorithms lead to fast and superfast solvers for (indefinite) linear systems of equations that have Hankel or Toeplitz structure.
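The numerical-integration idea can be illustrated in a few lines: the moments s_p = (1/2πi) ∮ z^p f'(z)/f(z) dz, computed with the trapezoidal rule on a circular contour, give the number of zeros inside (s_0) and the power sums of the zeros. The sketch below is a generic illustration of this classical principle, not ZEAL's actual algorithm; the example function and contour are assumptions.

```python
import cmath

def moments(f, df, pmax, radius=1.0, n=512):
    """Approximate s_p = (1/2*pi*i) * contour integral of z^p f'(z)/f(z) dz,
    for p = 0..pmax, over the circle |z| = radius (trapezoidal rule)."""
    s = [0j] * (pmax + 1)
    for k in range(n):
        z = radius * cmath.exp(2j * cmath.pi * k / n)
        # dz = i*z*dtheta, so the weight per node is z * f'(z)/f(z) / n
        w = z * df(z) / f(z) / n
        for p in range(pmax + 1):
            s[p] += w * z ** p
    return s

f = lambda z: (z - 0.5) * (z + 0.3)   # zeros at 0.5 and -0.3, both inside |z| = 1
df = lambda z: 2 * z - 0.2            # derivative of f
s = moments(f, df, 1)
assert abs(s[0] - 2) < 1e-10          # s_0 counts the zeros inside the contour
assert abs(s[1] - 0.2) < 1e-10        # s_1 = sum of the zeros = 0.5 - 0.3
```

For a periodic integrand the trapezoidal rule converges very quickly, which is why equidistant points on a circle are a natural choice of quadrature here.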
Gorik De Samblanx
Filtering and Restarting Projection Methods for Eigenvalue Problems
Date: 2 June 1998
Promotors: A. Bultheel and M. Van Barel
Abstract: In recent years, the use of implicitly restarted iterative eigenvalue solvers has become widely accepted. The reason for restarting may be to improve the performance of an algorithm by making it numerically more stable and by restricting the required computational resources. The basic idea behind restarting is: if an iterative algorithm converges too slowly, so that it uses too much time or too much computer memory, or if it converges to a wrong result, then simply stop it and restart it with new, better initial conditions. The popularity of implicit restarting lies in the fact that it is more efficient than a bare, explicit restart and that it has attractive additional properties. An implicit restart computes -- at least in theory -- the same results as a specific explicit restart by transforming the current results, rather than explicitly recomputing all the iteration steps. In order to understand the properties of an implicit restart, we need to say a few words about iterative eigenvalue solvers. Most iterative eigenvalue solvers iteratively build a basis V for a subspace onto which an eigenvalue problem (A,B) is projected. The solutions of the projected problem approximate the solutions of the original eigenvalue problem. At each step of the algorithm, V is expanded with one or more vectors. An implicitly restarting algorithm transforms this basis into a new basis V+ of smaller dimension that contains as much of the useful information as the basis V. Often, the new basis can be seen as a filtered version of the old basis: V+ is approximately equal to R(A,B) V. The choice of the filter function R(A,B) determines the different applications of the implicit restarting-and-filtering algorithm: convergence acceleration, removal of spurious information, validation of the results, or deflation.
This thesis derives implicitly restarting and implicitly filtering algorithms for exact eigenvalue solvers and for solvers that assume inexact (inaccurate) arithmetic. In the first part, implicitly restarting algorithms are derived for the Rational Krylov Sequence algorithm (RKS) and for the unsymmetric Lanczos algorithm. Concerning the RKS algorithm, a detailed implicit filtering algorithm is described and the text discusses possible numerical problems that are related to restarting. The presented implicitly restarted Lanczos algorithm is a natural biorthogonal extension of the implicitly restarted Arnoldi method. It may be used for filtering, and the principle of exact shifts applies to it. The restarted Lanczos method, called Nested Lanczos, can also be used in the case of serious breakdown. The second part studies the use of inaccurate arithmetic with eigenvalue solvers. We derive conditions for the convergence of these methods and we show how these methods may be restarted. It is much more difficult to implicitly filter inexact eigenvalue solvers. It is shown that this problem is due to a high-rank residual matrix. It may be cured in some cases, though.
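The principle of exact shifts can be seen in a toy setting (this is a generic illustration, not one of the thesis algorithms): applying the filter R(A) = A - σI to a vector annihilates exactly the eigencomponent belonging to an eigenvalue σ, which is why restarting with exact shifts deflates unwanted eigendirections. For simplicity A is taken diagonal, so its eigenvectors are the coordinate axes.

```python
def filtered(eigs, v, shift):
    """Apply R(A) = A - shift*I to v, where A = diag(eigs) (illustrative)."""
    return [(lam - shift) * x for lam, x in zip(eigs, v)]

eigs = [1.0, 3.0, 7.0]            # A = diag(eigs); eigenvectors are e1, e2, e3
v = [1.0, 1.0, 1.0]               # starting vector with components along all three
v_plus = filtered(eigs, v, 3.0)   # exact shift at the unwanted eigenvalue 3
assert v_plus[1] == 0.0           # component along the filtered eigenvector is gone
assert v_plus[0] != 0.0 and v_plus[2] != 0.0  # the wanted components survive
```

An implicit restart achieves the effect of such a polynomial filter by transforming the already-computed basis, instead of recomputing matrix-vector products as this explicit sketch does.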
Karin Willemans
Smoothing Scattered Data with a Constrained Powell-Sabin Spline Surface
Date: April 1996
Promotor: P. Dierckx
Abstract: In this thesis algorithms are presented for smoothing scattered data with a convexity-, nonnegativity- or monotonicity-preserving surface, or with a surface subject to boundary constraints. As approximating functions, Powell-Sabin (PS) splines on conforming triangulations are used. These piecewise quadratics with C1-continuity are expressed as a linear combination of locally supported basis functions (B-splines) which are fairly easy to construct. Necessary and sufficient shape-preserving conditions for the PS-splines are then formulated in terms of the B-spline coefficients. Due to the local support of the B-splines, these constraints are sparse, which can be exploited to solve the problem efficiently. The presented algorithms include a local refinement procedure for the triangulation that automatically takes into account the difficult regions of the function underlying the data. Numerical examples are supplied to illustrate the benefit of surface fitting by means of constrained Powell-Sabin splines.
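Why constraints on B-spline coefficients control the shape of the spline can be seen in a one-dimensional analogue (a sketch of the general principle, not the thesis's bivariate PS-spline conditions): because B-splines are nonnegative and form a partition of unity, nonnegative coefficients are a sufficient, sparse, linear condition for a nonnegative spline.

```python
def quad_bspline_segment(c0, c1, c2, t):
    """Evaluate one uniform quadratic B-spline segment, t in [0, 1]."""
    b0 = (1 - t) ** 2 / 2            # each basis function is >= 0 on [0, 1] ...
    b1 = (-2 * t * t + 2 * t + 1) / 2
    b2 = t * t / 2                   # ... and the three of them sum to 1
    return c0 * b0 + c1 * b1 + c2 * b2

coeffs = (0.0, 0.2, 1.5)             # nonnegative B-spline coefficients
samples = [quad_bspline_segment(*coeffs, k / 100) for k in range(101)]
assert all(v >= 0 for v in samples)  # hence the spline itself is nonnegative
# Partition of unity: with all coefficients equal to 1 the spline is constant 1.
assert all(abs(quad_bspline_segment(1, 1, 1, t) - 1) < 1e-12
           for t in (0.0, 0.3, 1.0))
```

Each coefficient only influences the spline locally, so such conditions produce the sparse constraint systems the abstract mentions.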