Jobs


There are always opportunities to join our research group. Please contact one of the staff members if you are interested in pursuing a PhD.

Specific vacancies

  • Multivariate Polynomial and Rational Interpolation and Approximation (PhD)

    Description:
    • In many theoretical as well as computational mathematical problems, one wants to work with complicated multivariate functions. In many cases, however, performing operations with these original functions is cumbersome and requires an unacceptably high computational effort. A solution to this problem is to replace the original complicated function by a function that can be handled much more easily, e.g., a polynomial or rational function. Within this space of simpler functions, we can look for the function that optimizes one of several possible criteria. The computational effort to find, e.g., a minimax approximant is large. Instead, a nearly optimal approximant can be found by simply computing the function that interpolates the original function in certain well-chosen points, called “good points”. The “good points” themselves can be chosen in an optimal or nearly optimal way according to an optimization criterion that quantifies the “goodness” of the set of points.
    • Further details can be found at the project webpage.
    Aim:
    • In this project, we will study “good points” for multivariate interpolation and approximation in different function spaces, as well as “good representations” for the corresponding interpolant/approximant. We will develop the theory of these “good points” and “good representations”, together with efficient and accurate algorithms to compute them and work with them. We will also study several applications in which these tools could play a key role.
    Profile:
    • Candidates must have a master's degree in Mathematics, Applied Mathematics, Numerical Mathematics, Scientific Computing, or a similar discipline, and be interested in working on this project in collaboration with other researchers in the group. The position is initially for one year; after a positive evaluation, it will be extended for three more years.
    Contact:
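
As a small one-dimensional illustration of why the choice of interpolation points matters (a classical textbook example, not part of the project itself): interpolating Runge's function in equispaced points produces large errors near the endpoints, while Chebyshev points, a standard set of “good points” on [-1, 1], give an accurate approximant of the same degree.

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)   # Runge's function
n = 20                                     # polynomial degree
xx = np.linspace(-1, 1, 2001)              # fine evaluation grid

# Equispaced nodes: the interpolation error blows up near the endpoints.
x_eq = np.linspace(-1, 1, n + 1)
err_eq = np.max(np.abs(BarycentricInterpolator(x_eq, f(x_eq))(xx) - f(xx)))

# Chebyshev nodes: a classical set of "good points" on [-1, 1].
x_ch = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
err_ch = np.max(np.abs(BarycentricInterpolator(x_ch, f(x_ch))(xx) - f(xx)))

print(err_eq, err_ch)   # equispaced error is orders of magnitude larger
```

The project studies the multivariate analogue of this phenomenon, where no such simple canonical point sets are known.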

  • Numerical Linear Algebra and Polynomial Computations (PhD)

    Description:
    • This project focuses on the interplay between numerical linear algebra and polynomial computations. Although these two domains are already well-established, several important problems still remain to be solved, especially when the polynomial data is given in finite precision.
    Aim:
    • This project consists of three parts. First, we will study componentwise globally backward stable algorithms for solving univariate polynomial equations and matrix polynomial eigenvalue problems. Second, we will investigate the recurrence relation for multivariate orthogonal polynomial vectors and how its coefficients can be computed as an inverse eigenvalue problem, with applications in multivariate rational approximation. Third, we will construct several new methods for solving systems of polynomial equations and validate them on real-life problems.
    Contact:
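
A small sketch of the interplay the description mentions (standard textbook material, not the project's algorithms): the roots of a univariate polynomial are the eigenvalues of its companion matrix, so a polynomial computation becomes a numerical linear algebra one.

```python
import numpy as np

def poly_roots(coeffs):
    """Roots of c[0]*x^n + ... + c[n] via eigenvalues of the companion matrix."""
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]                      # make the polynomial monic
    n = len(c) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)        # subdiagonal of ones
    C[:, -1] = -c[:0:-1]              # last column: -[a_0, a_1, ..., a_{n-1}]
    return np.linalg.eigvals(C)

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
r = np.sort(poly_roots([1, -6, 11, -6]).real)
print(r)
```

Backward stability of such eigenvalue-based root-finders, measured componentwise in the polynomial coefficients, is exactly the kind of question the first part of the project addresses.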
  • Fast matrix multiplication using sparse and simple tensor decompositions (PhD)

    Description:
    • Multiplying two 2×2 matrices requires 8 multiplications using the standard algorithm. This operation can be represented by a multiplication tensor, and a canonical polyadic decomposition (CPD) of this tensor has rank 7. This leads to a more efficient multiplication algorithm for n×n matrices requiring O(n^α) flops with α = log₂ 7 ≈ 2.81 < 3, called a fast matrix multiplication algorithm (FMMA).
    Aim:
    • The main aim of this project is the systematic and efficient derivation of CPDs for a given tensor having sparse simple factor matrices. The given tensor can be a multiplication tensor in case of FMMAs. Besides matrix multiplication, other numerical computations that can be viewed as recursively applying fast algorithms for one or more base case tensors, will be investigated.
    Contact:
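
The rank-7 CPD mentioned in the description corresponds to Strassen's classical algorithm; as an illustration (standard material, not the project's contribution), the seven multiplications for the 2×2 base case are:

```python
import numpy as np

def strassen_2x2(A, B):
    """Strassen's rank-7 formula: 2x2 matrix product with 7 multiplications
    (instead of 8), applied recursively to blocks for large matrices."""
    a, b, c, d = A.ravel()            # A = [[a, b], [c, d]]
    e, f, g, h = B.ravel()            # B = [[e, f], [g, h]]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
print(np.allclose(strassen_2x2(A, B), A @ B))
```

The project seeks a systematic way to derive such decompositions, with sparse and simple factor matrices, rather than verifying known ones.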
  • Can unconventional QR algorithms supersede the state-of-the-art? (PhD)

    Description:
    • In 2000, SIAM News published “The Best of the 20th Century: Editors Name Top 10 Algorithms”, which includes the QR algorithm for computing eigenvalues. We quote: “Eigenvalues are arguably the most important numbers associated with matrices – and they can be the trickiest to compute”. In this project the QR algorithm will be revisited. We do not aim at minor improvements, tweaks, or tunings of existing theories and algorithms, but at a fundamental research restart situated at the mathematical foundations underlying this algorithm.
    • This research touches the core of the QR algorithm and alters its foundations. Preliminary experiments illustrate the possible high gain: a halving of the computing time. In this project we will investigate whether this unconventional approach is capable of superseding the contemporary QR algorithm for computing eigenvalues.
    Contact:
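
For reference, the classical algorithm under discussion in its simplest unshifted form (a didactic sketch only; state-of-the-art implementations first reduce to Hessenberg form and use shifts and deflation):

```python
import numpy as np

def qr_algorithm(A, iters=200):
    """Basic unshifted QR iteration: A_{k+1} = R_k Q_k. Each step is a
    similarity transform, so the eigenvalues are preserved; for many
    matrices the iterates converge to (quasi-)triangular form with the
    eigenvalues on the diagonal."""
    A = np.array(A, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return np.sort(np.diag(A))

A = np.array([[4., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])        # symmetric, distinct eigenvalues
print(qr_algorithm(A))
print(np.sort(np.linalg.eigvalsh(A)))   # reference eigenvalues
```

The project's goal is precisely to rethink the foundations behind this iteration, not to tune a sketch like this one.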
  • Rational Krylov, Matrix Functions, and Graph Theory (PhD)

    Description:
    • Many real-life applications, e.g., power grids, the internet, road networks, protein interactions, the spreading of diseases, and so forth, are modeled as networks. Quite often one wants to identify, in some way or another, the most important nodes (central nodes, hubs, authorities) in such massive datasets. Modeling the network by its adjacency matrix and computing a matrix function of it (e.g., the matrix exponential) typically forms a good starting point for computing good measures of node centrality. Unfortunately, computing matrix functions, and specifically the matrix exponential, is non-trivial and quite time-consuming, taking the order of the network into account.
    • Several algorithms exist for computing the matrix exponential, and it has been known for quite some time that computing the exponential via rational functions delivers faster convergence than via classical polynomials. In this project we will enhance existing rational Krylov algorithms and rational compression techniques to deal with these network problems. The focus is not on developing new algorithms for computing the matrix exponential itself, but on the underlying network problem, as typically only a few entries of the exponential are required. We believe that combining rational Krylov (fast convergence) with rational compression (limited storage) to deal with these ever-growing datasets will help us to accurately analyze many networks in a reasonable amount of time.
    Contact:
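
A toy sketch of the starting point described above (dense and didactic; the project targets rational Krylov methods that avoid forming the full exponential): subgraph centrality ranks node i by the (i, i) entry of exp(A), which weights closed walks of length k through node i by 1/k!.

```python
import numpy as np
from scipy.linalg import expm

# Small undirected toy network: node 2 is a hub.
edges = [(0, 2), (1, 2), (2, 3), (3, 4)]
n = 5
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0          # symmetric adjacency matrix

# Subgraph centrality of node i: the (i, i) entry of exp(A).
centrality = np.diag(expm(A))
print(np.argmax(centrality))          # the hub gets the highest score
```

For a real network with millions of nodes, `expm` on the dense adjacency matrix is out of reach, which is exactly why the project extracts only the few required entries.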
  • Tensor Geometric Mean (PhD)

    Description:
    • The set of positive definite tensors was created as a generalization of the positive real numbers. A first extension was the set of positive definite matrices, which have positive real eigenvalues and form a smooth manifold. In recent years there has been an exponential growth of data that needs to be processed; as a result, higher-order objects known as tensors have been defined. Again, the real eigenvalues of a positive definite tensor need to be positive. Unfortunately, the structure of the set of positive definite tensors has not yet been studied sufficiently, and we will analyze its smoothness.
    • The geometric mean of n positive numbers is defined as the n-th root of their product and provides, next to the classical arithmetic and harmonic means, another way of averaging that is more suitable for many applications. For positive definite matrices, the geometric mean was defined as a least squares mean with respect to the distance measure induced by the natural geometry on the set of positive definite matrices. In essence, this means that the space of positive definite matrices cooperates better with a curved distance measure that follows the manifold of positive definite matrices. Our goal is to find such a natural geometry and distance measure for positive definite tensors, based on which we can define, construct, and compute the tensor geometric mean as a least squares mean.
    Contact:
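
For two positive definite matrices, the geometric mean mentioned above has the well-known closed form A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}; a minimal numerical sketch of this matrix case (the tensor analogue is exactly the open question of the project):

```python
import numpy as np
from scipy.linalg import sqrtm

def spd_geometric_mean(A, B):
    """Geometric mean of two symmetric positive definite matrices:
    A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    Ah = sqrtm(A)                     # principal matrix square root
    Ahi = np.linalg.inv(Ah)
    return Ah @ sqrtm(Ahi @ B @ Ahi) @ Ah

# For commuting (here: diagonal) matrices, A # B reduces to the
# entrywise geometric mean: diag(sqrt(2*8), sqrt(8*2)) = diag(4, 4).
A = np.array([[2., 0.], [0., 8.]])
B = np.array([[8., 0.], [0., 2.]])
G = spd_geometric_mean(A, B)
print(G)
```

For scalars (1×1 matrices) this recovers sqrt(ab), which is the sense in which the matrix, and eventually tensor, mean generalizes the geometric mean of positive numbers.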