BTDHOT research project
BTDHOT: Block Term Decompositions of Higher-Order Tensors (01-01-2014 - 31-12-2017)
Uniqueness, Computation and Links with Structured Matrices
Sponsored by
FWO project G.0830.14N
Short description
Higher-order tensors are the natural generalizations of vectors
(first order) and matrices (second order).
They may be imagined as multi-way arrays of numbers.
The decomposition of a higher-order
tensor into rank-1 terms is unique under conditions that have no
matrix counterpart.
We have recently introduced Block Term Decompositions (BTDs) of higher-order tensors.
In BTDs the terms are not necessarily rank-1.
Instead, they may have low multilinear rank, which
means that their columns (rows, ...) are restricted to a low-dimensional,
but not necessarily one-dimensional, vector space.
A large part of the project will concern the study of the uniqueness
of BTDs and the derivation of algorithms for their computation.
Somewhat unexpectedly,
the work on algorithms may give insight into the complexity of
matrix multiplication.
The uniqueness properties of tensor decompositions make them essential tools for the separation
of signals that are observed together.
Matrix-based thinking has so far led researchers to consider
almost exclusively rank-1 components.
This fundamental assumption can be questioned.
The project will lay the foundations for more general BTD-based signal separation.
We will make use of connections between hypothesized signal structures (such as exponential
polynomials, rational functions, ...) on the one hand and properties of
structured matrix representations
(such as Hankel, Loewner, ...) on the other.
Background of the proposed research
Higher-order tensors are the generalizations of vectors (first order) and matrices (second order)
of which the entries are indexed by more than two values. Columns, rows,... of a tensor are
obtained by varying the first, second,... index, respectively, and keeping the other indices fixed.
Like for matrices, column, row,... rank are defined as the dimension of the vector space spanned
by the columns, rows, ..., respectively. A difference with matrices is that, for Nth-order
tensors, the N values may be different. The N-tuple of these values is called the multilinear rank.
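As an illustration of this definition, the multilinear rank can be computed as the tuple of ranks of the mode-n unfoldings. A minimal NumPy sketch (the construction and the helper name `multilinear_rank` are ours, for illustration only):

```python
import numpy as np

# Illustration only: the multilinear rank of a tensor is the tuple of
# ranks of its mode-n unfoldings (helper name is ours, not the project's).
def multilinear_rank(T):
    # Mode-n unfolding: bring mode n to the front and flatten the rest.
    return tuple(
        np.linalg.matrix_rank(np.moveaxis(T, n, 0).reshape(T.shape[n], -1))
        for n in range(T.ndim)
    )

# Build a 5 x 6 x 7 tensor with multilinear rank (2, 3, 3):
# a 2 x 3 x 3 core multiplied by a factor matrix in each mode.
rng = np.random.default_rng(0)
G = rng.standard_normal((2, 3, 3))
A = rng.standard_normal((5, 2))
B = rng.standard_normal((6, 3))
C = rng.standard_normal((7, 3))
T = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)

print(multilinear_rank(T))  # (2, 3, 3)
```

Note that the three mode-n ranks differ, which cannot happen for a matrix (row rank equals column rank).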
A tensor $\mathcal{A}$ has rank 1 if it
is the tensor outer product of a number of nonzero vectors.
We write $\mathcal{A} = u^{(1)}\otimes u^{(2)} \otimes\cdots\otimes u^{(N)}$,
which means that the entries are of the form
$a_{n_1n_2\ldots n_N} = u_{n_1}^{(1)} u_{n_2}^{(2)}\cdots u_{n_N}^{(N)}$.
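The defining formula can be checked numerically. A small NumPy sketch (vectors chosen arbitrarily):

```python
import numpy as np

# Illustration only: a rank-1 third-order tensor as the outer product of
# three nonzero vectors, a[n1, n2, n3] = u1[n1] * u2[n2] * u3[n3].
u1 = np.array([1.0, 2.0])
u2 = np.array([3.0, -1.0, 4.0])
u3 = np.array([0.5, 2.0])
A = np.einsum('i,j,k->ijk', u1, u2, u3)   # shape (2, 3, 2)

# Entries match the product of the corresponding vector entries.
print(A[1, 2, 0] == u1[1] * u2[2] * u3[0])       # True

# Every unfolding of a rank-1 tensor is a rank-1 matrix.
print(np.linalg.matrix_rank(A.reshape(2, -1)))   # 1
```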
The rank of a tensor is
the minimal number of rank-1 terms into which it can be decomposed. For higher-order tensors,
multilinear rank and rank are very different concepts.
Canonical Polyadic Decomposition (CPD)
is the decomposition in a minimal number of rank-1 terms. We have recently introduced
Block
Term Decompositions (BTDs) as decompositions in terms that have low multilinear rank.
There are several possibilities, depending on the multilinear rank that is imposed. In the simplest
third-order case, the terms are the tensor outer product of a low-rank matrix and a nonzero vector.
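In this simplest third-order case, often denoted rank-(Lr, Lr, 1), each term is the outer product of a low-rank matrix Ar Br^T and a nonzero vector cr. A NumPy sketch of building a tensor that admits such a BTD (all sizes and variable names are ours, for illustration):

```python
import numpy as np

# Sketch (sizes and names are ours): the simplest third-order BTD writes a
# tensor as a sum of rank-(Lr, Lr, 1) terms, each the outer product of a
# low-rank matrix Ar @ Br.T and a nonzero vector cr.
rng = np.random.default_rng(1)
I, J, K = 6, 7, 5
L = [2, 3]                       # the multilinear ranks (Lr, Lr, 1) imposed
terms = []
for Lr in L:
    Ar = rng.standard_normal((I, Lr))
    Br = rng.standard_normal((J, Lr))
    cr = rng.standard_normal(K)
    terms.append(np.einsum('ij,k->ijk', Ar @ Br.T, cr))
T = sum(terms)                   # a tensor admitting this BTD by construction

# Each frontal slice of a term is a scalar multiple of the same rank-Lr matrix.
print(np.linalg.matrix_rank(terms[0][:, :, 0]))  # 2
```

Setting every Lr to 1 recovers the CPD as a special case.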
Proposed research:
- Thorough study of BTD uniqueness and explicit computation of exact BTDs
The best-known condition guaranteeing uniqueness of the CPD was derived
by J. Kruskal [14]. Kruskal's derivation is not constructive, i.e., it does not yield an algorithm.
In many applications, there are reasonable bounds on the rank of the tensor at hand. For cases
where the rank is known not to exceed one of the tensor dimensions, uniqueness has been proven
under a condition that is an order of magnitude more relaxed than Kruskal’s. For that case
we have additionally shown that the decomposition may be computed by means of conventional
linear algebra if it is exact. We have recently proposed a theory that covers the intermediate
cases and we have further deepened Kruskal’s result.
For BTDs, we have so far only obtained Kruskal-type uniqueness conditions.
- Development of algorithms for the computation of approximate BTDs based on
numerical optimization over manifolds
Numerical multilinear algebra is still in its infancy. We have recently proposed
state-of-the-art optimization algorithms for the computation of CPD and extensions for the simplest
types of BTD. We have made our algorithms available in the form of a MATLAB
toolbox.
Computation of tensor decompositions via numerical optimization over manifolds has so
far been limited to low multilinear rank approximation, i.e., to the computation of a BTD that
has only one term. In this case, the optimization may take place over a product of Grassmann
manifolds. Optimization over manifolds has become an important technique in numerical linear
algebra in recent years.
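For intuition, a low multilinear rank approximation can be obtained from the truncated higher-order SVD, which is only quasi-optimal; iterative refinement, for example by optimization over a product of Grassmann manifolds as envisaged in the project, improves on it. A NumPy sketch (function name ours):

```python
import numpy as np

# Sketch (function name ours): low multilinear rank approximation via the
# truncated higher-order SVD. This is only quasi-optimal; iterative methods,
# e.g. optimization over a product of Grassmann manifolds, refine the result.
def truncated_hosvd(T, ranks):
    factors = []
    for n, r in enumerate(ranks):
        # Dominant left singular vectors of the mode-n unfolding.
        Tn = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)
        factors.append(np.linalg.svd(Tn, full_matrices=False)[0][:, :r])
    # Core tensor: project onto the factor column spaces, then reconstruct.
    G = np.einsum('ijk,ia,jb,kc->abc', T, *factors)
    return np.einsum('abc,ia,jb,kc->ijk', G, *factors)

rng = np.random.default_rng(2)
T = rng.standard_normal((8, 9, 10))
T_hat = truncated_hosvd(T, (3, 3, 3))   # a single rank-(3, 3, 3) BTD term
print(np.linalg.norm(T - T_hat) <= np.linalg.norm(T))  # True
```

Only the column spaces of the factor matrices matter here, which is why the optimization naturally lives on Grassmann manifolds rather than on matrices.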
- Computation of the complexity of matrix multiplication via optimization over manifolds
The multiplication of two matrices may be represented by a third-order “multiplication
tensor”. The rank of this tensor is related to the complexity of the operation, while its
CPD suggests an efficient algorithm. Rank and CPD are only known exactly in a few simple
cases. Bounds on the complexity have been obtained in different ways. The first result,
due to Strassen, was that multiplication of two (2 × 2) matrices only requires 7 multiplications.
Coppersmith and Winograd have shown that for (n × n) matrices the complexity is not more
than $O(n^{2.376})$. Explicit computation of the CPD is difficult since the rank is high. (For instance,
the multiplication of two (3 × 3) matrices is determined by a (9 × 9 × 9) tensor that has rank
between 19 and 23 and “border rank”, allowing an arbitrarily good approximation, between 15
and 21.) Also, the fact that there is a manifold of solutions poses numerical challenges.
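Strassen's construction is precisely a rank-7 CPD of the (4 × 4 × 4) multiplication tensor for (2 × 2) matrices; a NumPy sketch verifying this (the row-major vectorization convention and variable names are ours):

```python
import numpy as np

# Sketch (vectorization convention ours): the 2 x 2 multiplication tensor
# T[2i+j, 2j+k, 2i+k] = 1 maps (vec A, vec B) to vec(A @ B), row-major.
T = np.zeros((4, 4, 4))
for i in range(2):
    for j in range(2):
        for k in range(2):
            T[2*i + j, 2*j + k, 2*i + k] = 1

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
print(np.allclose(np.einsum('pqs,p,q->s', T, A.ravel(), B.ravel()),
                  (A @ B).ravel()))  # True

# Strassen's algorithm as a rank-7 CPD of T: column r of U, V, W holds the
# coefficients of his r-th product over [a11, a12, a21, a22], etc.
U = np.array([[1, 0, 0, 1], [0, 0, 1, 1], [1, 0, 0, 0], [0, 0, 0, 1],
              [1, 1, 0, 0], [-1, 0, 1, 0], [0, 1, 0, -1]]).T
V = np.array([[1, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, -1], [-1, 0, 1, 0],
              [0, 0, 0, 1], [1, 1, 0, 0], [0, 0, 1, 1]]).T
W = np.array([[1, 0, 0, 1], [0, 0, 1, -1], [0, 1, 0, 1], [1, 0, 1, 0],
              [-1, 1, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0]]).T
T_cpd = np.einsum('pr,qr,sr->pqs', U, V, W)  # sum of 7 rank-1 terms
print(np.allclose(T, T_cpd))  # True
```

Since the naive algorithm corresponds to a rank-8 decomposition, the rank of the multiplication tensor directly bounds the number of scalar multiplications needed.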
- Development of a new concept for signal separation based on low displacement rank
CPD is becoming a very popular tool for signal separation and exploratory data
analysis. On the other hand, Independent Component Analysis (ICA) separates
signals on the basis of hypothesized statistical independence. ICA has been the object of
intensive research for two decades and is now a relatively standard technique. Algebraically, ICA
relies on the CPD of a tensor of statistics that is estimated from the data.
As a deterministic alternative, we have recently proposed the use of the simplest type of BTD
for the separation of signals that can be modeled as exponential polynomials. Here, a data
matrix is first mapped to a data tensor by mapping its rows to their Hankel representation. Other
low displacement rank structures have not yet been exploited for purposes of signal separation.
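The Hankelization step can be sketched as follows (signal and sizes chosen arbitrarily, function name ours): a signal that is a sum of R exponentials with distinct bases yields a Hankel matrix of rank R, which is what makes the subsequent BTD meaningful.

```python
import numpy as np

# Sketch (names ours): Hankelization maps a sampled signal s to the Hankel
# matrix H[i, j] = s[i + j]. A sum of R exponentials z_r**n with distinct
# z_r yields a Hankel matrix of rank R.
def hankelize(s, rows):
    cols = len(s) - rows + 1
    return np.array([[s[i + j] for j in range(cols)] for i in range(rows)])

n = np.arange(10)
s = 0.9 ** n + 2.0 * (-0.5) ** n   # sum of two exponentials
H = hankelize(s, 5)                # 5 x 6 Hankel matrix
print(np.linalg.matrix_rank(H))    # 2
```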