SyTen

◆ cols() [1/2]

template<typename Scalar>
void syten::Delinearise::cols ( DenseTensor< 2, Scalar > const & a,
                                DenseTensor< 2, Scalar > & p,
                                DenseTensor< 2, Scalar > & t,
                                DenseTensor< 2, typename ScalarBase< Scalar >::type > const & thresholds_in = DenseTensor<2, typename ScalarBase<Scalar>::type>(),
                                bool const relax_if_necessary = false
                              )
inline

Calculates \( p \) and \( t \) such that \( p \cdot t = a \) and \( p \) has as few columns as possible.

Parameters
    [in]  a                   input matrix
    [in]  thresholds_in       matrix of thresholds/errors associated to each element of a
    [out] p                   matrix p with columns to be kept
    [out] t                   transfer matrix s.t. p·t = a
    [in]  relax_if_necessary  if true, sets thresholds to at least SYTEN_DELINEARISE_THRESHOLD
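
A minimal usage sketch follows, assuming only what is stated in the signature above: that the function is callable under the qualified name syten::Delinearise::cols, that ScalarBase lives in namespace syten, and that the input tensor a has been constructed elsewhere with the relevant SyTen headers included. The wrapper name compress_columns is hypothetical.

// Hedged usage sketch; the required SyTen headers are assumed to be included.
template <typename Scalar>
void compress_columns(syten::DenseTensor<2, Scalar> const& a)
{
    syten::DenseTensor<2, Scalar> p, t;  // outputs, filled by cols()
    // Pass a default-constructed threshold matrix and allow relaxation to
    // SYTEN_DELINEARISE_THRESHOLD, as described for relax_if_necessary above.
    syten::Delinearise::cols(
        a, p, t,
        syten::DenseTensor<2, typename syten::ScalarBase<Scalar>::type>(),
        /*relax_if_necessary=*/true);
    // On return, p holds the kept columns and t the transfer matrix with
    // p·t == a up to the thresholds.
}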

Steps of the algorithm:

  • Permute a and thresholds_in such that columns with maximal zeroness score are in the front
  • Then, for each column i, attempt to write it as a linear combination of previously-kept columns. If this is possible, store the relevant prefactors in t; otherwise, add the column to p (a simplified sketch of this loop follows the list below).

This step is done via a QR decomposition of the matrix of previously-kept columns. After multiplying \( Q^H \) into the RHS, each element of \( R \) is scaled such that the RHS entries are 1 (unless they are exactly zero).

During the backsubstitution in the Gaussian elimination, we require that the residual of each (\( Q^H \)-transformed) row is smaller than the threshold associated with this (\( Q^H \)-transformed) row.

The residual of the untransformed column minus the build-up from the calculated coefficients is checked against the thresholds specified for this column. This ensures correctness, i.e. the algorithm would rather keep a linearly-dependent column than introduce an error larger than the specified threshold in any row.

  • The kept columns are stored in the p matrix, the matrix t is the transfer matrix.
  • After one run of column delinearisation, the rows of the resulting p are de-linearised as well. If this results in fewer rows than columns, the left transfer matrix is taken as the new p and the left p matrix is multiplied into the previous transfer matrix.
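
For orientation, the column loop described above can be condensed into a short stand-alone sketch. This is not SyTen's implementation: it uses Eigen rather than DenseTensor, replaces the per-element threshold matrix and row-wise residual checks by a single scalar threshold on the maximum residual, and omits both the initial zeroness permutation and the subsequent row-delinearisation pass.

#include <Eigen/Dense>
#include <utility>

// Greedy column "delinearisation": keep a column only if it cannot be written
// as a linear combination of the already-kept columns within the threshold.
std::pair<Eigen::MatrixXd, Eigen::MatrixXd>
delinearise_cols(Eigen::MatrixXd const& A, double threshold = 1e-12)
{
    Eigen::MatrixXd P(A.rows(), 0);                          // kept columns
    Eigen::MatrixXd T = Eigen::MatrixXd::Zero(0, A.cols());  // transfer matrix

    for (Eigen::Index i = 0; i < A.cols(); ++i) {
        Eigen::VectorXd col = A.col(i);
        if (P.cols() > 0) {
            // Least-squares fit of col onto the kept columns (QR-based).
            Eigen::VectorXd coeff = P.colPivHouseholderQr().solve(col);
            if ((P * coeff - col).cwiseAbs().maxCoeff() <= threshold) {
                T.col(i) = coeff;  // column is dependent: store prefactors
                continue;
            }
        }
        // Column is independent within the threshold: append it to P ...
        P.conservativeResize(Eigen::NoChange, P.cols() + 1);
        P.col(P.cols() - 1) = col;
        // ... and give T a matching new row selecting exactly this column.
        T.conservativeResize(T.rows() + 1, Eigen::NoChange);
        T.row(T.rows() - 1).setZero();
        T(T.rows() - 1, i) = 1.0;
    }
    return {P, T};  // P * T reproduces A up to the threshold
}

The actual routine additionally re-runs the same procedure on the rows of the resulting p, as described in the last bullet above.
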
Remarks
When multiplying the coefficient transfer matrix into the next matrix, it is suggested to set EliminateZeros::Yes.

Symbolically, we write

\begin{align}
A & = P_A \cdot T_A \quad \textrm{column-delinearisation of }A \\
P^H_A & = P_P \cdot T_T \quad \textrm{column-delinearisation of }P^H_A\textrm{ (row-delinearisation of }P_A\textrm{)} \\
P_A & = T_T^H \cdot P_P^H \\
A & = T_T^H \cdot P_P^H \cdot T_A \\
\textrm{if } & \mathrm{cols}(T_T^H) < \mathrm{cols}(P_P^H)\textrm{, then } P = T_T^H \textrm{ and } T = P_P^H \cdot T_A \\
\textrm{otherwise } & P = P_A \textrm{ and } T = T_A
\end{align}
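
As a minimal worked example (exact arithmetic, zero thresholds; the matrix is chosen purely for illustration), take a \( 2 \times 3 \) matrix whose third column is a linear combination of the first two:

\begin{align}
A = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 2 \end{pmatrix}
  = \underbrace{\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}}_{P_A}
    \cdot
    \underbrace{\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 2 \end{pmatrix}}_{T_A}
\end{align}

Only two columns are kept. Here \( P^H_A \) is the identity, so the row-delinearisation yields no further reduction (\( \mathrm{cols}(T_T^H) = \mathrm{cols}(P_P^H) \)) and the result is \( P = P_A \), \( T = T_A \).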

References cols_only(), syten::DenseTensor< rank, Scalar >::dim(), syten::DenseTensor< rank, Scalar >::getDims(), syten::herm(), syten::makeIdentity(), and SYTEN_DELINEARISE_THRESHOLD.

Referenced by cols(), syten::STensorImpl::delinearise(), and rows().
