1 Introduction

1.1 Equivalence problems

An equivalence problem is a computational problem where, given two objects A and B of the same nature, one asks whether there exists a map with some properties (an equivalence) sending A to B. Different problems can be stated, depending on the nature of the considered objects or the properties of the map. One of the most well-known equivalence problems is graph isomorphism, but in the literature one can find problems concerning groups, quadratic forms, algebras, linear codes, tensors, and many other objects. We will focus on the last two, through the code equivalence and the tensor isomorphism problems. An interesting fact is that the isomorphism problem for tensors seems “central” among the others. In particular, a large class of equivalence problems can be polynomially reduced to it. In other words, given a pair of objects (groups, algebras, graphs, etc.), a pair of tensors can be built such that they are isomorphic if and only if the starting objects are equivalent. This led to the definition of the complexity class \(\textsf {TI}\) in [13]. Different reductions among these problems can be found in [7, 12, 14, 23, 24]. In general, no polynomial-time algorithms are known for most of the above problems. Because of this, many public key cryptosystems base their security on the hardness of solving these kinds of problems, for example, Isomorphism of Polynomials [22], code equivalence [1, 6], tensor isomorphism [16], lattice isomorphism [9], trilinear forms equivalence [29], and problems from isogenies of elliptic curves [3, 8, 10].

1.2 Code equivalence

One of the most studied equivalence problems concerns linear codes. In the Hamming metric, the maps that generate an equivalence were classified in [18], leading to the monomial equivalence problem, which was studied in [23, 27]. Worth mentioning is the support splitting algorithm [26], which solves the above problem in average polynomial time for a large class of codes over \(\mathbb {F}_q\) for \(q< 5\). For a detailed analysis, the interested reader can refer to [2]. Recently, the problem of equivalence in different metrics has been studied, and we will focus on the rank metric and the sum-rank one. Concerning the rank metric, the classification of equivalence maps is given in [20], while in [7], the authors analyze the matrix code equivalence, and they reduce the Hamming case to it. The same result is given in an independent work [14], where the former problem is called matrix space equivalence. In [24], it is shown that matrix code equivalence is polynomially equivalent to problems on bilinear and quadratic maps. Moreover, the link between the rank and the sum-rank metric is studied, leading to a reduction from the latter to the former in a special case. Here we extend this analysis, finding an unconditional reduction from the code equivalence in the sum-rank metric to the rank one.

1.3 Our contribution and techniques

In this work, we give two results of a different nature. The first one concerns relations between tensor problems. The d-tensor isomorphism problem (d-\(\textsf {TI}\)) asks, given two d-tensors \(T_1\) and \(T_2\), whether there are d invertible matrices \(A_1,\dots ,A_d\) sending \(T_1\) to \(T_2\). We introduce another problem, called the d-tensor monomial isomorphism problem (d-\(\textsf {TI}^*\)), where instead of having d invertible matrices, we require one of them to be monomial. We show that d-\(\textsf {TI}^*\) reduces to 3-\(\textsf {TI}\) for every \(d \ge 4\). To show this, we use techniques from [7], where the authors exhibit a reduction from monomial code equivalence to matrix code equivalence. We reformulate this reduction in terms of tensors, and we generalize it to higher dimensions. In particular, we show that d-\(\textsf {TI}^*\) is reducible to \((2d-1)\)-\(\textsf {TI}\), and then, using a result from [14], we get as a corollary that d-\(\textsf {TI}^*\) reduces to 3-\(\textsf {TI}\). Our techniques are the following: given the reduction \(\Psi \) and the \((2d-1)\)-tensors \(\Psi (T_1)\) and \(\Psi (T_2)\), we project to the vector space \(\mathbb {W}\) where we expect the action of the monomial matrix. Then, we consider the projected tensor as a 2-tensor in order to compute its rank. We show that some constraints on the rank imply that the matrix acting on \(\mathbb {W}\) is monomial. Observe that the techniques from [14] can be adapted and used as well, but they are less efficient in terms of output dimension, since that reduction is looser than the one given in [7]. Our second contribution concerns sum-rank code equivalence. Using the result above, we reduce the problem of deciding whether two sum-rank codes are equivalent to the problem of deciding whether two matrix codes are equivalent. Note that a similar result is given in [24] under the assumption that some automorphism groups are of a given form.
While this hypothesis is satisfied for most randomly generated matrix codes (for example, the ones used in cryptography [6]), here we give an unconditional reduction. Unfortunately, our reduction produces matrix codes with dimensions and sizes that are polynomially bigger than the starting parameters of the sum-rank codes. In particular, we get a \(\mathcal {O}(x^6)\) overhead. Due to this result, we can conclude that for the three considered metrics (Hamming, rank, sum-rank), code equivalence problems are in the class \(\textsf {TI}\). Figure 1 summarizes new and known reductions between code equivalence and other problems, showing the route we used. This work is organized as follows. In Sect. 2 we give some preliminaries on tensors, linear codes, and equivalence problems in different metrics. Section 3 introduces the monomial isomorphism problem for tensors, and a proof of its \(\textsf {TI}\)-hardness is given. Section 4 proves that the code equivalence problem in the sum-rank metric can be reduced to the same problem in the rank metric.

Fig. 1

Reduction between problems and \(\textsf {TI}\)-completeness. “\(\textsf {A}\rightarrow \textsf {B}\)” indicates that \(\textsf {A}\) reduces to \(\textsf {B}\). Dashed arrows denote trivial reductions

2 Preliminaries

For a prime power q, \(\mathbb {F}_q\) is the finite field with q elements, and \(\mathbb {F}_q^n\) is the n-dimensional vector space over \(\mathbb {F}_q\). The span of vectors \(v_1,\dots ,v_k\) is denoted with \(\langle v_1,\dots ,v_k\rangle \). With \(\mathbb {F}_q^{n\times m}\) we denote the linear space of \(n\times m\) matrices with coefficients in \(\mathbb {F}_q\). Let \({{\,\textrm{GL}\,}}(n,\mathbb {F}_q)\) be the group of invertible \(n\times n\) matrices with coefficients in \(\mathbb {F}_q\). When the field is implicit, we use \({{\,\textrm{GL}\,}}(n)\) instead. A monomial \(n\times n\) matrix is given by the product of an \(n\times n\) diagonal matrix with non-zero entries on the diagonal, with an \(n\times n\) permutation matrix. The group of \(n\times n\) monomial matrices over the field \(\mathbb {F}_q\) is denoted with \({{\,\textrm{Mon}\,}}(n,\mathbb {F}_q)\) or \({{\,\textrm{Mon}\,}}(n)\), and is a subgroup of \({{\,\textrm{GL}\,}}(n)\). We denote with \(\mathbb {W}_1\oplus \mathbb {W}_2\) the direct sum of vector spaces \(\mathbb {W}_1\) and \(\mathbb {W}_2\) and its elements are written as \((w_1,w_2)\), where \(w_i\) is in \(\mathbb {W}_i\). With \(\mathcal {S}_t\) we denote the symmetric group over a set of t elements. The transpose of a matrix A is denoted with \(A^t\) and \(I_{\ell }\) denotes the \(\ell \times \ell \) identity matrix.

2.1 Tensors

Given a positive integer d, a d-tensor over \(\mathbb {F}_q\) is an element of the tensor space \(\bigotimes _{i=1}^d \mathbb {F}_q^{n_i}\). If we fix the bases \(\{e^{(i)}_1,\dots ,e^{(i)}_{n_i}\}\) for every linear space \(\mathbb {F}_q^{n_i}\), we can represent a d-tensor T with respect to its coefficients \(T(i_1,\dots ,i_d)\) in \(\mathbb {F}_q\)

$$\begin{aligned} T = \sum _{i_1,\dots ,i_d}T(i_1,\dots ,i_d)e_{i_1}^{(1)}\otimes \cdots \otimes e_{i_d}^{(d)}. \end{aligned}$$

We say that T has size \(n_1\times \dots \times n_d\). For example, observe that 1-tensors and 2-tensors can be represented as vectors and matrices, respectively.

A rank one (or decomposable) tensor is an element of the form \(a_1\otimes \cdots \otimes a_d\), where \(a_i\) is in \(\mathbb {F}_q^{n_i}\). Given a d-tensor T, its rank is the minimal non-negative integer r such that there exist \(t_1,\dots ,t_r\) rank one tensors for which \(T=\sum _{i=1}^r t_i\). In general, computing the rank of a d-tensor is a hard task for \(d\ge 3\) [15, 25, 28].
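As a quick illustration (a sketch in Python over the integers rather than \(\mathbb {F}_q\)), rank-one tensors are outer products of vectors, and for 2-tensors the tensor rank is the usual matrix rank:

```python
import numpy as np

# Rank-one (decomposable) 3-tensor a ⊗ b ⊗ c, built from its factor vectors.
a, b, c = np.array([1, 0]), np.array([1, 1]), np.array([0, 1, 1])
t1 = np.einsum('i,j,k->ijk', a, b, c)          # size 2 x 2 x 3
assert t1.shape == (2, 2, 3)

# A sum of r rank-one tensors has rank at most r.
t2 = np.einsum('i,j,k->ijk',
               np.array([0, 1]), np.array([1, 0]), np.array([1, 0, 0]))
T = t1 + t2                                     # rank at most 2

# For 2-tensors, tensor rank coincides with matrix rank.
M = np.outer(a, c) + np.outer(np.array([0, 1]), np.array([1, 0, 0]))
assert np.linalg.matrix_rank(M) == 2
```

For \(d\ge 3\) no such direct routine exists, which reflects the hardness results cited above.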

The projection onto an element a can be defined for any a in \(\mathbb {F}_q^{n_j}\). Since we are mainly interested in projections onto an element \(e_k^{(j)}\) of the basis of \(\mathbb {F}_q^{n_j}\), we define

$$\begin{aligned} \begin{aligned} {{\,\textrm{proj}\,}}&{}_{e_k^{(j)}}:\; \mathbb {F}_q^{n_1}\otimes \dots \otimes \mathbb {F}_q^{n_j} \otimes \dots \otimes \mathbb {F}_q^{n_d} \rightarrow \mathbb {F}_q^{n_1}\otimes \dots \otimes \mathbb {F}_q^{n_{j-1}} \otimes \mathbb {F}_q^{n_{j+1}} \otimes \dots \otimes \mathbb {F}_q^{n_d},\\&\sum _{i_1,\dots ,i_d}T(i_1,\dots ,i_d)e_{i_1}^{(1)}\otimes \cdots \otimes e_{i_d}^{(d)} \\&\mapsto \sum _{\begin{array}{c} i_1,\dots ,i_{j-1},\\ i_{j+1},\dots ,i_d \end{array}}T(i_1,\dots ,i_{j-1},k,i_{j+1},\dots ,i_d)e_{i_1}^{(1)}\otimes \cdots \otimes e_{i_{j-1}}^{(j-1)} \otimes e_{i_{j+1}}^{(j+1)} \otimes \cdots \otimes e_{i_d}^{(d)}. \end{aligned}\nonumber \\ \end{aligned}$$
(1)

In other words, we send to zero every component of \(\sum _{i_1,\dots ,i_d}T(i_1,\dots ,i_d)e_{i_1}^{(1)}\otimes \cdots \otimes e_{i_d}^{(d)}\) which does not contain \(e_k^{(j)}\), obtaining a \((d-1)\)-tensor.
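In coordinates, projecting onto a basis vector \(e_k^{(j)}\) simply extracts the k-th slice of the coefficient array along the j-th axis. A minimal numpy sketch (0-indexed, over the integers for illustration):

```python
import numpy as np

def proj(T, j, k):
    """Projection of T onto the basis vector e_k of the j-th space:
    keep only the components containing e_k^{(j)}, yielding a (d-1)-tensor.
    In coordinates this is the k-th slice along axis j (0-indexed)."""
    return np.take(T, k, axis=j)

T = np.arange(24).reshape(2, 3, 4)      # a 3-tensor of size 2 x 3 x 4
P = proj(T, 1, 2)                        # project onto the third basis vector of the second space
assert P.shape == (2, 4)                 # a 2-tensor remains
assert (P == T[:, 2, :]).all()
```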

A group action on the vector space \(\mathcal {T}=\bigotimes _{i=1}^d \mathbb {F}_q^{n_i}\) of d-tensors of size \(n_1\times \dots \times n_d\) can be defined from the Cartesian product of invertible matrices \(G={{\,\textrm{GL}\,}}(n_1)\times \dots \times {{\,\textrm{GL}\,}}(n_d)\) as follows

$$\begin{aligned} \begin{aligned} \star :\;&G \times \mathcal {T} \rightarrow \mathcal {T},\\&\left( \left( A_1,\dots ,A_d\right) , \sum _{i_1,\dots ,i_d}T(i_1,\dots ,i_d)e_{i_1}^{(1)}\otimes \cdots \otimes e_{i_d}^{(d)} \right) \\&\mapsto \sum _{i_1,\dots ,i_d}T(i_1,\dots ,i_d)A_1e_{i_1}^{(1)}\otimes \cdots \otimes A_de_{i_d}^{(d)}. \end{aligned} \end{aligned}$$

It can be shown that the action defined above does not change the rank of a tensor. In particular, this implies that acting with an element of \({{\,\textrm{GL}\,}}(n_1)\times \dots \times {{\,\textrm{GL}\,}}(n_{i-1}) \times {{\,\textrm{GL}\,}}(n_{i+1}) \times \dots \times {{\,\textrm{GL}\,}}(n_d)\) on the projection \({{\,\textrm{proj}\,}}_{e_k^{(i)}}(T)\) of a tensor T yields a tensor with the same rank as \({{\,\textrm{proj}\,}}_{e_k^{(i)}}(T)\). We summarize these properties in formulas:

  1.

    \({{\,\textrm{rk}\,}}\left( (A_1,\dots ,A_d)\star T \right) = {{\,\textrm{rk}\,}}\left( T\right) \),

  2.

    \({{\,\textrm{rk}\,}}\left( (A_1,\dots ,A_{i-1},A_{i+1},\dots ,A_d)\star {{\,\textrm{proj}\,}}_{e_k^{(i)}}(T) \right) = {{\,\textrm{rk}\,}}\left( {{\,\textrm{proj}\,}}_{e_k^{(i)}}(T) \right) \).
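These properties can be checked numerically. The sketch below (Python, over the rationals instead of \(\mathbb {F}_q\); the helper `act` is our own name) applies the matrices mode by mode; for a 2-tensor the action is \(T\mapsto ATB^t\), and the rank is preserved as in Property 1:

```python
import numpy as np

def act(mats, T):
    """(A_1, ..., A_d) ⋆ T: apply A_i along the i-th mode of T."""
    out = T
    for i, A in enumerate(mats):
        out = np.moveaxis(np.tensordot(A, out, axes=(1, i)), 0, i)
    return out

# Unit triangular matrices are invertible, so the action must preserve rank.
A = np.array([[1., 2, 0], [0, 1, 3], [0, 0, 1]])
B = np.array([[1., 0, 0, 1], [0, 1, 0, 0], [0, 0, 1, 2], [0, 0, 0, 1]])
T = np.array([[1., 0, 1, 0], [0, 1, 0, 0], [1, 1, 1, 0]])   # rank 2

S = act((A, B), T)
assert np.allclose(S, A @ T @ B.T)       # for d = 2 the action is A T B^t
assert np.linalg.matrix_rank(S) == np.linalg.matrix_rank(T) == 2
```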

The isomorphism problem between tensors has some interesting links and properties in computational complexity theory. Here we recall the formal definition of the problem.

Definition 1

The d-tensor isomorphism (d-\(\textsf {TI}\)) problem is given by

  • input: two d-tensors \(T_1\) and \(T_2\) in \(\bigotimes _{i=1}^d \mathbb {F}_q^{n_i}\);

  • output: YES if there exists an element g of \({{\,\textrm{GL}\,}}(n_1) \times \dots \times {{\,\textrm{GL}\,}}(n_d)\) such that \(T_2=g\star T_1\) and NO otherwise.

The search version is the problem of finding such matrices, given two isomorphic d-tensors.

If we recall the decision problems d-Colourability (d-\(\textsf {COL}\)) and d-\(\textsf {SAT}\), it is known that the first integer for which these problems are \(\textsf {NP} \)-complete is \(d=3\). In particular, there are polynomial reductions from d-\(\textsf {COL}\) to 3-\(\textsf {COL}\) and from d-\(\textsf {SAT}\) to 3-\(\textsf {SAT}\). The same happens for d-\(\textsf {TI}\) and 3-\(\textsf {TI}\), as shown in the following astonishing result from [14].

Theorem 1

d-\(\textsf {TI}\) and 3-\(\textsf {TI}\) are polynomially equivalent.

Since a lot of different problems can be reduced to d-\(\textsf {TI}\), in the same flavor as the complexity class \(\textsf {GI}\) (the set of problems reducible in polynomial time to graph isomorphism [17]), the authors of [13] define the \(\textsf {TI}\) class.

Definition 2

The tensor isomorphism class (\(\textsf {TI}\)) contains the decision problems that can be polynomially reduced to d-\(\textsf {TI}\) for a certain d. A problem D is said to be \(\textsf {TI}\)-hard if d-\(\textsf {TI}\) can be reduced to D, for any d. A problem is said to be \(\textsf {TI}\)-complete if it is in \(\textsf {TI}\) and is \(\textsf {TI}\)-hard.

It is easy to see that \(\textsf {TI}\) is a subset of \(\textsf {NP} \), and we can adapt the \(\textsf {AM} \) protocols for graph non-isomorphism [11] and code non-equivalence [23] to show that \(\textsf {TI}\) is in \(\textsf {co}\textsf {AM} \). This means that no problem in \(\textsf {TI}\) can be \(\textsf {NP} \)-complete unless the polynomial hierarchy collapses at the second level [4].

2.2 Linear codes in different metrics

A linear code \(\mathcal {C}\) of dimension k is a k-dimensional linear subspace of a vector space \(\mathbb {V}\) over \(\mathbb {F}_q\); the ambient space \(\mathbb {V}\) depends on the form of the code. A code is endowed with a weight map \(\textrm{w}\) defined on \(\mathbb {V}\)

$$\begin{aligned} \textrm{w}: \mathbb {V} \rightarrow \mathbb {N} \end{aligned}$$

such that \(\textrm{w}(x)=0\) if and only if \(x=0\). We can define a metric \(\textrm{d}\) from a weight \(\textrm{w}\)

$$\begin{aligned} \textrm{d}: \mathbb {V}\times \mathbb {V} \rightarrow \mathbb {N},\; (x,y)\mapsto \textrm{w}(y-x). \end{aligned}$$

Throughout this paper, we will consider three weights with their metrics. We highlight that, even if the same code can be endowed with two or more different metrics, we always consider a code with just one metric.

The first one is the Hamming weight. Here we consider linear codes embedded in \(\mathbb {F}_q^n\), and we say that the code \(\mathcal {C}\) has length n. This weight is defined as the number of non-zero entries of a vector, \({{\,\mathrm{\mathrm {w_H}}\,}}(x) = \#\{i : x_i \ne 0\}\). We refer to the distance induced by \({{\,\mathrm{\mathrm {w_H}}\,}}\) as \({{\,\mathrm{\mathrm {d_H}}\,}}\). A useful representation of a k-dimensional code \(\mathcal {C}\) of length n in the Hamming metric is given by its generator matrix, a \(k\times n\) matrix having a basis \(\{v_1,\dots ,v_k\}\) of \(\mathcal {C}\) as rows. Notice that the generator matrix is not unique, since there are many bases for the same linear code.
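For instance, a minimal sketch of the Hamming weight and its induced distance (Python, entries treated as integers; over \(\mathbb {F}_q\) the subtraction would be taken mod q):

```python
import numpy as np

def w_H(x):
    """Hamming weight: number of non-zero entries of a vector."""
    return int(np.count_nonzero(x))

def d_H(x, y):
    """Hamming distance induced by w_H: d(x, y) = w(y - x)."""
    return w_H(np.array(y) - np.array(x))

assert w_H([1, 0, 2, 0, 3]) == 3       # three non-zero entries
assert d_H([1, 1, 0], [1, 0, 1]) == 2  # the vectors differ in two positions
assert w_H([0, 0, 0]) == 0             # w(x) = 0 iff x = 0
```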

The second weight we consider is defined on matrices. This means that our code \(\mathcal {C}\) is a space of matrices and usually we refer to it as a matrix code. If we consider \(n\times m\) matrices, the code has length \(n\times m\). The map

$$\begin{aligned} {{\,\mathrm{\mathrm {w_{R}}}\,}}: \mathbb {F}_q^{n\times m} \rightarrow \mathbb {N},\; M \mapsto {{\,\textrm{rk}\,}}(M) \end{aligned}$$

is defined as the rank of the matrix M. Hence, the distance \({{\,\mathrm{\mathrm {d_{R}}}\,}}\) between \(M_1\) and \(M_2\) is given by the rank of \(M_2-M_1\).

The last class of codes we consider is embedded into the direct sum (or Cartesian product) of spaces of matrices. Given natural numbers \(d,n_1,\dots ,n_d,m_1,\dots ,m_d\), the linear space \(\mathbb {V}\) defined above is \(\mathbb {F}_q^{n_1\times m_1} \oplus \cdots \oplus \mathbb {F}_q^{n_d\times m_d}\). We can define the sum-rank weight as the sum of the ranks

$$\begin{aligned} \begin{array}{rccl} {{\,\mathrm{\mathrm {w_{SR}}}\,}}:&{}\mathbb {F}_q^{n_1\times m_1} \oplus \cdots \oplus \mathbb {F}_q^{n_d\times m_d}&{}\rightarrow &{}\mathbb {N},\\ &{}(M_1,\dots ,M_d)&{}\mapsto &{} \sum _{i=1}^{d}{{\,\textrm{rk}\,}}\left( M_i\right) . \end{array} \end{aligned}$$

The distance \({{\,\mathrm{\mathrm {d_{SR}}}\,}}\) induced by \({{\,\mathrm{\mathrm {w_{SR}}}\,}}\) is called sum-rank metric and we call a code endowed with this distance a sum-rank code of parameters \(d,n_1,\dots ,n_d,m_1,\dots ,m_d\).
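The sum-rank weight can be computed blockwise. The sketch below (Python, for a prime q, with a small Gaussian-elimination rank over \(\mathbb {F}_p\); the helper names `rank_mod_p` and `w_SR` are our own):

```python
def rank_mod_p(M, p):
    """Rank of a matrix over the prime field F_p, by Gaussian elimination."""
    M = [[x % p for x in row] for row in M]
    rank, rows, cols = 0, len(M), len(M[0]) if M else 0
    for c in range(cols):
        piv = next((r for r in range(rank, rows) if M[r][c] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], -1, p)              # modular inverse (p prime)
        M[rank] = [(inv * x) % p for x in M[rank]]
        for r in range(rows):
            if r != rank and M[r][c]:
                M[r] = [(a - M[r][c] * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def w_SR(Ms, p):
    """Sum-rank weight: the sum of the ranks of the blocks, over F_p."""
    return sum(rank_mod_p(M, p) for M in Ms)

M1 = [[1, 1], [1, 1]]          # rank 1 over F_2
M2 = [[1, 0, 1], [0, 1, 0]]    # rank 2 over F_2
assert w_SR((M1, M2), 2) == 3
```

Note that the rank must be taken over \(\mathbb {F}_p\): for example, `rank_mod_p([[2, 4], [1, 2]], 5)` is 1, since the second row is 3 times the first modulo 5.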

Observe that the sum-rank metric is both a generalization of the Hamming and the rank distance. For \(n_1=\dots =n_d=m_1=\dots =m_d=1\), the sum-rank metric coincides with the Hamming metric, and sum-rank codes can be seen as linear codes of length d in \(\mathbb {F}_q^d\). If we have \(d=1\), then \({{\,\mathrm{\mathrm {d_{SR}}}\,}}\) is the rank metric, and sum-rank codes are matrix codes of size \(n_1\times m_1\).

2.3 Code equivalence

We recall the general problem of deciding whether two linear codes are equivalent. Given a weight \(\textrm{w}\) and a metric \(\textrm{d}\), we say that an invertible linear map f from the vector space \(\mathbb {V}\) to itself preserves the metric (or, equivalently, the weight) if \(\textrm{w}\left( f(x)\right) = \textrm{w}(x)\) for every x in \(\mathbb {V}\). We call such maps linear isometries, and they form a group under composition. Two linear codes are linearly equivalent if there exists a linear isometry between them. The task of checking whether two codes are equivalent is called the linear code equivalence problem. Since in the rest of the paper we will consider only linear isometries, we sometimes drop the word “linear” when we talk about isometries or equivalences; in particular, we refer to the problem above as code equivalence (\(\textsf {CE}\)). Its hardness depends on which codes and metric we consider. In the following, we define \(\textsf {CE}\) with respect to the three different metrics we saw in Sect. 2.2.

We can characterize linear isometries in the Hamming metric, reporting a well-known result from [18].

Proposition 2

If \(f:\mathbb {F}_q^{n}\rightarrow \mathbb {F}_q^{n}\) is a linear isometry in the Hamming metric, then there exists an \(n\times n\) monomial matrix Q such that \(f(x)=xQ\) for all x in \(\mathbb {F}_q^{n}\).

Then two codes \(\mathcal {C}\) and \(\mathcal {D}\) are linearly equivalent if there exists a monomial matrix Q such that

$$\begin{aligned} \mathcal {C} = \left\{ yQ\in \mathbb {F}_q^n: y\in \mathcal {D} \right\} . \end{aligned}$$

The generator matrix G of a code \(\mathcal {C}\) is not unique: for every invertible matrix S, the matrix SG generates the same code \(\mathcal {C}\). This must be taken into account when we state the equivalence problem in terms of generator matrices.

Definition 3

The Hamming linear code equivalence (\(\textsf {CE}_{\textsf {H}}\)) problem is given by

  • input: two codes \(\mathcal {C}\) and \(\mathcal {D}\) represented by their \(k\times n\) generator matrices G and \(G'\), respectively;

  • output: YES if there exist a \(k\times k\) invertible matrix S and an \(n\times n\) monomial matrix Q such that \(G=SG'Q\), and NO otherwise.

The search version is the problem of finding such matrices given two linearly equivalent codes.

Observe that the matrix S in the above definition models a possible change of basis, while the monomial matrix Q is a permutation and a scaling of the coordinates of the code.
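Concretely, from a generator matrix \(G'\), a change of basis S, and a monomial matrix \(Q = DP\), the matrix \(SG'Q\) generates a linearly equivalent code, and the monomial action preserves Hamming weights. A small sketch over \(\mathbb {F}_5\) (illustrative values, not taken from the paper):

```python
import numpy as np

p = 5
Gp = np.array([[1, 0, 2, 3],
               [0, 1, 4, 1]])                 # generator matrix of a [4, 2] code over F_5

S = np.array([[1, 1],
              [0, 1]])                        # invertible change of basis (det = 1)
D = np.diag([2, 3, 1, 4])                     # non-zero diagonal entries
P = np.eye(4, dtype=int)[[2, 0, 3, 1]]        # permutation matrix
Q = (D @ P) % p                               # monomial matrix

G = (S @ Gp @ Q) % p                          # generates a linearly equivalent code

# Q has exactly one non-zero entry in every row and every column.
assert all(np.count_nonzero(Q[i, :]) == 1 for i in range(4))
assert all(np.count_nonzero(Q[:, j]) == 1 for j in range(4))

# Monomial maps are Hamming isometries: every codeword keeps its weight.
for a in range(p):
    for b in range(p):
        y = (np.array([a, b]) @ Gp) % p
        assert np.count_nonzero((y @ Q) % p) == np.count_nonzero(y)
```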

Now we consider the rank metric. From [20], linear isometries for the rank metric can be characterized as follows.

Proposition 3

If \(f:\mathbb {F}_q^{n\times m}\rightarrow \mathbb {F}_q^{n\times m}\) is a linear isometry in the rank metric, then there exist an \(n\times n\) invertible matrix A and an \(m\times m\) invertible matrix B such that

  1.

    \(f(M)=AMB\) for all M in \(\mathbb {F}_q^{n\times m}\), or

  2.

    \(f(M)=AM^tB\) for all M in \(\mathbb {F}_q^{n\times m}\),

where the latter case can occur only if \(n=m\).

Usually, an isometry is denoted by a pair of matrices (A, B).

In the literature, for example in [7, 24], the linear equivalence problem for matrix codes is defined taking into account only the first case given in Proposition 3, even when \(n=m\). In terms of the computational effort to solve the problem, this is not an issue, since considering both cases requires at most twice the time of considering only the first one, hence just a polynomial overhead that we can ignore. For simplicity, we follow the approach of [7, 24] in the following definition.

Definition 4

The rank linear code equivalence (\(\textsf {CE}_{\textsf {rk}}\)) problem is given by

  • input: two \(n\times m\) matrix codes \(\mathcal {C}\) and \(\mathcal {D}\) of dimension s represented by their bases;

  • output: YES if there exist matrices A in \({{\,\textrm{GL}\,}}(n)\) and B in \({{\,\textrm{GL}\,}}(m)\) such that, for every M in \(\mathcal {D}\), we have that AMB is in \(\mathcal {C}\), and NO otherwise.

The search version is the problem of finding such matrices given two linearly equivalent codes.

In the literature, this problem is also called matrix code equivalence (\(\textsf {MCE}\)).

Given a matrix code \(\mathcal {C}\), an automorphism of \(\mathcal {C}\) is a linear isometry f such that \(f(\mathcal {C})=\mathcal {C}\). We say that \(\mathcal {C}\) has trivial automorphisms if the only automorphisms of \(\mathcal {C}\) are of the form \(M\mapsto \left( \lambda I_n\right) M\left( \mu I_m\right) \) for some non-zero \(\lambda ,\mu \) in \(\mathbb {F}_q\).

The equivalence problem between sum-rank codes was introduced in 2020 by Martínez-Peñas [19]. Before stating the problem, we characterize linear sum-rank isometries. This result is given in [5] and a slightly less general statement can be found in [21, Proposition 4.26].

Proposition 4

Let \(f:\mathbb {F}_q^{n_1\times m_1} \oplus \cdots \oplus \mathbb {F}_q^{n_d\times m_d} \rightarrow \mathbb {F}_q^{n_1\times m_1} \oplus \cdots \oplus \mathbb {F}_q^{n_d\times m_d}\) be a linear isometry in the sum-rank metric. Then there exists a permutation \(\sigma \) in \(\mathcal {S}_d\) such that \(n_i=n_{\sigma (i)}\) and \(m_i=m_{\sigma (i)}\) for every i, and there exist \(\psi _i:\mathbb {F}_q^{n_i\times m_i} \rightarrow \mathbb {F}_q^{n_i\times m_i}\) isometries in the rank metric such that

$$\begin{aligned} f(M_1,\dots ,M_d) = \left( \psi _1(M_{\sigma (1)}), \dots , \psi _d(M_{\sigma (d)})\right) \end{aligned}$$

for each \(M_i\in \mathbb {F}_q^{n_i\times m_i}\).

We are ready to state the linear equivalence problem for sum-rank codes. As in the case of \(\textsf {CE}_{\textsf {rk}}\), we choose not to include the case of transposition of matrices.

Remark 1

Observe that, even if for \(\textsf {CE}_{\textsf {rk}}\) the inclusion of the transposition of matrices causes only a polynomial blow-up, this is not the case for \(\textsf {CE}_{\textsf {sr}}\). In fact, from [21] we can see that the transpositions act as the group \(\mathbb {F}_2^d\). This implies that there is an overhead of \(\mathcal {O}(2^d)\) between considering the transposition of matrices or not; see, for example, [7, Remark 2] for the rank case.

Recall that, as linear space, a sum-rank code \(\mathcal {C}\) of parameters \(d,n_1,\dots ,n_d,m_1,\dots ,m_d\) and dimension k admits a basis of the form \(\{\textbf{C}_1,\dots ,\textbf{C}_k\}\) where \(\textbf{C}_i=\left( C_i^{(1)},\dots ,C_i^{(d)}\right) \) is a tuple of matrices. In particular, \(C_i^{(j)}\) is in \(\mathbb {F}_q^{n_j\times m_j}\) for each i and j.

Definition 5

The sum-rank linear code equivalence (\(\textsf {CE}_{\textsf {sr}}\)) problem is given by

  • input: two sum-rank codes \(\mathcal {C}\) and \(\mathcal {D}\), of parameters \(d,n_1,\dots ,n_d,m_1,\dots ,m_d\) and dimension k represented by their bases \(\{\textbf{C}_i\}\) and \(\{\textbf{D}_i\}\), respectively;

  • output: YES if there exist matrices \(A_1,\dots ,A_d,B_1,\dots ,B_d\), where \(A_i\) is in \({{\,\textrm{GL}\,}}(n_i)\) and \(B_i\) is in \({{\,\textrm{GL}\,}}(m_i)\), and a permutation \(\sigma \) in \(\mathcal {S}_d\) such that

    $$\begin{aligned} \mathcal {C} = {{\,\textrm{Span}\,}}&\Bigl \{ \left( A_1D_1^{(\sigma (1))}B_1,\dots , A_d D_1^{(\sigma (d))} B_d \right) ,\dots ,\\&\left( A_1D_k^{(\sigma (1))}B_1 ,\dots , A_d D_k^{(\sigma (d))} B_d \right) \Bigr \} , \end{aligned}$$

    and NO otherwise.

The search version is the problem of finding such matrices given two linearly equivalent codes.

This formulation encompasses both the previous linear equivalence problems, for the Hamming and the rank metric, as special cases. Due to this, we can formulate the next result.

Proposition 5

Both \(\textsf {CE}_{\textsf {H}}\) and \(\textsf {CE}_{\textsf {rk}}\) polynomially reduce to \(\textsf {CE}_{\textsf {sr}}\).

A natural question is about the converse, whether problems in the Hamming or the sum-rank metric reduce to \(\textsf {CE}_{\textsf {rk}}\). It has been shown independently in [7, 14] that \(\textsf {CE}_{\textsf {H}}\) can be reduced to \(\textsf {CE}_{\textsf {rk}}\), using two different approaches. In [14, Sect. 5], the reduction uses 3-tensors via an “individualization” argument to force a matrix to be monomial. In [7], given a linear code of dimension k in \(\mathbb {F}_q^n\), the reduction defines a matrix code in \(\mathbb {F}_q^{k\times (k+n)}\). This approach will be generalized to the setting of d-tensors in the following section, and it will give us some reductions between tensor problems in dimension higher than 3.

3 Monomial isomorphism problems

In this section, we examine the relationship between tensor isomorphism problems when the matrix acting on a specific space is required to be monomial, instead of using the action of the entire group \({{\,\textrm{GL}\,}}(n_1)\times \cdots \times {{\,\textrm{GL}\,}}(n_d)\). Specifically, there exists a j such that the action on the j-th space is given by \({{\,\textrm{Mon}\,}}(n_j)\). For simplicity, we will refer to this special space as the last one throughout the remainder of the article and in the problem statements. Since \({{\,\textrm{Mon}\,}}(n_d)\) is a subgroup of \({{\,\textrm{GL}\,}}(n_d)\), the action of the group \({{\,\textrm{GL}\,}}(n_1) \times \cdots \times {{\,\textrm{GL}\,}}(n_{d-1})\times {{\,\textrm{Mon}\,}}(n_d)\) on d-tensors is well-defined. When there exists an element g sending the d-tensor \(T_1\) to \(T_2\), we say that they are monomially isomorphic.

Definition 6

The monomial d-tensor isomorphism (d-\(\textsf {TI}^*\)) problem is given by

  • input: two d-tensors \(T_1\) and \(T_2\) in \(\bigotimes _{i=1}^d \mathbb {F}_q^{n_i}\);

  • output: YES if there exists an element g of \({{\,\textrm{GL}\,}}(n_1) \times \cdots \times {{\,\textrm{GL}\,}}(n_{d-1})\times {{\,\textrm{Mon}\,}}(n_d)\) such that \(T_2=g\star T_1\) and NO otherwise.

The search version is the problem of finding such matrices, given two monomially isomorphic d-tensors.

We recall that, if the action of the monomial matrix is not on the last vector space, we can permute the spaces to obtain the problem above. Observe that the problem 2-\(\textsf {TI}^*\) is exactly \(\textsf {CE}_{\textsf {H}}\) and the proof that \(\textsf {CE}_{\textsf {H}}\) reduces to \(\textsf {CE}_{\textsf {rk}}\) from [7] can be viewed as a reduction from 2-\(\textsf {TI}^*\) to 3-\(\textsf {TI}\). In the following, we generalize this approach to reduce d-\(\textsf {TI}^*\) to \((2d-1)\)-\(\textsf {TI}\).

Let \(\mathbb {V}_1,\dots ,\mathbb {V}_d\) be vector spaces over \(\mathbb {F}_q\) of dimension \(n_1,\dots ,n_d\), respectively. Now let \(\{v^{(j)}_1,\dots , v^{(j)}_{n_j} \}\) be a basis for the space \(\mathbb {V}_j\). We recall that \(\mathbb {W}_1\oplus \mathbb {W}_2\) is the direct sum of vector spaces \(\mathbb {W}_1\) and \(\mathbb {W}_2\) and its elements are of the form \((w_1,w_2)\). The action of an element of \({{\,\textrm{GL}\,}}(\dim (\mathbb {W}_1)+\dim (\mathbb {W}_2))\) is block-by-block:

$$\begin{aligned} \begin{pmatrix} A_{11} &{}\quad A_{12}\\ A_{21} &{}\quad A_{22} \end{pmatrix} \cdot \begin{pmatrix} w_1 \\ w_2 \end{pmatrix} = \begin{pmatrix} A_{11}w_1 + A_{12}w_2 \\ A_{21}w_1 + A_{22}w_2 \end{pmatrix}. \end{aligned}$$

The reduction we use is the following map, going from a space of d-tensors to a space of \((2d-1)\)-tensors,

$$\begin{aligned} \begin{aligned} \Psi&: \bigotimes _{i=1}^{d}\mathbb {V}_i \rightarrow \left( \bigotimes _{i=1}^{d-1}\mathbb {V}_i\right) \otimes \left( \bigotimes _{i=1}^{d-1}\left( \mathbb {V}_i\oplus \mathbb {V}_d\right) \right) \otimes \mathbb {V}_d,\\&\quad \sum _{i_1,\dots ,i_d} T(i_1,\dots ,i_d)v^{(1)}_{i_1}\otimes \dots \otimes v^{(d)}_{i_d} \mapsto \\&\quad \sum _{\begin{array}{c} i_1,\dots ,i_d,\\ j_1,\dots ,j_{d-1} \end{array}} T(i_1,\dots ,i_d)T(j_1,\dots ,j_{d-1},i_d)v^{(1)}_{i_1}\otimes \dots \otimes v^{(d-1)}_{i_{d-1}} \\&\qquad \otimes (v^{(1)}_{j_1},0)\otimes \dots \otimes (v^{(d-1)}_{j_{d-1}},0) \otimes v^{(d)}_{i_d}\\&\qquad + \sum _{i_1,\dots ,i_d} T(i_1,\dots ,i_d) v^{(1)}_{i_1}\otimes \dots \otimes v^{(d-1)}_{i_{d-1}} \otimes (0,v^{(d)}_{i_d}) \otimes \dots \otimes (0,v^{(d)}_{i_d}) \otimes v^{(d)}_{i_d}. \end{aligned} \end{aligned}$$
(2)

Example 1

(Running example) As an example, consider \(d=3\) and tensors in \(\mathbb {F}_2^2\otimes \mathbb {F}_2^2 \otimes \mathbb {F}_2^3\). The map \(\Psi \) becomes

$$\begin{aligned} \begin{aligned} \Psi&: \mathbb {F}_2^2\otimes \mathbb {F}_2^2 \otimes \mathbb {F}_2^3 \rightarrow \mathbb {F}_2^2\otimes \mathbb {F}_2^2 \otimes \left( \mathbb {F}_2^2\oplus \mathbb {F}_2^3\right) \otimes \left( \mathbb {F}_2^2\oplus \mathbb {F}_2^3\right) \otimes \mathbb {F}_2^3,\\&\quad \sum _{i,j,k} T(i,j,k)e_i\otimes e_j \otimes e_k \mapsto \\&\quad \sum _{\begin{array}{c} i,j,k,\\ i',j' \end{array}} T(i,j,k)T(i',j',k)e_i\otimes e_j \otimes (e_{i'},0)\otimes (e_{j'},0) \otimes e_k\\&\qquad + \sum _{i,j,k} T(i,j,k) e_i\otimes e_j \otimes (0,e_k) \otimes (0,e_k) \otimes e_k. \end{aligned} \end{aligned}$$

Given the tensor

$$\begin{aligned} T_1 = e_1\otimes e_1\otimes e_1 + e_2\otimes e_2\otimes e_2 + e_1\otimes e_2\otimes e_3, \end{aligned}$$

its image under \(\Psi \) is given by

$$\begin{aligned} \begin{aligned} \Psi (T_1)&= e_1\otimes e_1 \otimes (e_1,0)\otimes (e_1,0) \otimes e_1 + e_2\otimes e_2 \otimes (e_2,0)\otimes (e_2,0) \otimes e_2 \\&\quad + e_1\otimes e_2 \otimes (e_1,0)\otimes (e_2,0) \otimes e_3 + e_1\otimes e_1 \otimes (0,e_1)\otimes (0,e_1) \otimes e_1 \\&\quad + e_2\otimes e_2 \otimes (0,e_2)\otimes (0,e_2) \otimes e_2 + e_1\otimes e_2 \otimes (0,e_3)\otimes (0,e_3) \otimes e_3 \\ \end{aligned} \end{aligned}$$
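The computation above can be replayed mechanically. The sketch below (Python, specializing Eq. (2) to \(d=3\); the function name `Psi` is our own) builds \(\Psi (T_1)\) from the coefficient array of \(T_1\) and confirms it has exactly the six rank-one terms displayed above:

```python
import numpy as np

# Coefficients of T1 = e1⊗e1⊗e1 + e2⊗e2⊗e2 + e1⊗e2⊗e3 (0-indexed).
T1 = np.zeros((2, 2, 3), dtype=int)
T1[0, 0, 0] = T1[1, 1, 1] = T1[0, 1, 2] = 1

def Psi(T, q=2):
    """Eq. (2) for d = 3: a 5-tensor whose third and fourth factors live in
    F_q^{n1} ⊕ F_q^{n3} and F_q^{n2} ⊕ F_q^{n3}, respectively."""
    n1, n2, n3 = T.shape
    out = np.zeros((n1, n2, n1 + n3, n2 + n3, n3), dtype=int)
    for i in range(n1):
        for j in range(n2):
            for k in range(n3):
                # second summand: e_i ⊗ e_j ⊗ (0, e_k) ⊗ (0, e_k) ⊗ e_k
                out[i, j, n1 + k, n2 + k, k] += T[i, j, k]
                # first summand: coefficient T(i,j,k) · T(i',j',k)
                for ip in range(n1):
                    for jp in range(n2):
                        out[i, j, ip, jp, k] += T[i, j, k] * T[ip, jp, k]
    return out % q

P1 = Psi(T1)
assert np.count_nonzero(P1) == 6               # the six terms of Ψ(T1) above
assert P1[0, 0, 0, 0, 0] == 1 and P1[0, 1, 0, 1, 2] == 1
assert P1[0, 0, 2, 2, 0] == 1                  # e1⊗e1⊗(0,e1)⊗(0,e1)⊗e1
```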

In the following, we show that two tensors \(T_1\) and \(T_2\) are monomially isomorphic if and only if \(\Psi (T_1)\) and \(\Psi (T_2)\) are isomorphic.

Proposition 6

If \(T_1\) and \(T_2\) are two monomially isomorphic d-tensors, then \(\Psi (T_1)\) and \(\Psi (T_2)\) are isomorphic as \((2d-1)\)-tensors.

Proof

Suppose that \(T_1\) and \(T_2\) are in \(\bigotimes _{i=1}^{d}\mathbb {V}_i\) as defined above. Now, since \(T_1\) and \(T_2\) are monomially isomorphic, there exist \(d-1\) invertible matrices \(A_1,\dots ,A_{d-1}\) and a monomial matrix Q such that

$$\begin{aligned} (A_1,\dots ,A_{d-1},Q)\star T_1 = T_2. \end{aligned}$$

Let Q be the product of a permutation matrix P corresponding to the permutation \(\sigma \) in \(\mathcal {S}_{n_d}\) and a diagonal matrix \(D=\textrm{diag}(\alpha _1,\dots ,\alpha _{n_d})\). More explicitly

$$\begin{aligned} \begin{aligned}&\sum _{i_1,\dots ,i_d} T_1(i_1,\dots ,i_d)A_1v^{(1)}_{i_1}\otimes \dots \otimes A_{d-1}v^{(d-1)}_{i_{d-1}} \otimes \alpha _{i_d}v^{(d)}_{\sigma (i_d)} \\&= \sum _{i_1,\dots ,i_d} T_2(i_1,\dots ,i_d)v^{(1)}_{i_1}\otimes \dots \otimes v^{(d)}_{i_d}. \end{aligned} \end{aligned}$$
(3)

We claim that

$$\begin{aligned} (A_1,\dots ,A_{d-1},\tilde{A_1},\dots ,\tilde{A}_{d-1},\tilde{Q})\star \Psi (T_1)=\Psi (T_2), \end{aligned}$$

where for every \(i=1,\dots ,d-2\)

$$\begin{aligned} \tilde{A}_i = \begin{pmatrix} A_i &{} 0\\ 0 &{} P \end{pmatrix}, \end{aligned}$$

while

$$\begin{aligned} \tilde{A}_{d-1} = \begin{pmatrix} A_{d-1} &{} 0\\ 0 &{} PD^{-1} \end{pmatrix},\quad \text { and }\quad \tilde{Q} = PD^2. \end{aligned}$$

Consider \(T_2\) and, for \(k\) in \(\{1,\dots ,n_d\}\), write its projection onto \(v^{(d)}_{k}\)

$$\begin{aligned} {{\,\textrm{proj}\,}}_{v^{(d)}_{k}}(T_2) = \sum _{i_1,\dots ,i_{d-1}} T_2(i_1,\dots ,i_{d-1},k)v^{(1)}_{i_1}\otimes \dots \otimes v^{(d-1)}_{i_{d-1}}. \end{aligned}$$
(4)

Combining Eq. (3) and Eq. (4), we have

$$\begin{aligned} \begin{aligned}&\sum _{i_1,\dots ,i_{d-1}} T_2(i_1,\dots ,i_{d-1},k)v^{(1)}_{i_1}\otimes \dots \otimes v^{(d-1)}_{i_{d-1}} \\&= \sum _{i_1,\dots ,i_{d-1}} \alpha _{\sigma ^{-1}(k)}T_1(i_1,\dots ,i_{d-1},\sigma ^{-1}(k))A_1v_{i_1}^{(1)}\otimes \dots \otimes A_{d-1}v_{i_{d-1}}^{(d-1)} \end{aligned} \end{aligned}$$
(5)

We define \(\iota \) to be the canonical injection of \(\bigotimes _{i=1}^{d-1}\mathbb {V}_i\) into \(\bigotimes _{i=1}^{d-1}\left( \mathbb {V}_i\oplus \mathbb {V}_d\right) \), and we consider \({{\,\textrm{proj}\,}}_{v^{(d)}_k}(T_2)\otimes \iota \left( {{\,\textrm{proj}\,}}_{v^{(d)}_k}(T_2)\right) \), that is

$$\begin{aligned} \begin{aligned}&\sum _{i_1,\dots ,i_{d-1}} T_2(i_1,\dots ,i_{d-1},k)v^{(1)}_{i_1}\otimes \dots \otimes v^{(d-1)}_{i_{d-1}} \\&\otimes \sum _{j_1,\dots ,j_{d-1}} T_2(j_1,\dots ,j_{d-1},k)(v^{(1)}_{j_1},0)\otimes \dots \otimes (v^{(d-1)}_{j_{d-1}},0) \end{aligned} \end{aligned}$$

and, applying Eq. (5) twice, it is equal to

$$\begin{aligned} \begin{aligned} \sum _{\begin{array}{c} i_1,\dots ,i_{d-1},\\ j_1,\dots ,j_{d-1} \end{array}}&\alpha _{\sigma ^{-1}(k)}^2 T_1(i_1,\dots ,i_{d-1},\sigma ^{-1}(k))T_1(j_1,\dots ,j_{d-1},\sigma ^{-1}(k))\\&A_1v^{(1)}_{i_{1}}\otimes \dots \otimes A_{d-1}v^{(d-1)}_{i_{d-1}}\otimes (A_1v^{(1)}_{i_{1}},0)\otimes \dots \otimes (A_{d-1}v^{(d-1)}_{i_{d-1}},0). \end{aligned} \end{aligned}$$
(6)

Observe that, if we tensorize this element with \(v^{(d)}_{k}\) and sum over \(k=1,\dots ,n_d\), we obtain the first term of \((A_1,\dots ,A_{d-1},\tilde{A}_1,\dots ,\tilde{A}_{d-1},\tilde{Q})\star \Psi (T_1)\), which is equal to the first term of \(\Psi (T_2)\).

To complete the proof we compute the second term of \((A_1,\dots ,A_{d-1},\tilde{A}_1,\dots ,\tilde{A}_{d-1},\tilde{Q})\star \Psi (T_1)\) and show that it is equal to the second term of \(\Psi (T_2)\). In fact, using Eq. (5), we have

$$\begin{aligned} \begin{aligned} \sum _{i_d}\sum _{i_1,\dots ,i_{d-1}}&T_1(i_1,\dots ,i_d) A_1v^{(1)}_{i_{1}}\otimes \dots \otimes A_{d-1}v^{(d-1)}_{i_{d-1}} \\&\quad \otimes (0,v^{(d)}_{\sigma (i_d)})\otimes \dots \otimes (0,\alpha _{i_d}^{-1}v^{(d)}_{\sigma (i_d)}) \otimes \alpha _{i_d}^2 v^{(d)}_{\sigma (i_d)} \\&= \sum _{i_d}\sum _{i_1,\dots ,i_{d-1}}\\&T_2(i_1,\dots ,i_d) v^{(1)}_{i_{1}}\otimes \dots \otimes v^{(d-1)}_{i_{d-1}} \otimes (0,v^{(d)}_{i_d})\otimes \dots \otimes (0,v^{(d)}_{i_d}) \otimes v^{(d)}_{i_d}. \end{aligned} \end{aligned}$$
(7)

The first and the second terms of \((A_1,\dots ,A_{d-1},\tilde{A_1},\dots ,\tilde{A}_{d-1},\tilde{Q})\star \Psi (T_1)\) are equal to the ones of \(\Psi (T_2)\), and we can conclude that

$$\begin{aligned} (A_1,\dots ,A_{d-1},\tilde{A_1},\dots ,\tilde{A}_{d-1},\tilde{Q})\star \Psi (T_1)=\Psi (T_2). \end{aligned}$$

To complete the proof we observe that matrices \(A_1,\dots ,A_{d-1},\tilde{A}_{1},\dots ,\tilde{A}_{d-1}\) and \(\tilde{Q}\) are invertible by construction, hence \(\Psi (T_1)\) and \(\Psi (T_2)\) are isomorphic as \((2d-1)\)-tensors. \(\square \)

Example 2

(Running example) Consider the tensor \(T_1\) from Example 1 under the action of matrices

$$\begin{aligned} A=\begin{pmatrix} 1 &{} 0\\ 0 &{} 1 \end{pmatrix}, \quad B= \begin{pmatrix} 0 &{} 1\\ 1 &{} 0 \end{pmatrix}, \quad C= \begin{pmatrix} 0 &{} 0 &{} 1\\ 0 &{} 1 &{} 0\\ 1 &{} 0 &{} 0 \end{pmatrix}. \end{aligned}$$

We obtain the monomially isomorphic tensor

$$\begin{aligned} T_2 = (A,B,C)\star T_1 = e_1\otimes e_2\otimes e_3 + e_2\otimes e_1\otimes e_2 + e_1\otimes e_1\otimes e_1 \end{aligned}$$

and it can be seen that \(\Psi (T_1)\) is isomorphic to \(\Psi (T_2)\) via the matrices \((A,B,\tilde{A},\tilde{B},\tilde{C})\), where

$$\begin{aligned} \tilde{A}=\begin{pmatrix} A &{} 0 \\ 0 &{} C \end{pmatrix}, \quad \tilde{B}= \begin{pmatrix} B &{} 0\\ 0 &{} C \end{pmatrix}, \quad \tilde{C}= C, \end{aligned}$$

as in the proof of Proposition 6.
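The claimed isomorphism can be checked mechanically. The NumPy sketch below (our own illustration; `psi`, `star` and `blockdiag` are ad-hoc names) applies the \(\star \)-action and confirms both \(T_2=(A,B,C)\star T_1\) and \((A,B,\tilde{A},\tilde{B},\tilde{C})\star \Psi (T_1)=\Psi (T_2)\), with \(\tilde{A}=\textrm{diag}(A,C)\), \(\tilde{B}=\textrm{diag}(B,C)\) and \(\tilde{C}=C\) as dictated by the proof of Proposition 6 (here \(D=I\), so \(P=C\)).

```python
import numpy as np

def psi(T, p=2):
    # dense representation of the map Psi for 3-tensors over F_p
    n1, n2, n3 = T.shape
    out = np.zeros((n1, n2, n1 + n3, n2 + n3, n3), dtype=int)
    for k in range(n3):
        out[:, :, :n1, :n2, k] += np.einsum('ij,kl->ijkl', T[:, :, k], T[:, :, k])
        out[:, :, n1 + k, n2 + k, k] += T[:, :, k]
    return out % p

def star(mats, T, p=2):
    # (A_1,...,A_m) * T over F_p, applying A_r to the r-th tensor factor
    out_idx = 'abcdefgh'[:T.ndim]
    in_idx = 'ijklmnop'[:T.ndim]
    spec = ','.join(o + i for o, i in zip(out_idx, in_idx)) + ',' + in_idx + '->' + out_idx
    return np.einsum(spec, *mats, T) % p

def blockdiag(X, Y):
    # 2x2 block-diagonal matrix diag(X, Y)
    Z = np.zeros((X.shape[0] + Y.shape[0], X.shape[1] + Y.shape[1]), dtype=int)
    Z[:X.shape[0], :X.shape[1]], Z[X.shape[0]:, X.shape[1]:] = X, Y
    return Z

T1 = np.zeros((2, 2, 3), dtype=int)
T1[0, 0, 0] = T1[1, 1, 1] = T1[0, 1, 2] = 1

A = np.eye(2, dtype=int)
B = np.array([[0, 1], [1, 0]])
C = np.eye(3, dtype=int)[::-1]        # permutation matrix swapping e1 <-> e3

T2 = star((A, B, C), T1)              # = e1⊗e2⊗e3 + e2⊗e1⊗e2 + e1⊗e1⊗e1

At, Bt, Ct = blockdiag(A, C), blockdiag(B, C), C
print(np.array_equal(star((A, B, At, Bt, Ct), psi(T1)), psi(T2)))  # True
```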

Now we show the converse.

Proposition 7

If \(\Psi (T_1)\) and \(\Psi (T_2)\) are isomorphic, then \(T_1\) and \(T_2\) are monomially isomorphic.

Proof

Since \(\Psi (T_1)\) and \(\Psi (T_2)\) are isomorphic, there exist invertible matrices \(A_1,\dots ,A_{d-1},\tilde{A}_1,\dots ,\tilde{A}_{d-1},\tilde{Q}\) such that

$$\begin{aligned} (A_1,\dots ,A_{d-1},\tilde{A}_1,\dots ,\tilde{A}_{d-1},\tilde{Q}) \star \Psi (T_1)=\Psi (T_2). \end{aligned}$$

We want to exhibit \(d-1\) invertible matrices \(A'_1,\dots ,A'_{d-1}\) and a monomial matrix \(Q'\) such that \((A'_1,\dots ,A'_{d-1},Q')\star T_1 = T_2\). In particular, we will show that we can take \(A'_i=A_i\) for every \(i=1,\dots ,d-1\). First, we claim that \(\tilde{Q}\) is a monomial matrix. Consider \((I_{n_1},\dots ,I_{n_{d-1}},I_{n_1+n_d},\dots ,I_{n_{d-1}+n_d},\tilde{Q})\star \Psi (T_1)\) and use \(\tilde{Q}v^{(d)}_{i_d}=\sum _{j=1}^{n_d}\tilde{Q}_{j,i_d}v^{(d)}_{j}\)

$$\begin{aligned} \begin{aligned}&\sum _{\begin{array}{c} i_1,\dots ,i_d,\\ j_1,\dots ,j_{d-1} \end{array}} T_1(i_1,\dots ,i_d)T_1(j_1,\dots ,j_{d-1},i_d)v^{(1)}_{i_1}\otimes \dots \otimes v^{(d-1)}_{i_{d-1}}\\&\qquad \otimes (v^{(1)}_{j_1},0)\otimes \dots \otimes (v^{(d-1)}_{j_{d-1}},0) \otimes \sum _{k=1}^{n_d}\tilde{Q}_{k,i_d}v^{(d)}_{k}\\&+ \sum _{i_1,\dots ,i_d}T_1(i_1,\dots ,i_d) v^{(1)}_{i_1}\otimes \dots \otimes v^{(d-1)}_{i_{d-1}} \otimes (0,v^{(d)}_{i_d}) \otimes \dots \otimes (0,v^{(d)}_{i_d}) \otimes \sum _{k=1}^{n_d}\tilde{Q}_{k,i_d}v^{(d)}_{k}. \end{aligned} \end{aligned}$$
(8)

If we project it to \(v^{(d)}_{k}\) along the last space \(\mathbb {V}_d\) we obtain

$$\begin{aligned} \begin{aligned} \sum _{\begin{array}{c} i_1,\dots ,i_d,\\ j_1,\dots ,j_{d-1} \end{array}}&\tilde{Q}_{k,i_d} T_1(i_1,\dots ,i_d)T_1(j_1,\dots ,j_{d-1},i_d)v^{(1)}_{i_1}\\&\otimes \dots \otimes v^{(d-1)}_{i_{d-1}}\otimes (v^{(1)}_{j_1},0)\otimes \dots \otimes (v^{(d-1)}_{j_{d-1}},0)\\&+ \sum _{i_1,\dots ,i_d} \tilde{Q}_{k,i_d} T_1(i_1,\dots ,i_d) v^{(1)}_{i_{1}}\otimes \dots \otimes v^{(d-1)}_{i_{d-1}} \otimes (0,v^{(d)}_{i_{d}}) \otimes \dots \otimes (0,v^{(d)}_{i_{d}}). \end{aligned} \end{aligned}$$
(9)

Now consider Eq. (9) as a 2-tensor in \(\Big (\bigotimes _{i=1}^{d-1}\mathbb {V}_i\Big ) \otimes \Big (\bigotimes _{i=1}^{d-1}\left( \mathbb {V}_i\oplus \mathbb {V}_d\right) \Big )\). With this new view, we obtain

$$\begin{aligned} \begin{aligned} \sum _{i_d}&\tilde{Q}_{k,i_d} \Biggl [\left( \sum _{i_1,\dots ,i_{d-1}}T_1(i_1,\dots ,i_d) v^{(1)}_{i_{1}}\otimes \dots \otimes v^{(d-1)}_{i_{d-1}} \right) \\&\hspace{-0.3cm} \otimes \left( \sum _{j_1,\dots ,j_{d-1}} T_1(j_1,\dots ,j_{d-1},i_d) (v^{(1)}_{j_{1}},0)\otimes \dots \otimes (v^{(d-1)}_{j_{d-1}},0)\right) \Biggr ] \\&\hspace{-0.3cm} + \sum _{i_d}\tilde{Q}_{k,i_d} \left( \sum _{i_1,\dots ,i_{d-1}} T_1(i_1,\dots ,i_d) v^{(1)}_{i_{1}}\otimes \dots \otimes v^{(d-1)}_{i_{d-1}}\right) \otimes (0,v^{(d)}_{i_{d}}) \otimes \dots \otimes (0,v^{(d)}_{i_{d}}) =\\ \sum _{i_d}&\tilde{Q}_{k,i_d} \Biggl [\Biggl (\sum _{i_1,\dots ,i_{d-1}}T_1(i_1,\dots ,i_d) v^{(1)}_{i_{1}}\otimes \dots \otimes v^{(d-1)}_{i_{d-1}}\Biggr ) \otimes \\&\hspace{-0.4cm} \Biggl (\sum _{j_1,\dots ,j_{d-1}} T_1(j_1,\dots ,j_{d-1},i_d) (v^{(1)}_{j_{1}},0)\otimes \dots \otimes (v^{(d-1)}_{j_{d-1}},0) + (0,v^{(d)}_{i_{d}}) \otimes \dots \otimes (0,v^{(d)}_{i_{d}})\Biggr )\Biggr ], \end{aligned}\nonumber \\ \end{aligned}$$
(10)

This 2-tensor has rank at most the number of non-zero entries of \(\tilde{Q}_{k,\cdot }\), the k-th row of the matrix \(\tilde{Q}\), and rank at least 1 since \(\tilde{Q}\) is invertible. Now consider the action of \((A_1,\dots ,A_{d-1},\tilde{A}_1,\dots ,\tilde{A}_{d-1})\) on this tensor: the rank remains the same. If we repeat this process for \(\Psi (T_2)\), we obtain the following rank-1 tensor in \(\Big (\bigotimes _{i=1}^{d-1}\mathbb {V}_i\Big ) \otimes \Big (\bigotimes _{i=1}^{d-1}\left( \mathbb {V}_i\oplus \mathbb {V}_d\right) \Big )\)

$$\begin{aligned} \begin{aligned}&\Biggl (\sum _{i_1,\dots ,i_{d-1}} T_2(i_1,\dots ,i_{d-1},k)v^{(1)}_{i_1}\otimes \dots \otimes v^{(d-1)}_{i_{d-1}} \Biggr ) \otimes \\&\left( \sum _{j_1,\dots ,j_{d-1}} T_2(j_1,\dots ,j_{d-1},k)(v^{(1)}_{j_1},0)\otimes \dots \otimes (v^{(d-1)}_{j_{d-1}},0) + (0,v^{(d)}_{k}) \otimes \dots \otimes (0,v^{(d)}_{k})\right) . \end{aligned}\nonumber \\ \end{aligned}$$
(11)

From the equality of the ranks, \(\tilde{Q}_{k,\cdot }\) must have exactly one non-zero element for each k, and hence \(\tilde{Q}\) is a monomial matrix of the form PD, where \(D=\textrm{diag}(\alpha _1,\dots ,\alpha _{n_d})\) is a diagonal matrix and P is a permutation matrix corresponding to the permutation \(\sigma \) in \(\mathcal {S}_{n_d}\).

Without loss of generality, suppose that the permutation \(\sigma \) of the monomial matrix \(\tilde{Q}\) is the identity. This avoids the use of \(\sigma \) on the index of \(v^{(d)}_{i_{d}}\). Consider again \(\Psi (T_2)\) and its projection to \(v^{(d)}_{k}\) along \(\mathbb {V}_d\) as in Eq. (11). We project on elements of the basis of \(\bigotimes _{i=1}^{d-1}\left( \mathbb {V}_i\oplus \mathbb {V}_d\right) \). For elements of the form \((v^{(1)}_{\ell _1},0)\otimes \dots \otimes (v^{(d-1)}_{\ell _{d-1}},0)\) we get

$$\begin{aligned} \begin{aligned}&{{\,\textrm{proj}\,}}_{(v^{(1)}_{\ell _1},0)\otimes \dots \otimes (v^{(d-1)}_{\ell _{d-1}},0)} \left( {{\,\textrm{proj}\,}}_{v^{(d)}_k}(\Psi (T_2))\right) =\\&T_2(\ell _1,\dots ,\ell _{d-1},k)\sum _{i_1,\dots ,i_{d-1}} T_2(i_1,\dots ,i_{d-1},k)v^{(1)}_{i_1}\otimes \dots \otimes v^{(d-1)}_{i_{d-1}}. \end{aligned} \end{aligned}$$
(12)

In particular, it is a multiple of \(\sum _{i_1,\dots ,i_{d-1}} T_2(i_1,\dots ,i_{d-1},k)v^{(1)}_{i_1}\otimes \dots \otimes v^{(d-1)}_{i_{d-1}}\) for every choice of \(\ell _1,\dots ,\ell _{d-1}\). When we consider elements different from \((v^{(1)}_{\ell _1},0)\otimes \dots \otimes (v^{(d-1)}_{\ell _{d-1}},0)\), the projection is always zero, except for the case \((0,v^{(d)}_{k}) \otimes \dots \otimes (0,v^{(d)}_{k})\)

$$\begin{aligned} \begin{aligned}&{{\,\textrm{proj}\,}}_{(0,v^{(d)}_{k}) \otimes \dots \otimes (0,v^{(d)}_{k})} \left( {{\,\textrm{proj}\,}}_{v^{(d)}_{k}}(\Psi (T_2))\right) =\\&\sum _{i_1,\dots ,i_{d-1}} T_2(i_1,\dots ,i_{d-1},k)v^{(1)}_{i_1}\otimes \dots \otimes v^{(d-1)}_{i_{d-1}}. \end{aligned} \end{aligned}$$
(13)

Hence, every projection of \({{\,\textrm{proj}\,}}_{v^{(d)}_{k}}(\Psi (T_2))\) is a multiple of \(\sum _{i_1,\dots ,i_{d-1}} T_2(i_1,\dots ,i_{d-1},k)v^{(1)}_{i_1}\otimes \dots \otimes v^{(d-1)}_{i_{d-1}}\), and the linear space \(\mathcal {V}_k\) generated by all the projections is generated by the \((d-1)\)-tensor in Eq. (13). Consider now the projection to \(v^{(d)}_k\) of \((A_1,\dots ,A_{d-1},\tilde{A}_1,\dots ,\tilde{A}_{d-1},\tilde{Q})\star \Psi (T_1)\), that is, the \((2d-2)\)-tensor

$$\begin{aligned} \begin{aligned}&\alpha _k \Biggl (\sum _{i_1,\dots ,i_{d-1}} T_1(i_1,\dots ,i_{d-1},k) A_1v^{(1)}_{i_1}\otimes \dots \otimes A_{d-1}v^{(d-1)}_{i_{d-1}}\Biggr ) \\&\quad \otimes \Biggl (\sum _{j_1,\dots ,j_{d-1}} T_1(j_1,\dots ,j_{d-1},k) \tilde{A}_1(v^{(1)}_{j_1},0)\otimes \dots \otimes \tilde{A}_{d-1}(v^{(d-1)}_{j_{d-1}},0) \\&\quad + \Big (\tilde{A}_1(0,v^{(d)}_{k}) \otimes \dots \otimes \tilde{A}_{d-1}(0,v^{(d)}_{k})\Big )\Biggr ). \end{aligned} \end{aligned}$$
(14)

Again, if we project to any element of the basis of \(\bigotimes _{i=1}^{d-1}\left( \mathbb {V}_i\oplus \mathbb {V}_d\right) \), we obtain a multiple of the \((d-1)\)-tensor

$$\begin{aligned} \begin{aligned} \alpha _{k}\sum _{i_1,\dots ,i_{d-1}} T_1(i_1,\dots ,i_{d-1},k)A_1v^{(1)}_{i_1}\otimes \dots \otimes A_{d-1}v^{(d-1)}_{i_{d-1}}. \end{aligned} \end{aligned}$$
(15)

By hypothesis, the space generated by these projections is equal to \(\mathcal {V}_k\), the space generated by the same projections of \(\Psi (T_2)\), that can be written as

$$\begin{aligned} \begin{aligned} \mathcal {V}_k&=\left\langle \sum _{i_1,\dots ,i_{d-1}} T_2(i_1,\dots ,i_{d-1},k)v^{(1)}_{i_1}\otimes \dots \otimes v^{(d-1)}_{i_{d-1}}\right\rangle \\&= \left\langle \alpha _{k}\sum _{i_1,\dots ,i_{d-1}} T_1(i_1,\dots ,i_{d-1},k)A_1v^{(1)}_{i_1}\otimes \dots \otimes A_{d-1}v^{(d-1)}_{i_{d-1}}\right\rangle . \end{aligned} \end{aligned}$$

Hence there exists a non-zero \(\lambda _k\) in \(\mathbb {F}_q\) such that

$$\begin{aligned} \begin{aligned}&\sum _{i_1,\dots ,i_{d-1}} T_2(i_1,\dots ,i_{d-1},k)v^{(1)}_{i_1}\otimes \dots \otimes v^{(d-1)}_{i_{d-1}} \\&\quad = \lambda _k\alpha _{k}\sum _{i_1,\dots ,i_{d-1}} T_1(i_1,\dots ,i_{d-1},k)A_1v^{(1)}_{i_1}\otimes \dots \otimes A_{d-1}v^{(d-1)}_{i_{d-1}}. \end{aligned} \end{aligned}$$
(16)

Tensorizing Eq. (16) with \(v^{(d)}_{k}\) and summing over k, we obtain that \(T_1\) and \(T_2\) are monomially isomorphic via \((A_1,\dots ,A_{d-1},Q')\), where \(Q'=D'P\) with \(D'=\textrm{diag}(\lambda _1\alpha _1,\dots ,\lambda _{n_d}\alpha _{n_d})\), and the claim follows. \(\square \)

Example 3

(Running example) Recall the tensors \(T_1,T_2,\Psi (T_1)\) from examples 1 and 2. The tensor

$$\begin{aligned} \begin{aligned} \Psi (T_2)&= e_1\otimes e_2 \otimes (e_1,0)\otimes (e_2,0) \otimes e_3 + e_2\otimes e_1 \otimes (e_2,0)\otimes (e_1,0) \otimes e_2 \\&\quad + e_1\otimes e_1 \otimes (e_1,0)\otimes (e_1,0) \otimes e_1 + e_1\otimes e_2 \otimes (0,e_3)\otimes (0,e_3) \otimes e_3 \\&\quad + e_2\otimes e_1 \otimes (0,e_2)\otimes (0,e_2) \otimes e_2 + e_1\otimes e_1 \otimes (0,e_1)\otimes (0,e_1) \otimes e_1 \\ \end{aligned} \end{aligned}$$

is isomorphic to \(\Psi (T_1)\) via the invertible matrices \((A,B,\tilde{A},\tilde{B},C)\). We want to prove that \(T_1\) is monomially isomorphic to \(T_2\) via the matrices (A, B, C). In particular, we first show that C is monomial.

Let \(C=(c_{ij})\) and consider \((I_2,I_2,I_5,I_5,C)\star \Psi (T_1)\)

$$\begin{aligned} \begin{aligned}&e_1\otimes e_1 \otimes (e_1,0)\otimes (e_1,0) \otimes (c_{1,1}e_1 + c_{2,1}e_2 + c_{3,1}e_3) \\&\quad + e_2\otimes e_2 \otimes (e_2,0)\otimes (e_2,0) \otimes (c_{1,2}e_1 + c_{2,2}e_2 + c_{3,2}e_3) \\&\quad + e_1\otimes e_2 \otimes (e_1,0)\otimes (e_2,0) \otimes (c_{1,3}e_1 + c_{2,3}e_2 + c_{3,3}e_3) \\&\quad + e_1\otimes e_1 \otimes (0,e_1)\otimes (0,e_1) \otimes (c_{1,1}e_1 + c_{2,1}e_2 + c_{3,1}e_3) \\&\quad + e_2\otimes e_2 \otimes (0,e_2)\otimes (0,e_2) \otimes (c_{1,2}e_1 + c_{2,2}e_2 + c_{3,2}e_3) \\&\quad + e_1\otimes e_2 \otimes (0,e_3)\otimes (0,e_3) \otimes (c_{1,3}e_1 + c_{2,3}e_2 + c_{3,3}e_3). \end{aligned} \end{aligned}$$

Projecting this tensor to \(e_2\) from the basis of the last space \(\mathbb {F}_2^3\) gives

$$\begin{aligned} \begin{aligned}&c_{2,1} e_1\otimes e_1 \otimes (e_1,0)\otimes (e_1,0) + c_{2,2} e_2\otimes e_2 \otimes (e_2,0)\otimes (e_2,0) \\&\quad + c_{2,3} e_1\otimes e_2 \otimes (e_1,0)\otimes (e_2,0) + c_{2,1} e_1\otimes e_1 \otimes (0,e_1)\otimes (0,e_1) \\&\quad + c_{2,2} e_2\otimes e_2 \otimes (0,e_2)\otimes (0,e_2) + c_{2,3} e_1\otimes e_2 \otimes (0,e_3)\otimes (0,e_3). \end{aligned} \end{aligned}$$

Now consider the above tensor as a 2-tensor in the space \(\left( \mathbb {F}_2^2\otimes \mathbb {F}_2^2\right) \otimes \left( \left( \mathbb {F}_2^2\oplus \mathbb {F}_2^3\right) \otimes \left( \mathbb {F}_2^2\oplus \mathbb {F}_2^3\right) \right) \). We have

$$\begin{aligned} \begin{aligned}&c_{2,1}\left( e_1\otimes e_1\right) \otimes \big ((e_1,0)\otimes (e_1,0) + (0,e_1)\otimes (0,e_1)\big ) \\&\quad + c_{2,2}\left( e_2\otimes e_2\right) \otimes \big ((e_2,0)\otimes (e_2,0) + (0,e_2)\otimes (0,e_2) \big ) \\&\quad + c_{2,3}\left( e_1\otimes e_2\right) \otimes \big ( (e_1,0)\otimes (e_2,0) + (0,e_3)\otimes (0,e_3) \big ). \\ \end{aligned} \end{aligned}$$
(17)

This 2-tensor has rank at most the number of non-zero elements in the row \((c_{2,1},c_{2,2},c_{2,3})\). This rank does not change when we apply the remaining part of the action, that is the element \((A,B,\tilde{A},\tilde{B},I_3)\). If we take the same projection to \(e_2\) of \(\mathbb {F}_2^{3}\) and the same view as 2-tensor of \(\Psi (T_2)\), we obtain the following rank-1 tensor

$$\begin{aligned} \begin{aligned}&e_2\otimes e_1 \otimes (e_2,0)\otimes (e_1,0) + e_2\otimes e_1 \otimes (0,e_2) \otimes (0,e_2) \\&\quad = \Big (e_2\otimes e_1\Big ) \otimes \Big ((e_2,0)\otimes (e_1,0) + (0,e_2)\otimes (0,e_2)\Big ). \end{aligned} \end{aligned}$$
(18)

Since \((A,B,\tilde{A},\tilde{B},C) \star \Psi (T_1)=\Psi (T_2)\), the rank of Eq. (17) is equal to the rank of Eq. (18), hence the row \((c_{2,1},c_{2,2},c_{2,3})\) has exactly one non-zero element. Using the same argument, projecting on the other elements of the basis of \(\mathbb {F}_2^3\), we show that every row of C has exactly one non-zero entry. This shows that C is monomial, and we denote by \(\sigma \) the permutation associated to C.

Now we deal with the last part of the proof, showing that \(T_1\) and \(T_2\) are monomially isomorphic. Consider again Eq. (18). We can project to elements of the basis of \((\mathbb {F}_2^2 \oplus \mathbb {F}_2^3) \otimes (\mathbb {F}_2^2 \oplus \mathbb {F}_2^3)\). For example, when we project to \((e_2,0)\otimes (e_1,0)\), we have \(e_2\otimes e_1\). Similarly, projecting to \((0,e_2)\otimes (0,e_2)\) produces again \(e_2\otimes e_1\). The other projections, to \((0,e_i)\otimes (0,e_j)\) with \(i\ne j\) or to mixed elements like \((e_i,0)\otimes (0,e_j)\), give the zero tensor. In particular, the non-zero projections are multiples of \(e_2\otimes e_1\). We denote by \(\mathcal {V}_2\) the vector space generated by all these projections. This space must be equal to the span of the same projections (up to \(\sigma \)) of \((A,B,\tilde{A},\tilde{B},C)\star \Psi (T_1)\). As an example, we first project to \(e_{\sigma ^{-1}(2)}\) of \(\mathbb {F}_2^3\), and then to \((e_1,0)\otimes (e_2,0)\). We obtain a multiple of the 2-tensor

$$\begin{aligned} \sum _{i,j}T_1(i,j,2)Ae_i\otimes Be_j. \end{aligned}$$

The vector space generated by these projections is exactly \(\mathcal {V}_2\) since \((A,B,\tilde{A},\tilde{B},C)\star \Psi (T_1)\) is equal \(\Psi (T_2)\). In other words,

$$\begin{aligned} \mathcal {V}_2 = \langle e_2\otimes e_1 \rangle = \left\langle \sum _{i,j}T_1(i,j,2)Ae_i\otimes Be_j \right\rangle . \end{aligned}$$

Hence, there exists a non-zero scalar \(\lambda _2\) (in this case equal to 1) such that

$$\begin{aligned} e_2\otimes e_1 = \sum _{i,j}T_1(i,j,2)Ae_i\otimes Be_j. \end{aligned}$$

We repeat the process with the other elements of the basis of \(\mathbb {F}_2^3\), both for \(\Psi (T_2)\) and for \((A,B,\tilde{A},\tilde{B},C)\star \Psi (T_1)\). Then, we tensorize the projections of \(\Psi (T_2)\) with \(e_k\) and those of \((A,B,\tilde{A},\tilde{B},C)\star \Psi (T_1)\) with \(e_{\sigma ^{-1}(k)}\). Summing over k gives

$$\begin{aligned} T_2 = \sum _{k=1}^{3}\sum _{i,j}T_1(i,j,\sigma ^{-1}(k))Ae_i\otimes Be_j\otimes e_{k} = (A,B,C)\star T_1. \end{aligned}$$

Therefore, \(T_1\) and \(T_2\) are monomially isomorphic.

The combination of the two results above gives us the main result of this section.

Theorem 8

The problem d-\(\textsf {TI}^*\) polynomially reduces to \((2d-1)\)-\(\textsf {TI}\). Moreover, d-\(\textsf {TI}^*\) is \(\textsf {TI}\)-complete.

Proof

Given an instance \((T_1,T_2)\) of d-\(\textsf {TI}^*\), we can build an instance \(\left( \Psi (T_1),\Psi (T_2)\right) \) of \((2d-1)\)-\(\textsf {TI}\). If we call an oracle for \((2d-1)\)-\(\textsf {TI}\) on the latter pair of tensors, then we can decide the original monomial isomorphism: Proposition 6 shows that \(\Psi (T_1)\) and \(\Psi (T_2)\) are isomorphic if \(T_1\) and \(T_2\) are monomially isomorphic. On the other hand, Proposition 7 shows that if \(\Psi (T_1)\) and \(\Psi (T_2)\) are isomorphic, then \(T_1\) and \(T_2\) are monomially isomorphic. Since the map \(\Psi \) is polynomially computable, this is a correct and polynomial-time reduction. \(\square \)

Let us analyze the size blow-up of the reduction \(\Psi \). It takes a d-tensor of size \(n_1\times \dots \times n_d\) and returns a \((2d-1)\)-tensor of size \(n_1\times \dots \times n_{d-1} \times (n_1+n_d)\times \dots \times (n_{d-1}+n_d) \times n_{d}\). We will use this reduction to link Code Equivalence problems in the following section, but the result could be of independent interest, since it shows how powerful the \(\textsf {TI}\) class [13] is. In particular, Theorem 8 proves that, for every d, d-\(\textsf {TI}^*\) is in the class \(\textsf {TI}\). Moreover, there is a trivial reduction from d-\(\textsf {TI}\) to \((d+1)\)-\(\textsf {TI}^*\) (send T to \(T\otimes 1\)), hence for \(d\ge 4\) we have that d-\(\textsf {TI}^*\) is \(\textsf {TI}\)-complete.
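As a sanity check on these sizes, a small helper (ours, not from the paper) computes the output shape of \(\Psi \) from the input one; on the running example it returns the \(2\times 2\times 5\times 5\times 3\) shape seen earlier.

```python
def psi_shape(dims):
    # n1 x ... x nd  ->  n1 x ... x n_{d-1} x (n1+nd) x ... x (n_{d-1}+nd) x nd
    *ns, nd = dims
    return tuple(ns) + tuple(n + nd for n in ns) + (nd,)

print(psi_shape((2, 2, 3)))  # (2, 2, 5, 5, 3)
```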

4 Relations between code equivalence problems

In this section, we show how to reduce the code equivalence problem for sum-rank codes to the one in the rank metric. A reduction is given in [24], but it assumes that the automorphism group of the obtained rank-metric code is trivial in the sense of Sect. 2.3. We recall the technique from [24], and we observe how this kind of reduction (sending a tuple of elements of \(\mathbb {F}_{q}^m\) to a block-diagonal matrix) does not work without the trivial-automorphism assumption.

Let \(\mathcal {C}\) be a sum-rank code with basis \(\{\textbf{C}_1,\dots ,\textbf{C}_k\}\), where \(\textbf{C}_i=\left( C_i^{(1)},\dots ,C_i^{(d)}\right) \) is a tuple of matrices. We denote with \(\Phi \) the map from the set of sum-rank codes to the set of matrix codes used in [24]

$$\begin{aligned} \Phi \left( \langle \textbf{C}_1,\dots ,\textbf{C}_k\rangle \right) = \langle W_1,\dots , W_k \rangle , \end{aligned}$$

where \(W_i\) is the \(\left( \sum _j n_j\right) \times \left( \sum _j m_j\right) \) block-diagonal matrix with the elements of \(\textbf{C}_i\) on the diagonal. We recall that if the automorphism group of the image of \(\Phi \) is not trivial, then, given an isometry in the rank metric, we cannot always retrieve an isometry in the sum-rank setting, since the two codes may not be equivalent.

Example 4

Consider the field \(\mathbb {F}_2\) and the one-dimensional sum-rank codes \(\mathcal {C}\) and \(\mathcal {D}\) with parameters \(d=2,n_1=3,n_2=2,m_1=m_2=2\) generated by

$$\begin{aligned} C_1=\begin{pmatrix} 1 &{}\quad 0\\ 0 &{}\quad 0\\ 0 &{}\quad 0 \end{pmatrix}, \quad C_2=\begin{pmatrix} 1 &{}\quad 0\\ 0 &{}\quad 1 \end{pmatrix}\quad \text { and } \quad D_1=\begin{pmatrix} 1 &{}\quad 0\\ 0 &{}\quad 1\\ 0 &{}\quad 0 \end{pmatrix}, \quad D_2=\begin{pmatrix} 0 &{}\quad 0\\ 0 &{}\quad 1 \end{pmatrix}, \end{aligned}$$

respectively. It can be seen that \(\mathcal {C}\) and \(\mathcal {D}\) are not equivalent, since there is no sum-rank isometry between them: the permutation must be the identity since \(n_1\ne n_2\), and there do not exist invertible matrices (A, B) in \({{\,\textrm{GL}\,}}(3)\times {{\,\textrm{GL}\,}}(2)\) such that \(AC_1B\) is in the space generated by \(D_1\) (just compare the ranks). However, if we consider \(\Phi (\mathcal {C})\) and \(\Phi (\mathcal {D})\), we obtain the two one-dimensional matrix codes generated by

$$\begin{aligned} C'=\begin{pmatrix} \begin{array}{cc|cc} 1 &{}\quad 0 &{}\quad 0 &{}\quad 0\\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0\\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0\\ \hline 0 &{}\quad 0 &{}\quad 1 &{}\quad 0\\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 1 \end{array} \end{pmatrix}\quad \text { and } \quad D'=\begin{pmatrix} \begin{array}{cc|cc} 1 &{}\quad 0 &{}\quad 0 &{}\quad 0\\ 0 &{}\quad 1 &{}\quad 0 &{}\quad 0\\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0\\ \hline 0 &{}\quad 0 &{}\quad 0 &{}\quad 0\\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 1 \end{array} \end{pmatrix}, \end{aligned}$$

respectively. We can see that \(\Phi (\mathcal {C})\) and \(\Phi (\mathcal {D})\) are equivalent via the isometry given by the permutation matrices \(P_{\sigma }\) and \(P_{\tau }\), where \(\sigma =(2\;4)\) is in \(\mathcal {S}_5\) and \(\tau =(2\; 3)\) is in \(\mathcal {S}_4\). In fact, \(P_{\sigma }C'P_{\tau }=D'\). This happens because the automorphism groups of \(\Phi (\mathcal {C})\) and \(\Phi (\mathcal {D})\) are not trivial. For example, the one of \(\Phi (\mathcal {C})\) contains the isometry \((P_{(4\;5)},P_{(3\;4)})\), where \((4\;5)\) and \((3\;4)\) are permutations in \(\mathcal {S}_5\) and \(\mathcal {S}_4\), respectively.
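Example 4 is easy to verify numerically. The sketch below (our own; `phi` is an ad-hoc implementation of the block-diagonal map \(\Phi \)) builds \(C'\) and \(D'\) and checks the isometry \(P_{\sigma }C'P_{\tau }=D'\).

```python
import numpy as np

C1 = np.array([[1, 0], [0, 0], [0, 0]]); C2 = np.eye(2, dtype=int)
D1 = np.array([[1, 0], [0, 1], [0, 0]]); D2 = np.array([[0, 0], [0, 1]])

def phi(blocks):
    # block-diagonal embedding of a tuple of matrices (the map Phi)
    n = sum(b.shape[0] for b in blocks); m = sum(b.shape[1] for b in blocks)
    W = np.zeros((n, m), dtype=int); r = c = 0
    for b in blocks:
        W[r:r + b.shape[0], c:c + b.shape[1]] = b
        r += b.shape[0]; c += b.shape[1]
    return W

Cp, Dp = phi((C1, C2)), phi((D1, D2))
P_sigma = np.eye(5, dtype=int)[[0, 3, 2, 1, 4]]   # row swap 2 <-> 4 (1-indexed)
P_tau   = np.eye(4, dtype=int)[:, [0, 2, 1, 3]]   # column swap 2 <-> 3 (1-indexed)
print(np.array_equal(P_sigma @ Cp @ P_tau, Dp))   # True
```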

The 3-\(\textsf {TI}\) problem is equivalent to the Code Equivalence in the rank metric \(\textsf {CE}_{\textsf {rk}}\) since the former can be stated in terms of matrix spaces, and the admissible maps between these spaces are exactly the isometries used for \(\textsf {CE}_{\textsf {rk}}\) (see [14]). A sketch of the reduction is the following. To a matrix code \(\mathcal {C}\) generated by \(C_1,\dots ,C_k\) we associate the 3-tensor in the space \(\mathbb {A}\otimes \mathbb {B} \otimes \mathbb {C}\)

$$\begin{aligned} T_{\mathcal {C}} = \sum _{i_1,i_2,i_3}\left( C_{i_3}\right) _{i_1,i_2}a_{i_1}\otimes b_{i_2} \otimes c_{i_3}. \end{aligned}$$

In particular, \(\mathbb {A}\) and \(\mathbb {B}\) represent the spaces of rows and columns, respectively, while \(\mathbb {C}\) is the space representing the dimension of the code (or the elements of the basis). Hence, a matrix can be represented as a 2-tensor in \(\mathbb {A}\otimes \mathbb {B}\), and the action \((A,B)\star M\) is the matrix multiplication \(AMB^t\). The action on \(\mathbb {C}\) is the map sending a k-tuple of matrices to another k-tuple. Therefore, given two matrix codes \(\mathcal {C}\) and \(\mathcal {D}\), with bases \(C_1,\dots ,C_k\) and \(D_1,\dots ,D_k\), equivalent via (A, B) and such that the invertible matrix M sends the basis \(AC_1B,\dots ,AC_kB\) to \(D_1,\dots ,D_k\), the tensors \(T_{\mathcal {C}}\) and \(T_{\mathcal {D}}\) are isomorphic via \((A,B^t,M)\). The converse is obtained similarly, and we highlight that there is no overhead in the sizes of the tensors and matrix spaces obtained in both directions.
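The correspondence between matrix codes and 3-tensors can be sketched as follows (illustrative NumPy code with names of our choosing): stacking a basis \(C_1,\dots ,C_k\) along a third axis gives \(T_{\mathcal {C}}\), and the action of \((A,B^t,M)\) on the tensor matches taking M-combinations of the matrices \(AC_iB\). The identity holds for arbitrary matrices, so random ones suffice for the check.

```python
import numpy as np

def star3(A, B, M, T, p=2):
    # (A, B, M) * T over F_p: apply A, B, M to the three factors
    return np.einsum('ai,bj,ck,ijk->abc', A, B, M, T) % p

rng = np.random.default_rng(1)
Cs = [rng.integers(0, 2, (3, 4)) for _ in range(2)]   # basis of a toy matrix code
T = np.stack(Cs, axis=-1)            # T_C[i1, i2, i3] = (C_{i3})_{i1, i2}

A = rng.integers(0, 2, (3, 3))
B = rng.integers(0, 2, (4, 4))
M = rng.integers(0, 2, (2, 2))

lhs = star3(A, B.T, M, T)            # tensor side: action of (A, B^t, M)
rhs = np.stack(                      # code side: M-combinations of A C_i B
    [sum(M[c, k] * (A @ Cs[k] @ B) for k in range(2)) for c in range(2)],
    axis=-1) % 2
print(np.array_equal(lhs, rhs))      # True
```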

Hence, we can summarize the above observations in the following result.

Theorem 9

The problem \(\textsf {CE}_{\textsf {rk}}\) is \(\textsf {TI}\)-complete.

By the \(\textsf {TI}\)-hardness of \(\textsf {CE}_{\textsf {rk}}\) and since it can be reduced to \(\textsf {CE}_{\textsf {sr}}\), we get that \(\textsf {CE}_{\textsf {sr}}\) is \(\textsf {TI}\)-hard. To show its \(\textsf {TI}\)-completeness, we also need to prove that it lies in \(\textsf {TI}\), by exhibiting a reduction from it to a problem in \(\textsf {TI}\), for instance 4-\(\textsf {TI}^*\).

Lemma 10

The problem \(\textsf {CE}_{\textsf {sr}}\) is polynomially reducible to 4-\(\textsf {TI}^*\).

Proof

We model a sum-rank code as a 4-tensor. Given a sum-rank code \(\mathcal {C}\) with parameters \(d,n_1,\dots ,n_d,m_1,\dots ,m_d\) and basis \(\{\textbf{C}_1,\dots ,\textbf{C}_k\}\), let N be the maximum among \(n_1,\dots ,n_d\) and M be the maximum among \(m_1,\dots ,m_d\). For each i from 1 to d, we can embed an \(n_i\times m_i\) matrix into an \(N \times M\) one, filling it with zeros. Hence, there are d embeddings \(g_i\) such that

$$\begin{aligned} g_i: \mathbb {F}_q^{n_i\times m_i} \rightarrow \mathbb {F}_q^{N\times M}. \end{aligned}$$

In the rest of the proof, we consider sum-rank codes embedded via the functions \(g_i\); that is, we work with codes having parameters \(d,n_i=N,m_i=M\) for every \(i=1,\dots ,d\). Let \(\mathfrak {SR}(d,N,M)\) be the set of sum-rank codes of parameters \(d,n_i=N,m_i=M\) and let \(\mathbb {A},\mathbb {B},\mathbb {C},\mathbb {D}\) be vector spaces of dimensions N, M, k and d, with bases \(\{a_i\}_i\), \(\{b_i\}_i\), \(\{c_i\}_i\) and \(\{d_i\}_i\), respectively. Here, \(\mathbb {A}\) and \(\mathbb {B}\) denote the row and column spaces of the matrices, \(\mathbb {C}\) models the dimension of the code, while \(\mathbb {D}\) models the factors of the sum-rank code. Hence, the code generated by \(\{\textbf{C}_1,\dots ,\textbf{C}_k\}\) can be seen as the 4-tensor

$$\begin{aligned} \sum _{i_1,\dots ,i_4} \left( C_{i_3}^{(i_4)}\right) _{i_1,i_2}a_{i_1} \otimes b_{i_2} \otimes c_{i_3} \otimes d_{i_4}. \end{aligned}$$

The projection to a factor \(\mathbb {F}_q^{n_j\times m_j}\) is a matrix code, which can be seen as the 3-tensor

$$\begin{aligned} \sum _{i_1,i_2,i_3} \left( C_{i_3}^{(j)}\right) _{i_1,i_2}a_{i_1} \otimes b_{i_2} \otimes c_{i_3}, \end{aligned}$$

where the action of (A, B, M) is understood as left and right multiplication by A and \(B^t\), while M is a change of basis.

Let \(\delta _{i,j}\) be the Kronecker delta and define the map

$$\begin{aligned} \begin{aligned} \Phi :&\mathfrak {SR}(d,N,M) \rightarrow \left( \bigoplus _{i=1}^d\mathbb {A}\right) \otimes \left( \bigoplus _{i=1}^d\mathbb {B}\right) \otimes \left( \bigoplus _{i=1}^d\mathbb {C}\right) \otimes \mathbb {D},\\&\langle \textbf{C}_1,\dots ,\textbf{C}_k\rangle \\&\mapsto \sum _{i_1,\dots ,i_4} \left( C_{i_3}^{(i_4)}\right) _{i_1,i_2}(\delta _{i_4,1}a_{i_1}, \dots , \delta _{i_4,d}a_{i_1}) \\&\otimes (\delta _{i_4,1}b_{i_2}, \dots , \delta _{i_4,d}b_{i_2}) \otimes (\delta _{i_4,1}c_{i_3}, \dots , \delta _{i_4,d}c_{i_3}) \otimes d_{i_4}. \end{aligned} \end{aligned}$$
(19)
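Concretely, in a dense 4-tensor representation, \(\Phi \) simply re-embeds the \(i_4\)-th slice into the \(i_4\)-th diagonal block of each of the first three (enlarged) axes. A possible NumPy sketch (our own naming and representation):

```python
import numpy as np

def phi_sr(T):
    # T[i1, i2, i3, i4] = (C_{i3}^{(i4)})_{i1, i2}, with shape (N, M, k, d);
    # the image lives in (dN) x (dM) x (dk) x d, with the i4-th slice
    # placed in the i4-th diagonal block of the first three axes.
    N, M, k, d = T.shape
    out = np.zeros((d * N, d * M, d * k, d), dtype=T.dtype)
    for t in range(d):
        out[t*N:(t+1)*N, t*M:(t+1)*M, t*k:(t+1)*k, t] = T[:, :, :, t]
    return out

rng = np.random.default_rng(0)
T = rng.integers(0, 2, (3, 2, 2, 2))          # toy code: N=3, M=2, k=2, d=2
S = phi_sr(T)
print(S.shape, int(S.sum()) == int(T.sum()))  # (6, 4, 4, 2) True
```

Since \(\Phi \) only re-embeds entries, the number of non-zero coefficients is preserved, which the final check confirms.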

Now we show that sum-rank codes \(\mathcal {C}\) and \(\mathcal {D}\), with bases \(\{\textbf{C}_1,\dots ,\textbf{C}_k\}\) and \(\{\textbf{D}_1,\dots ,\textbf{D}_k\}\), are equivalent if and only if \(\Phi (\mathcal {C})\) and \(\Phi (\mathcal {D})\) are monomially isomorphic.

“\(\implies \)”. Suppose that \(\mathcal {C}\) and \(\mathcal {D}\) are linearly equivalent via the matrices \(A_1,\dots ,A_d\), \(B_1,\dots ,B_d\) and the permutation \(\sigma \) in \(\mathcal {S}_d\). Suppose that, for every i, \(M_i\) is the \(k\times k\) invertible matrix sending the basis \(\{A_iC_j^{(\sigma (i))}B_i\}_j\) to the basis \(\{D_j^{(i)}\}_j\). Then we define the matrices

$$\begin{aligned}&\tilde{L} = \begin{pmatrix} A_1 &{}\quad 0 &{}\quad \dots &{}\quad 0 \\ 0 &{}\quad A_2 &{}\quad \dots &{}\quad 0 \\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ 0 &{}\quad 0 &{}\quad \dots &{}\quad A_d \end{pmatrix},\quad \tilde{R} = \begin{pmatrix} B_1^t &{}\quad 0 &{}\quad \dots &{}\quad 0 \\ 0 &{}\quad B_2^t &{}\quad \dots &{}\quad 0 \\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ 0 &{}\quad 0 &{}\quad \dots &{}\quad B_d^t \end{pmatrix},\\&\tilde{S} = \begin{pmatrix} M_1 &{}\quad 0 &{}\quad \dots &{}\quad 0 \\ 0 &{}\quad M_2 &{}\quad \dots &{}\quad 0 \\ \vdots &{}\quad \vdots &{}\quad \ddots &{}\quad \vdots \\ 0 &{}\quad 0 &{}\quad \dots &{}\quad M_d \end{pmatrix},\quad \text { and }\quad \tilde{Q} = P_{\sigma }. \end{aligned}$$

We can see that \((\tilde{L},\tilde{R},\tilde{S},\tilde{Q})\star \Phi (\mathcal {C})=\Phi (\mathcal {D})\), in fact

$$\begin{aligned} \begin{aligned}&\sum _{i_1,\dots ,i_4} \left( C_{i_3}^{(i_4)}\right) _{i_1,i_2}(0,\dots , A_{i_4}a_{i_1}, \dots , 0) \\&\quad \otimes (0,\dots , B_{i_4}b_{i_2}, \dots , 0) \otimes (0,\dots , M_{i_4}c_{i_3}, \dots , 0) \otimes d_{\sigma (i_4)}, \end{aligned} \end{aligned}$$
(20)

and this, by construction, is exactly \(\Phi (\mathcal {D})\).

“\(\impliedby \)”. Suppose that \(\Phi (\mathcal {C})\) and \(\Phi (\mathcal {D})\) are monomially isomorphic via invertible matrices L, R, S and the monomial matrix \(Q=DP\). We can see the matrices L, R and S as block matrices; for example, we have

$$\begin{aligned} L = \begin{pmatrix} L_{11} &{}\quad \dots &{}\quad L_{1d} \\ L_{21} &{}\quad \dots &{}\quad L_{2d} \\ \vdots &{}\quad \ddots &{}\quad \vdots \\ L_{d1} &{}\quad \dots &{}\quad L_{dd} \end{pmatrix}, \end{aligned}$$

where \(L_{ij}\) is an \(N\times N\) matrix for every i and j. Analogously, R and S have the same structure, with blocks of dimension \(M\times M\) and \(k\times k\), respectively. Now, for simplicity, we will focus on the action of L on \(\Phi (\mathcal {C})\); the same argument applies to R and S. As in the proof of Proposition 7, we assume that the matrix Q is the identity matrix, otherwise we need to take care of the permutation \(\sigma \) in the indexes and of the scalars of D. We write \({{\,\textrm{proj}\,}}_{d_k}\left( (L,R,S,Q)\star \Phi (\mathcal {C})\right) \) as

$$\begin{aligned} \begin{aligned}&\sum _{i_1,i_2,i_3} \left( C_{i_3}^{(k)}\right) _{i_1,i_2}(L_{1k}a_{i_1},\dots , L_{dk}a_{i_1}) \\&\quad \otimes (R_{1k}b_{i_2},\dots , R_{dk}b_{i_2}) \otimes (S_{1k}c_{i_3},\dots , S_{dk}c_{i_3}). \end{aligned} \end{aligned}$$
(21)

Consider the same projection of \(\Phi (\mathcal {D})\)

$$\begin{aligned} \sum _{i_1,i_2,i_3} \left( D_{i_3}^{(k)}\right) _{i_1,i_2}(0,\dots , a_{i_1}, \dots , 0) \otimes (0,\dots , b_{i_2}, \dots , 0) \otimes (0,\dots , c_{i_3}, \dots , 0), \end{aligned}$$
(22)

this tensor is equal to the one in Eq. (21), and this holds for every k. Now consider the tensor

$$\begin{aligned} v_{\ell _2,\ell _3}^{(k)} = (0,\dots ,\underbrace{b_{\ell _2}}_{k\text {-th}}, \dots ,0) \otimes (0,\dots ,\underbrace{c_{\ell _3}}_{k\text {-th}}, \dots ,0). \end{aligned}$$

The projection of \({{\,\textrm{proj}\,}}_{d_k}(\Phi (\mathcal {D}))\) onto \(v_{\ell _2,\ell _3}^{(k)}\) is given by

$$\begin{aligned} \begin{aligned} \sum _{i_1} \left( D_{\ell _3}^{(k)}\right) _{i_1,\ell _2}(0,\dots , a_{i_1}, \dots , 0), \end{aligned} \end{aligned}$$
(23)

while, for \((L,R,S,Q)\star \Phi (\mathcal {C})\), we have

$$\begin{aligned} \begin{aligned} \sum _{i_1,i_2,i_3} \left( R_{kk}\right) _{\ell _2,i_2} \left( S_{kk}\right) _{\ell _3,i_3} \left( C_{i_3}^{(k)}\right) _{i_1,i_2}(L_{1k}a_{i_1},\dots , L_{dk}a_{i_1}). \end{aligned} \end{aligned}$$
(24)

By hypothesis, Eqs. (23) and (24) are equal. Since the tensor in Eq. (23) is supported only on the k-th summand, we must have \(L_{\bar{k}k}=0\) for every \(\bar{k}\ne k\). The same argument works for R and S, using the following tensors and the projections onto them

$$\begin{aligned} (0,\dots ,\underbrace{a_{\ell _1}}_{k\text {-th}}, \dots ,0) \otimes (0,\dots ,\underbrace{c_{\ell _3}}_{k\text {-th}}, \dots ,0); \\ (0,\dots ,\underbrace{a_{\ell _1}}_{k\text {-th}}, \dots ,0) \otimes (0,\dots ,\underbrace{b_{\ell _2}}_{k\text {-th}}, \dots ,0). \end{aligned}$$

Finally, we obtain that L, R and S are block diagonal of the form

$$\begin{aligned} L = \begin{pmatrix} L_{11} &{}\quad 0 &{}\quad \dots &{}\quad 0 \\ 0 &{}\quad L_{22} &{}\quad \dots &{}\quad 0 \\ \vdots &{}\quad &{}\quad \ddots &{}\quad \vdots \\ 0 &{}\quad \dots &{}\quad 0 &{}\quad L_{dd} \end{pmatrix}, \qquad R = \begin{pmatrix} R_{11} &{}\quad 0 &{}\quad \dots &{}\quad 0 \\ 0 &{}\quad R_{22} &{}\quad \dots &{}\quad 0 \\ \vdots &{}\quad &{}\quad \ddots &{}\quad \vdots \\ 0 &{}\quad \dots &{}\quad 0 &{}\quad R_{dd} \end{pmatrix},\\ \text {and }\; S = \begin{pmatrix} S_{11} &{}\quad 0 &{}\quad \dots &{}\quad 0 \\ 0 &{}\quad S_{22} &{}\quad \dots &{}\quad 0 \\ \vdots &{}\quad &{}\quad \ddots &{}\quad \vdots \\ 0 &{}\quad \dots &{}\quad 0 &{}\quad S_{dd} \end{pmatrix}. \end{aligned}$$

Since the matrices L, R and S are invertible, so are their diagonal blocks. We conclude that the codes \(\mathcal {C}\) and \(\mathcal {D}\) are equivalent via the matrices \(L_{11},\dots ,L_{dd}\), \(R_{11}^t,\dots ,R_{dd}^t\) and the permutation \(\sigma \). \(\square \)
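The final step uses the fact that the determinant of a block-diagonal matrix is the product of the determinants of its diagonal blocks, so invertibility of the whole matrix forces invertibility of every block. A quick numerical sanity check of this identity, with hypothetical blocks, in Python/numpy:

```python
import numpy as np

# For a block-diagonal matrix, det(M) = product of det(block):
# if M is invertible, every diagonal block must be invertible too.
blocks = [np.array([[1.0, 2.0], [3.0, 4.0]]), np.array([[0.0, 1.0], [1.0, 0.0]])]
M = np.zeros((4, 4))
M[:2, :2], M[2:, 2:] = blocks[0], blocks[1]

lhs = np.linalg.det(M)
rhs = np.prod([np.linalg.det(b) for b in blocks])
print(np.isclose(lhs, rhs))  # True
```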

Example 5

Let \(\mathcal {C}\) be the sum-rank code with parameters \(d=2, n_1=3,n_2=m_1=m_2=2\) generated by \(\{\textbf{C}_1,\textbf{C}_2\}\), where

$$\begin{aligned} C_1^{(1)} = \begin{pmatrix} 1 &{}\quad 1 \\ 0 &{}\quad 0 \\ 1 &{}\quad 0 \end{pmatrix}, \quad C_1^{(2)} = \begin{pmatrix} 0 &{}\quad 0 \\ 0 &{}\quad 1 \end{pmatrix}, \text { and }\; C_2^{(1)} = \begin{pmatrix} 1 &{}\quad 0 \\ 0 &{}\quad 0 \\ 0 &{}\quad 0 \end{pmatrix}, \quad C_2^{(2)} = \begin{pmatrix} 0 &{}\quad 0 \\ 1 &{}\quad 0 \end{pmatrix}. \end{aligned}$$

After applying the embeddings \(g_i\) from above, we can see \(\mathcal {C}\) as a sum-rank code with parameters \(d=2,n_1=n_2=3,m_1=m_2=2\) and we have

$$\begin{aligned} C_1^{(1)} = \begin{pmatrix} 1 &{}\quad 1 \\ 0 &{}\quad 0 \\ 1 &{} \quad 0 \end{pmatrix}, \quad C_1^{(2)} = \begin{pmatrix} 0 &{}\quad 0 \\ 0 &{}\quad 1 \\ 0 &{}\quad 0 \end{pmatrix}, \text { and }\; C_2^{(1)} = \begin{pmatrix} 1 &{}\quad 0 \\ 0 &{}\quad 0 \\ 0 &{}\quad 0 \end{pmatrix}, \quad C_2^{(2)} = \begin{pmatrix} 0 &{}\quad 0 \\ 1 &{}\quad 0 \\ 0 &{}\quad 0 \end{pmatrix}. \end{aligned}$$
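The embeddings \(g_i\) simply pad each block with zero rows (and, in general, zero columns) up to the common size. A minimal Python/numpy sketch (the function name `embed` is ours) reproducing the padded generator \(C_1^{(2)}\) above:

```python
import numpy as np

def embed(C, n, m):
    """Pad the matrix C with zero rows/columns up to size n x m."""
    out = np.zeros((n, m), dtype=C.dtype)
    out[:C.shape[0], :C.shape[1]] = C
    return out

C1_2 = np.array([[0, 0], [0, 1]])   # the 2 x 2 generator block C_1^(2)
print(embed(C1_2, 3, 2).tolist())   # [[0, 0], [0, 1], [0, 0]]
```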

Using the notation from the previous proof, define \(\mathbb {A}=\mathbb {F}_2^{3}\), \(\mathbb {B}=\mathbb {F}_2^{2}\), \(\mathbb {C}=\mathbb {F}_2^{2}\) and \(\mathbb {D}=\mathbb {F}_2^{2}\), with standard bases \(\{a_i\}\), \(\{b_i\}\), \(\{c_i\}\) and \(\{d_i\}\), respectively. The image of \(\mathcal {C}\) under \(\Phi \) is the following 4-tensor in \(\left( \mathbb {A}\oplus \mathbb {A}\right) \otimes \left( \mathbb {B}\oplus \mathbb {B}\right) \otimes \left( \mathbb {C}\oplus \mathbb {C}\right) \otimes \mathbb {D}\)

$$\begin{aligned} \begin{aligned} \Phi (\mathcal {C}) =\;&(a_1,0)\otimes (b_1,0)\otimes (c_1,0)\otimes d_1 + (a_1,0)\otimes (b_2,0)\otimes (c_1,0)\otimes d_1 \\&+ (a_3,0)\otimes (b_1,0)\otimes (c_1,0)\otimes d_1 + (a_1,0)\otimes (b_1,0)\otimes (c_2,0)\otimes d_1 \\&+ (0,a_2)\otimes (0,b_2)\otimes (0,c_1)\otimes d_2 + (0,a_2)\otimes (0,b_1)\otimes (0,c_2)\otimes d_2. \end{aligned} \end{aligned}$$
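Concretely, \(\Phi (\mathcal {C})\) can be stored as a 4-dimensional array of shape \((6,4,4,2)\). The following Python/numpy sketch (0-based indexing and the dictionary layout are ours) builds it by placing each nonzero entry of \(C_{i_3}^{(i_4)}\) in the \(i_4\)-th copy of each direct summand:

```python
import numpy as np

d, n, m, k = 2, 3, 2, 2
# Generators of the example after the embedding, indexed by (i3, i4):
C = {(1, 1): np.array([[1, 1], [0, 0], [1, 0]]),
     (1, 2): np.array([[0, 0], [0, 1], [0, 0]]),
     (2, 1): np.array([[1, 0], [0, 0], [0, 0]]),
     (2, 2): np.array([[0, 0], [1, 0], [0, 0]])}

# Phi(C) lives in (A+A) x (B+B) x (C+C) x D, i.e. shape (d*n, d*m, d*k, d).
T = np.zeros((d * n, d * m, d * k, d), dtype=int)
for (i3, i4), mat in C.items():
    for i1 in range(n):
        for i2 in range(m):
            if mat[i1, i2]:
                # the nonzero entry sits in the i4-th copy of each summand
                T[(i4 - 1) * n + i1, (i4 - 1) * m + i2,
                  (i4 - 1) * k + (i3 - 1), i4 - 1] = mat[i1, i2]

print(T.shape, int(T.sum()))  # (6, 4, 4, 2) 6
```

The six nonzero entries of T correspond to the six rank-one summands of \(\Phi (\mathcal {C})\).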

Using the same strategy adopted in the proof of Theorem 8, and since the map \(\Phi \) is polynomial-time computable, the above result implies that \(\textsf {CE}_{\textsf {rk}}\) reduces to 4-\(\textsf {TI}^*\). This fact, combined with Theorems 1 and 9, leads to the following corollary.

Corollary 11

The problem \(\textsf {CE}_{\textsf {sr}}\) is \(\textsf {TI}\)-complete. In particular, it is polynomially reducible to \(\textsf {CE}_{\textsf {rk}}\).

A schematic “proof” of the above result is given in Fig. 1, which shows the path of the reduction from \(\textsf {CE}_{\textsf {sr}}\) to \(\textsf {CE}_{\textsf {rk}}\).