1 Introduction

Classical examples of tame symmetric algebras of period four are 2-blocks of finite-dimensional group algebras with quaternion defect groups. More recently, it was discovered that all weighted surface algebras [6] (see also [8, 9]) are tame symmetric of period four, and so are the virtual mutations investigated in [11] and the so-called weighted generalized triangulation algebras [13], which generalize both of these classes. The main result of [7] established the classification of tame symmetric algebras of period four whose Gabriel quiver is 2-regular, which we regard as evidence that a general classification, over algebraically closed fields, may be within reach. To obtain such a classification, one needs to determine all possible basic algebras, in the form \(\Lambda = KQ/I,\) where Q is a quiver, and I is an admissible ideal of the path algebra KQ.

Throughout, we fix an algebraically closed field K, and we consider finite-dimensional associative K-algebras with identity. We also assume that algebras are basic and connected. Recall that an algebra \(\Lambda \) is self-injective provided that \(\Lambda \) is injective as a right \(\Lambda \)-module, i.e., projective modules are also injective (see also [12]). In this paper, we focus our attention on symmetric algebras, that is, those self-injective algebras for which there is a nondegenerate, associative, symmetric K-bilinear form \(\Lambda \times \Lambda \rightarrow K.\) There are many classical examples of symmetric algebras, for instance, blocks of finite-dimensional group algebras [4] or Hecke algebras associated to Coxeter groups [1]. Any algebra \(\Lambda \) is a quotient of its trivial extension \(T(\Lambda ),\) which is a symmetric algebra.
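To recall why group algebras (and hence their blocks) are symmetric: for a finite group G, a form as above on KG is given by

$$\begin{aligned} (a,b)\mapsto \lambda (ab), \quad \text{ where }\quad \lambda \Big (\sum _{g\in G}c_g\, g\Big )=c_1, \end{aligned}$$

that is, \(\lambda \) picks out the coefficient of the identity element; this form is symmetric, associative and nondegenerate.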

For an algebra \(\Lambda ,\) we denote by \({\text {mod}}\Lambda \) the category of finitely generated (right) \(\Lambda \)-modules. For a module M in \({\text {mod}}\Lambda ,\) its syzygy is the kernel \(\Omega (M)={\text {Ker}}(\pi )\) of a projective cover \(\pi :P\rightarrow M\) of M in \({\text {mod}}\Lambda \) (so the syzygy is well defined up to isomorphism of \(\Lambda \)-modules).

We call a module M in \({\text {mod}}\Lambda \) a periodic module if \(\Omega ^d(M)\cong M,\) for some \(d\geqslant 1\) (the smallest such d is the period of M). Recall that an algebra \(\Lambda \) is called a periodic algebra if \(\Lambda \) is periodic as a \(\Lambda \)-bimodule, or equivalently, \(\Lambda \) is a periodic module over its enveloping algebra \(\Lambda ^e=\Lambda ^{\textrm{op}}\otimes _K\Lambda .\) Periodicity of an algebra implies periodicity of all non-projective indecomposable \(\Lambda \)-modules (see for example [14, Theorem IV.11.19]). In particular, if \(\Lambda \) is a periodic algebra, then all simple \(\Lambda \)-modules are periodic. Moreover, it is known (see [10, Theorem 1.4]) that periodicity of the simple modules in \({\text {mod}}\Lambda \) implies that \(\Lambda \) is self-injective, and hence, periodic algebras form a subclass of the class of self-injective algebras.
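For orientation, the smallest example: for \(\Lambda =K[x]/(x^2),\) the unique simple module S has projective cover \(\Lambda \rightarrow S\) with kernel

$$\begin{aligned} \Omega (S)={\text {rad}}\Lambda =(x)/(x^2)\cong S, \end{aligned}$$

so S is a periodic module of period 1.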

Here we work with bound quiver algebras \(\Lambda =KQ/I,\) where the Gabriel quiver Q is biserial: that is, at each vertex at most two arrows start and at most two arrows end. We will consider algebras \(\Lambda \) which are both symmetric and tame, and we assume that \(\Lambda \) is a periodic algebra of period four. Any such algebra is said to be a TSP4 algebra. As input for a classification of these algebras, one needs a complete description of all possible quivers Q,  and information on minimal generators for the ideal I. In this note, we present a range of results towards this task, mostly with short proofs. In particular, we will show that triangles (and squares) appear naturally; see Section 4. Moreover, in the last section, we present partial results describing some distinguished types of vertices. Note that we focus on Q and minimal generators of I,  and we do not discuss complete presentations.

For the necessary background on representation theory, we refer to the books [2, 14].

2 Preliminaries

A quiver Q is a finite directed graph, with vertex set \(Q_0\) and arrow set \(Q_1.\) Let \(\Lambda = KQ/I\) be an admissible presentation of the algebra \(\Lambda ,\) that is, I is an ideal of the path algebra KQ which is contained in the square of the arrow ideal and, for some large n,  contains all paths of length \(\geqslant n.\) The algebra \(\Lambda \) has a vector space basis consisting of elements of the form \(p+I\) where p is a monomial in the arrows; we call this a monomial basis. As usual, for \(w\in KQ,\) if w belongs to I,  then we say that \(w=0\) in \(\Lambda .\) We denote by J the radical of \(\Lambda .\) Recall that the images in \(\Lambda \) of the arrows of Q form a vector space basis of J modulo \(J^2.\) We assume that the algebra \(\Lambda \) is tame and symmetric, and has \(\Omega \)-period 4,  as an algebra. In particular, all simple modules are \(\Omega \)-periodic as \(\Lambda \)-modules with period dividing 4 [14, Theorem IV.11.19]. In fact, we can assume that all simples have period 4 (see Remark 2.2). We also assume that Q is connected, that is, \(\Lambda \) is indecomposable as an algebra. For a vertex \(i\in Q_0,\) we denote by \(P_i\) the indecomposable projective module in \({\text {mod}}\Lambda \) associated to the vertex i,  and by \(p_i\) its dimension vector \(p_i:=\underline{\dim }(P_i).\) Similarly, we write \(S_i\) and \(s_i\) for the simple module associated to vertex i and its dimension vector.

For a vertex i of the quiver Q,  we let \(i^-\) be the set of arrows ending at i,  and \(i^+\) the set of arrows starting at i. In this paper, we assume that the sizes \(|i^-|\) and \(|i^+|\) are at most 2. With this, Q is said to be 2-regular if \(|i^-|=|i^+|=2,\) and biserial if \(1\leqslant |i^-|, |i^+|\leqslant 2,\) for all vertices i in Q. We say that \(i\in Q_0\) is a regular vertex (1- or 2-regular), provided that \(|i^-|=|i^+|\) (and the common size equals 1 or 2,  respectively). We will also use the term 1-vertex or 2-vertex instead. Otherwise, we call i a non-regular vertex.

We will use the following notation and convention for arrows: we write \(\alpha , {\bar{\alpha }}\) for the arrows starting at vertex i,  with the convention that \({\bar{\alpha }}\) does not exist in case \(|i^+|=1.\) Similarly we write \(\gamma , \gamma ^*\) for the arrows ending at some vertex i,  where again \(\gamma ^*\) may not exist.

Then Q has a subquiver

$$\begin{aligned} x {\mathop {\longrightarrow }\limits ^{\gamma }}i {\mathop {\longrightarrow }\limits ^{\alpha }}j, \qquad y {\mathop {\longrightarrow }\limits ^{\gamma ^*}}i {\mathop {\longrightarrow }\limits ^{{\bar{\alpha }}}}k. \end{aligned}$$

Consider the simple module \(S_i,\) \(i\in Q_0.\) We will briefly discuss some basic consequences of the \(\Omega \)-periodicity of \(S_i,\) mainly the associated exact sequence. Recall that there are natural isomorphisms \(\Omega (S_i)={\text {rad}}P_i=\alpha \Lambda +{\bar{\alpha }}\Lambda \) and \(\Omega ^{-1}(S_i)\cong (\gamma ,\gamma ^*)\Lambda \subset P_x\oplus P_y.\) Here \((\gamma , \gamma ^*)\Lambda \) is the submodule of \(P_x\oplus P_y\) generated by \((\gamma , \gamma ^*).\) In particular, it follows that the module \(P_i^+=P_j\oplus P_k\) is a projective cover of \(\Omega (S_i)\) and the module \(P_i^-=P_x\oplus P_y\) is an injective envelope of \(\Omega ^{-1}(S_i)\) (\(\Lambda \) is symmetric). Consequently, invoking the \(\Omega \)-periodicity (of period 4) of \(S_i,\) we conclude that there is an exact sequence in \({\text {mod}}\Lambda \) of the form

$$\begin{aligned} 0\rightarrow S_i \rightarrow P_i {\mathop {\longrightarrow }\limits ^{d_3}} P_i^- {\mathop {\longrightarrow }\limits ^{d_2}} P_i^+ {\mathop {\longrightarrow }\limits ^{d_1}} P_i \rightarrow S_i \rightarrow 0 \end{aligned}$$
(*)

with \({\text {Im}}(d_k)\cong \Omega ^k(S_i)\) for \(k\in \{1,2,3\}.\) By our convention, \(P_y\) or \(P_k\) may not exist. Moreover, we denote by \(p_i^+\) (respectively, \(p_i^-\)) the dimension vector \(\underline{\dim }(P_i^+)\) (respectively, \(\underline{\dim }(P_i^-)\)). Using the above sequence, one easily gets that \(p_i^+=p_i^-.\) We use this fact (often without mention) many times throughout the paper.
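Explicitly, since the alternating sum of dimension vectors along an exact sequence vanishes, the sequence \((*)\) yields

$$\begin{aligned} s_i-p_i+p_i^--p_i^++p_i-s_i=0, \end{aligned}$$

that is, \(p_i^-=p_i^+.\)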

Now, we will show a few examples of results obtained by using exact sequences of the form \((*).\) As a first application, note the following lemma.

Lemma 2.1

If \(\Lambda \) has infinite representation type, then there is no arrow \(\alpha : i\rightarrow j\) with \(i^+ = \{ \alpha \} = j^-.\)

Proof

Suppose there is such an arrow. Then \(\Omega (S_i) = \alpha \Lambda \cong \Omega ^{-1}(S_j)\) and \(\Omega ^2(S_i)\cong S_j.\) Therefore in the exact sequence for \(S_i,\) the projective \(P_i^-\) is isomorphic to \(P_j.\) This means that there is a unique arrow ending at i, and it starts at j.

Also, in the exact sequence for \(S_j,\) we have \(P_j^+ \cong P_i\) since \(\Omega ^2(S_j)\cong S_i.\) Therefore there is a unique arrow starting at j and it ends at i. Now, Q is connected and hence has only two vertices and two arrows. Then \(\Lambda \) is a Nakayama algebra of finite representation type, hence we have a contradiction (see for example [14, Theorems I.10.3 and 10.7]). \(\square \)

Remark 2.2

Actually, the existence of an arrow with the above described property implies that \(\Lambda \) is of finite type, as explained in the note [5], where it is also proved that this condition is equivalent to the existence of a simple module S with \(\Omega ^2(S)\) simple. Hence, when dealing with TSP4 algebras of infinite type, we may assume that all simple modules have period exactly 4.

We also have the following observation.

Lemma 2.3

The quiver Q does not have a subquiver of the form

$$\begin{aligned} j {\mathop {\longleftarrow }\limits ^{\longrightarrow }}i \longleftarrow t \end{aligned}$$

where all arrows to and from i are shown.

Proof

Assume this happens. Then in the exact sequence for \(S_i,\) we have \(P_i^+ \cong P_j\) and \(P_i^-\cong P_j \oplus P_t.\) Since \(P_t\ne 0,\) it follows that \(p_i^+ \ne p_i^-,\) a contradiction.

\(\square \)

To end this preliminary section, we give one simple lemma concerning the vectors \(p_i^+=p_i^-\) (this common dimension vector of the modules \(P_i^+\) and \(P_i^-\) will be denoted by \({\hat{p}}_i\)).

It is clear from the exact sequence \((*)\) that \(p_i\) is less than or equal to \({\hat{p}}_i+s_i\) (in the product order), since \(p_i-s_i=\underline{\dim }\,\Omega ^1(S_i)\) is less than or equal to \(\underline{\dim }(P_i^+)={\hat{p}}_i.\) Moreover, if \(\Lambda \) is of infinite type, the vector space dimension of \(P_i^+\) (equivalently, of \(P_i^-\)) is strictly greater than the vector space dimension of \(P_i,\) as the following lemma shows. We write |x| for the sum \(|x|=x_1+\cdots +x_n\) if \(x=(x_1,\ldots ,x_n)\in {\mathbb {N}}^n.\)

Lemma 2.4

If \(\Lambda \) is of infinite type, then \(|{\hat{p}}_i|>|p_i|\) for all vertices i of Q.

Proof

Of course, we have an exact sequence \(0\rightarrow \Omega ^2(S_i)\rightarrow P_i^+ \rightarrow \Omega ^1(S_i)\rightarrow 0,\) where \(\Omega ^1(S_i)={\text {rad}}P_i\) has dimension vector equal to \(p_i-s_i.\) We claim that

$$\begin{aligned} |{\hat{p}}_i|-\dim _K\Omega ^1(S_i)>1. \end{aligned}$$

Indeed, if this is not the case, then \(\dim _K\Omega ^2(S_i)=1,\) hence we conclude that \(\Omega ^2(S_i)\) is simple. But then, by Remark 2.2, \(\Lambda \) is of finite type, and we obtain a contradiction. Therefore, the above inequality holds. In particular, we get \(|{\hat{p}}_i|-\dim _K\Omega ^1(S_i)=|{\hat{p}}_i|-|p_i|+1>1,\) thus \(|{\hat{p}}_i|-|p_i|>0,\) and we are done.

\(\square \)

3 Period 4 and minimal relations

In this section, we develop further consequences of the structure of the exact sequence \((*)\) associated to the simple module \(S_i,\) as described in the previous section. We will focus on the maps in this sequence and show their connection with minimal relations defining the algebra \(\Lambda .\)

We start with our given presentation \(\Lambda =KQ/I\) and a vertex \(i\in Q_0.\) We will briefly write J for the Jacobson radical \({\text {rad}}\Lambda \) of \(\Lambda .\) Consider the associated exact sequence

$$\begin{aligned} 0\rightarrow S_i \rightarrow P_i {\mathop {\longrightarrow }\limits ^{d_3}} P_i^- {\mathop {\longrightarrow }\limits ^{d_2}} P_i^+ {\mathop {\longrightarrow }\limits ^{d_1}} P_i \rightarrow S_i \rightarrow 0, \end{aligned}$$

where \(P_i^+=P_j\oplus P_k\) and \(P_i^-=P_x\oplus P_y.\) We may assume that \(d_1(x, y) : = \alpha x + {\bar{\alpha }}y,\) since the induced epimorphism \((\alpha \ {\bar{\alpha }}):P_j\oplus P_k \rightarrow \Omega (S_i)=\alpha \Lambda +{\bar{\alpha }}\Lambda \) is a projective cover of \(\Omega (S_i)\) in \({\text {mod}}\Lambda .\) Adjusting the arrows \(\gamma \) or \(\gamma ^*\) (and modifying the generators of I accordingly), we may also assume that \(d_3(e_i) = (\gamma , \gamma ^*)\) for some choice of the arrows \(\gamma , \gamma ^*\) ending at i (see also [7, Proposition 4.3]).

The kernel of \(d_1\) is then \(\Omega ^2(S_i)=\textrm{Im}(d_2),\) and it has one or two minimal generators. They are the images of the idempotents \(e_x\in P_x=e_x\Lambda \) and \(e_y\in P_y\) under \(d_2:P_i^-\rightarrow P_i^+.\) We denote them by \(\varphi \) and \(\psi ,\) respectively; they are contained in \(P_j\oplus P_k,\) so we can also write

$$\begin{aligned} \varphi = d_2(e_x,0) = (\varphi _{jx}, \ \varphi _{kx}) \quad \text{ and }\quad \psi = d_2(0,e_y) = (\psi _{jy}, \ \psi _{ky}), \end{aligned}$$

where \(\varphi _{jx}\) belongs to \(e_j\Lambda e_x\) and similarly for the other components of \(\varphi , \psi .\)

The exact sequence gives information on minimal generators of the ideal I,  which we sometimes refer to as minimal relations. The following lemma describes how arrows of Q lead to minimal relations.

Lemma 3.1

If there is an arrow \(x\rightarrow i,\) then there is a minimal generator \(\rho \in e_iIe_x\) for the ideal I,  with respect to the given presentation.

Proof

Consider the generators \(\varphi , \psi \) of the kernel of \(d_1.\) We have \(\alpha \varphi _{jx} + {\bar{\alpha }}\varphi _{kx}=0\) in \(\Lambda ;\) equivalently, the element \(\rho :=\alpha \varphi _{jx} + {\bar{\alpha }}\varphi _{kx} \in KQ\) (with \(\varphi _{jx}, \varphi _{kx}\) identified with representatives in KQ) belongs to \(e_iIe_x.\) It is a minimal relation since \(\varphi \) is a minimal generator. \(\square \)

Recall that any homomorphism \(d:P_x\oplus P_y\rightarrow P_j\oplus P_k\) in \({\text {mod}}\Lambda \) can be written in matrix form

$$\begin{aligned} M=\left( \begin{matrix} m_{jx} &{}\quad m_{jy}\\ m_{kx} &{}\quad m_{ky} \end{matrix}\right) , \end{aligned}$$

where \(m_{ab}\) is a homomorphism \(P_b\rightarrow P_a\) in \({\text {mod}}\Lambda ,\) identified with an element \(m_{ab}\in e_a\Lambda e_b\) for any \(a\in \{j,k\}\) and \(b\in \{x,y\}.\) In this way, d becomes multiplication by M,  i.e., \(d(u)=M\cdot u\) for \(u\in P_i^-\) (using column notation for vectors in \(P_i^-\) and \(P_i^+\)).
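Here one uses the standard identification (recalled for convenience): for idempotents \(e_a, e_b,\) evaluation at \(e_b\) gives an isomorphism

$$\begin{aligned} {\text {Hom}}_\Lambda (e_b\Lambda , e_a\Lambda )\,\cong \, e_a\Lambda e_b, \quad f\mapsto f(e_b), \end{aligned}$$

under which composition of homomorphisms corresponds to multiplication in \(\Lambda .\)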

Continuing with the generators of \(\Omega ^2(S_i),\) let \(M_i\) be the matrix with columns \(\varphi \) and \(\psi ,\) that is, \(d_2\) is given by the matrix

$$\begin{aligned} M_i=\left( \begin{matrix} \varphi _{jx} &{}\quad \psi _{jy}\\ \varphi _{kx} &{}\quad \psi _{ky} \end{matrix}\right) . \end{aligned}$$

Rewriting the compositions \(d_1d_2=0\) and \(d_2d_3=0\) in matrix form, we get the identities

$$\begin{aligned} (\alpha \ \ {\bar{\alpha }})\cdot M_i = 0\quad \text{ and }\quad M_i\cdot {\gamma \atopwithdelims ()\gamma ^*} =0, \end{aligned}$$
(1)

for some choice of arrows \(\gamma , \gamma ^*\) ending at i (cf. [7, Proposition 4.3]). In essence, the identities (1) determine the generators (respectively, cogenerators) of \(\Omega ^2(S_i),\) which are encoded in the columns (respectively, rows) of the matrix \(M_i\) and satisfy the properties described in the following lemma.

When we have an element \(\theta \in P_j\oplus P_k,\) we will say that \(\theta \not \in J^2\) if at least one of the components of \(\theta \) does not belong to \(J^2.\)

Lemma 3.2

(i) If \(\theta ={\theta _1\atopwithdelims ()\theta _2}\in P_j\oplus P_k\) is an element \(\theta \in \Lambda e_z{\setminus } J^2\) such that \((\alpha \ {\bar{\alpha }})\cdot \theta =0,\) then \(z=x\) or \(z=y\) and there is an exact sequence isomorphic to \((*)\) with \(\theta \) being one of the columns of \(M_i.\)

(ii) If \(\mu \in P_x\oplus P_y\) is an element \(\mu \in e_z\Lambda {\setminus } J^2\) such that \(\mu \cdot {\gamma \atopwithdelims ()\gamma ^*}=0,\) then \(z=j\) or \(z=k\) and there is an exact sequence isomorphic to \((*)\) with \(\mu \) being one of the rows of \(M_i.\)

Proof

Indeed, for \(\theta \) as in (i), by definition, \(\theta \in {\textrm{Ker}}(d_1)={\textrm{Im}}(d_2),\) so \(\theta \) can be written as \(\theta =M_i\cdot \eta \) for some \(\eta ={\eta _1\atopwithdelims ()\eta _2}\in P_x\oplus P_y,\) \(\eta \in \Lambda e_z.\) Note also that all entries of \(M_i\) are in J (equivalently, \(d_2\) is in \({\text {rad}}_\Lambda \)), since otherwise the equality \((\alpha \ {\bar{\alpha }})\cdot M_i=0\) would imply that \(\alpha \) or \({\bar{\alpha }}\) is in \(J^2\) (or \(\alpha \in K{\bar{\alpha }}\)), which is impossible for an arrow. But \(\theta \notin J^2,\) i.e., \(\theta _1\notin e_jJ^2e_z\) or \(\theta _2\notin e_kJ^2e_z,\) hence we infer that \(\eta \notin J\) because \(\eta \in J\) would force \(\theta =M_i\cdot \eta \in J^2.\) As a result, we get that \(\eta _1\notin e_x J e_z\) or \(\eta _2\notin e_y J e_z.\) Since for \(a\ne b\) in \(Q_0,\) we have \(e_a J e_b\simeq {\text {rad}}_\Lambda (P_b,P_a)={\text {Hom}}_\Lambda (P_b,P_a)\simeq e_a \Lambda e_b,\) we conclude that \(z=x\) or y,  and in both cases the corresponding component of \(\eta \) is a unit of the local algebra \(e_z\Lambda e_z.\) Assume that \(\eta _1\notin J.\) Then \(z=x\) and \(\eta _1\) is a unit in \(e_x \Lambda e_x\) (i.e., \(\eta _1\) is a nonzero scalar multiple of \(e_x\) modulo \(e_xJe_x\)), so we obtain the following identity:

$$\begin{aligned} \left( \begin{matrix} \theta _1 &{}\quad \psi _{jy}\\ \theta _2 &{}\quad \psi _{ky} \end{matrix}\right) =M_i\cdot \left( \begin{matrix} \eta _1 &{}\quad 0\\ \eta _2 &{}\quad e_y \end{matrix}\right) . \end{aligned}$$

Denote by \(M'_i\) the matrix on the left hand side and let \(N={\eta _1 \ 0\atopwithdelims ()\eta _2 \ e_y}.\) Consequently, the above identity \(M_i'=M_i\cdot N\) translates into a commutative diagram in \({\text {mod}}\Lambda \) whose rows are exact sequences of the form \((*),\) where we identify \(d_1=(\alpha \ {\bar{\alpha }}),\) \(d_2=M_i,\) \(d_3={\gamma \atopwithdelims ()\gamma ^*},\) \(d_2'=M_i',\) \(v=N\) (which is an isomorphism since \(\eta _1\) and \(e_y\) are units in the corresponding local algebras), and \(d_3'=v^{-1}d_3.\) It follows that the bottom row of this diagram is the required exact sequence. Similarly, if \(\eta _2\notin J,\) then \(z=y\) and we can construct an analogous matrix \(M_i',\) but with \(\theta \) as the second column.

We can prove (ii) similarly, where we use the cokernel of \(d_3\) (instead of the kernel of \(d_1\)). Namely, by the universal property of cokernels, one can factorize the matrix \({\mu _1 \ \ \mu _2 \atopwithdelims ()\varphi _{kx} \ \psi _{ky} }\) through the cokernel of \(d_3\cong \hbox {Im}(d_2),\) and lift this factorization to a map \(u:P_j\oplus P_k\rightarrow P_j\oplus P_k,\) given by a matrix \(N={\eta _1 \ \eta _2 \atopwithdelims ()0 \ \ e_k },\) such that \(N\cdot M_i=M_i'\) and \(\eta _1\) is a unit. This means \(ud_2=d_2',\) yielding an analogous commutative diagram with (exact) isomorphic rows. \(\square \)

Note that the conditions (i)–(ii) above explain how minimal generators (relations) of I give rise to generators of \(\Omega ^2(S_i),\) and how these two are connected via the exact sequence \((*),\) up to isomorphisms (here we mean both isomorphisms of exact sequences and isomorphisms of algebras, i.e., changes of the presentation of \(\Lambda \)).

Namely, we may start with a minimal generator \(\rho \) of the ideal I of KQ,  without loss of generality \(\rho \in e_iIe_j,\) where i, j are vertices of Q. Say \(\alpha , {\bar{\alpha }}\) start at i and \(\beta , \beta ^*\) end at vertex j. Then we can write \(\rho ,\) as an element of KQ, in the following way

$$\begin{aligned} \rho = \alpha x_1\beta + \alpha x_2\beta ^* + {\bar{\alpha }}x_3\beta + {\bar{\alpha }}x_4\beta ^*, \end{aligned}$$
(2)

where the \(x_i\) are linear combinations of monomials, and the expression is unique if written in terms of the monomial basis of KQ. Consequently, we infer from Lemma 3.2(i) that the element

$$\begin{aligned} \theta = (x_1\beta + x_2\beta ^*, \ x_3\beta + x_4\beta ^*) \end{aligned}$$

(viewed as a column) is in the kernel of \(d_1,\) and it can be taken as a generator for \(\Omega ^2(S_i)\) (a column of \(M_i\)), for instance when \(\theta \notin J^2.\) Similarly, \(\mu =(\alpha x_1+{\bar{\alpha }}x_3,\ \alpha x_2+{\bar{\alpha }}x_4)\) gives a cogenerator (a row of \(M_i\)) if \(\mu \notin J^2.\)
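As a simple illustration (a hypothetical relation, chosen only to make the correspondence concrete): suppose \(t(\alpha )=s(\beta ),\) \(t({\bar{\alpha }})=s(\beta ^*),\) and the minimal relation is of commutativity type, \(\rho =\alpha \beta -{\bar{\alpha }}\beta ^*,\) so that in (2) we have \(x_1=e_{t(\alpha )},\) \(x_4=-e_{t({\bar{\alpha }})}\) and \(x_2=x_3=0.\) Then

$$\begin{aligned} \theta = (\beta , \ -\beta ^*) \quad \text{ and }\quad \mu = (\alpha , \ -{\bar{\alpha }}), \end{aligned}$$

and neither lies in \(J^2,\) so \(\theta \) may serve as a column of \(M_i\) and \(\mu \) as a row.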

Remark 3.3

We note that not all minimal relations can be realized in this way. The smallest example is the local algebra of quaternion type (see [4, III.1(e)]). That is, take \(\Lambda = K\langle X, Y\rangle /(X^2-(YX)^{k-1}Y, Y^2-(XY)^{k-1}X, (XY)^k-(YX)^k, (XY)^kX, (XY)^kY)\) (with \(k\geqslant 2\)). The zero relation \((XY)^kX\) cannot be seen in a minimal projective resolution of the simple module.
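Spelled out for the smallest case \(k=2,\) the defining relations above read

$$\begin{aligned} X^2=YXY,\quad Y^2=XYX,\quad (XY)^2=(YX)^2,\quad (XY)^2X=0=(XY)^2Y, \end{aligned}$$

and, as noted, the zero relation \((XY)^2X\) is not visible in the minimal projective resolution of the simple module.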

4 Triangles and squares

In this section, we discuss some properties of triangles and squares in Q with respect to minimal relations. As we will see in Proposition 4.1 below, it is natural to investigate triangles in the quiver Q,  which appear together with paths of length 2 involved in minimal relations (note that this was an essential tool in the proof of [7, Proposition 4.2]). Similarly, squares come with paths of length 3 as shown in the parallel result (Proposition 4.5).

If p is a monomial in KQ,  we write \(p\prec I,\) provided that p occurs as a term (summand) in some minimal relation defining I (i.e., p is involved in a minimal relation). Very often, paths of length two occur in this way as shown in [7, Proposition 4.2].

4.1 Paths of length 2 and triangles

First, we focus on paths of length two and naturally arising triangles.

Proposition 4.1

Assume \(\alpha : i\rightarrow j\) and \(\beta : j\rightarrow k\) are arrows such that \(\alpha \beta \prec I.\) Then there is an arrow in Q from k to i,  so that \(\alpha \) and \(\beta \) are part of a triangle in Q.

Proof

We write \({\bar{\alpha }}\) for the second arrow starting at i (if it exists), and \(\beta ^*\) for the second arrow ending at k (if it exists). Then \(\alpha \beta \prec I\) means that

$$\begin{aligned} \alpha \beta + \alpha z_0\beta + \alpha z_1\beta ^* + {\bar{\alpha }}z_2\beta + {\bar{\alpha }}z_3\beta ^* = 0 \end{aligned}$$

in \(\Lambda ,\) where \(z_0\in J\) and \(z_1, z_2, z_3\in \Lambda .\)

We may assume \(z_0=0\): Let \(\alpha ' = \alpha (1+z_0);\) then we may replace \(\alpha \) by \(\alpha '\) and still have a basis for \(J/J^2\) consisting of images of arrows, since \(\alpha + J^2 = \alpha ' + J^2.\) In other words, \(\alpha ' = \alpha u\) where u is a unit in \(e_j\Lambda e_j,\) and therefore \(\alpha = \alpha 'u^{-1}.\) With this, the above identity becomes

$$\begin{aligned} \alpha '\beta + \alpha ' z_1'\beta ^* + {\bar{\alpha }}z_2\beta + {\bar{\alpha }}z_3\beta ^*=0 \end{aligned}$$

where \(z_1' = u^{-1}z_1.\) This has the same form as the original but without the term where \(z_0\) occurs.

Using the exact sequence \((*)\) for \(S_i\) (and renaming \(\alpha ', z_1'\) back to \(\alpha , z_1\)), the identity above gives an element \(\varphi \) in the kernel of \(d_1,\) namely

$$\begin{aligned} \varphi = (\beta + z_1\beta ^*, z_2\beta + z_3\beta ^*). \end{aligned}$$

Assume (for a contradiction) that \(\beta + z_1\beta ^* \in J^2.\) Then \(z_1 \not \in J,\) and \(z_1\) is a unit. But then \(z_1\beta ^* = \lambda \beta ^*\) modulo \(J^2\) for some \(0\ne \lambda \in K,\) and therefore \(\beta , \beta ^*\) are linearly dependent modulo \(J^2.\) This is a contradiction since different arrows must be linearly independent modulo \(J^2.\)

Since the first component of \(\varphi \) is not in \(J^2,\) it follows that \(\varphi \) is not in the radical of the kernel of \(d_1\) and therefore is a generator. This implies that \(P_k\) must be a direct summand of the projective \(P_i^-\) in the exact sequence \((*)\), and we conclude that \(k=x\) or y,  so that we have an arrow \(k\rightarrow i.\) \(\square \)

Note that this holds for any symmetric periodic algebra of period 4 (i.e., also for wild ones).

Example 4.2

In [7, Section 11], there is a quiver Q of an algebra B which is symmetric and periodic of period 4, but the algebra is wild (so outside our current interest; see also [3] and [7, Corollary 2]). This is mentioned as a consequence of the classification in [7]; however, it follows already from the above proposition.

Namely, by Proposition 4.1, any path \(\rho \) in Q of length two which does not involve a loop satisfies \(\rho \nprec I.\) Therefore \(B/J^3\) contains a wild subalgebra, given by a quiver of type \(\widetilde{{\widetilde{E}}}_7\) without any relations, as in [7, proof of Proposition 4.2].

Lemma 4.3

(Triangle lemma). Assume Q contains a triangle

$$\begin{aligned} i {\mathop {\longrightarrow }\limits ^{\alpha }}j {\mathop {\longrightarrow }\limits ^{\beta }}x {\mathop {\longrightarrow }\limits ^{\gamma }}i \end{aligned}$$

with \(\alpha \beta \prec I.\) If \(\gamma \) is the unique arrow \(x\rightarrow i,\) then \(\gamma \alpha \prec I\) and \(\beta \gamma \prec I.\) If there are double arrows \(\gamma , {\bar{\gamma }}:x\rightarrow i,\) then both \(\gamma \alpha \prec I\) and \(\beta \gamma \prec I\) or both \({\bar{\gamma }}\alpha \prec I\) and \(\beta {\bar{\gamma }}\prec I.\)

Proof

Assume first that Q is the Markov quiver [7, Section 5]. Then \(\Lambda \) is a weighted triangulation algebra, by [7, Theorem 5.1], and the properties required in the claim are easily verified. Hence we may assume that Q is not the Markov quiver.

Consider now the exact sequence for the simple module \(S_x\)

$$\begin{aligned} 0\rightarrow S_x \rightarrow P_x \rightarrow P_j\oplus P_{j^*} \rightarrow P_i\oplus P_{{\bar{i}}} \rightarrow P_x\rightarrow S_x \rightarrow 0, \end{aligned}$$

where \(j^*=s(\beta ^*)\) and \({\bar{i}}=t({\bar{\gamma }}).\) Taking minimal generators for \(\Omega ^2(S_x),\) one can fix the columns of the matrix \(M_x,\) that is,

$$\begin{aligned} M_x = \left( \begin{matrix} \varphi _{ij} &{}\quad \psi _{ij^*}\\ \varphi _{{\bar{i}}j} &{}\quad \psi _{{\bar{i}}j^*} \end{matrix}\right) . \end{aligned}$$

It satisfies \((\gamma \ {\bar{\gamma }})\cdot M_x=0\) and \(M_x\cdot {\beta \atopwithdelims ()\beta ^*}=0.\)

By the assumption, \(\alpha \beta \prec I,\) hence there is a minimal relation of the form

$$\begin{aligned} \alpha \beta + \alpha z_0\beta + \alpha z_1\beta ^* + {\bar{\alpha }}z_2\beta + {\bar{\alpha }}z_3\beta ^*=0 \end{aligned}$$

with \(z_0\in J.\) As in the proof of Proposition 4.1, we may assume \(z_0=0.\) Now, if we define

$$\begin{aligned} \mu := (\alpha + {\bar{\alpha }}z_2, \alpha z_1 + {\bar{\alpha }}z_3), \end{aligned}$$

then \(\mu \cdot {\beta \atopwithdelims ()\beta ^*}=0\) and \(\mu \notin J^2\) (since it involves an arrow).

Consequently, by Lemma 3.2(ii), we can take \(\mu \) as one of the rows of \(M_x.\) Clearly, if \(\gamma \) is the unique arrow \(x\rightarrow i,\) then \(i\ne {\bar{i}},\) so \(\mu \) must be the first row of \(M_x.\) In particular, we get \(\varphi _{ij}=\alpha +{\bar{\alpha }}z_2,\) so it follows that \(\gamma (\alpha + {\bar{\alpha }}z_2) + {\bar{\gamma }} \varphi _{{\bar{i}}j} =0,\) and hence, we obtain \(\gamma \alpha \prec I,\) as required. Similarly, the same minimal relation yields the element \(\theta \in \Lambda e_x\) defined as

$$\begin{aligned} \theta :=(\beta +z_1\beta ^*,z_2\beta +z_3\beta ^*)\in P_j\oplus P_{{\bar{j}}} \end{aligned}$$

(viewed as a column, where \({\bar{j}}=t({\bar{\alpha }})\)) satisfies \(\left[ \alpha \ {\bar{\alpha }}\right] \cdot \theta =0,\) so \(\theta \) can be taken as the first column of the middle map \(M_i\) (in the exact sequence for \(S_i\)). Now \(M_i\cdot {\gamma \atopwithdelims ()\gamma ^*}=0\) implies \(\beta \gamma \prec I,\) so we are done in the case without double arrows.

Finally, suppose we have double arrows \(\gamma , {\bar{\gamma }}:x\rightarrow i.\) Then \(i={\bar{i}},\) both arrows ending at i start at x, and \({\bar{\gamma }}=\gamma ^*.\) By the above considerations, we have both \(\gamma \alpha \prec I\) and \(\beta \gamma \prec I\) if both \(\mu \) is the first row of \(M_x\) and \(\theta \) is the first column of \(M_i.\) Dually, we obtain both \({\bar{\gamma }}\alpha \prec I\) and \(\beta {\bar{\gamma }}\prec I\) if \(\mu \) is the second row of \(M_x\) and \(\theta \) the second column of \(M_i.\)

Now, it remains to exclude the mixed case. To see this, suppose \(\mu \) is the first row of \(M_x.\) In this case, we obtain \(\gamma \alpha \prec I,\) so applying the first part with \(\gamma , \alpha \) (instead of \(\alpha , \beta \)), we get \(\beta \gamma \prec I,\) provided that \(\beta \) is the unique arrow \(j\rightarrow x.\) But we cannot have double arrows from j to x because then Q is the Markov quiver, due to [7, Lemma 5.2], which we excluded. Similarly, if \(\mu \) is the second row of \(M_x,\) we conclude that \({\bar{\gamma }}\alpha \prec I\) and \(\beta {\bar{\gamma }}\prec I.\) This finishes the proof. \(\square \)

Note that [7, Lemma 5.2], which we use here, does not require Q to be 2-regular. It follows from the proof of [7, Lemma 5.2] that one only needs the vertices i and k to be 2-regular.

Lemma 4.4

Assume i is a 1-vertex which is part of a triangle

$$\begin{aligned} x {\mathop {\longrightarrow }\limits ^{\gamma }}i {\mathop {\longrightarrow }\limits ^{\alpha }}j {\mathop {\longrightarrow }\limits ^{\beta }}x. \end{aligned}$$

Then both x and j must be 2-vertices.

Proof

By Lemma 2.1, there must be another arrow, say \({\bar{\gamma }},\) starting at x,  and there must be another arrow, say \(\alpha ^*,\) ending at j. From the exact sequence for \(S_i,\) we know that \(p_x = p_j.\)

Assume that x is not a 2-vertex. Then \(\beta \) is the only arrow ending at x,  therefore

$$\begin{aligned} e_x\Lambda /S_x \cong \beta \Lambda . \end{aligned}$$

Moreover, again by Lemma 2.1, there must be another arrow starting at j,  call it \({\bar{\beta }}.\) Consequently,

$$\begin{aligned} \textrm{rad} (P_j) = \beta \Lambda + {\bar{\beta }}\Lambda , \end{aligned}$$

and hence, we get the following equalities of dimension vectors:

$$\begin{aligned} p_x&=s_x+\underline{\dim }(\beta \Lambda )\quad \text{ and } \\ p_j&=s_j+\underline{\dim }(\beta \Lambda +{\bar{\beta }}\Lambda )=s_j+ \underline{\dim }(\beta \Lambda ) + \underline{\dim }\big ({\bar{\beta }}\Lambda /(\beta \Lambda \cap {\bar{\beta }}\Lambda )\big ). \end{aligned}$$

Comparing \(p_x=p_j,\) we conclude that \(\underline{\dim }({\bar{\beta }}\Lambda /\beta \Lambda \cap {\bar{\beta }}\Lambda )=s_x-s_j,\) so this must be zero (i.e., \(x=j\)) since otherwise \(s_x-s_j\) has a negative coordinate, which cannot happen for a dimension vector of a \(\Lambda \)-module.

In particular, since the vector space dimension of \(\beta \Lambda \cap {\bar{\beta }}\Lambda \subseteq {\bar{\beta }}\Lambda \) is equal to the vector space dimension of \({\bar{\beta }}\Lambda \) (and we have an inclusion), we obtain that \({\bar{\beta }}\Lambda = \beta \Lambda \cap {\bar{\beta }}\Lambda .\) Now \({\bar{\beta }}\Lambda = \beta \Lambda \cap {\bar{\beta }}\Lambda \subseteq \beta \Lambda ,\) hence \({\bar{\beta }}\in \beta \Lambda .\) But this is not possible since \({\bar{\beta }}\) and \(\beta \) are distinct arrows of Q.

The proof that j must be a 2-vertex is dual. \(\square \)

4.2 Paths of length \(\geqslant 3\)

In this short subsection, we consider paths of length 3 or 4 (and in particular, squares). Suppose the quiver of \(\Lambda \) has a subquiver

$$\begin{aligned} u{\mathop {\longrightarrow }\limits ^{\delta }}i {\mathop {\longrightarrow }\limits ^{\alpha }}k {\mathop {\longrightarrow }\limits ^{\beta }}t {\mathop {\longrightarrow }\limits ^{\gamma }}j. \end{aligned}$$

We have the following counterpart of Proposition 4.1.

Proposition 4.5

Suppose \(\alpha \beta \not \prec I\) and \(\alpha \) is the unique arrow \(i\rightarrow k.\) If \(\alpha \beta \gamma \prec I,\) then there is an arrow \(j\rightarrow i.\)

Proof

We work with a monomial basis. After possibly adjusting the arrow \(\beta ,\) there is a minimal relation of the form

$$\begin{aligned} \alpha \beta \gamma + \alpha z_1\gamma ^* + {\bar{\alpha }}z_2\gamma + {\bar{\alpha }}z_3 \gamma ^* \in I. \end{aligned}$$

Consider the exact sequence for the simple module \(S_i,\)

$$\begin{aligned} 0\rightarrow S_i \rightarrow P_i \rightarrow P_u\oplus P_{u'} \rightarrow P_k\oplus P_l\rightarrow P_i\rightarrow S_i\rightarrow 0. \end{aligned}$$

Here \(\alpha :i\rightarrow k\) and \({\bar{\alpha }}:i\rightarrow l\) start at i,  and \(\delta : u\rightarrow i\) and \(\delta ^*: u'\rightarrow i\) end at i,  where by convention, \({\bar{\alpha }}\) or \(\delta ^*\) may not exist (then we omit \(P_l\) or \(P_{u'}\)).

We take \(\Omega (S_i) = \alpha \Lambda + {\bar{\alpha }}\Lambda \) and \(\Omega ^2(S_i) = \{ (x, y)\in P_k\oplus P_l\mid \alpha x + {\bar{\alpha }}y = 0\}.\) From the exact sequence, this is equal to \(\varphi \Lambda + \psi \Lambda \) where \(\varphi = \varphi e_u\) and \(\psi = \psi e_{u'}.\) Let \(M_i\) be the matrix with columns \(\varphi \) and \(\psi .\) Then we have

$$\begin{aligned} (\alpha \ {\bar{\alpha }})\cdot M_i=0 \quad \text{ and }\quad M_i\cdot {\delta \atopwithdelims ()\delta ^*}=0 \end{aligned}$$

for some choice of arrows \(\delta , \delta ^*\) (see [7, Proposition 4.3]).

The minimal relation shows that \(\theta \in \Omega ^2(S_i)\) where

$$\begin{aligned} \theta = (\beta \gamma + z_1\gamma ^*, \ z_2\gamma + z_3\gamma ^*). \end{aligned}$$

If \(\theta \) is a generator of \(\Omega ^2(S_i),\) then we may take \(\theta \) as one of the columns of \(M_i.\) Since \(\beta \gamma \) occurs in \(\theta _1,\) it must be in \(e_k\Lambda e_u\) or \(e_k\Lambda e_{u'}.\) It follows that \(j=u\) or \(j=u',\) and hence there is an arrow \(j \rightarrow i.\)

Assume (for a contradiction) that \(\theta \in \textrm{rad}(\Omega ^2(S_i)) = \varphi J + \psi J.\) Then we can write \(\theta = \varphi v + \psi w\) with \(v, w\in J,\) and we may take them in \(Je_j.\) Then

$$\begin{aligned} \beta \gamma + z_1\gamma ^* = \varphi _{ku}v + \psi _{ku'}w. \end{aligned}$$

Say \(\beta \gamma \) occurs in \(\varphi _{ku}v\) (otherwise we interchange u and \(u'\)). We can write \(v= ve_j = v_1\gamma + v_2\gamma ^*\) with \(v_1\in \Lambda e_t\) and \(v_2 \in \Lambda .\) Similarly, \(\varphi _{ku} = \beta y_1 + {\bar{\beta }} y_2\) for some \(y_1 \in e_t\Lambda \) and \(y_2\in \Lambda .\) Then

$$\begin{aligned} \varphi _{ku}v = \beta y_1v_1\gamma + \beta y_1v_2\gamma ^* + {\bar{\beta }}y_2v \end{aligned}$$

and \(\beta \gamma \) is a term of \(\beta y_1v_1\gamma .\) Therefore \(y_1v_1 = e_ty_1v_1e_t\) is a unit in \(e_t\Lambda e_t.\)

However, it factors through the vertex u,  and it follows that \(u=t.\) So we have a triangle \((\alpha , \beta , \delta )\) and \(\alpha \beta \not \prec I.\) By the triangle lemma (Lemma 4.3), we get that also \(\beta \delta \not \prec I\) since by the assumption \(\alpha \) is the unique arrow \(i\rightarrow k.\)

We exploit the identity for \(\varphi _{ku}v\) further. Since \(y_1v_1\) is a unit, we may assume \(\beta = \beta y_1.\) Then \(\varphi _{ku} = \beta + {\bar{\beta }}y_2\) and this is the top left entry of the matrix \(M_i.\) But \(M_i\cdot {\delta \atopwithdelims ()\delta ^*} = 0\) which gives

$$\begin{aligned} \beta \delta + {\bar{\beta }}y_2\delta + \psi _{ku'}\delta ^*=0. \end{aligned}$$

This means that \(\beta \delta \prec I,\) a contradiction. \(\square \)

Remark

The above has a dual version. Namely, suppose \(\alpha \beta \gamma \prec I\) and \(\beta \gamma \nprec I,\) and that \(\gamma \) is the unique arrow \(t\rightarrow j.\) Then there must be an arrow \(j\rightarrow i.\) To see this, it suffices to take the relation involving \(\alpha \beta \gamma \) and factor it as \(\mu \cdot {\gamma \atopwithdelims ()\gamma ^*}=0,\) where \(\mu =(\alpha \beta +{\bar{\alpha }}z_2,\alpha z_1 + {\bar{\alpha }}z_3).\) Then, similarly, one proves that \(\mu \) is one of the cogenerators of \(\Omega ^{2}(S_j)\) (i.e., one of the rows of \(M_j\)), which forces i to be the target of an arrow in \(j^+.\)

The above proposition shows that paths of length 3 involved in minimal relations give rise to squares in Q. We have a result similar to the triangle lemma (Lemma 4.3), stated as follows.

Lemma 4.6

(Square lemma). Assume Q contains a square

$$\begin{aligned} i {\mathop {\longrightarrow }\limits ^{\alpha }}k {\mathop {\longrightarrow }\limits ^{\beta }}t {\mathop {\longrightarrow }\limits ^{\gamma }}j {\mathop {\longrightarrow }\limits ^{\delta }}i \end{aligned}$$

with \(\alpha \beta \gamma \prec I\) but \(\beta \gamma \nprec I.\) If \(\delta \) is the unique arrow \(j\rightarrow i,\) then \(\beta \gamma \delta \prec I.\)

Proof

Suppose that \(\alpha \beta \gamma \prec I\) and \(\beta \gamma \nprec I.\) Consider the exact sequence for \(S_i.\) Then \(\Omega ^2(S_i)\) is generated by the columns of the matrix

$$\begin{aligned} M_i = \left( \begin{matrix} \varphi _{kj} &{} \psi _{k, j^*}\\ \varphi _{{\bar{k}}j} &{} \psi _{{\bar{k}}j^*} \end{matrix}\right) , \end{aligned}$$

and \((\alpha \ {\bar{\alpha }})\cdot M_i=0\) and \(M_i\cdot {\delta \atopwithdelims ()\delta ^*}=0.\) By our assumption, we have a minimal relation \(\alpha (\beta \gamma + x_1\gamma + x_2\gamma ^*) + {\bar{\alpha }}(x_3\gamma + x_4\gamma ^*) = 0\) with \(x_1\in J^2.\) This gives an element

$$\begin{aligned} \varphi = (\beta \gamma + x_1\gamma + x_2\gamma ^*, \ x_3\gamma + x_4\gamma ^*)\in \Omega ^2(S_i) \end{aligned}$$

which cannot be in the radical of \(\Omega ^2(S_i)\) since \(\beta \gamma \not \prec I.\) So we can take this as the first column of \(M_i.\) It follows that \((\beta \gamma + x_1\gamma + x_2\gamma ^*)\delta + \psi _{kj^*}\delta ^*=0,\) so \(\beta \gamma \delta \prec I.\) \(\square \)

The following lemma shows that sometimes one can relate paths of length 3 and 4.

Lemma 4.7

Suppose we have a path in Q of the form

$$\begin{aligned} u{\mathop {\longrightarrow }\limits ^{\delta }}i {\mathop {\longrightarrow }\limits ^{\alpha }}k {\mathop {\longrightarrow }\limits ^{\beta }}t {\mathop {\longrightarrow }\limits ^{\gamma }}j \end{aligned}$$

with \(|k^+|=1\) and \(\delta \alpha ,\alpha \beta ,\beta \gamma \nprec I.\) If \(\delta \alpha \beta \gamma \prec I,\) then \(\alpha \beta \gamma \prec I.\)

Proof

Suppose that \(\delta \alpha \beta \gamma \prec I,\) and let \({\bar{\delta }}:u\rightarrow u'\) and \({\bar{\alpha }}:i\rightarrow i'\) denote the second arrows starting at u and at i, respectively (if they exist). By the assumption on k,  we conclude that any path of length \(\geqslant 3\) starting with \(\delta \alpha \) must begin with \(\delta \alpha \beta .\) Hence, since \(\delta \alpha \beta \gamma \prec I,\) there is a minimal relation of the form

$$\begin{aligned} \delta \alpha \beta \gamma + \delta \alpha \beta p+\delta {\bar{\alpha }}q+{\bar{\delta }}r=0, \end{aligned}$$

where \(p,q,r\in J.\) After adjusting \(\gamma :=\gamma +p,\) we may change the presentation to get \(p=0.\) Consequently, the element \(\rho =(\alpha \beta \gamma +{\bar{\alpha }}q,r)\) belongs to \(\Omega ^2(S_u)={\text {Ker}}([\delta \ {\bar{\delta }}]).\) Finally, if \(u^-=\{\sigma ,\sigma ^*\},\) and \(v=s(\sigma ),\) \(v'=s(\sigma ^*),\) then \(\Omega ^{2}(S_u)\cong {\text {Im}}(M_u),\) where \(M_u:P_v\oplus P_{v'}\rightarrow P_i\oplus P_{u'}\) is given by the matrix

$$\begin{aligned} \left( \begin{matrix} \varphi _{iv} &{}\quad \psi _{i v'}\\ \varphi _{u'v} &{}\quad \psi _{u'v'} \end{matrix}\right) , \end{aligned}$$

and \(\rho =M_u\cdot {\kappa _1 \atopwithdelims ()\kappa _2}\) for some \(\kappa _1\in P_{v}\) and \(\kappa _2\in P_{v'}.\) But then

$$\begin{aligned} \alpha \beta \gamma +{\bar{\alpha }}q=\varphi _{iv}\kappa _1+\psi _{iv'}\kappa _2, \end{aligned}$$

and therefore, \(\alpha \beta \gamma \) is generated by minimal relations. But \(\alpha \beta \nprec I\) and \(\beta \gamma \nprec I,\) hence \(\alpha \beta \gamma \) is itself involved in some minimal relation of I,  as claimed. \(\square \)

5 Non-regular vertices

In this section, we give some partial results describing non-regular vertices.

By definition, for Q biserial, the non-regular vertices i satisfy either \(|i^-|=1\) and \(|i^+|=2\) or \(|i^-|=2\) and \(|i^+|=1.\) In the first case, i is called a (1, 2)-vertex, whilst in the second a (2, 1)-vertex. Let i be a (1, 2)-vertex of the form

$$\begin{aligned} j {\mathop {\longrightarrow }\limits ^{\alpha }}i {\mathop {\longrightarrow }\limits ^{\beta }}k, \qquad i {\mathop {\longrightarrow }\limits ^{{\bar{\beta }}}}l. \end{aligned}$$

We call i a vertex of type R (respectively, of type N), provided that both \(\alpha \beta \prec I\) and \(\alpha {\bar{\beta }}\prec I\) (respectively, both \(\alpha \beta \nprec I\) and \(\alpha {\bar{\beta }}\nprec I\)). If \(k\ne l,\) we say that i is proper. If \(k=l,\) then we assume that the minimal relations for \(\alpha \beta , \alpha {\bar{\beta }}\) are independent, that is, linearly independent modulo \(e_jJ^3e_k.\) Similar notions can be defined for (2, 1)-vertices. Whenever we consider a (1, 2)-vertex i,  we keep the above notation for the arrows starting and ending at i.

Remark 5.1

We recall that there exist infinitely many pairwise non-isomorphic TSP4 algebras A containing arbitrarily large numbers of (1, 2)- and (2, 1)-vertices of both types R and N. Indeed, one may take any weighted surface algebra \(\Lambda \) (see [8]) containing an arbitrary number of ‘blocks’ of the form

where \(\xi _i\) is a virtual arrow. Then, using results of [11], we conclude that the virtual mutation \(A=\Lambda (\xi )\) with respect to the sequence \(\xi =(\xi _1,\ldots ,\xi _n)\) of virtual arrows is a TSP4 algebra and the vertices \(x_1,\ldots ,x_n\) are (1, 2)-vertices of type N (in \(Q_A\)), whereas \(y_1,\ldots ,y_n\) are (2, 1)-vertices of type N (in the definition of \(\Lambda ,\) we have to take the weights \(m_{\xi _i}=m_{\alpha _i}=1\) for all \(i\in \{1,\ldots ,n\}\)).

For vertices of type R, one has to consider the so-called weighted generalized triangulation algebras [13], given by quivers which are gluings of blocks of five types I–V. Without going into details, we only mention that for any such algebra A (it is a TSP4 algebra in most cases), its Gabriel quiver contains two (1, 2)-vertices and two (2, 1)-vertices for each block of type V, and all these vertices are of type R (this follows directly from the shape of the relations in A). Therefore, one can easily construct a TSP4 algebra with an arbitrarily large number of non-regular vertices of type R (in this case the number of (1, 2)-vertices is equal to the number of (2, 1)-vertices). But for these algebras, the Gabriel quiver of A is not biserial.

We will now show that there are no non-regular vertices of type R if the Gabriel quiver is biserial. For proper ones, we have the following.

Lemma 5.2

There are no proper non-regular vertices of type R.

Proof

Suppose i is a (1, 2)-vertex of type R. Then \(\alpha \beta \prec I\) yields an arrow \(\gamma :k\rightarrow j,\) and \(\alpha {\bar{\beta }} \prec I\) yields an arrow \(\delta :l\rightarrow j,\) by Proposition 4.1. Suppose now that i is proper. Then \(j^-=\{\gamma ,\delta \}\) since Q is biserial, and hence \(p_j^-=p_k+p_l.\) But \(p_i^-=p_i^+\) gives \(p_j=p_k+p_l\) because i is a (1, 2)-vertex, and therefore, we get \(p_j=p_j^-={\hat{p}}_j,\) which contradicts Lemma 2.4. Dual arguments give the proof for (2, 1)-vertices. \(\square \)

The following theorem completes the proof of this claim.

Theorem 5.3

There are no non-regular vertices of type R.

Proof

By the previous lemma, it is sufficient to prove that there is no (1, 2)-vertex i of type R with \(k=l.\) Suppose to the contrary that such a vertex exists. For simplicity, we will use the notation 1, 2, 3 for the vertices \(i,\) \(k=l,\) and j,  respectively. Since \(1=i\) is of type R, we get an arrow \(\gamma :2\rightarrow 3\) (see Proposition 4.1), and consequently, Q admits the following subquiver

$$\begin{aligned} 3 {\mathop {\longrightarrow }\limits ^{\alpha }}1 {\mathop {\rightrightarrows }\limits ^{\beta ,\ {\bar{\beta }}}}2 {\mathop {\longrightarrow }\limits ^{\gamma }}3. \end{aligned}$$

By Lemma 4.4, the vertex 3 cannot be a 1-vertex. Of course, 1 is a non-regular vertex by the assumption. We claim that also 2 is non-regular. Indeed, if this is not the case, then there is an arrow \({\bar{\gamma }}:2\rightarrow x,\) \({\bar{\gamma }}\ne \gamma ,\) and moreover, \(x\ne 3\): Otherwise, we would get a subquiver \(1\rightrightarrows 2\rightrightarrows 3,\) hence \(p_1=p_3\) because \(p_2^-=p_2^+,\) and so \({\hat{p}}_1=p_1^-=p_3=p_1,\) which gives a contradiction to Lemma 2.4. Consequently, we have no arrows \(x\rightarrow 1,\) so both \(\beta {\bar{\gamma }}\nprec I\) and \({\bar{\beta }}{\bar{\gamma }}\nprec I\) by Proposition 4.1. But then \(\Lambda \) admits a wild (hereditary) factor algebra, given by the subquiver \(1\rightrightarrows 2{\mathop {\longrightarrow }\limits ^{{\bar{\gamma }}}}x,\) a contradiction. This proves that 2 is a (2, 1)-vertex.

In particular, using \(p_1^-=p_1^+\) and \(p_2^-=p_2^+,\) one gets \(p_1=p_2\) and \(p_3=2p_1.\) It follows also that \(3^-\cup 3^+\supsetneq \{\gamma ,\alpha \},\) and hence 3 is a 2-vertex (note that \(p_3^-=p_3^+\)). Suppose the other arrows at 3 are \({\bar{\alpha }}: 3\rightarrow x\) and \(\gamma ^*: y\rightarrow 3\) (where we allow for the possibility that \({\bar{\alpha }}= \gamma ^*\) and \(3=x=y\)).

We assume that there are two independent minimal relations involving \(\alpha \beta \) and \(\alpha {\bar{\beta }},\) respectively. By linear algebra, there are then two minimal relations of the form \(\alpha \beta + X\) and \(\alpha {\bar{\beta }} + Y\) with X and Y in \(e_3J^3e_2.\) We write these minimal relations as follows:

$$\begin{aligned} \alpha \beta + \alpha z_0 \beta + \alpha z_1{\bar{\beta }} + {\bar{\alpha }}z_2\beta + {\bar{\alpha }}z_3{\bar{\beta }} = 0, \\\alpha {\bar{\beta }} + \alpha y_0{\bar{\beta }} + \alpha y_1\beta + {\bar{\alpha }}y_2\beta + {\bar{\alpha }}y_3{\bar{\beta }} =0. \end{aligned}$$

Then \(z_1, z_2, z_3\) and \(y_1, y_2, y_3\) are in J. We may assume \(z_0=0\) and \(y_0=0;\) otherwise, we replace \(\beta \) by \((1+z_0)\beta \) and \({\bar{\beta }}\) by \((1+y_0){\bar{\beta }}\) (note that these still represent independent arrows and this does not change the shape of the relations). Writing the two identities in matrix form gives

$$\begin{aligned} (\alpha \ {\bar{\alpha }})\left( \begin{matrix} \beta + z_1{\bar{\beta }} &{}\quad {\bar{\beta }} + y_1\beta \\ z_2\beta + z_3{\bar{\beta }} &{}\quad y_2\beta + y_3{\bar{\beta }} \end{matrix}\right) = 0. \end{aligned}$$

The two column vectors are linearly independent in \(\Lambda ^2,\) hence they generate \(\Omega ^{-2}(S_3) \cong \Omega ^2(S_3).\) Therefore they can be taken as columns of the matrix \(M_3\) from the exact sequence for \(S_3.\) The relevant part of this sequence is

$$\begin{aligned} 0\rightarrow \Omega ^{-1}(S_3) \cong (\gamma , \gamma ^*)\Lambda \rightarrow P_2\oplus P_y \rightarrow \ P_1\oplus P_x \rightarrow \alpha \Lambda + {\bar{\alpha }}\Lambda \cong \Omega (S_3)\rightarrow 0. \end{aligned}$$

It follows that \(y=2\) and we have double arrows \(2 \rightrightarrows 3,\) so 2 is not a (2, 1)-vertex, a contradiction. \(\square \)