Given a group $G$, say finitely presented, the word problem consists in finding an algorithm that lets us decide whether a word $\omega$ over the generators of the group represents the neutral element of the group. The problem is difficult because it's undecidable for a general group.

One way of looking at this problem is to extract a language from a presentation $\pi$ of the group in the following way: \(WP_\pi (G) = \{ \omega \in \Sigma^* : \pi(\omega)=1 \}\), where $\Sigma^*$ is the free monoid over the alphabet of generators of $G$ given by the presentation $\pi$. The idea is to study this language to obtain properties of the group, and vice versa. So let's try to understand a little bit about languages.
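As a concrete sanity check (my own toy example, not from any reference), take $G = \mathbb{Z}$ and write the generator as a and its inverse as A. A word maps to the neutral element exactly when the letters cancel:

```python
def in_word_problem_Z(word: str) -> bool:
    """Decide membership in WP(Z) for the presentation pi(a) = 1, pi(A) = -1:
    a word represents the neutral element iff its exponents sum to zero."""
    return word.count("a") == word.count("A")

assert in_word_problem_Z("")       # the empty word is neutral
assert in_word_problem_Z("aAAa")
assert not in_word_problem_Z("aaA")
```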

How to think about regular languages.

There is a famous way of looking at languages, first studied by the linguist Noam Chomsky, that classifies languages into 4 types depending on how they are generated. To generate a language you need a grammar, that is, a set of rules much as in the ordinary use of the word. For a grammar we need the following.

  • A set $\Sigma$ of terminal symbols or the alphabet of our language.

  • A set $N$ of nonterminal symbols. That is, symbols that are not part of the final words but that we use to generate the language.

  • A distinguished nonterminal symbol $S$ which we use as the start.

  • A set of productions, that is, rules of the form \((\Sigma \cup N)^* \to (\Sigma \cup N)^*\) (with at least one nonterminal on the left side) which allow us to describe the language.

The type 3 languages in the Chomsky hierarchy, the regular ones, are what we care about here. The idea is that the productions of the grammar are of the type \(N \to \Sigma^* N'\) or \(N \to \Sigma^*\), that is, each variable goes to a string, possibly followed by another variable.
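Productions of this right-linear shape can be sampled mechanically. A small sketch with a made-up one-rule grammar (my own example):

```python
import random

# A right-linear grammar stored as: nonterminal -> list of (string, next),
# where next is None when the production ends the derivation.
# This toy grammar generates all of {a, b}*.
grammar = {"S": [("a", "S"), ("b", "S"), ("", None)]}

def derive(grammar, start="S", seed=0):
    rng = random.Random(seed)
    word, symbol = "", start
    while symbol is not None:          # keep rewriting the single nonterminal
        piece, symbol = rng.choice(grammar[symbol])
        word += piece
    return word

assert all(letter in "ab" for letter in derive(grammar))
```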

For each regular language we can build an automaton that recognises it. Morally, an automaton is a graph with instructions to move across it. The idea is that given a word $\omega$ you travel the graph following the edges, each marked by the letter you read next. When you are done you should finish in a state of the automaton that is marked as final. In that case the word is accepted and so it belongs to the language.
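Walking the graph takes only a few lines. A sketch with a hypothetical automaton of my choosing (words over $\{a,b\}$ with an even number of $a$'s):

```python
# A deterministic automaton as a labelled graph: delta[(state, letter)]
# is the edge to follow; a word is accepted iff we end in a final state.
def accepts(delta, start, finals, word):
    state = start
    for letter in word:
        state = delta[(state, letter)]
    return state in finals

# Toy automaton: words over {a, b} with an even number of a's.
delta = {("even", "a"): "odd", ("odd", "a"): "even",
         ("even", "b"): "even", ("odd", "b"): "odd"}

assert accepts(delta, "even", {"even"}, "abba")
assert not accepts(delta, "even", {"even"}, "ab")
```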

Another way of looking at these languages is through the lens of recognizable subsets. A subset $L$ of a monoid $M$ is recognizable if there exists a homomorphism of monoids $h$ from $M$ to a finite monoid $N$ with $h^{-1}(h(L))=L$. The idea is that in general big subsets are recognizable but it's less common for smaller subsets to be like this. Think of $\{1\} \subset G$ for an infinite group: every homomorphism to a finite monoid necessarily has non-trivial kernel, so this set can never be recognizable. When we say a finite monoid, think of boolean matrices, that is, matrices with entries in $\{0,1\}$. This connects nicely with the idea of an automaton, as you can consider a matrix for each letter $a \in \Sigma$ such that $M(a)_{ij} = 1$ if there is a way of going from state $i$ to state $j$ reading $a$. This is the same way we think of graphs and their adjacency matrices.
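The homomorphism into boolean matrices can be sketched directly (pure Python, with toy matrices of my choosing): sending a word to the product of the matrices of its letters is multiplicative, which is exactly the homomorphism property.

```python
# Boolean matrix product: composes the relations "state i can reach state j";
# this is the monoid operation on boolean matrices.
def bmul(A, B):
    n = len(A)
    return [[any(A[i][k] and B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Toy matrices: M(a) swaps two states, M(b) fixes them.
M = {"a": [[False, True], [True, False]],
     "b": [[True, False], [False, True]]}

def h(word):
    out = [[True, False], [False, True]]   # empty word -> identity matrix
    for letter in word:
        out = bmul(out, M[letter])
    return out

# h is a homomorphism of monoids: h(uv) = h(u) h(v)
assert h("ab" + "ba") == bmul(h("ab"), h("ba"))
```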

Given a subset $L \subset \Sigma^*$ of the free monoid, we say that it is rational if it belongs to the family of subsets (actually it's a boolean algebra, but never mind that) generated by the following process.

  1. Start with the finite subsets.
  2. Add unions of subsets.
  3. Add products of subsets (as they live in a monoid, the product is well defined).
  4. Take generated submonoids. That is, for a subset $L$ already obtained, add everything that $L$ lacks until it becomes a monoid in itself, which we denote $L^*$.
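These three operations can be played with directly on finite sets of words. A sketch (the star is truncated to a bounded length, since $L^*$ is infinite in general):

```python
# Rational operations on subsets of the free monoid, over finite sets of words.
def product(A, B):
    return {u + v for u in A for v in B}

def star(A, max_len):
    """The generated submonoid A*, truncated to words of length <= max_len."""
    out, frontier = {""}, {""}       # the submonoid contains the empty word
    while frontier:
        frontier = {w for w in product(frontier, A) if len(w) <= max_len} - out
        out |= frontier
    return out

assert {"a"} | {"b"} == {"a", "b"}                 # union
assert product({"a"}, {"b", "c"}) == {"ab", "ac"}  # product
assert star({"a"}, 3) == {"", "a", "aa", "aaa"}    # truncated star
```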

This family of subsets is RAT($\Sigma$). We can check that it's closed under inverse homomorphisms of monoids.

These languages are called regular because they can be described by a regular expression. A regular expression is a short way of describing a subset in terms of the following operations: * (the generated submonoid), | (union) and concatenation (the product). For example, the language of strings over $\{a,b\}$ that start with $a$ can be described by the following regular expression: \(L= a(a|b)^{*}\).
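The same expression can be checked with Python's re module (fullmatch requires the whole word to match, which is exactly language membership):

```python
import re

# The rational operations map onto regular-expression syntax:
# | is union, juxtaposition is product, * is the generated submonoid.
starts_with_a = re.compile(r"a(a|b)*")

assert starts_with_a.fullmatch("abba")
assert not starts_with_a.fullmatch("ba")
assert not starts_with_a.fullmatch("")
```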

Kleene’s theorem or all these definitions are the same.

The following theorem can be proved without much trouble, but it's not so nice to write out.

Theorem (Kleene). All the classes of languages we described over a finitely generated free monoid (that is, over a finite alphabet) are equivalent.

  1. Rational languages.
  2. Recognizable languages.
  3. Type 3 languages.
  4. Languages accepted by a finite automaton.

This theorem is quite powerful and it lets us prove the following nice result, the first step in trying to understand languages coming from the word problem.

Theorem (Anisimov). A group has a regular word problem iff it's finite.

  • Idea of proof. Notice that it's enough to prove that $\{1\} \subset G$ is recognizable iff the group is finite, because then, since regular languages are closed under inverse homomorphisms, the language $WP_\pi(G) = \pi^{-1}(\{1\})$ is regular.
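For a finite group one direction is easy to visualise: the Cayley graph itself is a finite automaton, with the group elements as states and the identity as both the start and the only final state. A sketch for $\mathbb{Z}/3\mathbb{Z}$, writing the generator as a and its inverse as A (names chosen for illustration):

```python
# The Cayley graph of Z/3Z as an automaton: states are group elements,
# reading a letter multiplies by the corresponding generator, and a word
# is in the word problem iff we return to the identity.
n = 3

def in_word_problem(word):
    state = 0                                    # start at the identity
    for letter in word:
        state = (state + (1 if letter == "a" else -1)) % n
    return state == 0                            # final state = identity

assert in_word_problem("aaa")    # a^3 = 1 in Z/3Z
assert in_word_problem("aA")     # a * a^{-1} = 1
assert not in_word_problem("aa")
```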

References.

  1. John E. Hopcroft, Rajeev Motwani, Jeffrey D. Ullman. Introduction to Automata Theory, Languages, and Computation.
  2. Volker Diekert, Armin Weiß. Context-Free Groups and Bass–Serre Theory.