Do groups, rings and fields have practical applications in CS? If so, what are some?

If every second or so your computer’s memory were wiped completely clean, except for the input data; the clock; a static, unchanging program; and a counter that could only be set to 1, 2, 3, 4, or 5, it would still be possible (given enough time) to carry out an arbitrarily long computation — just as if the memory weren’t being wiped clean each second. This is almost certainly not true if the counter could only be set to 1, 2, 3, or 4. The reason 5 is special here is pretty much the same reason it’s special in Galois’ proof of the unsolvability of the quintic equation.

-Scott Aaronson


Groups and fields, primarily finite ones, are used extensively in coding theory. Many of the results in number theory that give rise to important encryption systems (e.g., RSA) can actually be seen as results in group theory. If you include applications outside of computer science, it would be hard to exaggerate the importance of group theory. Groups are literally everywhere. The theory of group representations, for instance, is useful in chemistry (particularly in crystallography).
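To make the group-theoretic view of RSA concrete, here is a toy sketch in Python (the primes, exponent, and message are small illustration values; real RSA uses primes hundreds of digits long). Decryption works because the multiplicative group (Z/nZ)* has order phi(n), so raising to e and then to d, where e*d ≡ 1 (mod phi(n)), is the identity map on the group:

```python
# Toy RSA (NOT secure!) seen as group theory in (Z/nZ)*.
p, q = 61, 53            # small primes, illustration only
n = p * q                # modulus; the group (Z/nZ)* has order phi(n)
phi = (p - 1) * (q - 1)  # phi(n) = (p-1)(q-1) for distinct primes p, q
e = 17                   # public exponent, chosen coprime to phi
d = pow(e, -1, phi)      # private exponent: the inverse of e modulo phi

msg = 1234
cipher = pow(msg, e, n)          # encrypt: exponentiation in (Z/nZ)*
assert pow(cipher, d, n) == msg  # decrypt: x^(e*d) = x, by Lagrange's theorem
```

The three-argument `pow` (and `pow(e, -1, phi)` for the modular inverse, Python 3.8+) is exactly arithmetic in these groups.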

Groups are important because they model symmetry; fields, at least in coding theory and cryptography, are important because they codify very intricate combinatorics.

So, in computer science, whenever you watch a video online, make a phone call, purchase something over the internet, compress a file, send an email, or communicate with the Mars Rover, lots of groups and fields are being used behind the scenes.


Saying my bit here, since it's too long to fit into a comment. I apologize in advance for being a bit chatty. Well, this is a soft question, so the answer is going to be soft as well.

Several posters have emphasized some technological applications of abstract algebra. From the point of view of algebraists these are fine answers (surprisingly generously upvoted, actually; maybe the hearts of practitioners of abstract algebra warm to these). You will get different answers if you ask a different group of people. Some might refer to how algebraic structures give a playground for problems central to the study of complexity classes of algorithms. I don't know.

I describe two discussions I have had that I think are relevant to this question.

I once chatted with a professor in computer science. I suggested that maybe I should supplement our linear algebra lecture notes with a chapter on how orthogonal coordinate transformations (rotations and such) are applied in 3D graphics. I had found that homework problems related to this theme motivated some of my students. His reply was that it is not clear-cut. Most of the programmers the university spews out will end up working in teams. His point was that it is unnecessary for ALL the programmers to know about groups of rotations, because only a small subset of the team would work on the 3D graphics engine, if any at all. In the DOS era of homegrown code it was enough for SOME of the programmers on the team to know this inside out. But nowadays a lot of the hard work has been "outsourced" via a standard interface to the manufacturer of the graphics card (so I was told). True, SOME OTHER programmers still need to know enough to use that interface to good effect.
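For what it's worth, the group structure of rotations fits in a few lines of Python (a minimal hand-rolled sketch; a real engine would hand this to a library or the graphics card). Rotations about the z-axis compose by adding angles, every rotation has an inverse, and the zero rotation is the identity matrix, which is exactly what makes them a group:

```python
import math

def rot_z(theta):
    # 3x3 matrix rotating by theta radians about the z-axis
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(a, b):
    # composing two transformations = multiplying their matrices
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def close(a, b, eps=1e-9):
    return all(abs(a[i][j] - b[i][j]) < eps for i in range(3) for j in range(3))

# Group axioms in action:
assert close(matmul(rot_z(0.3), rot_z(0.5)), rot_z(0.8))   # closure: rotation + rotation = rotation
assert close(matmul(rot_z(0.3), rot_z(-0.3)), rot_z(0.0))  # inverses: rot_z(0) is the identity
```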

I think the same principles apply elsewhere. It is not at all necessary for most of the programmers to know about finite fields, even if they are working on a team building apps on top of a layer of error-correcting codes and/or cryptographic modules/protocols. Yes, these are fascinating applications of algebra, but even for those guys an in-depth understanding of the algebra involved is not a high priority.
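To give one concrete taste of what sits inside that error-correcting layer, here is a minimal Python sketch of the classical (7,4) Hamming code over the finite field GF(2), where addition is XOR: three parity bits suffice to locate, and hence fix, any single flipped bit.

```python
# (7,4) Hamming code over GF(2). Bit positions 1..7; positions 1, 2, 4
# hold parity bits, positions 3, 5, 6, 7 hold the four data bits.

def encode(d):
    # d: list of 4 data bits -> list of 7 code bits
    p1 = d[0] ^ d[1] ^ d[3]   # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]   # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):
    # c: list of 7 bits with at most one bit flipped; fixes it in place
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3   # the syndrome spells out the error position
    if pos:
        c[pos - 1] ^= 1
    return c
```

Real systems use far bigger fields (Reed-Solomon codes over GF(256), for instance, live on CDs, QR codes, and deep-space links), but the algebra is the same in spirit.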

However, when you are creating something new, it is clearly good to have people on your team who have a deeper understanding of the underlying principles. And "deep" means something other than what it means to the readers of this question. This is a pons asinorum to the other discussion I recall. During my short stint in the telecom industry I met a guy from Nokia-Siemens Networks. I was told he was the top patent inventor of the team from the Siemens side. I also recognized him as one of the guys who had competed at the IMO for West Germany at the same time I represented Finland. A modest dude, who explained the recipe for his success as: "Oh, most of my inventions are trivial applications of modular arithmetic; programmers and engineers don't understand it." I will testify that his last sentence is true. They learn about the binary mod operator and various rules around it, but they don't learn the bidirectional power of congruences, and they don't learn to think in terms of residue class rings; it's all remainders of divisions of integers to them. So, by being in the right place at the right time, you can dine out simply because you can work swiftly with periodically repeating discrete structures and patterns.
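A small, made-up example of the "think in residue class rings, not in remainders" point: in Z/10Z the powers of 7 cycle with period 4, so the last digit of 7^k is determined by k mod 4, with no big-integer arithmetic anywhere.

```python
# In the ring Z/10Z the powers of 7 cycle: 7, 9, 3, 1, 7, 9, 3, 1, ...
CYCLE = [7, 9, 3, 1]  # 7^1, 7^2, 7^3, 7^4 reduced mod 10

def last_digit_of_7_pow(k):
    # k >= 1; index into the period-4 cycle instead of computing 7**k
    return CYCLE[(k - 1) % 4]

assert last_digit_of_7_pow(100) == pow(7, 100, 10)  # both give 1
```

The "remainder of a division" mindset computes 7**100 and divides; the ring mindset sees a periodic structure and reads the answer off.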

Again, something that not all CS majors need to know, but "in the land of the blind, the one-eyed man is king". <- Sorry, I just couldn't resist. The upshot here is that we don't know what kind of an eye will become useful.

So the reason to learn a bit about abstract algebra is not its usefulness in any given set of applications. I guess (I truly don't think I am the right person to say this) it is more a question of having more tools to bring order to the chaos surrounding the (currently unknown) programming tasks that lie in your future.