What good is engineering research with no practical relevance?

The short answer to your question is that you are vastly overestimating your own ability, and that of other engineers, to judge which techniques will ever have practical relevance.

I think it was Michael Stonebraker, a Turing-Award-winning computer scientist with no lack of practical impact, who said that the sweet spot for applied academic research is techniques that are about ten years away from being widely implementable. If you limit yourself to things you can already do today, you will fail to propose the kind of radically new developments that should, at least in theory, distinguish academic research from other drivers of innovation, such as startup companies or industrial R&D. Incidentally, if the lack of immediate impact of your work distresses you, you should ask yourself whether you would not find higher job satisfaction in a startup or an industrial lab.

I find your example of self-learning power grids particularly unconvincing. Rewind time a few years and apply your arguments to research into automated driving: I am sure you would have found plenty of people who considered that research a waste of time. Driving is certainly a safety-critical field, and automotive is a highly regulated industry. Algorithms for automated driving assistance completely failed, and to some extent still fail, to address the practical concerns of many stakeholders as well as governmental safety requirements. And yet here we are. I am not sure the same will happen with power grids, but it is entirely plausible that it will.

You may also be interested in reading about TRLs (Technology Readiness Levels), as used for instance by the European Union's framework programmes and by NASA.

(Figure: the EU's Technology Readiness Level scale, TRL 1 through TRL 9.)

The basic concept here is that academic research is usually best suited to bringing ideas from the lowest levels (TRL 1 or 2) up to TRL 3 or 4. The "Matlab implementations" you complain about may very well be exactly the experimental proof of concept that TRL 3 calls for. This is very much in line with the role that many large organizations envision for academic research labs in the grander scheme of technological progress.


Things with "no practical relevance" are not necessarily useless. They may just be "waiting for their time."

For instance, ionic liquids were first discovered in the early 1900s, but they didn't catch on economically or industrially until the early 2000s, when they were "rediscovered" and brought to prominence as "green solvents."

So it's probably unfair to say something has no possible practical relevance. It may just not be obvious yet where it could be used in the future.

Another point to consider is the possibility that someone holds an engineering position but is not really doing what is conventionally considered "engineering." This may be the result of a hiring decision, or of someone finding a home based on where they teach rather than on where their research best fits. (That is my situation: I am an engineer by training, but my research could just as easily fit into a chemistry or materials science department.)


Research which introduces new methods does not have to demonstrate their practicality to be useful. Take an example from a very applied area: numerical solvers for ODEs. The vast majority of methods that have been created are not used in production-quality ODE solvers; they just aren't efficient enough. But having a comprehensive literature to pull from can be really helpful when you're trying to learn about the possibilities. Someone outlining a method that isn't very efficient might still have contributed new ideas for how to adapt to a certain case, ideas that someone else can later use to create something that is actually practical. And having a publication which implicitly highlights "look, this thing really only works in special cases because of X" helps someone else in the future when they have the same idea (it's much quicker and easier to read a paper and go "okay, that doesn't work as well as I'd hoped" than to build it yourself).
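To make that concrete, here is a minimal sketch of my own (illustrative names and test problem, not code from any particular paper): fixed-step forward Euler is the textbook ODE method, thoroughly documented in the literature, yet no production solver uses it as its workhorse, because adaptive, higher-order schemes reach the same accuracy far more cheaply.

    # Purely illustrative: fixed-step forward Euler for y' = f(t, y).
    # "Known but not production-quality": correct, well documented,
    # and first-order accurate, so production solvers prefer adaptive,
    # higher-order methods instead.
    import math

    def euler(f, t0, y0, t_end, n_steps):
        """Integrate y' = f(t, y) from t0 to t_end with n_steps Euler steps."""
        h = (t_end - t0) / n_steps
        t, y = t0, y0
        for _ in range(n_steps):
            y += h * f(t, y)  # first-order update: error shrinks only linearly in h
            t += h
        return y

    # Test problem y' = -y, y(0) = 1, with exact solution exp(-t).
    approx = euler(lambda t, y: -y, 0.0, 1.0, 5.0, 1000)
    exact = math.exp(-5.0)
    print(f"Euler: {approx:.6f}  exact: {exact:.6f}  error: {abs(approx - exact):.2e}")

The point is not this code itself but the pattern: the method "works," the literature records exactly when and why it is inefficient, and that record is what saves the next person from rebuilding it to find out.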

This also relates back to publication bias. Publishing that something doesn't work is just as valuable as publishing that something does. Of course, modern publication practices demand "significance," so researchers generally have to be sly about how they write the abstract ("we find that under conditions X, Y, and Z this method may be more efficient than current standard choices"), but it's usually clear from the paper what it actually means in practical terms.

In the end there's a wave of information that moves forward and almost accidentally stumbles upon ideas which work, and these stick and become used in industry. Meanwhile, research continues onward to see what else it can find.