Publishing research using outdated methods
What you call "an outdated method" another may call "the well-understood method".
In neuroscience, this is a very common occurrence. New techniques for analyzing different types of neural recordings appear each month across a number of journals, and each one aims to improve on a specific aspect of a predecessor. Unfortunately, the new techniques are exactly that—new—and therefore untested against large amounts of data with varied initial conditions. A good number of researchers will simply ignore the new techniques until they have been developed to a point of comfort. Even for those that do gain acceptance, they may not be appropriate for every type of analysis¹.
I'm unfamiliar with your specific case, but I have seen similar situations elsewhere in Econ, where older published techniques remain highly popular because (1) they're well understood and (2) the new techniques were created to fix problems that are not present in all datasets, or not relevant for a given analysis. The old fogies sometimes do have something to offer.
¹ In one case, a technique called DCM became widely popular in a very short period of time, and consequently was very quickly becoming widely misused. It got so bad that the authors actually published a paper titled "Ten simple rules for dynamic causal modeling" with the goal of educating researchers on how to use the technique. (Biomedical researchers in general don't have a great track record of performing world-class data analysis, but that's a separate story...)
What you are describing is not uncommon. In my field people still use methods developed 50 years ago. Some of these methods are still valid and have proven to be robust, some are flawed with known improvements, and some are downright logically inconsistent, but people still use them because of inertia.
Whether using an outdated method is a critical flaw in a paper depends on many factors. But it eventually comes down to whether the flaw in the method invalidates the main conclusion. For example, if the main result is qualitative, and the improvement from the new method is incremental, then it's not a big deal. If the result is supported by multiple lines of evidence, then the fact that one of them is flawed is a less severe problem. If the method is known to fail in special cases and it is clear that the data do not fall into such cases, then it is also not a big concern.
Overall, for better or worse, people are going to be more forgiving if the newer method is not well known or the improvement is marginal.
I'm currently ... doing a referee report on a paper... [Author did X] Is this acceptable?
You're the referee, so you tell us!
As a referee you have the authority to use your discretion here and decide what kind of recommendation you want to give to the editor. You have identified that the authors use an outdated method of analysis that has some problems highlighted in later literature. You should point this out in your review, and you will then need to decide how big an issue this is. Is the old method sufficiently poor that the analysis should be redone with the improved method from 1998? If so, then perhaps a revise and resubmit might be appropriate (assuming other aspects of the paper are okay).