Counter strategy against group that repeatedly does strategic self-citations and ignores other relevant research
First, I think it's important to have a more precise diagnosis of the problem.
Self-citations by themselves are not necessarily a problem---even 30% self-citations might be quite reasonable depending on circumstances (e.g., work in a very narrow but deep niche). Likewise, re-inventing a result is not necessarily a problem if the prior result is in another community and your community was genuinely unaware of it.
Failing to properly credit the work of others when made aware of it, on the other hand, is definitely a problem, and can be addressed by the community of peers:
For journal articles, the peer reviewers can simply insist on citations. It's very easy to add a citation in revision, and there is pretty much never a reason not to cite closely related work. Furthermore, reviewers get to see the paper again and don't have to say "accept" until they are satisfied. Thus, unless the editor thinks the failure to cite is OK, it's easy to force the citations to be added as a condition of publication (or the authors can go to a crap journal instead, which is worse for them).
For conference papers, it is more difficult to insist on citation, since there is often no second round of review. Here, I would recommend invoking the ongoing pattern of failure to cite in a review as a reason for giving a low score, thus signaling that this does not appear to be an accidental oversight.
For slides, of course, there is no pre-presentation review, but there are post-presentation questions: "I noticed that you didn't mention Smith's prior work on this problem; would you care to comment on its relationship to yours?" or "How do your results compare to Xu et al.'s results on this problem?" Embarrass them in front of their peers, and they will be motivated to improve; more importantly, those peers will be motivated to make the same requests.
None of these strategies are things you can really do by yourself. But if it's a legitimate issue (as opposed to, for example, a potentially distorted view of the significance of your own work), then other peers will hopefully have noticed as well and can do the same.
What you are encountering is unfortunately common, and was even 20 years ago. I don't know if there is an English equivalent, but in German it is called a Zitierkartell (citation cartel). I first encountered it while researching phase-unwrapping algorithms for speckle interferometry during my Diplom: there were groups working on network algorithms, region-growing algorithms, branch-cut algorithms, and least-squares algorithms, and every single one cited only its own research and that of friendly groups.
What can you do against it? Essentially nothing from any higher authority. The only thing that prohibits it is something called the "scientific ethos", which now has the conservation status of "Vulnerable" or, more pessimistically, "Endangered". It is not forbidden to ignore other people's work. Worse, it works: by citing your own and friendly groups' work and getting cited in return, the citation index is inflated. It's the same method as the Salamitaktik (salami-slicing): instead of putting the content into one paper, spread it thin over five to get more citations (in English: "least publishable units").
So the only way to punish the perpetrators is to do the same in return, so that over time several citation cartels grow, each fighting for funding. Since citation cartels are unethical, and unethical behavior seldom comes alone, you can also look up their papers and search intently for data fudging and scientific misconduct. It goes without saying that your own record should be impeccable, because you can expect retaliation in return.
Science is not a competition.
The "counter strategy" is to ignore the "problem" and do excellent science yourself. They are making fools of themselves, and most other scientists will realise that as well. Behave like a responsible scientist: cite them if you need to, and only cite their relevant and interesting papers (as you would normally do anyway).