The curious case of self-citations, and how to monitor self-citers?

I don't think there is a good way around self-citations, since most of the time, your current research builds upon your previous research, and therefore, your previous research (or that of your collaborators) is relevant to your current research. However, from my experience, self-citations form a small part of the overall number of citations.

Assuming the author's group is not the only one working on a topic, there seem to be two other problems here:

  1. other people don't seem to cite his work, which could mean that his work is being ignored, isn't visible, or simply isn't good or relevant enough.

  2. basing quality on the number of citations is flawed. Citation counts are closely related to an author's h-index (the largest h such that h of their papers have at least h citations each), which is regularly criticized; much of that criticism applies equally to raw citation counts.

To tackle the "problem" of self-citations (in the majority of cases I would not call it a problem), some citation-based measures exclude self-citations. This does of course get tricky when collaborators cite someone's work, but the bulk of self-citations is removed this way.
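To make the two ideas above concrete, here is a minimal sketch of an h-index computed with and without self-citations. The data model (each paper's author list plus the author lists of its citing papers) is a hypothetical simplification for illustration, not any real database schema; note how excluding any citation that shares a co-author also drops collaborator citations, which is exactly the tricky case mentioned above.

```python
def h_index(citation_counts):
    """h-index: the largest h such that h papers have >= h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def non_self_citations(paper_authors, citing_author_lists):
    """Count citations whose author list shares no one with the cited paper.

    This also excludes collaborator self-citations: any shared co-author
    removes the citation, so it over-corrects in that tricky case.
    """
    authors = set(paper_authors)
    return sum(1 for citers in citing_author_lists
               if not authors & set(citers))

# Hypothetical record: (authors of the paper, author lists of citing papers).
papers = [
    (["Alice", "Bob"], [["Alice"], ["Carol"], ["Dan"]]),
    (["Alice"],        [["Alice"], ["Alice", "Eve"], ["Frank"]]),
    (["Alice", "Bob"], [["Carol"], ["Dan"], ["Eve"]]),
]

raw   = [len(citers) for _, citers in papers]             # [3, 3, 3]
clean = [non_self_citations(a, c) for a, c in papers]     # [2, 1, 3]

print(h_index(raw))    # 3
print(h_index(clean))  # 2
```

In this toy data, stripping self-citations drops the h-index from 3 to 2, which is the kind of correction those measures aim for.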


To answer the questions directly:

  1. There is no such "system" that I know of. There are a variety of independent "watchdog" groups, bloggers, etc., who sometimes draw attention to what they see as abuses in academic publishing, and occasionally notorious cases are discussed in academic or popular media. But I'm not aware that these people or organizations monitor these issues in a systematic way, beyond individual cases. There is certainly no single global body overseeing the issue.

  2. Again, there is no system. Academic publishing is decentralized and every publisher makes their own decisions about how to run their journals. Individual publishers might have their own internal blacklists of authors who have committed abuses in that publisher's journals, and whose work will no longer be considered by that publisher. But there is no global blacklist, nor any central organization that would have the power to enforce it. Many people would consider such a blacklist to be unethical - such systems have in the past been abused to exclude people for inappropriate reasons - and in some jurisdictions the idea of an industry-wide blacklist might be illegal.

  3. Institutions make their own decisions. They might not know about such abuses until they are brought to their attention by someone else. They might or might not consider them significant enough to fire a researcher. They might be swayed by the researcher's success in other areas. There might be disagreement within the institution about what to do. In many places there are tenure policies that can make it difficult to fire a researcher; it might require a broad consensus among faculty and administration that the researcher's actions are unacceptable, as well as requiring a long and costly legal or quasi-legal process. Institutions might not feel that self-citation abuse is worth the effort.

  4. Arguably. However, that is a matter of opinion and off-topic for this site.


1) I suspect social values concerning self-citations vary from field to field, but in my (partly-former) sub-discipline within AI, self-citations are not frowned upon as far as I can see.

2) Self-citing is not necessarily a direct way to increase the h-index, since some metrics exclude self-citations. Instead, it boosts the visibility of the paper, which in turn attracts genuine citations.

3) Even if self-citing is a bad practice, the few who self-cite heavily push others to do the same; that is, some people may simply be doing what they feel they have to do.

People are gaming academia in such cases, but not Science itself. I don't think citations and the h-index are even partly constitutive of Science. So if self-citation abuse is wrong, it violates an academic rather than a scientific norm, in my opinion. The 'real science' remains science regardless of whether it is cited a lot or not at all.