MongoDB Replica Set Member State is "OTHER"

I was able to reproduce the scenario, and this is what rs.status() returned when I ran it on the mongod whose shell prompt was showing OTHER:

{
    "state" : 10,
    "stateStr" : "REMOVED",
    "uptime" : 41,
    "optime" : {
        "ts" : Timestamp(1518192445, 1),
        "t" : NumberLong(26)
    },
    "optimeDate" : ISODate("2018-02-09T16:07:25Z"),
    "ok" : 0,
    "errmsg" : "Our replica set config is invalid or we are not a member of it",
    "code" : 93,
    "codeName" : "InvalidReplicaSetConfig",
    "operationTime" : Timestamp(1518192445, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1518193246, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}

Let's assume the faulty mongod is the 3rd member (index 2) in the members array returned by rs.conf(). Go to your primary and remove the faulty mongod from the replica set:

rsconf = rs.conf()
rsconf.members = [rsconf.members[0], rsconf.members[1]] // indexes 0 and 1 are the working members; the faulty 3rd member is omitted
rs.reconfig(rsconf)
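
If the faulty member is not at a fixed position, a variant of the same step (just a sketch; the host string 10.00.00.00:27019 matches the placeholder used in the next step) is to filter the members array by host instead of rebuilding it by index:

rsconf = rs.conf()
// keep every member except the faulty one, matched by its host:port string
rsconf.members = rsconf.members.filter(function (m) { return m.host !== "10.00.00.00:27019" })
rs.reconfig(rsconf)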

Now restart the mongod that was showing as OTHER, then go back to your primary and add the member again. Let's assume its IP is 10.00.00.00 and its port is 27019:

rs.add("10.00.00.00:27019")

This fixed the issue, and the status changed from OTHER to SECONDARY.
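
You can watch that transition from the primary with rs.status() (an optional check, not part of the original steps):

// print each member's host and current replication state
rs.status().members.forEach(function (m) { print(m.name + " : " + m.stateStr) })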

Note that rs.reconfig() will reset all client connections. You may need a maintenance window of a minute or so to do the reconfiguration.


The replica set config is not set correctly.

You can use the following command to initiate the replica set:

rs.initiate({
    _id: "rs0",
    version: 1,
    members: [
        { _id: 0, host: "localhost:27017" }
    ]
})
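
If the initiate succeeds, the node should elect itself PRIMARY within a few seconds, which you can confirm with rs.status() (a small extra check, not part of the original answer):

rs.status().members[0].stateStr   // expect "PRIMARY" for this single-member set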

If the replica set has already been initiated, you may get an error message like I did:

singleNodeRepl:OTHER> rs.initiate({ _id: "rs0", members: [ { _id: 0, host : "localhost:27017" } ] } )
{
    "info" : "try querying local.system.replset to see current configuration",
    "ok" : 0,
    "errmsg" : "already initialized",
    "code" : 23,
    "codeName" : "AlreadyInitialized"
}

The solution is to force-reconfigure the replica set with rs.reconfig():

singleNodeRepl:OTHER> rsconf = rs.conf()
singleNodeRepl:OTHER> rsconf.members = [{_id: 0, host: "localhost:27017"}]
[ { "_id" : 0, "host" : "localhost:27017" } ]
singleNodeRepl:OTHER> rs.reconfig(rsconf, {force: true})
{ "ok" : 1 }
singleNodeRepl:OTHER>
singleNodeRepl:SECONDARY>
singleNodeRepl:PRIMARY>
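
Once the prompt reaches PRIMARY, a final sanity check (added here, not part of the original session) is:

rs.status().myState   // returns 1 once this node is PRIMARY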