Environment Introduction

  • Multipass VMs
  • mongo:latest

For an introduction to Multipass, you can check my other article “Cross-platform Ubuntu VMs Management Tool”.

ReplicaSet Cluster

Installing Nodes

multipass launch docker --name cluster-01 --disk 40G --cpus 2 --mem 4G
multipass launch docker --name cluster-02 --disk 40G --cpus 2 --mem 4G
multipass launch docker --name cluster-03 --disk 40G --cpus 2 --mem 4G

multipass list
Name                    State             IPv4             Image
cluster-01              Running           10.0.8.4         Ubuntu 22.04 LTS
                                          172.17.0.1
cluster-02              Running           10.0.8.5         Ubuntu 22.04 LTS
                                          172.17.0.1
cluster-03              Running           10.0.8.6         Ubuntu 22.04 LTS
                                          172.17.0.1

As the multipass list output shows, the three VMs received the IPs 10.0.8.4, 10.0.8.5, and 10.0.8.6 respectively; the 172.17.0.1 address on each node is Docker's default bridge (docker0) interface.
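If you would rather script the IP lookup than read it off the table, multipass info can emit JSON. A minimal sketch, assuming jq is installed on the host:

# Print the first (VM) address of each node; ipv4[1] would be Docker's bridge
for node in cluster-01 cluster-02 cluster-03; do
  multipass info "$node" --format json | jq -r --arg n "$node" '.info[$n].ipv4[0]'
done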

Getting the Latest MongoDB Image

Pull the latest Docker image of MongoDB on each node:

docker pull mongo:latest
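Instead of opening a shell on each VM, the same pull can be run from the host in one loop with multipass exec (the docker image launched above already ships with Docker installed):

for node in cluster-01 cluster-02 cluster-03; do
  multipass exec "$node" -- docker pull mongo:latest
done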

Starting MongoDB Nodes

multipass shell cluster-01
docker run -d --name mongodb --net host mongo:latest mongod --replSet betterde --bind_ip 10.0.8.4

multipass shell cluster-02
docker run -d --name mongodb --net host mongo:latest mongod --replSet betterde --bind_ip 10.0.8.5

multipass shell cluster-03
docker run -d --name mongodb --net host mongo:latest mongod --replSet betterde --bind_ip 10.0.8.6

The --replSet flag sets the name of the ReplicaSet (betterde here), and the same mongod command is run on each of the three VMs, with each instance bound to its own VM's IP via --bind_ip. Because the containers use --net host, each mongod listens directly on its VM's port 27017.
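Before initializing the ReplicaSet, it is worth confirming that all three containers came up; a quick check from the host:

for node in cluster-01 cluster-02 cluster-03; do
  multipass exec "$node" -- docker ps --filter name=mongodb --format '{{.Names}}: {{.Status}}'
done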

Initializing the ReplicaSet

# Log in to the VM node
multipass shell cluster-01

# Enter the MongoDB container
docker exec -it mongodb bash

# Connect to MongoDB with mongosh (mongod is bound to the VM's IP, not localhost)
mongosh --host 10.0.8.4

# Initialize the ReplicaSet with an explicit host, so the first member is
# registered by its IP rather than by the VM's hostname
rs.initiate({ _id: "betterde", members: [{ _id: 0, host: "10.0.8.4:27017" }] })

# Add the remaining nodes
rs.add("10.0.8.5:27017")
rs.add("10.0.8.6:27017")

If everything goes well, the cluster setup is complete. We can use the following command to check the cluster status:

rs.status();
{
  set: 'betterde',
  date: ISODate("2022-10-17T10:21:08.985Z"),
  myState: 1,
  term: Long("1"),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long("2000"),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1666002061, i: 1 }), t: Long("1") },
    lastCommittedWallTime: ISODate("2022-10-17T10:21:01.056Z"),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1666002061, i: 1 }), t: Long("1") },
    appliedOpTime: { ts: Timestamp({ t: 1666002061, i: 1 }), t: Long("1") },
    durableOpTime: { ts: Timestamp({ t: 1666002061, i: 1 }), t: Long("1") },
    lastAppliedWallTime: ISODate("2022-10-17T10:21:01.056Z"),
    lastDurableWallTime: ISODate("2022-10-17T10:21:01.056Z")
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1666002061, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate("2022-10-17T09:47:06.626Z"),
    electionTerm: Long("1"),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1666000026, i: 1 }), t: Long("-1") },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1666000026, i: 1 }), t: Long("-1") },
    numVotesNeeded: 1,
    priorityAtElection: 1,
    electionTimeoutMillis: Long("10000"),
    newTermStartDate: ISODate("2022-10-17T09:47:06.643Z"),
    wMajorityWriteAvailabilityDate: ISODate("2022-10-17T09:47:06.655Z")
  },
  members: [
    {
      _id: 0,
      name: '10.0.8.4:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 2200,
      optime: { ts: Timestamp({ t: 1666002061, i: 1 }), t: Long("1") },
      optimeDate: ISODate("2022-10-17T10:21:01.000Z"),
      lastAppliedWallTime: ISODate("2022-10-17T10:21:01.056Z"),
      lastDurableWallTime: ISODate("2022-10-17T10:21:01.056Z"),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1666000026, i: 2 }),
      electionDate: ISODate("2022-10-17T09:47:06.000Z"),
      configVersion: 6,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: '10.0.8.5:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 2006,
      optime: { ts: Timestamp({ t: 1666002061, i: 1 }), t: Long("1") },
      optimeDurable: { ts: Timestamp({ t: 1666002061, i: 1 }), t: Long("1") },
      optimeDate: ISODate("2022-10-17T10:21:01.000Z"),
      optimeDurableDate: ISODate("2022-10-17T10:21:01.000Z"),
      lastAppliedWallTime: ISODate("2022-10-17T10:21:01.056Z"),
      lastDurableWallTime: ISODate("2022-10-17T10:21:01.056Z"),
      lastHeartbeat: ISODate("2022-10-17T10:21:08.915Z"),
      lastHeartbeatRecv: ISODate("2022-10-17T10:21:08.916Z"),
      pingMs: Long("2"),
      lastHeartbeatMessage: '',
      syncSourceHost: '10.0.8.4:27017',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 6,
      configTerm: 1
    },
    {
      _id: 2,
      name: '10.0.8.6:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 2001,
      optime: { ts: Timestamp({ t: 1666002061, i: 1 }), t: Long("1") },
      optimeDurable: { ts: Timestamp({ t: 1666002061, i: 1 }), t: Long("1") },
      optimeDate: ISODate("2022-10-17T10:21:01.000Z"),
      optimeDurableDate: ISODate("2022-10-17T10:21:01.000Z"),
      lastAppliedWallTime: ISODate("2022-10-17T10:21:01.056Z"),
      lastDurableWallTime: ISODate("2022-10-17T10:21:01.056Z"),
      lastHeartbeat: ISODate("2022-10-17T10:21:08.915Z"),
      lastHeartbeatRecv: ISODate("2022-10-17T10:21:08.912Z"),
      pingMs: Long("2"),
      lastHeartbeatMessage: '',
      syncSourceHost: '10.0.8.4:27017',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 6,
      configTerm: 1
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1666002061, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("0000000000000000000000000000000000000000", "hex"), 0),
      keyId: Long("0")
    }
  },
  operationTime: Timestamp({ t: 1666002061, i: 1 })
}

The rs.status() output shows one PRIMARY (10.0.8.4) and two SECONDARY members (10.0.8.5 and 10.0.8.6), so all three nodes in the cluster are running normally.
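As a final sanity check, a write through the primary should be readable from a secondary. A minimal sketch, again assuming mongosh on the host (the test database and demo collection are just placeholder names):

# Write one document through the primary
mongosh "mongodb://10.0.8.4:27017" --eval \
  'db.getSiblingDB("test").demo.insertOne({ msg: "hello from betterde" })'

# Read it back from a secondary (secondary reads need an explicit read preference)
mongosh "mongodb://10.0.8.5:27017/?directConnection=true" --eval \
  'db.getMongo().setReadPref("secondary"); db.getSiblingDB("test").demo.findOne()'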

I hope this is helpful. Happy hacking…