MongoDB Series - Enabling Authentication and Access Control

Introduction⌗
By default, MongoDB ships with authentication and access control disabled, so they must be configured and enabled manually. Here we will use the default SCRAM authentication mechanism.
The general process:
- Create users
- Generate a key file for authentication between nodes
- Stop cluster nodes
- Delete existing node containers
- Start new containers with authentication enabled
Creating Users⌗
mongosh "mongodb://<IP>:<PORT>,<IP>:<PORT>,<IP>:<PORT>/?replicaSet=betterde&readPreference=primaryPreferred"
admin = db.getSiblingDB("admin")
admin.createUser(
  {
    user: "root",
    pwd: passwordPrompt(),
    roles: [ { role: "root", db: "admin" } ]
  }
);
Enter password
*************
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1677817269, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("79633f9be574bdc6265a2762e9bcda55d7864ab2", "hex"), 0),
      keyId: Long("7205813851093204998")
    }
  },
  operationTime: Timestamp({ t: 1677817269, i: 1 })
}
db.getUsers();
{
  users: [
    {
      _id: 'admin.root',
      userId: new UUID("42eca873-b5fa-4623-bd89-02f00edce81f"),
      user: 'root',
      db: 'admin',
      roles: [ { role: 'root', db: 'admin' } ],
      mechanisms: [ 'SCRAM-SHA-1', 'SCRAM-SHA-256' ]
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1677816808, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("092d36c40817c92cc135574924774d0d24e02538", "hex"), 0),
      keyId: Long("7205813851093204998")
    }
  },
  operationTime: Timestamp({ t: 1677816808, i: 1 })
}
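Beyond the root user, it is common to also create a less-privileged user for applications. A hedged sketch in the same mongosh session (the user name "appuser" and the "app" database are illustrative, not part of the cluster above):

```javascript
// Create an application user scoped to a single database,
// rather than connecting applications as root.
admin = db.getSiblingDB("admin")
admin.createUser(
  {
    user: "appuser",                                // illustrative name
    pwd: passwordPrompt(),                          // prompted, not stored in history
    roles: [ { role: "readWrite", db: "app" } ]     // "app" is a placeholder database
  }
);
```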
After creating the user, we need to enable authentication for the cluster nodes.
Generating a Key File⌗
The authentication configuration for a MongoDB replica set cluster differs from that of a single node: in replica-set mode, all nodes must share the same key file for internal (node-to-node) authentication.
openssl rand -base64 756 > mongodb.key
chmod 400 mongodb.key
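You can quickly sanity-check the result (a minimal sketch using GNU coreutils; on macOS, `stat -f '%Lp'` replaces `stat -c '%a'`). The key is regenerated here so the snippet is self-contained:

```shell
# 756 random bytes base64-encode to 1008 characters, which openssl
# wraps into 16 lines of at most 64 characters each.
openssl rand -base64 756 > mongodb.key
chmod 400 mongodb.key
stat -c '%a' mongodb.key    # permissions: 400 (owner read-only)
wc -c < mongodb.key         # 1008 chars + 16 newlines = 1024 bytes
```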
If your cluster runs in Docker containers, remember to change the owner of mongodb.key to the user and group the container runs as; otherwise mongod will fail to start because it cannot read the key file.
If you are not sure which user that is, check the owner of MongoDB's data files, for example:
ls -la /data
total 24
drwxr-xr-x 4 root root 4096 Feb 2 23:28 .
drwxr-xr-x 1 root root 4096 Mar 3 03:39 ..
drwxr-xr-x 2 mongodb mongodb 4096 Feb 2 23:28 configdb
drwxr-xr-x 5 mongodb root 12288 Mar 3 06:32 db
id mongodb
uid=999(mongodb) gid=999(mongodb) groups=999(mongodb)
After obtaining the UID and GID of the mongodb user in the container, set the owner and group of the previously generated mongodb.key file accordingly:
sudo chown 999:999 mongodb.key
Now you can use this key as the authentication credential between cluster nodes.
Restarting MongoDB Containers⌗
sudo docker stop mongodb
sudo docker rm mongodb
sudo docker run -d \
--name mongodb \
-p 27017:27017 \
-v /data/mongodb:/data/db \
-v /data/mongodb.key:/srv/mongodb/mongodb.key \
mongo:latest \
mongod --auth --keyFile=/srv/mongodb/mongodb.key --replSet betterde --bind_ip 0.0.0.0
If everything goes well, the MongoDB service should start normally. Here we mainly added two parameters:
- --auth: Enable authentication
- --keyFile: Set the key file for authentication between cluster nodes
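If you prefer a configuration file over command-line flags, the same settings can be expressed in mongod.conf. A sketch, with the paths matching the docker run example above:

```yaml
# mongod.conf equivalent of the flags above
security:
  authorization: enabled            # same as --auth
  keyFile: /srv/mongodb/mongodb.key # same as --keyFile
replication:
  replSetName: betterde             # same as --replSet
net:
  bindIp: 0.0.0.0                   # same as --bind_ip
  port: 27017
```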
The above steps need to be executed on all other nodes in the cluster!
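Once every node has been restarted with authentication enabled, you can verify the cluster by connecting with the credentials created earlier (a sketch; substitute your own hosts, and mongosh will prompt for the password):

```shell
# Connect as root, authenticating against the admin database.
mongosh "mongodb://<IP>:<PORT>,<IP>:<PORT>,<IP>:<PORT>/?replicaSet=betterde&authSource=admin" \
  --username root --authenticationDatabase admin

# Inside the shell, check that all replica set members are healthy:
# rs.status().members.map(m => m.stateStr)
```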
I hope this was helpful. Happy hacking…