Changes from all commits (73 commits)
95a9e05
add option single mongo server
crosmuller Jan 8, 2026
f5c3025
Fix order of tasks so that it works for single and cluster mongo servers
crosmuller Jan 14, 2026
39df4d2
introduce varaible mongo_mode for seamless transition to standalone s…
crosmuller Jan 14, 2026
815cc86
option to change the admin user after creation
crosmuller Jan 14, 2026
5bb0fdb
authentication check
crosmuller Jan 15, 2026
5c50a3f
add when statement
crosmuller Jan 15, 2026
3c746f8
restore no log
crosmuller Jan 15, 2026
a35d44a
fix merge conflict
crosmuller Feb 9, 2026
a92d8d2
Merge branch 'main' into feature/refactor_mongo
crosmuller Feb 9, 2026
503bf25
fix check mode change
crosmuller Feb 9, 2026
c21cea9
remove obsolete tasks
crosmuller Feb 9, 2026
dfa498f
use -vv in debug
crosmuller Feb 9, 2026
78f6244
separate standalone and cluster config
crosmuller Feb 9, 2026
9e4d1c4
fix task file name typo
crosmuller Feb 9, 2026
e1f6796
more debugging
crosmuller Feb 9, 2026
040a1ae
different order
crosmuller Feb 9, 2026
fc6ad3a
disbale no_log temprarily
crosmuller Feb 9, 2026
6190532
add config later
crosmuller Feb 9, 2026
171cdfc
first stab at serial 1 cluster creation
crosmuller Feb 11, 2026
1d183f8
first stab at serial 1 cluster creation
crosmuller Feb 11, 2026
5f3369f
second stab at serial 1 cluster creation
crosmuller Feb 11, 2026
37b2a75
Works for a replication cluster with only primary, to be continued
crosmuller Feb 11, 2026
7d31bca
work in progress
crosmuller Feb 16, 2026
c8ce1c8
standalone deployment works
crosmuller Mar 23, 2026
8cd50df
only configure cluster when variabel is set
crosmuller Mar 25, 2026
ea8d1c1
add some comments
crosmuller Mar 25, 2026
f4e39cf
kind of works
crosmuller Mar 25, 2026
20a048f
replication-set_name variable rename
crosmuller Mar 25, 2026
c200530
clear error message broken cluster
crosmuller Mar 27, 2026
8876227
cluster intialise almost works
crosmuller Mar 27, 2026
a622786
cluster creation works
crosmuller Mar 30, 2026
7215c52
some cleanuo
crosmuller Mar 30, 2026
ac4c45c
add reconfigure option
crosmuller Mar 30, 2026
b88be2c
change readme, add arbiter option
crosmuller Apr 1, 2026
cd5b3b0
add more reconfigure info
crosmuller Apr 1, 2026
6f56e74
change readme
crosmuller Apr 1, 2026
a0a404e
better errors
crosmuller Apr 1, 2026
1ac754c
last changes
crosmuller Apr 2, 2026
b9c8c56
more documentation and fix for writeconcern number format issue
crosmuller Apr 13, 2026
6feb77b
more documentation
crosmuller Apr 13, 2026
6ff561e
add role default
crosmuller Apr 22, 2026
64be4be
pymongo version
crosmuller Apr 22, 2026
6f55870
pymongo version
crosmuller Apr 22, 2026
4eb8971
pymongo version
crosmuller Apr 22, 2026
1521493
pymongo version
crosmuller Apr 22, 2026
8d48472
rename replica_set to mongo_
crosmuller Apr 22, 2026
e3d4e69
fix merge conflict
crosmuller Apr 24, 2026
abf763d
fix merge conflict
crosmuller Apr 24, 2026
15cce53
add task
crosmuller Apr 24, 2026
4717b5f
add error
crosmuller Apr 29, 2026
371da40
fix fail pn even numbers
crosmuller Apr 30, 2026
e23fc75
readable cluster config check
crosmuller Apr 30, 2026
7a5dc77
some typos
crosmuller Apr 30, 2026
08c6106
better error message
crosmuller Apr 30, 2026
c0b6700
readme item checked
crosmuller Apr 30, 2026
b58c07d
fix mongoshrc
crosmuller Apr 30, 2026
43a7380
add todo items
crosmuller May 1, 2026
ce4e2ca
some typos
crosmuller May 4, 2026
e288ea4
fixed users error
crosmuller May 4, 2026
f7db56c
better check file name
crosmuller May 4, 2026
0c8716a
fix mebers format
crosmuller May 4, 2026
9f9530a
some housekeeping
crosmuller May 4, 2026
58c50e3
some housekeeping
crosmuller May 4, 2026
f7176ee
add comment
crosmuller May 6, 2026
60910c0
change check or standalone mode will not get users
crosmuller May 6, 2026
cf9a0f9
fix users error no role and disable logging
crosmuller May 7, 2026
27d64ff
quote gedeoe
crosmuller May 7, 2026
98b58f4
fix mongoshrc for standalone
crosmuller May 7, 2026
aeef30c
no output
crosmuller May 7, 2026
bf8f194
do not check cluster memers in standalone
crosmuller May 7, 2026
246ecaf
update example
crosmuller May 7, 2026
a6fb766
keep mongo_port
crosmuller May 7, 2026
10ac1d1
keep mongo_port
crosmuller May 7, 2026
2 changes: 1 addition & 1 deletion environments/template/group_vars/mongo_servers.yml
@@ -1,5 +1,5 @@
---
replica_set_name: my_mongo_cluster
mongo_replica_set_name: my_mongo_cluster

mongo_cluster_members:
- host: "mongo3.example.com:{{ mongo_port }}" # arbiter first or change mongo_arbiter_index
2 changes: 1 addition & 1 deletion environments/template/secrets/secret_example.yml
@@ -13,7 +13,7 @@ mongo_passwords:
oidcng: secret
myconext: secret

mongo_admin_password: secret
mongo_admin_password: secret # this works for a first-time install; if you change it later, you will have to do it manually
mongo_ca_passphrase: secret

engine_api_metadata_push_password: secret
29 changes: 27 additions & 2 deletions roles/mongo/README.md
@@ -14,6 +14,31 @@ Set the mongo_cluster_private_key variable encrypted in host_vars

Please review the official Mongo documentation for more information.

# Mongo deployment

To avoid surprises, you can enable or disable cluster configuration with the boolean option mongo_configure_cluster. The role will only initiate or reconfigure the cluster when this is true (the safest option is to pass -e mongo_configure_cluster=true with your deployment whenever cluster configuration is necessary).

Another issue is the serial value: it is safest to set it to 1 in your playbook. If it is higher, multiple mongo nodes may be restarted at once, which can break your cluster. However, when you initialise a new cluster, the tasks need to run in parallel, so serial needs to be as high as the number of nodes. We handle this with a playbook variable named serial_number that defaults to 1. If cluster initialisation or reconfiguration is necessary, use -e "serial_number=<AMOUNT_OF_CLUSTERMEMBERS>"


See also https://docs.ansible.com/projects/ansible/latest/playbook_guide/playbooks_strategies.html#setting-the-batch-size-with-serial
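The serial pattern described above can be sketched as a minimal play (the play name, group, and role name here are assumptions; adapt them to your own playbook):

```yaml
# Hypothetical play: the batch size comes from serial_number and defaults to the safe value 1.
# Pass -e "serial_number=3" when initialising or reconfiguring a 3-node cluster.
- name: Deploy mongo
  hosts: mongo_servers
  serial: "{{ serial_number | default(1) }}"
  become: true
  roles:
    - mongo
```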

# Cluster reconfiguration

Warning: the cluster reconfiguration option in the mongodb_replicaset module is experimental, and you can only add or remove one node at a time.

# Todo
- [x] Check mongo_replication_roles and give a clear fail message when not set
- [ ] Add an option to change the already existing admin user; for now, change the password manually and update the ansible config accordingly
- [x] Add the possibility for adding and removing cluster members
- [x] Add the possibility for a standalone mongo server
- [x] Cluster changes can be enabled or disabled
- [ ] Reconfigure cluster always reports changed
- [ ] Initialise cluster always reports changed
- [ ] Check mode for write concern change tasks does not report a change; the same holds for any other mongodb_shell task: "remote module (community.mongodb.mongodb_shell) does not support check mode"
- [x] Clearer error message for an even number of votes
- [x] Role refuses to add users when a new cluster is built (3 nodes) (cannot add users on a broken cluster)
- [x] It would be helpful if the role (for example primary) is not defined in host_vars but in the mongo_cluster_members array
- [x] Removing the primary from the cluster does not work, and the error is unclear; this is related to the todo above
- [ ] Is it necessary to make votes configurable?
- [x] Preflight check: are all cluster members in the inventory and in the mongo_servers group
- [ ] Standalone mongo also requires cluster certificates; not logical, although it doesn't hurt
56 changes: 43 additions & 13 deletions roles/mongo/defaults/main.yml
@@ -13,35 +13,37 @@ mongo_servers: [] # Set this in group_vars
# Not all mongo servers in the inventory are cluster members, so we use a separate list for this.
# Set this in group_vars of your environment(s). The arbiter should go first, or change the mongo_arbiter_index.
# mongo_cluster_members:
# - host: "mongoarbiter.example.com:27017"
# - host: "mongoarbiter.example.com"
# priority: 1 # can vote, cannot become primary
# - host: "mongo2.example.com:27017"
# port: 27017
# - host: "mongo2.example.com"
# priority: 2
# - host: "mongo1.example.com:27017"
# port: 27017
# - host: "mongo1.example.com"
# priority: 3
# mongo_arbiter_index: 0

# The replication role
# mongo_replication_role: # Set this in host_vars, it can have the values: "primary", "secondary" or arbiter
# port: 27017

# Todo: there is a link between mongo_replication_role and priority (arbiter is priority 1, primary the highest) so
# setting them separately is not ideal.

# The port for mongo server
mongod_port: 27017
mongo_port: 27017

# The password for admin user
mongo_admin_pass: "{{ mongo_admin_password }}" # Set this in secrets
# mongo_admin_password: # set this in secrets

# Are we using a cluster?
mongo_mode: "cluster" # cluster or standalone

# The name of the replication set
replica_set_name: "{{ instance_name }}" # Set this in group_vars
mongo_replica_set_name: "{{ instance_name }}" # Set this in group_vars

# Add a database
mongo:
users:
- { name: managerw, db_name: metadata, password: "{{ mongo_passwords.manage }}" }
- { name: oidcsrw, db_name: oidc, password: "{{ mongo_passwords.oidcng }}" }
- { name: myconextrw, db_name: myconext, password: "{{ mongo_passwords.myconext }}" }
- { name: managerw, db_name: metadata, password: "{{ mongo_passwords.manage }}", role: "readWrite" }
- { name: oidcsrw, db_name: oidc, password: "{{ mongo_passwords.oidcng }}", role: "readWrite"}
- { name: myconextrw, db_name: myconext, password: "{{ mongo_passwords.myconext }}", role: "readWrite" }

# Listen on all addresses by default
mongo_bind_listen_address: "0.0.0.0"
@@ -53,3 +55,31 @@ mongo_pki_dir: "/etc/pki/mongo"

# Users and groups
mongo_group: "mongod"

# Paths
mongo_config_file: "/etc/mongod.conf"
mongo_data_path: "/var/lib/mongo"
mongo_pymongo_version: 4.16.0

# cluster members
# set in group_vars
# mongo_cluster_members:
# - host: mongo1.example.com
# priority: 3
# votes: 1
# port: 27017
# - host: mongo2.example.com
# priority: 2
# votes: 1
# port: 27017
# - host: mongo3.example.com
# priority: 1
# votes: 1
# port: 27017
# arbiterOnly: true

mongo_cluster_write_concern: "majority"
mongo_cluster_write_timeout: 5000

# To avoid surprises, only initiate or reconfigure the cluster when this is true (the safest option is to pass -e mongo_configure_cluster=true with your deployment when cluster configuration is necessary)
mongo_configure_cluster: false
225 changes: 178 additions & 47 deletions roles/mongo/tasks/clusterconfig.yml
@@ -1,60 +1,191 @@
---
# todo: this works only for new deployments
# rewrite so the mongo config can be changed and cluster members can be added or removed
- name: Check if hosts are clustered
ansible.builtin.command: mongosh --port {{ mongod_port }} --quiet --eval 'db.isMaster().hosts'
register: check_cluster
changed_when: false
check_mode: false

- name: Debug check_cluster variable
# In this task file the cluster is configured

# priority must match the replication role; or should the replication role be derived from the cluster members?
# todo: set the write concern

# Do some preflight checks
- name: Check some cluster related variables
when: mongo_mode == "cluster"
block:
- name: Fail on undefined mongo_replica_set_name
when: mongo_replica_set_name is not defined
ansible.builtin.fail:
msg: "Something is wrong, mongo_mode was set to cluster but mongo_replica_set_name is undefined."

- name: Debug replica settings
ansible.builtin.debug:
msg: "{{ check_cluster }}"
msg: "Replica set name {{ mongo_replica_set_name }}"
verbosity: 2

- name: Debug mongo_cluster_members variable
# Loop over cluster members and check their presence in the mongo_servers group and their mode (not standalone)

- name: Check if mongo_cluster_members exist in inventory group
ansible.builtin.assert:
that:
- item.host in groups['mongo_servers']
fail_msg: "Server '{{ item.host }}' is not in the mongo_servers inventory group"
success_msg: "Server '{{ item.host }}' found in mongo_servers inventory group"
run_once: true
loop: "{{ mongo_cluster_members }}"

# Loop over cluster members and check for primary

- name: Set primary host fact
ansible.builtin.set_fact:
mongo_primary_host: "{{ (mongo_cluster_members | max(attribute='priority')).host }}"

- name: Debug primary settings
ansible.builtin.debug:
msg: "{{ mongo_cluster_members }}"
msg: "Primary is {{ mongo_primary_host }}"
verbosity: 2
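The max(attribute='priority') selection above can be mirrored in plain Python (a sketch; the hostnames are made up for illustration):

```python
# Mirrors the Jinja expression:
#   (mongo_cluster_members | max(attribute='priority')).host
# The member with the highest priority is treated as the primary.
members = [
    {"host": "mongo3.example.com", "priority": 1},  # arbiter
    {"host": "mongo2.example.com", "priority": 2},
    {"host": "mongo1.example.com", "priority": 3},
]
primary_host = max(members, key=lambda m: m["priority"])["host"]
print(primary_host)  # mongo1.example.com
```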

- name: Debug mongo_replication_role variable
# What is the replication role of the current host
- name: Debug replication role settings
ansible.builtin.debug:
msg: "{{ mongo_replication_role }}"
msg: "This node's replication role is {{ mongo_replication_role }}"
verbosity: 2

- name: Initial cluster initialisation
community.mongodb.mongodb_replicaset:
login_host: localhost
login_user: admin
login_port: "{{ mongod_port }}"
login_password: "{{ mongo_admin_password }}"
replica_set: "{{ replica_set_name }}"
members: "{{ mongo_cluster_members }}"
arbiter_at_index: "{{ mongo_arbiter_index | default(0) }}"
validate: false
run_once: true
when: mongo_replication_role == 'primary'
# Cannot initialise a cluster without a running mongod
- name: Enable and start mongod
ansible.builtin.service:
name: mongod.service
enabled: true
state: started

- name: Wait until cluster health is ok
community.mongodb.mongodb_status:
login_user: admin
login_password: "{{ mongo_admin_password }}"
login_database: admin
login_port: "{{ mongod_port }}"
validate: default
poll: 5
interval: 12
replica_set: "{{ replica_set_name }}"
# Initialise cluster block
- name: Initialise or reconfigure cluster block
when: mongo_replication_role == 'primary'
block:
- name: Check if replica set is already initialised
community.mongodb.mongodb_shell:
login_host: localhost
login_user: admin
login_port: "{{ mongo_port }}"
login_password: "{{ mongo_admin_password }}"
eval: "rs.status().ok"
db: admin
register: rs_already_init
ignore_errors: true

- name: Add the admin user
community.mongodb.mongodb_user:
database: admin
name: admin
password: "{{ mongo_admin_password }}"
login_port: "{{ mongod_port }}"
roles: root
state: present
when: check_cluster.stdout == ""
no_log: true
run_once: true
- name: Debug cluster initialization check
ansible.builtin.debug:
msg: "{{ rs_already_init }}"
verbosity: 2

# This should be possible with community.mongodb.mongodb_replicaset,
# but we keep getting authentication errors, so leave it like this for now
- name: Initialise replica set if necessary
community.mongodb.mongodb_shell:
login_host: localhost
login_user: admin
login_port: "{{ mongo_port }}"
login_password: "{{ mongo_admin_password }}"
eval: |
rs.initiate({
_id: "{{ mongo_replica_set_name }}",
members: [
{% for m in mongo_cluster_members %}
{ _id: {{ loop.index0 }}, host: "{{ m.host }}:{{ m.port }}", priority: {{ m.priority }}, votes: {{ m.votes }}{% if m.arbiterOnly is defined and m.arbiterOnly and m.arbiterOnly == true %}, arbiterOnly: true {% endif %} }{{ "," if not loop.last else "" }}
{% endfor %}
]
})
db: admin
when: rs_already_init.failed
register: rs_init
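The Jinja loop in eval above renders an ordinary replica-set config document. A Python sketch of the same construction (hostnames and values are assumptions):

```python
# Builds the same document the template passes to rs.initiate():
# _id from the member index, host as "host:port", and arbiterOnly only when set.
members = [
    {"host": "mongo3.example.com", "port": 27017, "priority": 1, "votes": 1, "arbiterOnly": True},
    {"host": "mongo2.example.com", "port": 27017, "priority": 2, "votes": 1},
    {"host": "mongo1.example.com", "port": 27017, "priority": 3, "votes": 1},
]
rs_config = {
    "_id": "my_mongo_cluster",
    "members": [
        {
            "_id": i,
            "host": f"{m['host']}:{m['port']}",
            "priority": m["priority"],
            "votes": m["votes"],
            # arbiterOnly is emitted only for members that set it
            **({"arbiterOnly": True} if m.get("arbiterOnly") else {}),
        }
        for i, m in enumerate(members)
    ],
}
print(rs_config["members"][0]["host"])  # mongo3.example.com:27017
```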

- name: Debug cluster initialization
ansible.builtin.debug:
msg: "{{ rs_init }}"
verbosity: 2

- name: Format members list
ansible.builtin.set_fact:
mongo_cluster_members_formatted: "{{ mongo_cluster_members_formatted | default([]) + [m | combine({'host': m.host ~ ':' ~ (m.port | string)}) | dict2items | rejectattr('key', 'eq', 'port') | list | items2dict] }}"
loop: "{{ mongo_cluster_members }}"
loop_control:
loop_var: m

- name: Debug members list
ansible.builtin.debug:
msg: "{{ mongo_cluster_members }}"
verbosity: 2

- name: Debug formatted members list
ansible.builtin.debug:
msg: "{{ mongo_cluster_members_formatted }}"
verbosity: 2
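The dense combine/dict2items expression in the "Format members list" task folds the port into the host string and drops the now-redundant port key. In plain Python the transformation looks like this (a sketch with assumed hostnames):

```python
# For each member: replace host with "host:port", then remove the 'port' key.
members = [
    {"host": "mongo1.example.com", "priority": 3, "votes": 1, "port": 27017},
    {"host": "mongo2.example.com", "priority": 2, "votes": 1, "port": 27017},
]
formatted = [
    {k: v for k, v in {**m, "host": f"{m['host']}:{m['port']}"}.items() if k != "port"}
    for m in members
]
print(formatted[0])  # {'host': 'mongo1.example.com:27017', 'priority': 3, 'votes': 1}
```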

# Reconfigure cluster
# todo: this always returns changed even when nothing changes
- name: Reconfigure cluster if necessary
community.mongodb.mongodb_replicaset:
login_host: localhost
login_user: admin
login_password: "{{ mongo_admin_password }}"
login_port: "{{ mongo_port }}"
reconfigure: true
replica_set: "{{ mongo_replica_set_name }}"
members: "{{ mongo_cluster_members_formatted }}"
register: rs_reconfigure

- name: Debug cluster reconfiguration
ansible.builtin.debug:
msg: "{{ rs_reconfigure }}"
verbosity: 2

- name: Wait for the replicaset to stabilise
community.mongodb.mongodb_status:
replica_set: "{{ mongo_replica_set_name }}"
login_host: localhost
login_user: admin
login_password: "{{ mongo_admin_password }}"
login_port: "{{ mongo_port }}"
poll: 5
interval: 30
validate: minimal # 'default' fails on an even number of servers; although that is not a great situation, it is sometimes a temporary one, because we can only add or remove one node at a time
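The reason strict validation objects to an even number of servers is election safety: an even vote total allows ties. A tiny sketch of that invariant:

```python
# A replica set should carry an odd total of votes so elections cannot tie.
member_votes = [1, 1, 1]  # e.g. primary, secondary, arbiter
total = sum(member_votes)
healthy = total % 2 == 1
print(healthy)  # True
```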

# Cluster settings that cannot be changed with mongodb_replicaset

- name: Get current default write concern
community.mongodb.mongodb_shell:
login_host: localhost
login_port: "{{ mongo_port }}"
login_user: admin
login_password: "{{ mongo_admin_password }}"
eval: "db.adminCommand({ getDefaultRWConcern: 1 })"
register: current_write_concern
changed_when: false

- name: Debug write concern check
ansible.builtin.debug:
msg: "{{ current_write_concern.transformed_output.defaultWriteConcern }}"
verbosity: 2
when: current_write_concern.transformed_output.defaultWriteConcern is defined

- name: Set default write concern
when: >
current_write_concern.transformed_output.defaultWriteConcern is defined
and
(current_write_concern.transformed_output.defaultWriteConcern.w | string != mongo_cluster_write_concern | default('majority') | string
or
current_write_concern.transformed_output.defaultWriteConcern.wtimeout | int != mongo_cluster_write_timeout | default(5000) | int)
or current_write_concern.transformed_output.defaultWriteConcern is not defined
block:
- name: "set write concern majority"
when: mongo_cluster_write_concern == "majority"
community.mongodb.mongodb_shell:
login_host: localhost
login_user: admin
login_password: "{{ mongo_admin_password }}"
login_port: "{{ mongo_port }}"
eval: "db.adminCommand({ setDefaultRWConcern: 1, defaultWriteConcern: { w: \"{{ mongo_cluster_write_concern | default('majority') }}\", wtimeout: {{ mongo_cluster_write_timeout | default(5000) }} } })"
# Could not get a single task to handle both the quoted "majority" string and an unquoted number, so for now this ugly two-task fix
- name: "set write concern numeric"
when: mongo_cluster_write_concern != "majority"
community.mongodb.mongodb_shell:
login_host: localhost
login_user: admin
login_password: "{{ mongo_admin_password }}"
login_port: "{{ mongo_port }}"
eval: "db.adminCommand({ setDefaultRWConcern: 1, defaultWriteConcern: { w: {{ mongo_cluster_write_concern | default('majority') }}, wtimeout: {{ mongo_cluster_write_timeout | default(5000) }} } })"
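The when: condition guarding this block is a drift check: normalise both sides, then compare. The same comparison in Python (a sketch with assumed current values):

```python
# Compare the current default write concern against the desired settings,
# normalising types the way the Jinja condition does (w -> str, wtimeout -> int).
current = {"w": "majority", "wtimeout": 5000}  # as returned by getDefaultRWConcern
desired_w = "majority"
desired_wtimeout = 5000
needs_change = (
    str(current["w"]) != str(desired_w)
    or int(current["wtimeout"]) != int(desired_wtimeout)
)
print(needs_change)  # False
```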