I will describe some Ansible best practices.
First of all, every change should be recorded with Git, so you have a concise history of your IaC and can see exactly what differs between the current version and the last working one.
Tree structure for an Ansible project:
site.yml              # master playbook, the entry point that calls the others
webserver.yml         # playbook for the web servers
deployonce.yml        # playbook for one-time deployment tasks
inventories/          # one directory per stage
  prod/
    hosts
    group_vars/
      all.yml
  staging/
roles/
  requirements.yml
  common/             # base configuration for the whole company
    tasks/
      main.yml
    handlers/
      main.yml
    templates/
      config.j2
  webtier/
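For example, site.yml can then simply pull in the other playbooks (a minimal sketch using the file names from the tree above):

```yaml
# site.yml - the entry point
- import_playbook: webserver.yml
- import_playbook: deployonce.yml
```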
Make your playbooks easily readable:
- hosts: web
  name: installs and starts nginx   # describe the overall purpose of the playbook
  tasks:
    - name: Ensure Nginx is installed
      apt:
        name: nginx
        state: present

    - name: Start Nginx service
      service:
        name: nginx
        state: started
        enabled: true
- name: Install and start Nginx          # description of the block
  when: ansible_os_family == "Debian"    # condition to run this block
  block:
    - name: Ensure Nginx is installed
      apt:
        name: nginx
        state: present

    - name: Start Nginx service
      service:
        name: nginx
        state: started
        enabled: true
At the beginning, put everything in one Git repository; later, split into dedicated repositories for separate teams/tasks.
The inventory is where you define the hosts and groups for your playbooks. Use an inventory directory to manage different environments (e.g., production, staging) and their specific configurations.
You should use names, not IP addresses, as inventory hostnames:
node1 ansible_host=10.10.1.1
node2 ansible_host=10.10.1.2
web1 ansible_host=w13301.acme.com
Use groups for easier management of your inventory:
[all]
node1
node2
node3
web1
[cluster:children]
masters
workers
[webservers]
web1
[masters]
node1
[workers]
node2
node3
To avoid name conflicts, prefix variables with the role name:
nginx_max_keepalive: 25
nginx_port: 80
You can run single ad-hoc task commands (note the trailing comma, which makes the single host an inline inventory):

ansible all -i neon.qxyz.de, -m ansible.builtin.service -a "name=nginx state=started"
To run the playbooks:
ansible-playbook -i neon.qxyz.de, site.yml
You should always try to use modules before resorting to command:

- name: install nginx
  yum:
    name: nginx
    state: present

Don’t:

- name: install nginx
  command: yum install nginx -y
Don’t forget to add a notice so admins don’t modify files manually, or Ansible will overwrite their changes. Put this in the ansible.cfg file:
[defaults]
ansible_managed = This file is managed by Ansible, changes will be lost!
We don’t want to hard-code the comment marker, because it depends on the file type (# vs //). Ansible provides a Jinja2 filter for this: {{ ansible_managed | comment }} prefixes each line with '#', while a style argument such as {{ ansible_managed | comment('c') }} uses '//'.
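As a rough illustration, here is a minimal Python sketch of what such a filter does (my own approximation, not Ansible's actual implementation, which supports more styles and decoration options):

```python
# Sketch (NOT the real Ansible code) of the `comment` filter: prefix
# every line of the text with a style-dependent comment marker.

def comment(text, style="plain"):
    """Prefix each line of `text` with the marker for `style`."""
    markers = {"plain": "#", "c": "//"}  # small subset of the real styles
    marker = markers.get(style, "#")
    return "\n".join(f"{marker} {line}" for line in text.splitlines())

print(comment("This file is managed by Ansible, changes will be lost!"))
print(comment("This file is managed by Ansible, changes will be lost!", style="c"))
```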
The ansible user should only be accessible via SSH key authentication.
Add the ansible user to sudoers via /etc/sudoers.d/ansible:

ansible ALL=(ALL) NOPASSWD: ALL
Disable the account's password:
sudo passwd -l ansible
You can add even more security by restricting the user to a specific source IP in the SSH configuration /etc/ssh/sshd_config:
Match User ansible
PasswordAuthentication no
PermitRootLogin no
AllowUsers ansible@<ansible-control-IP>
Then reload SSH: sudo systemctl reload sshd
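The public key itself can also be deployed with Ansible (a sketch using the ansible.posix.authorized_key module; the key path is an assumption):

```yaml
- name: Authorize the control node's key for the ansible user
  ansible.posix.authorized_key:
    user: ansible
    state: present
    key: "{{ lookup('file', '~/.ssh/id_ed25519.pub') }}"  # hypothetical key path
```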
By default, Ansible renders Jinja2 templates with trim_blocks=True (the newline after a block tag is removed) and lstrip_blocks=False (whitespace before a block tag is kept). You can override this per tag:

{%+ if something %} → disable lstrip for this tag
{% if something +%} → disable trim for this tag (keep the newline)
{%- if something -%} → strip whitespace before/after this tag
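For example, with Ansible's defaults this template fragment (`enable_feature` is a made-up variable for illustration):

```jinja2
{% if enable_feature %}
feature = on
{% endif %}
```

renders as just `feature = on` with no surrounding blank lines, because the newline after each `%}` is removed by trim_blocks. Writing `{% if enable_feature +%}` would keep that newline and produce a leading blank line.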
If you need a literal colon in a YAML value, you can write it as {{ ":" }}.
But we can also use lineinfile like this:
- name: Point Kibana's config.js at the ElasticSearch server
  lineinfile:
    dest: /var/www/kibana/config.js
    backrefs: true
    line: >
      elasticsearch: {{ elasticsearch_URL }}:{{ elasticsearch_port }}
    state: present
    regexp: (elasticsearch.* "http.*)$
Do use default(omit) to drop a parameter entirely when its variable is undefined:

pp_size: "{{ item.ppsize | default(omit) }}"
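In a full task this looks like the following (a sketch with hypothetical item variables):

```yaml
- name: Create users, only setting groups when the item defines them
  ansible.builtin.user:
    name: "{{ item.name }}"
    groups: "{{ item.groups | default(omit) }}"  # parameter dropped if undefined
  loop: "{{ users }}"
```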
Don’t overuse lineinfile, though: a template is much better and easier to read.
You can create your own filter in Python:
from ansible.errors import AnsibleFilterError


def invert_role_to_group_mapping(role_dict):
    """
    Invert a dict of roles -> groups into groups -> roles.
    Input: {"admin": ["gp-jcms", "gp-jmp"], "viewer": ["gp-sc"]}
    Output: {"gp-jcms": ["admin"], "gp-jmp": ["admin"], "gp-sc": ["viewer"]}
    """
    if not isinstance(role_dict, dict):
        raise AnsibleFilterError("Input must be a dict of roles -> groups")
    inverted = {}
    for role, groups in role_dict.items():
        for group in groups:
            inverted.setdefault(group, []).append(role)
    return inverted


class FilterModule(object):
    """ Ansible filter plugin to invert role -> group mapping """

    def filters(self):
        return {
            "invert_role_to_group_mapping": invert_role_to_group_mapping
        }
You only need to add the FilterModule class at the end, declaring a dict of the functions to expose.
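Assuming the plugin is saved as filter_plugins/invert_role_to_group_mapping.py next to the playbook, it can then be used like any built-in filter:

```yaml
- debug:
    msg: "{{ {'admin': ['gp-jcms', 'gp-jmp'], 'viewer': ['gp-sc']} | invert_role_to_group_mapping }}"
```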
When running a task, Ansible zips the module content and sends it to the server, where it is unpacked and executed.
map, selectattr, subelements: to iterate nested data with map or selectattr you would need nested loops, which are hard to express cleanly in Ansible. subelements flattens the structure into a single iterable list.
Example structure:

users:
  - name: alice
    groups:
      - admin
      - dev
  - name: bob
  - name: carol
    groups:
      - ops

- debug:
    msg: "{{ item.0.name }} -> {{ item.1 }}"
  loop: "{{ users | subelements('groups', skip_missing=True) }}"
Output:
alice -> admin
alice -> dev
carol -> ops
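Conceptually, subelements produces one (parent, child) pair per child element. A Python approximation (my own sketch, not the actual Ansible source):

```python
# Sketch of the subelements filter: flatten a list of dicts that each
# carry a list under `key` into (item, sub) pairs.

def subelements(items, key, skip_missing=False):
    """Flatten [{..., key: [subs]}, ...] into a list of (item, sub) pairs."""
    pairs = []
    for item in items:
        if key not in item:
            if skip_missing:
                continue          # skip_missing=True silently drops the item
            raise KeyError(f"missing subelement key: {key!r}")
        for sub in item[key]:
            pairs.append((item, sub))
    return pairs

users = [
    {"name": "alice", "groups": ["admin", "dev"]},
    {"name": "bob"},                      # no groups -> skipped
    {"name": "carol", "groups": ["ops"]},
]

for user, group in subelements(users, "groups", skip_missing=True):
    print(f"{user['name']} -> {group}")
```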
In Ansible, for some complex tasks you can simply inspect what is already configured, for example this check-mode task for a Keycloak client scope mapper:
- name: Inspect existing client scope
  community.general.keycloak_clientscope:
    auth_keycloak_url: '{{ keycloak_auth_url }}'
    auth_realm: '{{ keycloak_auth_realm }}'
    auth_username: '{{ keycloak_admin_user }}'
    auth_password: '{{ keycloak_admin_password }}'
    auth_client_id: admin-cli
    name: ldap-groups-scope
    realm: '{{ realm_name }}'
    state: present
  check_mode: yes
  register: result

- debug:
    var: result
When you need to dig deeper: ANSIBLE_KEEP_REMOTE_FILES=1 keeps the remote module files instead of removing them. Use -vvv to see where the files are stored.
There is something even more powerful: the task-level debugger, enabled per task (so you can redo the task):
- name: Execute a command
  ansible.builtin.command: "false"
  debugger: on_failed   # always, never, on_failed, on_unreachable, on_skipped
What are the useful commands?
| Command | Shortcut | Action |
|---|---|---|
| p task / p task.args / p task_vars | no shortcut | Print information about the task |
| task.args[key] = value | no shortcut | Update module arguments |
| task_vars[key] = value | no shortcut | Update task variables (you must update_task next) |
| update_task | u | Recreate a task with updated task variables |
| redo | r | Run the task again |
| continue | c | Continue executing, starting with the next task |
| quit | q | Quit the debugger |
After changing task variables you need to recreate the task (u) and then rerun it (r).
At module level, use madbg: put `import madbg; madbg.set_trace()` inside your code where you want to debug, then SSH to the node itself and run `madbg connect` (or `madbg connect <host> <port>`).
Personally I use:

import pprint
pprint.pprint(obj)

or just type the variable name to print a small variable.
Debugger command table:
| Command | Description |
|---|---|
| n or next | Execute the next line of code (step over). |
| s or step | Step into a function call. |
| c or continue | Continue execution until the next breakpoint or program end. |
| `b [line \| function]` | Set a breakpoint at a line number or function. Example: b 42 or b my_func. |
| `cl [line \| function]` | Clear a breakpoint at a given line or function. |
| l or list | Show source code around the current line. |
| p [expression] | Print the value of a variable or expression. Example: p my_var. |
| q or quit | Quit the debugger and terminate the program. |
| r or return | Continue execution until the current function returns. |
| bt | Show the current call stack (backtrace). |
You can also add extra vars as JSON on the command line and merge variable dictionaries with the combine filter.
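For example, extra vars accept JSON: ansible-playbook site.yml -e '{"nginx": {"port": 8080}}'. And here is a Python sketch of what the combine filter does (my approximation, not Ansible's code; the nginx variables are made up):

```python
# Sketch of Ansible's combine filter: merge two dicts, right side wins;
# with recursive=True nested dicts are merged instead of replaced.

def combine(base, override, recursive=False):
    merged = dict(base)
    for key, value in override.items():
        if (recursive and isinstance(merged.get(key), dict)
                and isinstance(value, dict)):
            merged[key] = combine(merged[key], value, recursive=True)
        else:
            merged[key] = value
    return merged

defaults = {"nginx": {"port": 80, "workers": 4}}
extra = {"nginx": {"port": 8080}}

print(combine(defaults, extra, recursive=True))  # port overridden, workers kept
print(combine(defaults, extra))                  # whole nginx dict replaced
```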
What I’m trying to accomplish is to run commands inside a Docker container that has already been created on a DigitalOcean Ubuntu/Docker droplet, using Ansible. A solution can be this:
- name: add container to inventory
  add_host:
    name: [container-name]
    ansible_connection: docker
  changed_when: false

- name: run command in container
  delegate_to: [container-name]
  raw: bash
If you have python installed in your image, you can use the command module or any other module instead of raw.
If you want to do this on a remote docker host, add:
ansible_docker_extra_args: "-H=tcp://[docker-host]:[api port]"
to the add_host block.
By default, Ansible runs:
docker exec ...
against the local Docker daemon (e.g., /var/run/docker.sock). If your container is running on a remote Docker host, you must tell Docker where that daemon is located. That is what this does:
ansible_docker_extra_args: "-H=tcp://docker-host:2375"
It makes Ansible run:
docker -H tcp://docker-host:2375 exec <container> <command>
So the -H flag is passed to the Docker CLI.
Example for multiple Docker hosts:
- name: Add containers dynamically
  add_host:
    name: "{{ item.name }}"
    groups: docker_containers
    ansible_connection: docker
    ansible_docker_extra_args: "-H=ssh://{{ item.docker_host }}"
  loop:
    - { name: "c1", docker_host: "docker-host-1" }
    - { name: "c2", docker_host: "docker-host-2" }

- name: Update apt inside containers
  hosts: docker_containers
  become: true
  tasks:
    - name: Update apt cache
      apt:
        update_cache: yes