I will describe best practices for Ansible

first of all, every change should be recorded with Git, so you have a concise history of your IaC and can see what differs between the current version and the last working one

tree structure for ansible

site.yml            # master playbook, the entry point that calls the others
webserver.yml       # playbook for webserver
deployonce.yml      # playbook for one-time deployment tasks
inventories/        # one directory per environment
    prod/
        hosts
        group_vars/
            all.yml
    staging/
roles/
    requirements.yml
    common/         # base config for the whole company
        tasks/
            main.yml
        handlers/
            main.yml
        templates/
            config.j2
    webtier/
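the requirements.yml in the tree lists external roles to pull from Ansible Galaxy; a minimal sketch (the role and the pinned version are only examples):

```yaml
# roles/requirements.yml
roles:
  - name: geerlingguy.nginx   # example role from Ansible Galaxy
    version: "3.1.4"          # pin a version (illustrative) so runs stay reproducible
```

install them with ansible-galaxy install -r roles/requirements.yml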

good playbook

make your playbooks easily readable

- name: Install and start nginx # describe the overall purpose of the play
  hosts: web
  tasks:
    - name: Ensure Nginx is installed
      apt:
        name: nginx
        state: present

    - name: Start Nginx service
      service:
        name: nginx
        state: started
        enabled: true

blocks help organise your code

- name: Install and start Nginx       # description of the block
  when: ansible_os_family == "Debian" # condition to run this block
  block:
    - name: Ensure Nginx is installed
      apt:
        name: nginx
        state: present

    - name: Start Nginx service
      service:
        name: nginx
        state: started
        enabled: true

start small but grow later

at the beginning put everything in one Git repository, but later move to dedicated repositories for separate teams/tasks

Inventory

this is where you define the hosts and groups for your playbooks. Use the inventories directory to manage the different environments (e.g. production, staging) and their specific configurations.

you should use meaningful names, not raw IP addresses; keep the real address or hostname in ansible_host:

node1 ansible_host=10.10.1.1
node2 ansible_host=10.10.1.2

web1 ansible_host=w13301.acme.com

use group for easier management of your inventory:

[all]
node1
node2
node3
web1

[cluster:children]
masters
workers

[webservers]
web1

[masters]
node1

[workers]
node2
node3

variable naming

avoid name conflicts: prefix each variable with the role name

nginx_max_keepalive: 25
nginx_port: 80

Run

you can run a single task as an ad-hoc command

ansible all -i neon.qxyz.de, -m service -a "name=nginx state=started"

to run the playbooks

ansible-playbook -i neon.qxyz.de, site.yml

use the right tools!

you should always try to use modules before falling back to command:

- name: install nginx
  yum:
    name: nginx
    state: present

don’t

- name: install nginx
  command: yum install nginx -y

warning

don’t forget to add a notice so admins don’t modify files manually, otherwise Ansible will overwrite their changes. So put this in the ansible.cfg file:

[defaults]
ansible_managed = This file is managed by Ansible, changes will be lost!

we don’t want to hard-code the comment syntax, since it depends on the file type (#, //, ...). Ansible has a variable for this: {{ ansible_managed | comment }} in a Jinja2 template renders the message as a # comment, and {{ ansible_managed | comment('c') }} renders it as a // comment.
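for example, at the top of a template such as templates/config.j2 (nginx_port as in the earlier examples):

```jinja2
{{ ansible_managed | comment }}
listen {{ nginx_port }};
```

with the ansible.cfg above, the first line should render as a # comment block containing "This file is managed by Ansible, changes will be lost!"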

Jinja2 whitespace control

Ansible's template module defaults to trim_blocks=True (the newline after a block tag is removed) and lstrip_blocks=False (the spaces before {% if %} and {% endif %} are kept).

{%+ if something %} → disable lstrip for this tag
{% if something +%} → disable trim for this tag
{%- if something -%} → strip whitespace before/after this tag
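you can also override these settings per template with a #jinja2: header; a sketch (nginx_gzip is a hypothetical variable):

```jinja2
#jinja2: lstrip_blocks: True
server {
    listen {{ nginx_port }};
    {% if nginx_gzip | default(false) %}
    gzip on;
    {% endif %}
}
```

without the header, the default lstrip_blocks=False would leave the four spaces before {% if %} in the rendered file.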

escape a colon in Ansible

use {{ ":" }}, but we can also use a folded line (>) like this:

- name: Point the Kibana config.js to the Elasticsearch server
  lineinfile:
    dest: /var/www/kibana/config.js
    backrefs: true
    line: >
      elasticsearch: {{ elasticsearch_URL }}:{{ elasticsearch_port }}
    state: present
    regexp: (elasticsearch.* "http.*)$

omit a parameter

do

    pp_size: "{{ item.ppsize | default(omit) }}"
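a fuller sketch using a real module (the volumes variable is hypothetical): when item.opts is undefined, default(omit) drops the parameter entirely instead of passing null:

```yaml
- name: Create logical volumes
  community.general.lvol:
    vg: data
    lv: "{{ item.name }}"
    size: "{{ item.size }}"
    opts: "{{ item.opts | default(omit) }}"  # parameter omitted when item.opts is undefined
  loop: "{{ volumes }}"
```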

lineinfile

don’t use it; prefer template, which is much better and easier to read
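for example, rather than patching a config file line by line, render the whole file with template (filenames and the handler name are illustrative):

```yaml
- name: Deploy the nginx config from a template
  template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
    owner: root
    group: root
    mode: "0644"
  notify: Restart nginx   # assumes a matching handler exists
```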

filter

create your own

you can create your own filters in Python

from ansible.errors import AnsibleFilterError

def invert_role_to_group_mapping(role_dict):
    """
    Invert a dict of roles -> groups into groups -> roles.
    Input: {"admin": ["gp-jcms", "gp-jmp"], "viewer": ["gp-sc"]}
    Output: {"gp-jcms": ["admin"], "gp-jmp": ["admin"], "gp-sc": ["viewer"]}
    """
    if not isinstance(role_dict, dict):
        raise AnsibleFilterError("Input must be a dict of roles -> groups")

    inverted = {}
    for role, groups in role_dict.items():
        for group in groups:
            inverted.setdefault(group, []).append(role)
    return inverted


class FilterModule(object):
    """ Ansible filter plugin to invert role -> group mapping """

    def filters(self):
        return {
            "invert_role_to_group_mapping": invert_role_to_group_mapping
        }

you only need to add the FilterModule class at the end and declare the dict of functions to expose. See the Ansible docs on developing filter plugins for more info.
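save the file under a filter_plugins/ directory next to your playbook and the filter is picked up automatically; usage with the example mapping from the docstring:

```yaml
- debug:
    msg: "{{ {'admin': ['gp-jcms', 'gp-jmp'], 'viewer': ['gp-sc']} | invert_role_to_group_mapping }}"
```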

Ansible zips the module content and sends it to the server

existing filter

useful existing filters: map, selectattr, subelements. Without subelements you would need nested loops, which are harder to express cleanly in Ansible; subelements flattens the structure into a single iterable list.

example structure

users:
  - name: alice
    groups:
      - admin
      - dev
  - name: bob
  - name: carol
    groups:
      - ops

- debug:
    msg: "{{ item.0.name }} -> {{ item.1 }}"
  loop: "{{ users | subelements('groups', skip_missing=True) }}"

output

alice -> admin
alice -> dev
carol -> ops

check mode to see the default config

in Ansible, for some complex tasks you can simply inspect what is currently configured, for example with this code for client scope mappers:

- name: Inspect existing client scope
  community.general.keycloak_clientscope:
    auth_keycloak_url: '{{ keycloak_auth_url }}'
    auth_realm: '{{ keycloak_auth_realm }}'
    auth_username: '{{ keycloak_admin_user }}'
    auth_password: '{{ keycloak_admin_password }}'
    auth_client_id: admin-cli
    name: ldap-groups-scope
    realm: '{{ realm_name }}'
    state: present
  check_mode: yes
  register: result

- debug:
    var: result

ansible debug

when you need to dig deeper

ANSIBLE_KEEP_REMOTE_FILES=1 keeps the remote module files instead of removing them. Use -vvv to see where the files are stored.
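for example (site.yml and the inventory path as in the layout above):

```shell
ANSIBLE_KEEP_REMOTE_FILES=1 ansible-playbook -i inventories/staging/hosts -vvv site.yml
```

the -vvv output shows the temporary directory on the target (something like ~/.ansible/tmp/ansible-tmp-*/), where you can inspect the zipped module wrapper.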

there is something more powerful: the combination of

  1. Task level: set a debugger on the task (so you can redo the task)

    - name: Execute a command
      ansible.builtin.command: "false"
      debugger: on_failed # options: always, never, on_failed, on_unreachable, on_skipped

    what are the useful commands?

    | Command                 | Shortcut    | Action                                                |
    |-------------------------|-------------|-------------------------------------------------------|
    | print                   | p           | Print information about the task                      |
    | task.args[key] = value  | no shortcut | Update module arguments                               |
    | task_vars[key] = value  | no shortcut | Update task variables (you must update_task next)     |
    | update_task             | u           | Recreate a task with updated task variables           |
    | redo                    | r           | Run the task again                                    |
    | continue                | c           | Continue executing, starting with the next task       |
    | quit                    | q           | Quit the debugger                                     |

    when changing task variables you need to recreate the task: u, then r

  2. Module level: use madbg. Put import madbg; madbg.set_trace() inside your code where you want to debug, then connect from the node itself (ssh in, then madbg connect) or remotely with madbg connect <host> <port>.

    personally I use

     import pprint
     pprint.pprint(obj)   # pretty-print a big object
     obj                  # just typing the name prints a small var

    table of commands

    | Command            | Description                                                                  |
    |--------------------|------------------------------------------------------------------------------|
    | n or next          | Execute the next line of code (step over).                                   |
    | s or step          | Step into a function call.                                                   |
    | c or continue      | Continue execution until the next breakpoint or program end.                 |
    | b [line/function]  | Set a breakpoint at a line number or function. Example: b 42 or b my_func.   |
    | cl [line/function] | Clear a breakpoint at a given line or function.                              |
    | l or list          | Show source code around the current line.                                    |
    | p [expression]     | Print the value of a variable or expression. Example: p my_var.              |
    | q or quit          | Quit the debugger and terminate the program.                                 |
    | r or return        | Continue execution until the current function returns.                       |
    | bt                 | Show the current call stack (backtrace).                                     |

ansible add extra vars

you can pass extra vars as JSON on the command line and merge them over your defaults with the combine filter
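a sketch (the variable names are hypothetical): pass structured values as JSON with -e, then merge them over defaults with combine:

```yaml
# ansible-playbook site.yml -e '{"nginx_overrides": {"port": 8080}}'
- debug:
    msg: "{{ nginx_defaults | combine(nginx_overrides | default({})) }}"
  vars:
    nginx_defaults:
      port: 80
      gzip: false
```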

Ansible: run a command inside a container (a Stack Overflow question I found interesting)

what I’m trying to accomplish is to run commands inside a Docker container that has already been created on a DigitalOcean Ubuntu/Docker droplet, using Ansible.

A solution can be this:

- name: add container to inventory
  add_host:
    name: [container-name]
    ansible_connection: docker
  changed_when: false

- name: run command in container
  delegate_to: [container-name]
  raw: bash

If you have Python installed in your image, you can use the command module (or any other module) instead of raw.

If you want to do this on a remote docker host, add:

ansible_docker_extra_args: "-H=tcp://[docker-host]:[api port]"

to the add_host block.

By default, Ansible runs:

docker exec ...

against the local Docker daemon (e.g., /var/run/docker.sock). If your container is running on a remote Docker host, you must tell Docker where that daemon is located. That is what this does:

ansible_docker_extra_args: "-H=tcp://docker-host:2375"

It makes Ansible run:

docker -H tcp://docker-host:2375 exec <container> <command>

So the -H flag is passed to the Docker CLI.

example for multiple servers

- name: Register containers in the inventory
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Add containers dynamically
      add_host:
        name: "{{ item.name }}"
        groups: docker_containers
        ansible_connection: docker
        ansible_docker_extra_args: "-H=ssh://{{ item.docker_host }}"
      loop:
        - { name: "c1", docker_host: "docker-host-1" }
        - { name: "c2", docker_host: "docker-host-2" }


- name: Update apt inside containers
  hosts: docker_containers
  become: true
  tasks:
    - name: Update apt cache
      apt:
        update_cache: yes