AnsiblePilot — Master Ansible Automation

AnsiblePilot is the leading resource for learning Ansible automation, DevOps, and infrastructure as code. Browse over 1,400 tutorials covering Ansible modules, playbooks, roles, collections, and real-world examples. Whether you are a beginner or an experienced engineer, our step-by-step guides help you automate Linux, Windows, cloud, containers, and network infrastructure.

Popular Topics

About Luca Berton

Luca Berton is an Ansible automation expert, author of 8 Ansible books published by Apress and Leanpub including "Ansible for VMware by Examples" and "Ansible for Kubernetes by Example", and creator of the Ansible Pilot YouTube channel. He shares practical automation knowledge through tutorials, books, and video courses to help IT professionals and DevOps engineers master infrastructure automation.

Ansible on Kubernetes 1.31 Automation Complete Guide

By Luca Berton · Published 2024-01-01 · Category: installation

Automate Kubernetes 1.31 with Ansible: kubernetes.core collection, kubeadm bootstrap, manifests, Helm charts, namespaces, RBAC, ArgoCD bootstrapping.

Kubernetes 1.31 "Elli" (released August 2024) is a feature-rich minor release: PersistentVolume last-phase transition time, AppArmor GA, an nftables backend for kube-proxy (beta), the ImageVolume alpha, and the removal of the in-tree cloud providers. Ansible automates K8s clusters at two layers: bootstrap (kubeadm, OS prep) with shell/posix modules, and workload delivery (manifests, Helm, RBAC) with the kubernetes.core collection. This is the master Ansible guide for Kubernetes 1.31.

Kubernetes 1.31 release facts

| Item | Value |
|---|---|
| Codename | Elli |
| Released | 2024-08-13 |
| Support | until 2025-10 (standard), then maintenance |
| New | nftables kube-proxy (beta), AppArmor GA, ImageVolume (alpha) |

See also: Ansible on Kubernetes 1.32 Automation Complete Guide

Ansible-core compatibility

Use ansible-core 2.18 with the Python kubernetes client (kubernetes>=30.1.0):

pip install kubernetes openshift jsonpatch

Collections:

collections:
  - name: kubernetes.core
    version: ">=5.0.0"
  - name: community.general

Inventory

[k8s_control]
cp01 ansible_host=10.0.1.11
cp02 ansible_host=10.0.1.12
cp03 ansible_host=10.0.1.13

[k8s_workers]
w01 ansible_host=10.0.2.11
w02 ansible_host=10.0.2.12

See also: Ansible for Kubernetes: Automate K8s Cluster Management and Application Deployment

Bootstrap a node (containerd + kubeadm)

- name: Prepare K8s node
  hosts: k8s_control:k8s_workers
  become: true
  tasks:
    - name: Disable swap
      ansible.builtin.shell: |
        swapoff -a
        sed -i.bak '/ swap / s/^/#/' /etc/fstab
      changed_when: false

    - name: Load br_netfilter
      community.general.modprobe:
        name: br_netfilter
        state: present

    - name: Kernel sysctls for Kubernetes networking
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        sysctl_set: true
        state: present
        reload: true
      loop:
        - { name: net.ipv4.ip_forward, value: '1' }
        - { name: net.bridge.bridge-nf-call-iptables, value: '1' }

    - name: Install containerd + kubeadm/kubelet/kubectl
      ansible.builtin.package:
        name:
          - containerd
          - kubeadm=1.31.*   # apt-style pin; use kubeadm-1.31.* on dnf/yum
          - kubelet=1.31.*
          - kubectl=1.31.*
        state: present
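With the packages in place, both services should survive reboots; kubeadm configures and restarts the kubelet itself during init/join. A minimal sketch of the follow-up tasks (service names assumed to be containerd and kubelet on systemd hosts):

```yaml
    - name: Enable and start containerd
      ansible.builtin.service:
        name: containerd
        state: started
        enabled: true

    - name: Enable kubelet (kubeadm starts and configures it later)
      ansible.builtin.service:
        name: kubelet
        enabled: true
```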

kubeadm init / join

- name: Initialize control plane
  hosts: cp01
  become: true
  tasks:
    - name: kubeadm init
      ansible.builtin.command: >
        kubeadm init
        --kubernetes-version=1.31.0
        --pod-network-cidr=10.244.0.0/16
        --control-plane-endpoint=k8s.lab.example.com
        --upload-certs
      args:
        creates: /etc/kubernetes/admin.conf
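The worker join can be driven from the freshly initialized control plane. A hedged sketch, not from the original guide: the token is generated on cp01, and the workers consume it via hostvars.

```yaml
- name: Generate join command
  hosts: cp01
  become: true
  tasks:
    - name: Create a fresh bootstrap token
      ansible.builtin.command: kubeadm token create --print-join-command
      register: join_cmd

- name: Join worker nodes
  hosts: k8s_workers
  become: true
  tasks:
    - name: kubeadm join
      ansible.builtin.command: "{{ hostvars['cp01'].join_cmd.stdout }}"
      args:
        creates: /etc/kubernetes/kubelet.conf
```

The creates guard makes the join idempotent: once the kubelet has a kubeconfig, the command is skipped.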

See also: Ansible for Kubernetes: Deploy, Manage, and Automate K8s Clusters Complete Guide

Apply manifests

- name: Deploy nginx
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Namespace
      kubernetes.core.k8s:
        kubeconfig: ~/.kube/config
        state: present
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: web

    - name: Deployment
      kubernetes.core.k8s:
        kubeconfig: ~/.kube/config
        state: present
        namespace: web
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: nginx
          spec:
            replicas: 3
            selector: { matchLabels: { app: nginx } }
            template:
              metadata: { labels: { app: nginx } }
              spec:
                containers:
                  - name: nginx
                    image: nginx:1.27-alpine
                    ports: [ { containerPort: 80 } ]
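After applying the Deployment, a readiness gate keeps the play honest. A minimal sketch using kubernetes.core.k8s_info; the 120-second timeout is an assumption, not from the original:

```yaml
    - name: Wait for the nginx rollout
      kubernetes.core.k8s_info:
        kubeconfig: ~/.kube/config
        kind: Deployment
        name: nginx
        namespace: web
        wait: true
        wait_condition:
          type: Available
          status: "True"
        wait_timeout: 120   # assumed budget; tune for your cluster
```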

Helm charts

- name: Install ingress-nginx via Helm
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Add repo
      kubernetes.core.helm_repository:
        name: ingress-nginx
        repo_url: https://kubernetes.github.io/ingress-nginx

    - name: Install
      kubernetes.core.helm:
        kubeconfig: ~/.kube/config
        name: ingress-nginx
        chart_ref: ingress-nginx/ingress-nginx
        release_namespace: ingress-nginx
        create_namespace: true
        update_repo_cache: true
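Pinning the chart version makes Helm releases reproducible. A hedged variant of the same task; the version number and values shown here are illustrative, not a recommendation:

```yaml
    - name: Install a pinned release
      kubernetes.core.helm:
        kubeconfig: ~/.kube/config
        name: ingress-nginx
        chart_ref: ingress-nginx/ingress-nginx
        chart_version: "4.11.2"   # hypothetical pin; use a version you have validated
        release_namespace: ingress-nginx
        create_namespace: true
        values:
          controller:
            replicaCount: 2   # example override
```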

Best practices

• Keep K8s bootstrap in one role and workload delivery in another — two different lifecycles.
• Use kubernetes.core.k8s with state: present and full manifests; treat YAML in roles as the source of truth.
• Wrap Helm releases in kubernetes.core.helm with pinned chart versions.
• For GitOps, use Ansible to bootstrap ArgoCD/Flux, then let GitOps own day-2.
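The last practice can be sketched as a one-shot bootstrap play. Assumptions: the upstream Argo CD stable install manifest URL, and /tmp as an arbitrary staging path.

```yaml
- name: Bootstrap Argo CD, then let GitOps own day-2
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Argo CD namespace
      kubernetes.core.k8s:
        kubeconfig: ~/.kube/config
        state: present
        definition:
          apiVersion: v1
          kind: Namespace
          metadata:
            name: argocd

    - name: Download the Argo CD install manifest
      ansible.builtin.get_url:
        url: https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
        dest: /tmp/argocd-install.yaml
        mode: "0644"

    - name: Apply the Argo CD manifest
      kubernetes.core.k8s:
        kubeconfig: ~/.kube/config
        state: present
        namespace: argocd
        src: /tmp/argocd-install.yaml
```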

Conclusion

Kubernetes 1.31 + kubernetes.core provides a clean automation surface for both cluster lifecycle (kubeadm) and workload delivery (manifests + Helm). Split bootstrap from day-2, pin versions, and hand off to GitOps for sustainable cluster management.

Category: installation

Browse all Ansible tutorials · AnsiblePilot Home