Ansible on bootc Images: build-time automation, first-boot config, and day-2 ops
Image mode for RHEL (aka bootc) lets you build and manage an entire OS as a container image, then deploy it as a bootable system. If you’re already invested in Ansible, you can keep using it—just a bit differently. This guide shows where Ansible fits best in the bootc lifecycle, patterns that work well, and copy-pasteable examples.
Where Ansible fits
Three touch points make sense:
- Build time (inside the `Containerfile`) — Bake configuration into the immutable bootc image by running roles/playbooks during `podman build`. Red Hat notes that some OCI config (like `ENTRYPOINT`, `CMD`, `ENV`, `USER`) is ignored after installation, so express system state via files and systemd, not runtime directives. (Red Hat Docs)
- First boot (`cloud-init` + `ansible-pull`) — Inject SSH keys, metadata, and host-unique tweaks using `cloud-init`’s native Ansible integration (`cc_ansible`) to run `ansible-pull`. (cloudinit.readthedocs.io)
- Day-2 operations (targeted changes only) — Treat the OS as image-managed. Use Ansible for app payloads, service orchestration, cert rotation, and one-off remediation; rebuild the image for base-OS changes. Recent guidance and community practice encourage shifting provisioning to build time. (Red Hat Developer)
What changes with bootc (and why it matters to Ansible)
- Containerfile semantics: After the bootc image is installed as a system, the OCI config section (e.g., `ENTRYPOINT`, `CMD`, `ENV`, `USER`, `EXPOSE`) doesn’t apply. Configure users/env with systemd and files baked into the image; see the sketch after this list. (Red Hat Docs)
- Image validation: Add `bootc container lint` as a final build step to catch issues early. (Fedora Docs)
- Delivery formats: Build once as a container, then create disk images (QCOW2/AMI/ISO, etc.) with bootc-image-builder. (Red Hat Docs)
- System roles: RHEL System Roles (Ansible roles) are supported in image mode; run them at build time to encode your baseline. (Red Hat Developer)
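For instance, to cover what OCI `USER` and `ENV` would otherwise have expressed, a build-time task list (included from your role or site.yml) can drop the equivalent files into the image. This is a minimal sketch only; the `appsvc` account, the `myapp.service` unit, and the `APP_MODE` variable are illustrative assumptions, not names from this guide:

```yaml
# Build-time tasks (run during the Containerfile's ansible-playbook step) that
# bake in what OCI USER/ENV would otherwise provide. All names are illustrative.
- name: Declare a service account via systemd-sysusers instead of OCI USER
  ansible.builtin.copy:
    dest: /usr/lib/sysusers.d/appsvc.conf
    content: |
      u appsvc - "App service account" /var/lib/appsvc

- name: Create a drop-in directory for the service unit
  ansible.builtin.file:
    path: /usr/lib/systemd/system/myapp.service.d
    state: directory
    mode: "0755"

- name: Set service environment via a systemd drop-in instead of OCI ENV
  ansible.builtin.copy:
    dest: /usr/lib/systemd/system/myapp.service.d/10-env.conf
    content: |
      [Service]
      Environment=APP_MODE=production
```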
Pattern 1 — Build time: bake roles & run playbooks in the image
Use when: Every host that boots this image should be identical.
Example `Containerfile` (RHEL 10 image mode):
# Build-time configuration with RHEL System Roles & your playbook
FROM registry.redhat.io/rhel10/rhel-bootc:latest
# Install what your roles need (+ cloud-init for later)
RUN dnf -y install ansible-core rhel-system-roles cloud-init && dnf clean all
# Copy your Ansible content
ADD ansible/ /opt/ansible/
WORKDIR /opt/ansible/
# Run playbook now to bake config in the image
RUN ANSIBLE_FORCE_COLOR=1 \
ansible-playbook -i "localhost," -c local site.yml
# Validate the bootc image
RUN bootc container lint
Minimal `site.yml` with RHEL System Roles:
- hosts: all
  become: true
  roles:
    - rhel-system-roles.timesync
    - rhel-system-roles.selinux
  vars:
    timesync_ntp_servers:
      - hostname: time.cloudflare.com
    selinux_state: enforcing
Why this is good: Roles designed for RHEL image mode give you a clean, testable, reproducible OS image without post-install drift. (Red Hat Developer)
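If the image also carries an application payload, the same build-time play can install and enable it. A sketch assuming a hypothetical nginx payload; since no systemd is running during `podman build`, units are only enabled (symlinks created), never started:

```yaml
# Illustrative build-time tasks: install a payload and enable its unit.
# Enabling only creates symlinks, so it is safe without a running systemd;
# starting the service has to wait until the deployed system boots.
- name: Install the application payload
  ansible.builtin.dnf:
    name: nginx
    state: present

- name: Enable the service so it starts on first boot
  ansible.builtin.command: systemctl enable nginx.service
  args:
    creates: /etc/systemd/system/multi-user.target.wants/nginx.service
```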
Pattern 2 — First-boot customization with `cloud-init` + `ansible-pull`
Use when: You need per-instance secrets, keys, or small unique changes.
Install `cloud-init` at build time (often useful for clouds/virt):
RUN dnf -y install cloud-init && dnf clean all
`#cloud-config` user-data to run Ansible on first boot:
#cloud-config
ssh_authorized_keys:
  - ssh-ed25519 AAAA... user@example
packages:
  - git
ansible:
  package_name: ansible-core
  run_user: root
  pull:
    url: https://git.example.com/platform/firstboot-playbooks.git
    playbook_name: firstboot.yml
    extra_args:
      - --extra-vars
      - "hostname={{ ds.meta_data.local_hostname }}"
`cc_ansible` installs Ansible during boot and runs `ansible-pull` against your repo—once per instance by default. (cloudinit.readthedocs.io)
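The repository that `ansible-pull` checks out needs a playbook targeting the local machine. A minimal `firstboot.yml` could look like this sketch; the hostname default and the marker file are illustrative assumptions:

```yaml
# Hypothetical firstboot.yml fetched by ansible-pull; it runs locally on the
# instance during first boot.
- hosts: localhost
  connection: local
  become: true
  tasks:
    - name: Apply the per-instance hostname passed via --extra-vars
      ansible.builtin.hostname:
        name: "{{ hostname | default('rhel-bootc-host') }}"

    - name: Leave a marker so later runs can see first boot completed
      ansible.builtin.copy:
        dest: /etc/firstboot-done
        content: "configured by ansible-pull on first boot\n"
```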
Pattern 3 — Day-2 with Ansible (without fighting immutability)
Do:
- Start/enable services, rotate certs, update app configs, orchestrate external systems, and query cloud-init facts. (docs.ansible.com)
- Use Ansible to trigger an image rollout (e.g., point hosts to a new bootc image and reboot into it) instead of mutating the base OS; see the sketch after these lists.
Avoid:
- Large package installs or deep OS reconfiguration on live hosts—move that into the image build and redeploy. This aligns with Red Hat’s “do more at build time” guidance for image mode. (Red Hat Developer)
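That rollout can be a short play. A sketch only; the `rhel_bootc_hosts` inventory group and the image tag are illustrative assumptions:

```yaml
# Day-2 rollout sketch: stage a new bootc image, then reboot into it.
# The inventory group and image reference are illustrative assumptions.
- hosts: rhel_bootc_hosts
  become: true
  tasks:
    - name: Point the host at the new image (staged for the next boot)
      ansible.builtin.command: bootc switch quay.io/acme/rhel10-platform:2025.11

    - name: Reboot into the staged image
      ansible.builtin.reboot:
        reboot_timeout: 900
```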
End-to-end workflow (quick reference)
1. Author roles/playbooks that reflect your baseline (System Roles + your roles). (Red Hat Developer)
2. Build & test locally or in CI:

   podman build -t quay.io/acme/rhel10-platform:2025.10 .
   podman run -it --rm quay.io/acme/rhel10-platform:2025.10 /bin/bash

   Remember: OCI `ENTRYPOINT`/`CMD`/`ENV`/`USER` won’t matter once installed as a system. (Red Hat Docs)
3. Push & produce disk artifacts with bootc-image-builder (QCOW2, AMI, ISO); see the sketch after this list. (Red Hat Docs)
4. Deploy to cloud/virt; pass `cloud-init` user-data to inject keys and run `ansible-pull`. (cloudinit.readthedocs.io)
5. Operate day-2 via Ansible for services/orchestration; for base-OS edits, rebuild and roll out a new image. (Red Hat Developer)
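If CI drives step 3 from Ansible as well, a task can wrap the podman invocation of bootc-image-builder. This is a sketch only; the builder image path, output directory, and image tags are assumptions to adapt to your RHEL release and registry:

```yaml
# Sketch of step 3 from Ansible: build a QCOW2 from the bootc container image.
# The builder image path, output directory, and tags are illustrative.
- hosts: image_build_host
  become: true
  tasks:
    - name: Ensure the output directory exists
      ansible.builtin.file:
        path: /srv/bib-output
        state: directory
        mode: "0755"

    - name: Run bootc-image-builder to produce a QCOW2
      ansible.builtin.command: >-
        podman run --rm --privileged
        -v /var/lib/containers/storage:/var/lib/containers/storage
        -v /srv/bib-output:/output
        registry.redhat.io/rhel10/bootc-image-builder:latest
        --type qcow2
        quay.io/acme/rhel10-platform:2025.10
      args:
        creates: /srv/bib-output/qcow2/disk.qcow2
```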
Tips & gotchas
- Honor bootc’s rules: Don’t rely on runtime `ENTRYPOINT`, `CMD`, `ENV`, or `USER`; configure via systemd and files in the image. (Red Hat Docs)
- Lint in CI: `RUN bootc container lint` to catch common mistakes. (Fedora Docs)
- Keep images small: Use multi-stage builds, compiling in a builder stage and copying only what you need into the bootc stage. (Standard container hygiene, works great here.) (Red Hat Docs)
- Know your tooling: RHEL provides multiple paths—bootc for image-based OS management and bootc-image-builder for producing installable artifacts. (Red Hat Docs)
FAQ
Can I just keep using Ansible exactly like before?
You can, but you’ll get the most benefit by shifting provisioning into the image build (roles at build time) and using first-boot + day-2 Ansible sparingly for instance-specific or application-level tasks. (Red Hat Developer)
How do I test before I install to disk?
Run the image with `podman` to validate files and playbook effects, then lint with `bootc container lint`. Remember that some OCI config is ignored once installed. (Red Hat Docs)
What about non-cloud environments?
Use bootc-image-builder to produce ISOs or QCOW2 for bare-metal/virt; you can still feed `cloud-init` user-data (e.g., NoCloud) or run a tiny first-boot systemd service that calls `ansible-pull`. (Red Hat Docs)
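That tiny first-boot unit can itself be baked in at build time by the same playbook. A sketch under assumed names; the unit name and marker file are illustrative, the repo URL is reused from the example above, and `ansible-pull` also needs git present in the image:

```yaml
# Illustrative build-time tasks: install and enable a one-shot unit that runs
# ansible-pull on first boot, for hosts that do not receive cloud-init data.
- name: Install a first-boot ansible-pull unit
  ansible.builtin.copy:
    dest: /usr/lib/systemd/system/firstboot-ansible.service
    content: |
      [Unit]
      Description=Run ansible-pull on first boot
      Wants=network-online.target
      After=network-online.target
      ConditionPathExists=!/var/lib/firstboot-ansible.done

      [Service]
      Type=oneshot
      ExecStart=/usr/bin/ansible-pull -U https://git.example.com/platform/firstboot-playbooks.git firstboot.yml
      ExecStartPost=/usr/bin/touch /var/lib/firstboot-ansible.done

      [Install]
      WantedBy=multi-user.target

- name: Enable the first-boot unit
  ansible.builtin.command: systemctl enable firstboot-ansible.service
  args:
    creates: /etc/systemd/system/multi-user.target.wants/firstboot-ansible.service
```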
Closing thoughts
bootc doesn’t replace Ansible—it repositions it. Let Ansible shine where it’s strongest:
- Build time to create high-quality, compliant images (especially with RHEL System Roles),
- First boot to stamp per-instance uniqueness, and
- Day-2 for orchestration and service control, while keeping the OS itself image-managed. (Red Hat Developer)
Want a tailored lab? I can turn this into a step-by-step exercise (with sample repo layout and CI pipeline) for your environment.