Ansible troubleshooting - Kubernetes K8s or OpenShift OCP 401 Unauthorized
By Luca Berton · Published 2024-01-01 · Category: events
Explore troubleshooting steps for Kubernetes 401 Unauthorized errors in Ansible when interacting with Kubernetes or OpenShift clusters.

Today we're going to talk about Ansible troubleshooting, specifically about the "Kubernetes 401 Unauthorized" message. This fatal error appears when we try to execute code against a Kubernetes K8s or OpenShift OCP cluster without a valid authentication token. It is usually a Kubernetes K8s or OpenShift OCP authentication problem, not an Ansible Playbook or Ansible configuration problem. I'm Luca Berton and welcome to today's episode of Ansible Pilot.
See also: Deploy Kubernetes Resources with Ansible Playbook
Playbook
How to reproduce, troubleshoot, and fix the "Kubernetes 401 Unauthorized" error. The best way to talk about Ansible troubleshooting is to jump into a live Playbook, show you the "Kubernetes 401 Unauthorized" error in practice, and then solve it! This Playbook tries to create an "example" namespace in a Kubernetes/OpenShift cluster.
Ansible Playbook code
---
- name: k8s Playbook
  hosts: localhost
  gather_facts: false
  connection: local
  vars:
    myproject: "example"
  tasks:
    - name: create {{ myproject }} namespace
      kubernetes.core.k8s:
        api_version: v1
        kind: Namespace
        name: "{{ myproject }}"
        state: present
Error Execution
ansible-pilot $ ansible-playbook kubernetes/namespace.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'
PLAY [k8s Playbook] ***********************************************************************************
TASK [create example namespace] *******************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "error": 401, "msg": "Namespace example: Failed to retrieve requested object: b'{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"Unauthorized\",\"reason\":\"Unauthorized\",\"code\":401}\\n'", "reason": "Unauthorized", "status": 401}
PLAY RECAP ****************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
ansible-pilot $
Troubleshooting
ansible-pilot $ oc get namespace
error: You must be logged in to the server (Unauthorized)
ansible-pilot $ crc status
CRC VM: Running
OpenShift: Running (v4.9.15)
Disk Usage: 18.27GB of 32.74GB (Inside the CRC VM)
Cache Usage: 12.79GB
Cache Directory: /Users/lberton/.crc/cache
ansible-pilot $ crc start
WARN A new version (2.0.1) has been published on https://developers.redhat.com/content-gateway/file/pub/openshift-v4/clients/crc/2.0.1/crc-macos-amd64.pkg
INFO A CodeReady Containers VM for OpenShift 4.9.15 is already running
Started the OpenShift cluster.
The server is accessible via web console at:
https://console-openshift-console.apps-crc.testing
Log in as administrator:
Username: kubeadmin
Password: WhDvM-c8WiV-zJ8iH-UKhKV
Log in as user:
Username: developer
Password: developer
Use the 'oc' command line interface:
$ eval $(crc oc-env)
$ oc login -u developer https://api.crc.testing:6443
ansible-pilot $ eval $(crc oc-env)
ansible-pilot $ oc login -u kubeadmin https://api.crc.testing:6443
Logged into "https://api.crc.testing:6443" as "kubeadmin" using existing credentials.
You have access to 66 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "example".
ansible-pilot $ oc get namespace | grep example
example Active 63d
ansible-pilot $ cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://api.crc.testing:6443
  name: api-crc-testing:6443
contexts:
- context:
    cluster: api-crc-testing:6443
    user: developer/api-crc-testing:6443
  name: /api-crc-testing:6443/developer
- context:
    cluster: api-crc-testing:6443
    namespace: default
    user: kubeadmin
  name: crc-admin
- context:
    cluster: api-crc-testing:6443
    namespace: default
    user: developer
  name: crc-developer
- context:
    cluster: api-crc-testing:6443
    namespace: default
    user: kubeadmin/api-crc-testing:6443
  name: default/api-crc-testing:6443/kubeadmin
- context:
    cluster: api-crc-testing:6443
    namespace: example
    user: developer/api-crc-testing:6443
  name: example/api-crc-testing:6443/developer
- context:
    cluster: api-crc-testing:6443
    namespace: example
    user: kubeadmin/api-crc-testing:6443
  name: example/api-crc-testing:6443/kubeadmin
current-context: example/api-crc-testing:6443/kubeadmin
kind: Config
preferences: {}
users:
- name: developer
  user:
    token: sha256~REDACTED
- name: developer/api-crc-testing:6443
  user:
    token: sha256~REDACTED
- name: kubeadmin
  user:
    token: sha256~REDACTED
- name: kubeadmin/api-crc-testing:6443
  user:
    token: sha256~REDACTED
ansible-pilot $
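With the contexts above in hand, the Playbook can be pinned to a known-good context instead of relying on whatever current-context happens to be. A minimal sketch, assuming the crc-admin context shown in the kubeconfig above (kubeconfig and context are standard parameters of the kubernetes.core.k8s module):

```yaml
- name: create {{ myproject }} namespace using an explicit context
  kubernetes.core.k8s:
    kubeconfig: ~/.kube/config
    context: crc-admin
    api_version: v1
    kind: Namespace
    name: "{{ myproject }}"
    state: present
```

Pinning the context this way keeps the Playbook working even after an interactive oc login switches current-context to another user.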
Fix Execution
ansible-pilot $ ansible-playbook kubernetes/namespace.yml
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'
PLAY [k8s Playbook] ***********************************************************************************
TASK [create example namespace] *******************************************************************
ok: [localhost]
PLAY RECAP ****************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
ansible-pilot $
Conclusion
Now you know how to troubleshoot the "Kubernetes/OpenShift 401 Unauthorized" message in Ansible.
See also: Optimize Kubernetes CPU Resources with Ansible Playbooks
Related Articles
• static and dynamic Ansible inventory

Root Cause
A 401 Unauthorized error means the Kubernetes/OpenShift API rejected your credentials. Common causes:
• Expired token: the service account or user token has expired
• Wrong kubeconfig: pointing at the wrong cluster or context
• Missing permissions: the required RBAC role is not assigned
• Certificate issues: CA certificate mismatch
See also: Assign Memory to Kubernetes Pods with Ansible
Debugging Steps
# Check current context
kubectl config current-context
# Test authentication
kubectl auth whoami
# Verify token
kubectl get pods --v=8 2>&1 | grep -i "authorization"
# Check service account token
kubectl get secret -n kube-system
Fix: Service Account Token
---
- name: Fix K8s 401 - create a proper service account
  hosts: localhost
  connection: local
  tasks:
    - name: Create service account
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: ServiceAccount
          metadata:
            name: ansible-sa
            namespace: default

    - name: Create cluster role binding
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: rbac.authorization.k8s.io/v1
          kind: ClusterRoleBinding
          metadata:
            name: ansible-sa-admin
          roleRef:
            apiGroup: rbac.authorization.k8s.io
            kind: ClusterRole
            name: cluster-admin
          subjects:
            - kind: ServiceAccount
              name: ansible-sa
              namespace: default

    - name: Create a token Secret for the service account (Kubernetes 1.24+ no longer auto-creates one)
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Secret
          type: kubernetes.io/service-account-token
          metadata:
            name: ansible-sa-token
            namespace: default
            annotations:
              kubernetes.io/service-account.name: ansible-sa

    - name: Get service account token
      kubernetes.core.k8s_info:
        kind: Secret
        name: ansible-sa-token
        namespace: default
      register: sa_secret
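The registered sa_secret holds the token base64-encoded under data.token. A hedged sketch of extracting it for later tasks, assuming the query returned exactly one Secret (sa_token is an illustrative variable name):

```yaml
- name: Extract the service account token
  ansible.builtin.set_fact:
    sa_token: "{{ sa_secret.resources[0].data.token | b64decode }}"
```

The decoded sa_token can then be passed to subsequent kubernetes.core tasks as their api_key.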
OpenShift-Specific Fix
# Login to OpenShift
oc login https://api.cluster.example.com:6443 -u admin -p password
# Get token
oc whoami -t
# Use in Ansible
ansible-playbook playbook.yml -e "k8s_auth_api_key=$(oc whoami -t)"
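Inside the Playbook, that extra variable can be consumed through the module's api_key parameter. A minimal sketch, assuming the CRC API endpoint from earlier as a default (host, api_key, and validate_certs are standard kubernetes.core.k8s parameters):

```yaml
- name: create {{ myproject }} namespace with an explicit token
  kubernetes.core.k8s:
    host: "{{ k8s_auth_host | default('https://api.crc.testing:6443') }}"
    api_key: "{{ k8s_auth_api_key }}"
    validate_certs: false
    api_version: v1
    kind: Namespace
    name: "{{ myproject }}"
    state: present
```

Setting credentials per task like this bypasses the kubeconfig entirely, which is useful in CI pipelines where no interactive login has happened.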
FAQ
Why does my token expire?
Kubernetes tokens have a default expiry (1 hour for bound tokens in K8s 1.22+). Use long-lived service account tokens or automate token refresh.
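Bound service-account tokens are JWTs, so you can check the expiry locally before blaming RBAC. A minimal sketch using only the Python standard library (it inspects the payload's exp claim without verifying the signature; function names are illustrative):

```python
import base64
import json
import time

def token_expiry(token: str):
    """Return the 'exp' claim of a JWT bearer token, or None if absent."""
    payload_b64 = token.split(".")[1]
    # JWT segments are base64url-encoded without padding; restore it first
    payload_b64 += "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload.get("exp")

def is_expired(token: str) -> bool:
    """True only when the token carries an 'exp' claim that is in the past."""
    exp = token_expiry(token)
    return exp is not None and exp < time.time()
```

If the token is expired, a fresh one can be requested (e.g. with `oc whoami -t` after logging in again) before rerunning the Playbook.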
How do I use a kubeconfig file with Ansible?
Set K8S_AUTH_KUBECONFIG environment variable or pass kubeconfig: parameter to kubernetes.core.k8s module.
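Both options look like this in practice. A sketch of the environment-variable route, applied at play level so every kubernetes.core task on localhost picks it up (the lookup resolves the home directory without needing gathered facts):

```yaml
- name: k8s Playbook with an explicit kubeconfig
  hosts: localhost
  connection: local
  gather_facts: false
  environment:
    K8S_AUTH_KUBECONFIG: "{{ lookup('env', 'HOME') }}/.kube/config"
  tasks:
    - name: create example namespace
      kubernetes.core.k8s:
        api_version: v1
        kind: Namespace
        name: example
        state: present
```

The per-task kubeconfig: parameter does the same thing for a single module call and takes precedence over the environment variable.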
Watch the video: Ansible troubleshooting - Kubernetes K8s or OpenShift OCP 401 Unauthorized — Video Tutorial