Ansible Pilot

Assign CPU Resources to Kubernetes K8s or OpenShift OCP Containers and Pods — Ansible module k8s

How to assign CPU Request and CPU Limit to Kubernetes K8s or OpenShift OCP Containers and Pods with Ansible Playbook using module k8s.

How to Assign CPU Resources to Kubernetes K8s or OpenShift OCP Containers and Pods with Ansible?

I’m going to show you a live demo and some simple Ansible code. I’m Luca Berton and welcome to today’s episode of Ansible Pilot.

Containers cannot use more CPU than the configured limit. Provided the system has CPU time free, a container is guaranteed to be allocated as much CPU as it requests. To specify a CPU request for a container, include the resources:requests field in the Container resource manifest. To specify a CPU limit, include resources:limits.
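As a minimal illustration of these two fields (the container name and image here are placeholders, not part of the demo below), a container spec with both a CPU request and a CPU limit looks like this:

```yaml
# Illustrative container spec: request half a CPU, cap usage at one CPU
spec:
  containers:
    - name: app              # placeholder name
      image: example/app     # placeholder image
      resources:
        requests:
          cpu: "0.5"         # scheduler guarantee; normalized to 500m (millicores)
        limits:
          cpu: "1"           # hard ceiling enforced at runtime
```

Kubernetes accepts CPU quantities either as decimal cores ("0.5") or millicores ("500m"); both forms describe the same amount.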

Ansible manages Kubernetes or OpenShift objects

Let’s talk about the Ansible module k8s. The full name is kubernetes.core.k8s, which means that it is part of the collection of Ansible modules for interacting with Kubernetes and Red Hat OpenShift clusters. It manages Kubernetes (K8s) objects.

Parameters

There is a long list of parameters for the k8s module. Let me summarize the most used ones. Most of the parameters are very generic and can be combined for many use cases.

- name and namespace specify the object name and/or the object namespace. They are useful to create, delete, or discover an object without providing a full resource definition.
- api_version specifies the Kubernetes API version; the default is “v1” for version 1.
- kind specifies the object model.
- state, as in other modules, determines whether an object should be created (present), patched (patched), or deleted (absent).
- definition allows you to provide a valid YAML definition (string, list, or dict) for an object when creating or updating.
- src provides a path to a file containing a valid YAML definition of an object or objects to be created or updated, if you prefer a file over an inline definition.
- template lets you specify a YAML definition template instead.
- validate defines how to validate the resource definition against the Kubernetes schema. Please note that it requires the kubernetes-validate Python module.
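To make these parameters concrete, here are two hedged task sketches (the namespace name and file path are placeholders, not from the demo below):

```yaml
# Create an object by name only, without a full resource definition
- name: ensure a namespace exists
  kubernetes.core.k8s:
    api_version: v1
    kind: Namespace
    name: demo-namespace        # placeholder name
    state: present

# Apply a manifest from a file and validate it against the schema
- name: apply a manifest file
  kubernetes.core.k8s:
    state: present
    src: files/manifest.yml     # placeholder path
    validate:
      fail_on_error: true       # requires the kubernetes-validate Python module
```

The first style is handy for simple objects like Namespaces; the second keeps larger manifests in version-controlled files.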

demo

How to Assign CPU Resources to Kubernetes K8s or OpenShift OCP Containers and Pods with an Ansible Playbook. Specifically, the following example creates the “cpu-example” Namespace / Project with a “cpu-demo” Pod running the “vish/stress” image in a Kubernetes K8s or OpenShift OCP cluster. The -cpus "2" argument tells the container to attempt to use 2 CPUs.

code

---
- name: k8s cpu demo
  hosts: localhost
  gather_facts: false
  connection: local
  vars:
    myproject: "cpu-example"
  tasks:
    - name: create {{ myproject }} namespace
      kubernetes.core.k8s:
        kind: Namespace
        name: "{{ myproject }}"
        state: present
        api_version: v1
    - name: create k8s pod
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Pod
          metadata:
            name: cpu-demo
            namespace: "{{ myproject }}"
          spec:
            containers:
              - name: cpu-demo-ctr
                image: vish/stress
                resources:
                  limits:
                    cpu: "1"
                  requests:
                    cpu: "0.5"
                args:
                  - -cpus
                  - "2"

execution

ansible-pilot $ ansible-playbook kubernetes/assigncpu.yml 
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'
PLAY [k8s cpu demo] *******************************************************************************
TASK [create cpu-example namespace] ***************************************************************
changed: [localhost]
TASK [create k8s pod] *****************************************************************************
changed: [localhost]
PLAY RECAP ****************************************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
ansible-pilot $

idempotency

ansible-pilot $ ansible-playbook kubernetes/assigncpu.yml 
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'
PLAY [k8s cpu demo] *******************************************************************************
TASK [create cpu-example namespace] ***************************************************************
ok: [localhost]
TASK [create k8s pod] *****************************************************************************
ok: [localhost]
PLAY RECAP ****************************************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
ansible-pilot $
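When you are done with the demo, the same module can clean up after it; deleting the namespace removes the Pod inside it as well. A minimal teardown task sketch:

```yaml
- name: delete the cpu-example namespace (and the cpu-demo Pod with it)
  kubernetes.core.k8s:
    api_version: v1
    kind: Namespace
    name: cpu-example
    state: absent
```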

before execution

ansible-pilot $ kubectl get namespace | grep cpu
ansible-pilot $
ansible-pilot $ oc get namespace | grep cpu
ansible-pilot $

after execution

ansible-pilot $ kubectl get namespace cpu-example
NAME          STATUS   AGE
cpu-example   Active   36s
ansible-pilot $ kubectl get pods --namespace=cpu-example
NAME       READY   STATUS    RESTARTS   AGE
cpu-demo   1/1     Running   0          103s
ansible-pilot $ kubectl get pod cpu-demo --namespace=cpu-example
NAME       READY   STATUS    RESTARTS   AGE
cpu-demo   1/1     Running   0          2m51s
ansible-pilot $ kubectl get pod cpu-demo --namespace=cpu-example --output=yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "openshift-sdn",
          "interface": "eth0",
          "ips": [
              "10.217.0.85"
          ],
          "default": true,
          "dns": {}
      }]
    k8s.v1.cni.cncf.io/networks-status: |-
      [{
          "name": "openshift-sdn",
          "interface": "eth0",
          "ips": [
              "10.217.0.85"
          ],
          "default": true,
          "dns": {}
      }]
    openshift.io/scc: anyuid
  creationTimestamp: "2022-04-14T09:17:55Z"
  name: cpu-demo
  namespace: cpu-example
  resourceVersion: "242377"
  uid: 98e24196-b29f-4d17-a08d-3fac9686179c
spec:
  containers:
  - args:
    - -cpus
    - "2"
    image: vish/stress
    imagePullPolicy: Always
    name: cpu-demo-ctr
    resources:
      limits:
        cpu: "1"
      requests:
        cpu: 500m
    securityContext:
      capabilities:
        drop:
        - MKNOD
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-kvlr6
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: default-dockercfg-8b6n9
  nodeName: crc-8rwmc-master-0
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    seLinuxOptions:
      level: s0:c26,c0
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
  volumes:
  - name: kube-api-access-kvlr6
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
      - configMap:
          items:
          - key: service-ca.crt
            path: service-ca.crt
          name: openshift-service-ca.crt
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-04-14T09:17:55Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2022-04-14T09:18:07Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-04-14T09:18:07Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2022-04-14T09:17:55Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://49fed2ab9b3d1672d0ec425e1d11583e39002d248ad63ecbbeb4ecd83657f5e6
    image: docker.io/vish/stress:latest
    imageID: docker.io/vish/[email protected]:b6456a3df6db5e063e1783153627947484a3db387be99e49708c70a9a15e7177
    lastState: {}
    name: cpu-demo-ctr
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2022-04-14T09:18:07Z"
  hostIP: 192.168.126.11
  phase: Running
  podIP: 10.217.0.85
  podIPs:
  - ip: 10.217.0.85
  qosClass: Burstable
  startTime: "2022-04-14T09:17:55Z"
ansible-pilot $
ansible-pilot $ oc get namespace cpu-example
NAME          STATUS   AGE
cpu-example   Active   36s
ansible-pilot $ oc project cpu-example
Now using project "cpu-example" on server "https://api.crc.testing:6443".
ansible-pilot $ oc get pods
NAME       READY   STATUS    RESTARTS   AGE
cpu-demo   1/1     Running   0          103s
ansible-pilot $ oc get pod cpu-demo --namespace=cpu-example
NAME       READY   STATUS    RESTARTS   AGE
cpu-demo   1/1     Running   0          2m51s
ansible-pilot $ oc get pod cpu-demo --namespace=cpu-example --output=yaml
(output identical to the kubectl command above)
ansible-pilot $

The Pod logs show the “vish/stress” workload spawning threads to consume CPU:
I0414 09:43:57.577707       1 main.go:26] Allocating "0" memory, in "4Ki" chunks, with a 1ms sleep between allocations
I0414 09:43:57.577885       1 main.go:39] Spawning a thread to consume CPU
I0414 09:43:57.577896       1 main.go:39] Spawning a thread to consume CPU
I0414 09:43:57.577903       1 main.go:29] Allocated "0" memory


Recap

Now you know how to Assign CPU Resources to Kubernetes K8s or OpenShift OCP Containers and Pods with Ansible.

Subscribe to the YouTube channel, Medium, Website and Twitter to not miss the next episode of the Ansible Pilot.

Academy

Learn the Ansible automation technology with some real-life examples in my book Ansible By Examples: 100+ Automation Examples For Linux and Windows System Administrator and DevOps.

Want to keep this project going? Please donate


April 14, 2022
