<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://juergenpointinger.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://juergenpointinger.github.io/" rel="alternate" type="text/html" /><updated>2026-01-29T14:48:24+00:00</updated><id>https://juergenpointinger.github.io/feed.xml</id><title type="html">© Jürgen Pointinger</title><subtitle>A blog about DevOps, Agility, Software Development Practices, Coding and Leadership. Created by Jürgen Pointinger, an IT Consultant, DevOps Enthusiast, Solution Architect and Software Developer.</subtitle><author><name>Jürgen Pointinger</name></author><entry><title type="html">Kubernetes Cheat Sheet</title><link href="https://juergenpointinger.github.io/k8s-cheat-sheet/" rel="alternate" type="text/html" title="Kubernetes Cheat Sheet" /><published>2023-04-01T00:00:00+00:00</published><updated>2023-04-01T00:00:00+00:00</updated><id>https://juergenpointinger.github.io/k8s-cheat-sheet</id><content type="html" xml:base="https://juergenpointinger.github.io/k8s-cheat-sheet/"><![CDATA[<h2 id="user-local-k8s">User local k8s</h2>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>minikube delete
<span class="nv">$ </span>minikube delete <span class="nt">--all</span> <span class="nt">--purge</span>

<span class="nv">$ </span>minikube start <span class="nt">--wait</span><span class="o">=</span><span class="nb">false</span>
</code></pre></div></div>

<h2 id="kubectl">Kubectl</h2>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl cluster-info
</code></pre></div></div>

<p><a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/">k8s docs</a></p>

<h3 id="context-and-configuration">Context and configuration</h3>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>kubectl config view <span class="c"># Show Merged kubeconfig settings.</span>

<span class="nv">$ </span>kubectl config view <span class="nt">-o</span> <span class="nv">jsonpath</span><span class="o">=</span><span class="s1">'{.users[].name}'</span>    <span class="c"># display the first user</span>
<span class="nv">$ </span>kubectl config view <span class="nt">-o</span> <span class="nv">jsonpath</span><span class="o">=</span><span class="s1">'{.users[*].name}'</span>   <span class="c"># get a list of users</span>
<span class="nv">$ </span>kubectl config get-contexts                          <span class="c"># display list of contexts </span>
<span class="nv">$ </span>kubectl config current-context                       <span class="c"># display the current-context</span>
<span class="nv">$ </span>kubectl config use-context my-cluster-name           <span class="c"># set the default context to my-cluster-name</span>
</code></pre></div></div>

<h3 id="apply">Apply</h3>
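<p><code>kubectl apply</code> creates and updates resources from manifest files. A few common invocations, as documented in the upstream kubectl cheat sheet (the file and directory names are placeholders):</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Create or update the resources defined in a manifest</span>
kubectl apply <span class="nt">-f</span> ./my-manifest.yaml

<span class="c"># Apply several manifests at once</span>
kubectl apply <span class="nt">-f</span> ./my1.yaml <span class="nt">-f</span> ./my2.yaml

<span class="c"># Apply all manifests in a directory</span>
kubectl apply <span class="nt">-f</span> ./dir
</code></pre></div></div>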

<h3 id="viewing-finding-resources">Viewing, finding resources</h3>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Get commands with basic output</span>
kubectl get services                          <span class="c"># List all services in the namespace</span>
kubectl get pods <span class="nt">--all-namespaces</span>             <span class="c"># List all pods in all namespaces</span>
kubectl get pods <span class="nt">-o</span> wide                      <span class="c"># List all pods in the current namespace, with more details</span>
kubectl get deployment my-dep                 <span class="c"># List a particular deployment</span>
kubectl get pods                              <span class="c"># List all pods in the namespace</span>
kubectl get pod my-pod <span class="nt">-o</span> yaml                <span class="c"># Get a pod's YAML</span>

<span class="c"># Describe commands with verbose output</span>
kubectl describe nodes my-node
kubectl describe pods my-pod

<span class="c"># List Services Sorted by Name</span>
kubectl get services <span class="nt">--sort-by</span><span class="o">=</span>.metadata.name

<span class="c"># List pods Sorted by Restart Count</span>
kubectl get pods <span class="nt">--sort-by</span><span class="o">=</span><span class="s1">'.status.containerStatuses[0].restartCount'</span>

<span class="c"># List PersistentVolumes sorted by capacity</span>
kubectl get pv <span class="nt">--sort-by</span><span class="o">=</span>.spec.capacity.storage

<span class="c"># Get the version label of all pods with label app=cassandra</span>
kubectl get pods <span class="nt">--selector</span><span class="o">=</span><span class="nv">app</span><span class="o">=</span>cassandra <span class="nt">-o</span> <span class="se">\</span>
  <span class="nv">jsonpath</span><span class="o">=</span><span class="s1">'{.items[*].metadata.labels.version}'</span>

<span class="c"># Retrieve the value of a key with dots, e.g. 'ca.crt'</span>
kubectl get configmap myconfig <span class="se">\</span>
  <span class="nt">-o</span> <span class="nv">jsonpath</span><span class="o">=</span><span class="s1">'{.data.ca\.crt}'</span>

<span class="c"># Retrieve a base64 encoded value with dashes instead of underscores.</span>
kubectl get secret my-secret <span class="nt">--template</span><span class="o">=</span><span class="s1">'{{index .data "key-name-with-dashes"}}'</span>

<span class="c"># Get all worker nodes (use a selector to exclude results that have a label</span>
<span class="c"># named 'node-role.kubernetes.io/control-plane')</span>
kubectl get node <span class="nt">--selector</span><span class="o">=</span><span class="s1">'!node-role.kubernetes.io/control-plane'</span>

<span class="c"># Get all running pods in the namespace</span>
kubectl get pods <span class="nt">--field-selector</span><span class="o">=</span>status.phase<span class="o">=</span>Running

<span class="c"># Get ExternalIPs of all nodes</span>
kubectl get nodes <span class="nt">-o</span> <span class="nv">jsonpath</span><span class="o">=</span><span class="s1">'{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'</span>

<span class="c"># List Names of Pods that belong to Particular RC</span>
<span class="c"># "jq" command useful for transformations that are too complex for jsonpath, it can be found at https://stedolan.github.io/jq/</span>
<span class="nv">sel</span><span class="o">=</span><span class="k">${</span><span class="si">$(</span>kubectl get rc my-rc <span class="nt">--output</span><span class="o">=</span>json | jq <span class="nt">-j</span> <span class="s1">'.spec.selector | to_entries | .[] | "\(.key)=\(.value),"'</span><span class="si">)</span><span class="p">%?</span><span class="k">}</span>
<span class="nb">echo</span> <span class="si">$(</span>kubectl get pods <span class="nt">--selector</span><span class="o">=</span><span class="nv">$sel</span> <span class="nt">--output</span><span class="o">=</span><span class="nv">jsonpath</span><span class="o">={</span>.items..metadata.name<span class="o">}</span><span class="si">)</span>

<span class="c"># Show labels for all pods (or any other Kubernetes object that supports labelling)</span>
kubectl get pods <span class="nt">--show-labels</span>

<span class="c"># Check which nodes are ready</span>
<span class="nv">JSONPATH</span><span class="o">=</span><span class="s1">'{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'</span> <span class="se">\</span>
 <span class="o">&amp;&amp;</span> kubectl get nodes <span class="nt">-o</span> <span class="nv">jsonpath</span><span class="o">=</span><span class="s2">"</span><span class="nv">$JSONPATH</span><span class="s2">"</span> | <span class="nb">grep</span> <span class="s2">"Ready=True"</span>

<span class="c"># Output decoded secrets without external tools</span>
kubectl get secret my-secret <span class="nt">-o</span> go-template<span class="o">=</span><span class="s1">'{{range $k,$v := .data}}{{"### "}}{{$k}}{{"\n"}}{{$v|base64decode}}{{"\n\n"}}{{end}}'</span>

<span class="c"># List all Secrets currently in use by a pod</span>
kubectl get pods <span class="nt">-o</span> json | jq <span class="s1">'.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name'</span> | <span class="nb">grep</span> <span class="nt">-v</span> null | <span class="nb">sort</span> | <span class="nb">uniq</span>

<span class="c"># List all containerIDs of initContainer of all pods</span>
<span class="c"># Helpful when cleaning up stopped containers, while avoiding removal of initContainers.</span>
kubectl get pods <span class="nt">--all-namespaces</span> <span class="nt">-o</span> <span class="nv">jsonpath</span><span class="o">=</span><span class="s1">'{range .items[*].status.initContainerStatuses[*]}{.containerID}{"\n"}{end}'</span> | <span class="nb">cut</span> <span class="nt">-d</span>/ <span class="nt">-f3</span>

<span class="c"># List Events sorted by timestamp</span>
kubectl get events <span class="nt">--sort-by</span><span class="o">=</span>.metadata.creationTimestamp

<span class="c"># List all warning events</span>
kubectl events <span class="nt">--types</span><span class="o">=</span>Warning

<span class="c"># Compares the current state of the cluster against the state that the cluster would be in if the manifest was applied.</span>
kubectl diff <span class="nt">-f</span> ./my-manifest.yaml

<span class="c"># Produce a period-delimited tree of all keys returned for nodes</span>
<span class="c"># Helpful when locating a key within a complex nested JSON structure</span>
kubectl get nodes <span class="nt">-o</span> json | jq <span class="nt">-c</span> <span class="s1">'paths|join(".")'</span>

<span class="c"># Produce a period-delimited tree of all keys returned for pods, etc</span>
kubectl get pods <span class="nt">-o</span> json | jq <span class="nt">-c</span> <span class="s1">'paths|join(".")'</span>

<span class="c"># Produce ENV for all pods, assuming you have a default container for the pods, default namespace and the `env` command is supported.</span>
<span class="c"># Helpful when running any supported command across all pods, not just `env`</span>
<span class="k">for </span>pod <span class="k">in</span> <span class="si">$(</span>kubectl get po <span class="nt">--output</span><span class="o">=</span><span class="nv">jsonpath</span><span class="o">={</span>.items..metadata.name<span class="o">}</span><span class="si">)</span><span class="p">;</span> <span class="k">do </span><span class="nb">echo</span> <span class="nv">$pod</span> <span class="o">&amp;&amp;</span> kubectl <span class="nb">exec</span> <span class="nt">-it</span> <span class="nv">$pod</span> <span class="nt">--</span> <span class="nb">env</span><span class="p">;</span> <span class="k">done</span>

<span class="c"># Get a deployment's status subresource</span>
kubectl get deployment nginx-deployment <span class="nt">--subresource</span><span class="o">=</span>status
</code></pre></div></div>

<h3 id="updating-resources">Updating resources</h3>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl <span class="nb">set </span>image deployment/frontend <span class="nv">www</span><span class="o">=</span>image:v2               <span class="c"># Rolling update "www" containers of "frontend" deployment, updating the image</span>
kubectl rollout <span class="nb">history </span>deployment/frontend                      <span class="c"># Check the history of deployments including the revision</span>
kubectl rollout undo deployment/frontend                         <span class="c"># Rollback to the previous deployment</span>
kubectl rollout undo deployment/frontend <span class="nt">--to-revision</span><span class="o">=</span>2         <span class="c"># Rollback to a specific revision</span>
kubectl rollout status <span class="nt">-w</span> deployment/frontend                    <span class="c"># Watch rolling update status of "frontend" deployment until completion</span>
kubectl rollout restart deployment/frontend                      <span class="c"># Rolling restart of the "frontend" deployment</span>


<span class="nb">cat </span>pod.json | kubectl replace <span class="nt">-f</span> -                              <span class="c"># Replace a pod based on the JSON passed into stdin</span>

<span class="c"># Force replace, delete and then re-create the resource. Will cause a service outage.</span>
kubectl replace <span class="nt">--force</span> <span class="nt">-f</span> ./pod.json

<span class="c"># Create a service for a replicated nginx, which serves on port 80 and connects to the containers on port 8000</span>
kubectl expose rc nginx <span class="nt">--port</span><span class="o">=</span>80 <span class="nt">--target-port</span><span class="o">=</span>8000

<span class="c"># Update a single-container pod's image version (tag) to v4</span>
kubectl get pod mypod <span class="nt">-o</span> yaml | <span class="nb">sed</span> <span class="s1">'s/\(image: myimage\):.*$/\1:v4/'</span> | kubectl replace <span class="nt">-f</span> -

kubectl label pods my-pod new-label<span class="o">=</span>awesome                      <span class="c"># Add a Label</span>
kubectl label pods my-pod new-label-                             <span class="c"># Remove a label</span>
kubectl annotate pods my-pod icon-url<span class="o">=</span>http://goo.gl/XXBTWq       <span class="c"># Add an annotation</span>
kubectl autoscale deployment foo <span class="nt">--min</span><span class="o">=</span>2 <span class="nt">--max</span><span class="o">=</span>10                <span class="c"># Auto scale a deployment "foo"</span>
</code></pre></div></div>

<h3 id="patching-resources">Patching resources</h3>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Partially update a node</span>
kubectl patch node k8s-node-1 <span class="nt">-p</span> <span class="s1">'{"spec":{"unschedulable":true}}'</span>

<span class="c"># Update a container's image; spec.containers[*].name is required because it's a merge key</span>
kubectl patch pod valid-pod <span class="nt">-p</span> <span class="s1">'{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'</span>

<span class="c"># Update a container's image using a json patch with positional arrays</span>
kubectl patch pod valid-pod <span class="nt">--type</span><span class="o">=</span><span class="s1">'json'</span> <span class="nt">-p</span><span class="o">=</span><span class="s1">'[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]'</span>

<span class="c"># Disable a deployment livenessProbe using a json patch with positional arrays</span>
kubectl patch deployment valid-deployment  <span class="nt">--type</span> json   <span class="nt">-p</span><span class="o">=</span><span class="s1">'[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]'</span>

<span class="c"># Add a new element to a positional array</span>
kubectl patch sa default <span class="nt">--type</span><span class="o">=</span><span class="s1">'json'</span> <span class="nt">-p</span><span class="o">=</span><span class="s1">'[{"op": "add", "path": "/secrets/1", "value": {"name": "whatever" } }]'</span>

<span class="c"># Update a deployment's replica count by patching its scale subresource</span>
kubectl patch deployment nginx-deployment <span class="nt">--subresource</span><span class="o">=</span><span class="s1">'scale'</span> <span class="nt">--type</span><span class="o">=</span><span class="s1">'merge'</span> <span class="nt">-p</span> <span class="s1">'{"spec":{"replicas":2}}'</span>
</code></pre></div></div>

<h3 id="scaling-resources">Scaling resources</h3>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl scale <span class="nt">--replicas</span><span class="o">=</span>3 rs/foo                                 <span class="c"># Scale a replicaset named 'foo' to 3</span>
kubectl scale <span class="nt">--replicas</span><span class="o">=</span>3 <span class="nt">-f</span> foo.yaml                            <span class="c"># Scale a resource specified in "foo.yaml" to 3</span>
kubectl scale <span class="nt">--current-replicas</span><span class="o">=</span>2 <span class="nt">--replicas</span><span class="o">=</span>3 deployment/mysql  <span class="c"># If the deployment named mysql's current size is 2, scale mysql to 3</span>
kubectl scale <span class="nt">--replicas</span><span class="o">=</span>5 rc/foo rc/bar rc/baz                   <span class="c"># Scale multiple replication controllers</span>
</code></pre></div></div>

<h3 id="deleting-resources">Deleting resources</h3>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl delete <span class="nt">-f</span> ./pod.json                                      <span class="c"># Delete a pod using the type and name specified in pod.json</span>
kubectl delete pod unwanted <span class="nt">--now</span>                                 <span class="c"># Delete a pod with no grace period</span>
kubectl delete pod,service baz foo                                <span class="c"># Delete pods and services with same names "baz" and "foo"</span>
kubectl delete pods,services <span class="nt">-l</span> <span class="nv">name</span><span class="o">=</span>myLabel                      <span class="c"># Delete pods and services with label name=myLabel</span>
kubectl <span class="nt">-n</span> my-ns delete pod,svc <span class="nt">--all</span>                             <span class="c"># Delete all pods and services in namespace my-ns</span>
<span class="c"># Delete all pods matching the awk pattern1 or pattern2</span>
kubectl get pods  <span class="nt">-n</span> mynamespace <span class="nt">--no-headers</span><span class="o">=</span><span class="nb">true</span> | <span class="nb">awk</span> <span class="s1">'/pattern1|pattern2/{print $1}'</span> | xargs  kubectl delete <span class="nt">-n</span> mynamespace pod
</code></pre></div></div>

<h3 id="interacting-with-running-pods">Interacting with running Pods</h3>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl logs my-pod                                 <span class="c"># dump pod logs (stdout)</span>
kubectl logs <span class="nt">-l</span> <span class="nv">name</span><span class="o">=</span>myLabel                        <span class="c"># dump pod logs, with label name=myLabel (stdout)</span>
kubectl logs my-pod <span class="nt">--previous</span>                      <span class="c"># dump pod logs (stdout) for a previous instantiation of a container</span>
kubectl logs my-pod <span class="nt">-c</span> my-container                 <span class="c"># dump pod container logs (stdout, multi-container case)</span>
kubectl logs <span class="nt">-l</span> <span class="nv">name</span><span class="o">=</span>myLabel <span class="nt">-c</span> my-container        <span class="c"># dump pod logs, with label name=myLabel (stdout)</span>
kubectl logs my-pod <span class="nt">-c</span> my-container <span class="nt">--previous</span>      <span class="c"># dump pod container logs (stdout, multi-container case) for a previous instantiation of a container</span>
kubectl logs <span class="nt">-f</span> my-pod                              <span class="c"># stream pod logs (stdout)</span>
kubectl logs <span class="nt">-f</span> my-pod <span class="nt">-c</span> my-container              <span class="c"># stream pod container logs (stdout, multi-container case)</span>
kubectl logs <span class="nt">-f</span> <span class="nt">-l</span> <span class="nv">name</span><span class="o">=</span>myLabel <span class="nt">--all-containers</span>    <span class="c"># stream all pods logs with label name=myLabel (stdout)</span>
kubectl run <span class="nt">-i</span> <span class="nt">--tty</span> busybox <span class="nt">--image</span><span class="o">=</span>busybox:1.28 <span class="nt">--</span> sh  <span class="c"># Run pod as interactive shell</span>
kubectl run nginx <span class="nt">--image</span><span class="o">=</span>nginx <span class="nt">-n</span> mynamespace      <span class="c"># Start a single instance of nginx pod in the namespace of mynamespace</span>
kubectl run nginx <span class="nt">--image</span><span class="o">=</span>nginx <span class="nt">--dry-run</span><span class="o">=</span>client <span class="nt">-o</span> yaml <span class="o">&gt;</span> pod.yaml
                                                    <span class="c"># Generate spec for running pod nginx and write it into a file called pod.yaml</span>
kubectl attach my-pod <span class="nt">-i</span>                            <span class="c"># Attach to Running Container</span>
kubectl port-forward my-pod 5000:6000               <span class="c"># Listen on port 5000 on the local machine and forward to port 6000 on my-pod</span>
kubectl <span class="nb">exec </span>my-pod <span class="nt">--</span> <span class="nb">ls</span> /                         <span class="c"># Run command in existing pod (1 container case)</span>
kubectl <span class="nb">exec</span> <span class="nt">--stdin</span> <span class="nt">--tty</span> my-pod <span class="nt">--</span> /bin/sh        <span class="c"># Interactive shell access to a running pod (1 container case)</span>
kubectl <span class="nb">exec </span>my-pod <span class="nt">-c</span> my-container <span class="nt">--</span> <span class="nb">ls</span> /         <span class="c"># Run command in existing pod (multi-container case)</span>
kubectl top pod POD_NAME <span class="nt">--containers</span>               <span class="c"># Show metrics for a given pod and its containers</span>
kubectl top pod POD_NAME <span class="nt">--sort-by</span><span class="o">=</span>cpu              <span class="c"># Show metrics for a given pod and sort it by 'cpu' or 'memory'</span>
</code></pre></div></div>

<h3 id="copy-files-and-directories-to-and-from-containers">Copy files and directories to and from containers</h3>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl <span class="nb">cp</span> /tmp/foo_dir my-pod:/tmp/bar_dir            <span class="c"># Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the current namespace</span>
kubectl <span class="nb">cp</span> /tmp/foo my-pod:/tmp/bar <span class="nt">-c</span> my-container    <span class="c"># Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container</span>
kubectl <span class="nb">cp</span> /tmp/foo my-namespace/my-pod:/tmp/bar       <span class="c"># Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace my-namespace</span>
kubectl <span class="nb">cp </span>my-namespace/my-pod:/tmp/foo /tmp/bar       <span class="c"># Copy /tmp/foo from a remote pod to /tmp/bar locally</span>
</code></pre></div></div>]]></content><author><name>Jürgen Pointinger</name></author><category term="Code &amp; Snippets" /><summary type="html"><![CDATA[User local k8s]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://juergenpointinger.github.io/assets/images/blog/k8s-cheat-sheet.jpg" /><media:content medium="image" url="https://juergenpointinger.github.io/assets/images/blog/k8s-cheat-sheet.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Being a heart surgeon in an IT depression</title><link href="https://juergenpointinger.github.io/it-depression/" rel="alternate" type="text/html" title="Being a heart surgeon in an IT depression" /><published>2021-03-22T00:00:00+00:00</published><updated>2021-03-22T00:00:00+00:00</updated><id>https://juergenpointinger.github.io/it-depression</id><content type="html" xml:base="https://juergenpointinger.github.io/it-depression/"><![CDATA[<h1 id="jonas-heart-surgeon">Jonas, heart surgeon</h1>

<p>Jonas is 28 years old and an aspiring heart surgeon. Some would argue that at 28 he is still a bit young to be a heart surgeon. But Jonas has always been ahead of his time and is very talented. In his younger years he was already present at many of his colleagues’ surgeries, was allowed to assist, and has even held some of these vital organs in his hands.</p>

<p>In fact, they were all different, but as Jonas looked closer, he saw many similarities. Well, at least they were all hearts, you might think. But let him tell us about his discovery himself:</p>

<blockquote>
  <p>I have always been fascinated by the most important organ in our body. The heart looks different in each person. Sometimes a little smaller, sometimes larger. Sometimes, unfortunately, the heart does not work as it should. When I looked very closely, I could even see broken hearts.</p>

  <p>Some of the people who needed heart surgery seemed to work directly in the IT industry, which is why I tried to find a connection.</p>

  <p>Teams act like hearts, I thought to myself. They keep our organization, our organism, alive.
Without a heart we could not survive, and neither could our organization.</p>

  <p>Communication is like an artery through which our heart is supplied with oxygen. If our arteries are clogged, our heart doesn’t work properly either - in the worst cases we die!</p>

  <p>Our communication, our teams and, thought one step further, our culture not only determine the survival of our organization; treated wrongly, they can make us stop functioning, or even cease to exist, from one moment to the next!</p>
</blockquote>

<p><img src="/assets/images/blog/doctor.png" alt="Dr. Jonas" />
<em>Dr. Jonas and his colleagues</em></p>

<p>Jonas has had an uneasy feeling for some time and has been looking for answers to his thesis in the IT industry. He recognizes many hearts that are seriously ill. The COVID-19 year, with its #WFA (work from anywhere) and the shift away from face-to-face communication, was in some places merely the final straw. He comes to one conclusion:</p>

<blockquote>
  <p>The IT industry is in the midst of a depression, and not just since yesterday. It has been shown to us more clearly than ever what many of us are facing.</p>

  <p>Why is it not our highest priority to keep our heart healthy?</p>
</blockquote>

<h2 id="heart-attack---i-hope-not">Heart attack - I hope not</h2>

<p>With these findings, Jonas goes to his experienced colleagues, with whom he has always had a very good exchange and who value the innovative views he brings in. What he shows them startles them, and they thank him for the new insight. After careful consideration, his colleagues recognize a pattern:</p>

<blockquote>
  <p>It seems that we have put our bodies through a lot in recent years. In many IT organizations the sinus rhythm is far from ideal; we are ailing. Have we exercised too little or been out in the fresh air too rarely? In times of COVID-19, that could well be a reason. On top of this come the psychological burdens to which we are exposed during what feels like our 100th lockdown.</p>

  <p>We need to save our organization from a heart attack. Constant movement and adaptation protect us from this … and of course all the <em>good</em> characters from <em>Once upon a time … Life</em></p>
</blockquote>

<p><img src="/assets/images/blog/Es-war-einmal-das-Leben.png" alt="Es war einmal das Leben" />
<em>Once upon a time … Life</em></p>

<h3 id="keep-moving">Keep moving</h3>

<p>Why do we still have such a hard time adapting to change, despite the Agile Manifesto and its value <strong>Responding to change</strong>, known now for 20 years? Have we forgotten how to adapt? Have we become too inert, reflecting far too rarely on what has happened? Jonas wonders why the IT pandemic is hitting us so hard. And he is not talking about the disease itself.</p>

<p>IT organizations today should be able to react quickly and flexibly to changes in their market. A customer-centric mindset helps us significantly in this regard. But what Jonas observed just last year is that the changes are hardly directed inwards. Working methodologies, ways of working, processes, teams, communication channels … many things simply remain unchanged.</p>

<p><img src="/assets/images/blog/flexible.jpg" alt="Keep moving" />
<em>Keep moving</em></p>

<p>Jonas remembers a tweet he read a while ago. It was about product development and an organization’s goals for the next year:</p>

<blockquote>
  <p>C-Level to PM: We need to roll out 20 new projects this year to be successful, but the development team believes they could only focus on 2 projects at a time with the best quality. What should we do?</p>
</blockquote>

<p>This did not sound at all like healthy growth to him. The correct answer to the question, as he saw it (whether it was really acted on in reality remains to be seen), was:</p>

<blockquote>
  <p>PM to C-Level: We look for the 2 projects with the highest priority, the greatest customer value and the best return on investment. And focus entirely on these projects and drop the others!</p>
</blockquote>

<p>Of course, it is a major challenge to explain this situation to C-level management, but what would be the alternative? External contractors, outsourced projects … I hope you know where I am going with this.</p>

<blockquote>
  <ul>
    <li>Organic growth is healthy</li>
    <li>Change to be able to react to market situations is good</li>
    <li>Intrinsic adaptation at the same speed is effective</li>
  </ul>
</blockquote>

<p>Maybe you know similar situations, or are currently in a similar job environment: an organization that has grown very quickly and seems to struggle with its structures and processes.</p>

<h2 id="one-body---one-life">One body - One life</h2>

<p>Jonas continues to reflect:</p>

<blockquote>
  <p>Our heart, as we know, can perfectly supply a body for years. In some exceptional cases it is even strong enough to keep two bodies alive, as with conjoined twins. As an aspiring heart surgeon, I believe I can claim that this only works in very special cases and only for a certain period of time.</p>
</blockquote>

<p>If we now compare a delivery team in our IT organization with a body, the team should, just like the heart, focus on one major topic, one business domain, one bounded context or one value stream, and master it perfectly.</p>

<p>This is how we create a continuous flow and master our business best. However, if a team has to serve multiple domains - in other words, if the heart suddenly had to take over the tasks of another organ - the cognitive load in the team would increase tremendously.</p>

<blockquote>
  <p>Hardly feasible, Jonas feels. Focus is lost and quality suffers, because it is no longer one of our main objectives. Little by little the team slows down and quality slides into the negative. We will pay for our misguided decisions.</p>
</blockquote>

<h3 id="focus-on-excellence">Focus on excellence</h3>

<p>If we stay with IT organizations, we talk about establishing feature teams and cross-functional teams. We want to make sure that all the necessary skills are present in a delivery team, or in an extended delivery team. We are looking for motivated contributors who are not only experts in their field but also think outside their comfort zone (T-shaped). As if this alone were not a huge challenge, there are other aspects that we must not neglect under any circumstances.</p>

<p>We have to take diversity into account in our teams; it brings different points of view into the team and, above all, promotes creativity. We should also not disregard seniority levels, which should vary but be distributed as evenly as possible across the team.</p>

<p><img src="/assets/images/blog/focus.jpg" alt="Focus on excellence" />
<em>Focus on excellence</em></p>

<p>We have to make sure that we create long-lasting teams that encourage, support and challenge each other at the right moment, always with the appropriate professionalism, but also with the highest possible trust - without trust we don’t create teams, we create waste! We want to create high-performing teams and thus a results-oriented organization.</p>

<p>This list is far from complete, but it should show that team composition and team topologies are a very complex and extensive topic.</p>

<p>But back to Jonas.</p>

<blockquote>
  <p>So we now know that <strong>continuous change</strong> helps us prevent heart attacks. Furthermore, we have found that focus is an important aspect and that we need to <strong>avoid cognitive load</strong>. And we know that <strong>the way we communicate</strong>, among other things, determines success or failure.</p>

  <p>Can we use these findings to prevent the whole thing from happening?</p>
</blockquote>]]></content><author><name>Jürgen Pointinger</name></author><category term="Leadership" /><summary type="html"><![CDATA[Jonas, heart surgeon]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://juergenpointinger.github.io/assets/images/blog/it-depression.jpg" /><media:content medium="image" url="https://juergenpointinger.github.io/assets/images/blog/it-depression.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Aspiration for the future</title><link href="https://juergenpointinger.github.io/aspiration-for-the-future/" rel="alternate" type="text/html" title="Aspiration for the future" /><published>2020-12-27T00:00:00+00:00</published><updated>2020-12-27T00:00:00+00:00</updated><id>https://juergenpointinger.github.io/aspiration-for-the-future</id><content type="html" xml:base="https://juergenpointinger.github.io/aspiration-for-the-future/"><![CDATA[<h2 id="i-had-no-clue">I had no clue</h2>

<p>In my early days as an apprentice in the IT industry, as a software developer, later as a software architect and even as a team leader, I often wondered why it was so difficult for us to fulfill the wishes of management. Only later did I realize that at the time probably nobody could have fulfilled them, because the wishes changed from day to day. Back then, management didn’t know what would be important to them tomorrow, or they just couldn’t communicate it. In the beginning, I was still fooled into thinking that this was what agile meant: being able to respond to management’s wishes at any point in time. Who was I to question that? I didn’t go to university; I learned the theory in practice. Possibly I just didn’t understand the theory correctly.</p>

<p>The question chased me and just wouldn’t leave me alone.</p>

<blockquote>
  <p>I guess it’s just the way it is. There’s nothing you can do about it! Or can you?</p>
</blockquote>

<p>I began to question the theory and wanted to see for myself what was possible. One of the first books I read on leadership was ‘Managing Humans - Biting and Humorous Tales of a Software Engineering Manager’ by Michael Lopp. That was the first time things seemed so clear to me. Much of what I read in this book felt familiar. On the one hand it was amusing that I was not the only one who had had this kind of experience; on the other hand it was shocking that this kind of literature was available to everyone even remotely interested in leadership, and nevertheless we still go on like this.</p>

<p><img src="/assets/images/blog/managing-humans.png" alt="Managing Humans - Biting and Humorous Tales of a Software Engineering Manager" /></p>

<p>I have to admit that I don’t know if I read it in one of the great books on leadership or if it just evolved over the years and many, many conversations with colleagues, but it is a quote that has stuck with me ever since:</p>

<blockquote>
  <p>It’s not about managing people, it’s about managing the environment that surrounds them.</p>
</blockquote>

<h2 id="manage-the-environment">Manage the environment</h2>

<p>So what is the environment we are referring to here? I would say it is the team area, the system, the organization in general. The idea is to have a serve-first mindset and to focus on empowering and developing people. We actively seek to develop and align an individual’s sense of purpose with the business goals. Servant leaders remove organizational impediments, in other words <strong>blockers</strong>. Do you have everything you need to get the job done? Servant leaders clear the way for others to contribute.</p>

<p>After many more articles, blogs, and a few more fantastic books, I began to understand that the whole topic of leadership is probably a bit more complex:</p>

<ul>
  <li>‘The Five Dysfunctions of a Team: A Leadership Fable’, by Patrick Lencioni</li>
  <li>‘Management 3.0: Leading Agile Developers, Developing Agile Leaders’, by Jurgen Appelo</li>
  <li>‘Story Driven: You don’t need to compete when you know who you are’, by Bernadette Jiwa</li>
</ul>

<p><img src="/assets/images/blog/leadership-books.png" alt="Leadership books" /></p>

<p>I have tried to apply, test and adapt what I have learned in different positions and different IT organizations. Some things worked well, some things didn’t.</p>

<p>But what has always worked was communicating in a way that is polite, inspiring and motivating, even in an uncertain or changing environment; responding with consideration to colleagues’ personal needs and feelings; and recognizing, valuing and communicating the achievement of goals.</p>

<h2 id="build-trust">Build trust</h2>

<p>As a manager, you often seem untouchable to many employees. Some managers are hardly approachable for their employees. Their door is usually locked. Managers only leave their offices when the roof is on fire, or at least that is how it appears to many.</p>

<blockquote>
  <p>Trust, but verify.</p>

  <p>attributed to <strong>Lenin</strong></p>
</blockquote>

<p>So it’s not surprising that in many companies you notice it immediately when managers take part in a meeting. It seems as if managers deliver a monologue, and employees are not given the opportunity to contribute. They stop questioning anything and withdraw.</p>

<p>At the beginning of my career I reacted similarly; then, as a team leader, I saw the other side and wondered why others reacted to me in the same way. There was a lack of trust on both sides, at least that was the feeling at the time. We worked on it and it was great to see how the team grew together. To make it even more explicit: trust is always better than control!</p>

<p>In Lencioni’s pyramid, trust is the basis for everything, and I see it the same way. To build trust in your team culture, create an environment where people are open and honest, share problems, admit mistakes, and support each other.</p>

<p>A good first step for managers is to …</p>

<blockquote>
  <p>Don’t be a prick - Go first - Be human! Make yourself vulnerable.</p>
</blockquote>

<p><img src="/assets/images/blog/dont-be-a-prick.jpg" alt="Don't be a prick" /></p>

<p>But of course there were also conflicts in the team that we had to solve here and there.</p>

<h2 id="give-a-voice">Give a voice</h2>

<p>Welcome productive conflict in your team. Encourage others to speak up, get input from the team and confront issues quickly. But remember, productive and constructive conflict cannot happen without trust!</p>

<p>With trust and healthy conflicts, you engage your team. This ensures that you are aligned on common objectives and being clear on direction and priorities. Team members should feel that they can take responsibility and hold each other accountable, even when it is difficult. Similarly, a high-performing team also recognizes the performance of others with praise or rewards.</p>

<p>Voice has a positive impact on organizational culture and is fostered by giving your teams more autonomy and trust.</p>

<h2 id="do-retrospectives">Do retrospectives</h2>

<p>Retrospectives, learning reviews, and post-mortems are team-based activities designed to learn systematically from past events in order to improve future performance. Delivery teams carry out retrospectives frequently and perform post-mortems or learning reviews after incidents.</p>

<p>As a developer myself, I have experienced <em>post-mortems</em> when errors occurred in the system. Most of the time, management started like this: <em>“Who caused this error again?”</em> Nice, isn’t it? Later I saw that retrospectives were conducted, but the outcome was equally questionable. <em>“What’s the point in doing retrospectives? It’s just a waste of time and I could use the time for something meaningful, like development.”</em>, I have heard team members say.</p>

<p>Of course, these were not post-mortems or good retrospectives, but a pure blame-game without any meaningful learning.</p>

<p><img src="/assets/images/blog/project-retrospectives.png" alt="Project Retrospectives: A Handbook for Team Reviews" /></p>

<p>Both the agile and operations communities emphasize the importance of making these activities <strong>blameless</strong>. Norman L. Kerth, in ‘Project Retrospectives: A Handbook for Team Reviews’, proposes the idea of the <strong>Retrospective Prime Directive</strong>, which every participant should read at the beginning of a retrospective.</p>

<blockquote>
  <p><em>Retrospective Prime Directive</em></p>

  <p>Regardless of what we discover, we must understand and truly believe that everyone did the best job they could, given what was known at the time, their skills and abilities, the resources available, and the situation at hand.</p>

  <p><strong>Norman L. Kerth</strong>, 2001</p>
</blockquote>

<p>Learning reviews and retrospectives help contribute to a climate for learning, and also impact organizational culture. When teams conduct these reviews and retrospectives, they learn from their mistakes and failures and turn them into opportunities to improve their way of working.</p>

<p>In particular, teams that leverage their findings to implement changes to tooling, processes, or procedures see the strongest impacts.</p>

<p>For an in-depth look at learning reviews and retrospectives, see the <a href="https://services.google.com/fh/files/misc/state-of-devops-2018.pdf" target="_blank">2018 State of DevOps Report</a>.</p>

<h2 id="be-transformational">Be transformational</h2>

<p>Some of the challenges mentioned above have led me again and again to the <strong>transformational leadership</strong> model. It combines many good aspects of leadership. The idea of the transformational leadership model goes back to James MacGregor Burns in the ’70s, when Burns described the difference between transactional and transformational leadership.</p>

<p>Despite the great interest in transformational leadership, a number of theoretical problems have been identified with this model.</p>

<p>In the study <a href="https://isiarticles.com/bundles/Article/pre/pdf/19462.pdf">Dimensions of transformational leadership: conceptual and empirical extensions</a> (2004), Mark A. Griffin and Alannah E. Rafferty identified aspects of transformational leadership theory that suffer from a lack of empirical support for the model’s hypothesized factor structure and from very strong relationships among the leadership components.</p>

<p>They proposed five more focused subdimensions of transformational leadership: vision, inspirational communication, intellectual stimulation, supportive leadership and personal recognition. Without a vision, we don’t know where to go. Lewis Carroll illustrated this beautifully in <em>Alice’s Adventures in Wonderland</em>.</p>

<blockquote>
  <p>Alice: Would you tell me, please, which way I ought to go from here? <br />
The Cheshire Cat: That depends a good deal on where you want to get to. <br />
Alice: I don’t much care where. <br />
The Cheshire Cat: Then it doesn’t much matter which way you go. <br />
Alice: … So long as I get somewhere. <br />
The Cheshire Cat: Oh, you’re sure to do that, if only you walk long enough.</p>

  <p><strong>Lewis Carroll</strong>, 1865</p>
</blockquote>

<p>Transformational leadership is a model in which leaders inspire and motivate followers to achieve higher performance by appealing to their values and sense of purpose, facilitating wide-scale organizational change. These leaders encourage their teams to work towards a common goal through their vision, values, communication, example-setting, and their evident caring about their followers’ personal needs.</p>

<p>It has been observed that there are similarities between servant leadership and transformational leadership, but they differ in the leader’s focus. Servant leaders focus on their followers’ development and performance, whereas transformational leaders focus on getting followers to identify with the organization and engage in support of organizational objectives.</p>

<p>The five dimensions of transformational leadership are:</p>

<p><img src="/assets/images/blog/transformational-leadership-model.png" alt="Transformational leadership model" />
<em>Dimensions of Transformational leadership<br />(adapted from the 2017 State of DevOps Report)</em></p>

<ul>
  <li><strong>Vision:</strong> Has a clear concept of where the organization is going and where it should be in five years.</li>
  <li><strong>Intellectual stimulation:</strong> Challenges followers to think about problems in new ways.</li>
  <li><strong>Inspirational communication:</strong> Communicates in a way that inspires and motivates, even in an uncertain or changing environment.</li>
  <li><strong>Supportive leadership:</strong> Demonstrates care and consideration of followers’ needs and feelings.</li>
  <li><strong>Personal recognition:</strong> Praises and acknowledges achievement of goals and improvements in work quality; personally compliments others when they do outstanding work.</li>
</ul>

<p>Good leaders build great teams, great technology, and great organizations. Transformational leadership enables practices that correlate with high performance, and it helps team members communicate and collaborate in pursuit of organizational goals. This kind of leadership provides the foundation for a culture in which continuous experimentation and learning is part of everybody’s daily work.</p>

<p>The <a href="https://services.google.com/fh/files/misc/state-of-devops-2017.pdf" target="_blank">2017 State of DevOps Report</a> also takes an in-depth look at transformational leadership.</p>

<h2 id="conclusion">Conclusion</h2>

<p>Today it still happens here and there that we cannot fulfill all the wishes of management. The difference from when I started is that I now know proven principles and approaches that help me deal with these situations. We now know how to handle these exceptional cases with true agile leadership and quickly work together to achieve our goals.</p>

<p>If we look at IT organizations from the outside, the majority of organizations worldwide are busy continuously reinventing themselves. Some IT organizations find this easier than others, and some have to leave the field completely in the face of these rapid market changes and disappear entirely.</p>

<p><img src="/assets/images/blog/grief.jpg" alt="Grief" /></p>

<h3 id="but-why-is-this-and-how-can-we-do-our-best-to-avoid-being-one-of-the-latter">But why is this and how can we do our best to avoid being one of the latter?</h3>

<p>In today’s world, it is necessary to be able to react quickly and flexibly, but also cost-effectively and with high quality, to the wishes of customers.</p>

<p>Charles Darwin, as I have already mentioned in one of my other articles, is credited with what I consider to be a very accurate statement, which is as true today as it was then:</p>

<blockquote>
  <p>It is not the strongest of the species that survives, 
nor the most intelligent. It is the one most adaptable to change.</p>

  <p>attributed to <strong>Charles Darwin</strong></p>
</blockquote>

<p>So we should not strive to be the strongest species, but to be the one best able to adapt. Applied to IT organizations, this means that we should not merely react to market changes, but should prepare as well as possible for the change that is definitely coming and consider it part of our DNA.</p>

<h2 id="manage-the-change">Manage the change</h2>

<p>From classic change management, we should be aware that changes not only have an impact on the IT organization, but also on each individual person within this organization.</p>

<p>Elisabeth Kübler-Ross, a Swiss-American psychiatrist, described in “The stages of grief” how we react to change and which stages we go through more or less intensively.</p>

<p><img src="/assets/images/blog/stages-of-grief.jpg" alt="The stages of grief" /></p>

<p>It doesn’t seem to matter whether these changes occur in the private or professional sphere. We must be mindful that each person has their own way of dealing with change, and we should always meet them at the stage they are at and pick them up from there.</p>

<h2 id="organizational-performance">Organizational performance</h2>

<p>Organizational performance measures the ability of an organization to achieve commercial and non-commercial goals. Academic research has validated this measure and found it to be highly correlated with measures of return on investment (ROI), and it is robust to economic cycles.</p>

<p>Combined, commercial and non-commercial goals include:</p>
<ul>
  <li>Profitability</li>
  <li>Productivity</li>
  <li>Market share</li>
  <li>Number of customers</li>
  <li>Quantity of products or services</li>
  <li>Operating efficiency</li>
  <li>Customer satisfaction</li>
  <li>Quality of products or services provided</li>
  <li>Achieving organization or mission goals</li>
</ul>

<h2 id="capabilities-and-practices">Capabilities and practices</h2>

<p>The <a href="https://www.devops-research.com/research.html">DevOps Research and Assessment (DORA)</a> team has identified and validated a set of capabilities and practices that drive higher software delivery and organizational performance. DORA’s State of DevOps research program includes data from over 31,000 professionals worldwide and has been conducted regularly every year since 2014.</p>

<p>It is the longest-running, academically rigorous research survey of its kind and provides an independent look at the practices and capabilities that drive high performance in technology delivery and ultimately organizational outcomes. This research uses behavioral science to identify the most effective and efficient ways to develop and deliver software.</p>

<p>If we superimpose the organizational performance goals on the idea of being highly adaptable, we can figure out what a high-performing IT organization will look like. We can keep our goals in mind while still being adaptive to market changes.</p>

<p>The intrinsic motivation to influence these goals is obvious, and instead of being blindsided by external influences, we can equip ourselves well for them with management and technical practices.</p>

<h3 id="culture-and-work-environment">Culture and work environment</h3>

<ul>
  <li><strong>Climate for learning:</strong> An organization with a climate for learning views learning as an investment that is needed for growth, not as a necessary evil, undertaken only when required.</li>
  <li><strong>Westrum organizational culture:</strong> This model of organizational culture was developed by sociologist Dr Ron Westrum. It classifies organizations as <em>pathological</em>, <em>bureaucratic</em>, or <em>generative</em> based on levels of cooperation, how problems are surfaced, the extent to which the organization is siloed, and how people react to failure and novelty.</li>
  <li><strong>Culture of psychological safety:</strong> In teams with a culture of psychological safety, team members trust each other, are able to resolve conflict, take calculated and moderate risks, speak up, and are more creative.</li>
  <li><strong>Job satisfaction:</strong> People feel supported by their employers, have the tools and resources to do their work, and feel their judgement is valued.</li>
  <li><strong>Identity:</strong> Employees identify with the organization they work for. They say that the organization is a good place to work. They feel that the organization cares about them. And they’re willing to put in extra effort to help the organization succeed.</li>
</ul>

<h3 id="lean-product-development">Lean product development</h3>

<ul>
  <li><strong>Working in small batches:</strong> The extent to which teams break products and features into small batches that can be completed in less than a week and released frequently, including the use of MVPs (minimum viable products).</li>
  <li><strong>Visibility of work in the value stream:</strong> Whether teams understand the workflow from business to customers, and whether they have visibility into this flow, including the status of products and features.</li>
  <li><strong>Customer feedback:</strong> Whether organizations actively and regularly seek customer feedback and incorporate it into their product design.</li>
  <li><strong>Team experimentation:</strong> Whether development teams have the authority to create and change specifications as part of the development process without requiring approval.</li>
</ul>

<h3 id="software-delivery-and-operational-performance">Software delivery and operational performance</h3>

<ul>
  <li>Continuous delivery</li>
  <li>Autonomy, trust and voice</li>
  <li>Cloud infrastructure</li>
  <li>Disaster recovery testing</li>
  <li>No functional outsourcing</li>
  <li>Clear and lightweight change process</li>
  <li>Lean product development</li>
  <li>Lean management practices</li>
</ul>

<p>I will go into these topics in one of my next articles.</p>

<h2 id="additional-reading">Additional reading</h2>

<ul>
  <li><a href="https://www.devops-research.com/research.html">DevOps Research and Assessment (DORA)</a></li>
  <li><a href="https://www.devops-research.com/quickcheck.html">DORA DevOps Quick Check</a></li>
  <li><a href="https://cloud.google.com/devops/state-of-devops">Latest State of DevOps Report</a></li>
</ul>]]></content><author><name>Jürgen Pointinger</name></author><category term="Leadership" /><summary type="html"><![CDATA[Welcome the change]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://juergenpointinger.github.io/assets/images/blog/organizational-performance.jpg" /><media:content medium="image" url="https://juergenpointinger.github.io/assets/images/blog/organizational-performance.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">DevOps Frameworks</title><link href="https://juergenpointinger.github.io/devops-frameworks/" rel="alternate" type="text/html" title="DevOps Frameworks" /><published>2020-07-24T00:00:00+00:00</published><updated>2020-07-24T00:00:00+00:00</updated><id>https://juergenpointinger.github.io/devops-frameworks</id><content type="html" xml:base="https://juergenpointinger.github.io/devops-frameworks/"><![CDATA[<h2 id="culture-automation-lean-measurement-and-sharing">Culture, Automation, Lean, Measurement and Sharing</h2>

<p>In my last article about <a href="/why-do-we-need-devops">Why do we need DevOps?</a> I already mentioned a framework that can be used to introduce DevOps in an organization - <strong>CALMS</strong>.</p>

<p>The CALMS framework was originally invented by <em>Jez Humble</em>, co-author of <em>“The DevOps Handbook”</em> and <em>“Accelerate”</em>. It is used as a means to assess whether an organization is ready to adopt DevOps processes or how an organization is progressing in its DevOps transition.</p>

<p>The acronym stands for Culture, Automation, Lean, Measurement and Sharing.</p>

<p>Although this is my preferred way to assess, the approaches <strong>“The Three Ways”</strong>, described in <em>“The Phoenix Project”</em> and <em>“The DevOps Handbook”</em>, and the <strong>“mature capabilities in technical and management practices”</strong> found in high-performing DevOps teams, based on the research presented in “Accelerate”, should also be mentioned.</p>

<p>Nevertheless, I would like to talk a bit more about the CALMS framework.</p>

<h3 id="what-devops-is-not">What DevOps is NOT</h3>

<p>But before I do that, I would like to clarify what DevOps is NOT from my perspective, based on some debates in my recent past. I have already mentioned some of these points in passing in my last article; however, I would like to emphasize them clearly once again:</p>

<ul>
  <li><strong>DevOps is not the simple combination of Development and Operations teams:</strong> <em>“We turn the development and operations team into a team - and voila we make DevOps.”</em> If it were that simple, this article would lose its meaning.</li>
  <li><strong>DevOps is not a tool:</strong> Even though harmonizing and finding the right toolstack may be important for the corresponding product development.</li>
  <li><strong>DevOps is not a separate team:</strong> At the beginning of a transition it may seem as if a test balloon in a separately formed team would be easier. At the end of that experiment, this should no longer be an option for the organization, and in the best case it has also shown that DevOps can only work if it is practiced within each team.</li>
  <li><strong>DevOps is not a one-size-fits-all solution:</strong> There are so many business drivers and product development approaches. To suggest that the specific needs of a client can be met with one and the same solution is ridiculous.</li>
  <li><strong>DevOps is not Cloud:</strong> I often hear DevOps mentioned in connection with cloud, along with the opinion that DevOps is a synonym for <em>“Everything in the cloud”</em>. Of course it isn’t; there are a lot of businesses that are hard to run in the cloud, but DevOps works great there anyway.</li>
  <li><strong>DevOps is not automation:</strong> Automation is often used interchangeably with DevOps. This is of course complete nonsense; automation is a part of DevOps, as we will see in the following sections. To find out what DevOps really is, I recommend my article about <a href="/why-do-we-need-devops">Why do we need DevOps?</a></li>
</ul>

<p>And there is one point which, for my personal conviction, must be mentioned. I say it frankly and I stand by it: <strong>“DevOps is not a role”</strong></p>

<h2 id="calms">CALMS</h2>

<h3 id="culture">Culture</h3>

<p>Behind the aspect of <strong>Culture</strong>, which occurs in all three approaches, lies a complex problem that in many cases is also the main reason product development fails.</p>

<p><img src="/assets/images/blog/culture.jpg" alt="Culture" /></p>

<p>Before thinking about breaking down silos in an organization, a <strong>common view of the entire system</strong> and <strong>common targets</strong> should be in place and a foundation for a <strong>common responsibility</strong> should be laid. This is key if you want to be successful.</p>

<p>In most cases a common <strong>team vision and mission</strong> can be helpful and is one of the things that can make the difference. There are of course different approaches to developing these, but what is needed is trust, not only the trust of the team members themselves, but also the trust of management in the journey of DevOps and the ROI that comes with it.</p>

<p>To be able to <strong>break down silos</strong>, cooperation must be encouraged. Historically, developers want to get new features into production as soon as possible, while operations is more concerned with running the system as stably and sustainably as possible and not endangering that stability with changes that have possibly hardly been tested. These two goals stand in contrast to each other, which in some cases leads to friction in communication. I will not go into the details here; I think you know the points of discussion.</p>

<p>That’s why it’s important to get everyone involved as early as possible and create a <em>“You build it, you run it, you own it”</em> mentality. The way a system is operated ultimately has an impact on the architecture and on the cooperation between components. A system that cannot be operated is doomed to failure.</p>

<p>Products where all parties involved maintain an open and encouraging communication culture are also the products that will still be on the market in the coming years; other products have already disappeared due to a lack of maintainability, among many other reasons.</p>

<p>With the open culture comes <strong>transparency</strong> in communication. Successes as well as failures are part of this culture and are celebrated one way or another. <strong>Failures lead us to improve</strong> in what we do and to continuously work on ourselves, to learn something new and to improve our processes, our daily work.</p>

<p>Even though sharing successes and failures is actually more a part of <a href="#sharing">Sharing</a>, it is deeply rooted in the culture. Teams that are not familiar with this culture can slowly be introduced to these aspects through games and a lot of motivation. Decisive for this are a non-blaming culture, open discussion, and the open ear of team members and management.</p>

<p>With this I am approaching the end of the culture aspect, even though, to be fair, there is still a lot to tell. The last point I would like to mention is the topic of team structure, or team constellation.</p>

<p>I have seen many companies that always seemed to seek the same stereotypes and thus inevitably created one-dimensional thinking. But to be successful, you need <strong>different types</strong> in a team. This does not stop at seniority level (junior, professional, senior). It is an advantage to have <strong>different skill sets</strong> in one team. What I want to create are <strong>cross-functional teams</strong> that cover the complete software development life cycle and thus represent the ne plus ultra for my system.</p>

<p>What it does not mean is that a team should consist only of specialists and hard-to-manage individualists. It means that in order to develop successful products, you need not only specialists in their field of expertise, but also generalists who can cover one or more main areas of the SDLC.</p>

<h3 id="automation">Automation</h3>

<p>To put it simply, DevOps teams should strive to <strong>automate as many manual and recurring tasks as possible</strong>. Attention should always be paid to stability, maintainability and simplicity. There are a few points that can be addressed under the main topic of automation. Each of the following principles could separately fill whole articles and books, so I will mention them here but not go into all of them in detail:</p>

<p><img src="/assets/images/blog/automation.jpg" alt="Automation" /></p>

<ul>
  <li><strong>Automate the build process:</strong> Here we should name <em>Continuous Integration</em> as THE principle to adopt. This includes the right branching strategy as well as automated versioning (e.g. <em>Semantic Versioning</em>), but most of all the automated building of the solution, so that a build artifact can be used for further steps.</li>
  <li><strong>Automate the testing process:</strong> There are myths surrounding the testing process within DevOps. Is it a closed process within an SDLC? No. What is sufficient test coverage? It depends. Should we work on our unit tests, and do we need integration tests at all? Definitely. Is the feature user-friendly and client-led? I don’t know. Are our test cases good and fast enough for our pipeline? Probably not. - What am I trying to say? Continuous Quality is a holistic concept that starts with the writing of user stories and ends with the involvement of the client in the process … or even goes beyond that. To make a product successful, you should always try to implement quality where you can. Is 20% unit test coverage sufficient? Probably not, if the product is used for more than 3 months. But does it need coverage of around 100%? Probably not, if no lives depend on it. I could fill pages on quality, but I think we can leave it at one sentence: <em>We need quality and quality assurance in our system so it doesn’t steal our sleep at the end of the day.</em></li>
  <li><strong>Automate the infrastructure setup process:</strong> <em>Infrastructure as Code (IaC)</em> with well-known tools like Ansible, Chef, Terraform, Puppet, Kubernetes or similar is part of this, as are GitOps approaches.</li>
  <li><strong>Automate the deployment or release process:</strong> Not only do principles such as <em>Continuous Delivery or Deployment</em> play a major role in this, but there are also issues that are perhaps not so obvious at first glance. These include automating the changelog and release notes, and archiving artifacts for the necessary traceability and for recovery in case of failure.</li>
  <li><strong>Automate the security process:</strong> It starts with static code analysis and ends with compliance policies. In the middle we deal with topics such as Dynamic Application Security Testing (DAST), Static Application Security Testing (SAST), Dependency scanning, Container scanning or Secret detection.</li>
  <li><strong>Automate the monitoring process:</strong> As mentioned, you should already think at the beginning of product development about how you want to operate it - but not only that. Besides monitoring the infrastructure, you also have to think about application-specific concerns. Log-file analysis, or more specifically the correct writing of log output, is essential. Feedback from clients and data collected from A/B or feature tests are also an important part of monitoring that should be considered.</li>
</ul>

<blockquote>
  <p><em>Automate the … boring stuff</em></p>
</blockquote>
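<p>To give one small, concrete example of automating a recurring task: the semantic version bump mentioned above can be scripted in a few lines. This is only a minimal sketch; the function name and interface are illustrative, not taken from a real tool.</p>

```python
# Hypothetical helper: compute the next semantic version (MAJOR.MINOR.PATCH)
# for a given change type, following the Semantic Versioning rules.

def bump_version(version: str, change: str) -> str:
    """Return the next semantic version for a given change type."""
    major, minor, patch = (int(part) for part in version.split('.'))
    if change == 'major':    # breaking change
        return f"{major + 1}.0.0"
    if change == 'minor':    # backwards-compatible feature
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"  # bug fix / default

print(bump_version('1.4.2', 'minor'))  # → 1.5.0
```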

<h3 id="lean">Lean</h3>

<p>Development teams use lean principles to <strong>eliminate waste</strong>: defining client value and understanding what value actually is, and optimizing the value stream by <strong>minimizing work in progress (WIP)</strong>, <strong>making work and progress visible</strong> and traceable, <strong>reducing handover complexity</strong>, and breaking down steps so that the flow of the remaining steps runs smoothly, without interruptions and waiting times. This also includes introducing cross-functional teams and training employees to be versatile and adaptable.</p>
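<p>The WIP limit idea can be illustrated with a tiny toy model: a board column that refuses new work once its limit is reached. The class and method names here are illustrative only, not taken from any real tool.</p>

```python
# Toy model of a Kanban-style WIP limit: pull new work into the column
# only while it is below its limit, otherwise finish something first.

class BoardColumn:
    def __init__(self, name: str, wip_limit: int):
        self.name = name
        self.wip_limit = wip_limit
        self.items = []

    def pull(self, item: str) -> bool:
        """Pull a new item into the column only if the WIP limit allows it."""
        if len(self.items) >= self.wip_limit:
            return False  # limit reached: finish work before starting new work
        self.items.append(item)
        return True

column = BoardColumn('In Progress', wip_limit=2)
print(column.pull('Story A'), column.pull('Story B'), column.pull('Story C'))
# → True True False
```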

<h3 id="measurement">Measurement</h3>

<p>To better understand the possibilities of the current system, it is helpful to get some things settled. To assess the <strong>health of the product</strong>, I need well-considered metrics that give us a starting point for improvement.</p>

<p>In this area we try to collect and analyse data. This can be planning data, product data, quality data or more general team data. It is important to mention that this is always about trust: the data must be handled correctly and stay in the right hands. We do not want to create a surveillance state; we want to get the best out of it for our clients and our product.</p>

<p><img src="/assets/images/blog/monitoring.jpg" alt="Monitoring" /></p>

<p>As mentioned, <strong>Continuous Learning and Improvement</strong> should be a key factor in our DevOps culture. A nice saying that I like to use myself is <em>“Continuous Improvement is the only constant in DevOps”</em>.</p>

<p>So what do we want to achieve?</p>

<ul>
  <li>Collect &amp; analyze product and system specific data</li>
  <li>Define metrics and thresholds</li>
  <li>Monitor and track metrics and automate notifications</li>
  <li>Identify mistakes</li>
  <li>Define quality gates</li>
  <li>Create a Continuous Learning and Improvement culture</li>
  <li>Improve efficiency and reduce cycle times</li>
</ul>
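<p>A defined metric combined with a threshold can act as a simple quality gate. The following is a minimal sketch of that idea; the metric names and threshold values are assumptions made up for the example.</p>

```python
# Illustrative quality gate: compare collected metrics against minimum
# thresholds and report which gates fail (a failure could then trigger
# an automated notification).

def evaluate_quality_gates(metrics: dict, thresholds: dict) -> list:
    """Return the names of all metrics that fall below their minimum threshold."""
    return [name for name, minimum in thresholds.items()
            if metrics.get(name, 0) < minimum]

thresholds = {'unit_test_coverage': 60, 'pipeline_success_rate': 95}
metrics = {'unit_test_coverage': 48, 'pipeline_success_rate': 97}
print(evaluate_quality_gates(metrics, thresholds))  # → ['unit_test_coverage']
```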

<h3 id="sharing">Sharing</h3>

<p>Classically, this includes <strong>establishing a non-blaming culture</strong>. That sounds simple at first glance, but, as I have learned, it is apparently in our nature to deflect fault. It requires a lot of experience and understanding, and management should lead by example.</p>

<p>An <strong>open communication</strong> culture should <strong>encourage the ask/share</strong> principle. We want to focus on solving difficulties and not on assigning blame. In this sense, it should also be emphasized that it is always about the overall system and not about micro-problems without looking at the other areas.</p>

<p>We can achieve this by focusing on <strong>Collective Code Ownership</strong> and by putting the team in focus. All in all - <em>“Sharing is caring”</em> and we should always keep this in mind. If we don’t care about the team and the product, why should our clients?</p>

<p>I am eager to hear your opinion and welcome any comments on this topic. So long - stay tuned and healthy!</p>]]></content><author><name>Jürgen Pointinger</name></author><category term="Leadership" /><summary type="html"><![CDATA[Culture, Automation, Lean, Measurement and Sharing In my last article about Why do we need DevOps? I already mentioned a framework that can be used to introduce DevOps in an organization - CALMS. The CALMS framework was originally invented by Jez Humble, co-author of “The DevOps Handbook” and “Accelerate”. It is used as a means to assess whether an organization is ready to adopt DevOps processes or how an organization is progressing in its DevOps transition. The acronym stands for Culture, Automation, Lean, Measurement and Sharing. Although this is my preferred way to assess, the approaches “The Tree Ways”, described in “The Phoenix Project” or “The DevOps Handbook” and “Mature capabilities in technical and management practices” found in high-performing DevOps teams, based on the research presented in “Accelerate” are also mentioned. Nevertheless, I would like to talk a bit more about the CALMS framework. What DevOps is NOT But before I do that, I would like to clarify what DevOps is NOT from my perspective by some debates in my nearer past. I have already mentioned some of these in passing in my last article. However, I would like to emphasize it clearly once again: DevOps is not the simple combination of Development and Operations teams: “We turn the development and operations team into a team - and voila we make DevOps.” If it were that simple, this article would lose its meaning. DevOps is not a tool: Even though harmonizing and finding the right toolstack may be important for the corresponding product development. DevOps is not a separate team: It may seem at the beginning of a transition as if a test balloon in a separately formed team would be easier. 
At the end of these balloon, this should no longer be an option for an organization and in the best case it has also shown that it can only work if DevOps is practiced in a team. DevOps is not a one-size-fits-all solution: There are so many business drivers and product development approaches. To suggest that the specific needs of a client can be met with one and the same solution is ridiculous. DevOps is not Cloud: I often hear DevOps in connection with cloud and the opinion that DevOps is the synonym for “Everything in the cloud”. Of course it’s not, there are a lot of businesses that are hard to do in the cloud, but DevOps works great there anyway. DevOps is not automation: Automation is often used interchangeably with DevOps. This is of course complete nonsense. A part of DevOps is also automation, as we will see in the following article. To find out what DevOps really is I recommend my article about Why do we need DevOps? And there is one point which, for my personal conviction, must be mentioned. I say it frankly and I stand by it: “DevOps is not a role” CALMS Culture Behind the aspect of Culture, which occurs in all three approaches, hides a complex problem which in many cases is also the main reason for the failure of a product development. Before thinking about breaking down silos in an organization, a common view of the entire system and common targets should be in place and a foundation for a common responsibility should be laid. This is key if you want to be successful. In most cases a common team vision and mission can be helpful and is one of the things that can make the difference. There are of course different approaches to developing these, but what is needed is trust, not only the trust of the team members themselves, but also the trust of management in the journey of DevOps and the ROI that comes with it. To be able to destroy silos, cooperation must be encouraged. 
From the history the developers want to get new features into production as soon as possible. But in operation it’s more about being able to run the system as stable and sustainable as possible and not to endanger the stability by changes that have possibly hardly been tested. Thus the two statements from the history stand in contrast to each other. Which in some cases can lead to points of friction in communication. Which I will not go into here, I think you know the points of discussion. That’s why it’s important to get everyone involved as early as possible and create a “You build it, you run it, you own it” mentality. The way a system is operated finally has an impact on the architecture and on the cooperation between components. If a system cannot be operated, it is ultimately doomed to failure. Products where all parties involved have an open and encouraging communication culture are also the products that will be on the market in the coming years, other products have already disappeared due to lack of maintainability and many other aspects. With the open culture comes transparency in communication. Successes as well as failures are part of this culture and are celebrated one way or another. Failures lead us to improve in what we do and to continuously work on ourselves, to learn something new and to improve our processes, our daily work. Even though sharing successes and failures are actually more part of Sharing, they are deeply rooted in the culture. Teams that are not familiar with this culture can slowly be introduced to these aspects through games and a lot of motivation. Decisive for this is a non-blaming culture and the open discussion, the open ear of the team members and management. Herewith I am approaching the end of the cultural aspect, even if there is still a lot to tell, to be fair. The last point I would like to mention is the topic of team structure or team constellation. 
I have seen many companies that always seemed to be seeking the same stereotypes and thus inevitably created one-dimensional thinking. But to be successful, you need different types in a team. This does not start at seniority level (junior, professional, senior). It is an advantage to have different skill sets in one team. What I want to create are cross-functional teams that cover the complete software development life cycle and thus represent the non-plus-ultra for my system. What it does not mean is that a team should only contain specialists and hard-to-manage individualists. It should mean that in order to develop successful products, it needs not only specialists in their field of expertise, but also generalists who can possibly cover one or more main areas of the SDLC. Automation To put it simply, DevOps teams should strive to automate as many manual and recurring tasks as possible. Attention should always be paid to stability, maintainability and simplicity. There are a few points that can be addressed under the main topic of automation. Each of the following principles separately fills whole articles and books, so I would have liked to mention them but will not go into all of them in detail: Automate the build process: Here we should name Continuous Integration as THE principle they should adopt. This includes the right branching strategy as well as automated versioning (e.g.: Semantic Versioning), but of course most of all the automated building of the solution to be able to use a build artifact for further steps. Automate the testing process: There are myths surrounding the testing process within DevOps. Is it a closed process within an SDLC? No. What is sufficient test coverage? Depends. Should we work on our unit tests and do we need integration tests at all? Definitely. Is the feature user friendly and client led? I don’t know. Are our test cases good and fast enough for our pipeline? Probably not. - What am I trying to say? 
Continuous Quality is a holistic concept that starts with the writing of user stories and ends with the involvement of the client in the process … or even goes beyond that. To make a product successful, you should always try to implement quality where you can. Is 20% unit test coverage sufficient? Probably not, if the product is used for more than 3 months. But does it need a coverage of around 100%? Probably not, if no lives depend on it. I could fill pages on quality, but I think we can leave it at one sentence: We need quality and quality assurance in our system so it doesn’t steal our sleep at the end of the day. Automate the infrastructure setup process: Infrastructure as Code (IaC) with the probably well known tools like Ansible, Chef, Terraform, Puppet, Kubernetes or similar, are part of it as well as approaches of GitOps. Automate the deployment or release process: Not only do principles such as Continuous Delivery or Deployment play a major role in this, but there are also issues that are perhaps not so obvious at first glance. These include the automation of a changelog, release notes, archiving artifacts for the necessary traceability and recovery in case of a pitfall. Automate the security process: It starts with static code analysis and ends with compliance policies. In the middle we deal with topics such as Dynamic Application Security Testing (DAST), Static Application Security Testing (SAST), Dependency scanning, Container scanning or Secret detection. Automate the monitoring process: Already at the beginning of the development of a product you should, as mentioned, think about how you want to operate it, but not only that. Besides monitoring the infrastructure, you also have to think about possible application-specific things. Log-file analysis or even more specifically, the correct writing of log output is essential. 
Also the feedback from clients or the collection of data from A/B or feature tests is an important part of monitoring that should be considered. Automate the … boring stuff Lean Development teams use lean principles to eliminate waste by defining the client value and understand what value is, optimize the value stream, such as minimizing working in progress (WIP), making work and progress visible and traceable, reducing handover complexity and breaking down steps to ensure that the flow of the remaining steps run smoothly without interruptions and waiting times. This also includes the introduction of cross-functional teams and the training of employees who are versatile and adaptable. Measurement In order to better understand the possibilities of the current system it is helpful to get some things settled. In order to better assess the health of the product, I need well-considered metrics that can give us a starting point for improvement. In this area we try to collect and analyse data. This can be planning data, product data, quality data or more general team data. It is important to mention that this is always about trust and we plead for the data to be used correctly in the right hands. We do not want to create a surveillance state, we want to get the best out of it for our clients and our product. As mentioned, Continuous Learning and Improvement should be a key factor in our DevOps culture. 
A nice saying that I like to use myself is “Continuous Improvement is the only Constant in DevOps” So what do we want to achieve: Collect &amp; analyze product and system specific data Define metrics and thresholds Monitor and track metrics and automate notifications Identify mistakes Define quality gates Create a Continuous Learning and Improvement culture Improve efficiency and reduce cycle times Sharing Classically, this includes establishing a non-blaming culture, which sounds simple at first glance, but which, as I have learned, is apparently in our nature to reject fault. It requires a lot of experience and understanding and the management should lead by example. An open communication culture should encourage the ask/share principle. We want to focus on solving difficulties and not on assigning blame. In this sense, it should also be emphasized that it is always about the overall system and not about micro-problems without looking at the other areas. We can achieve this by focusing on Collective Code Ownership and by putting the team in focus. All in all - “Sharing is caring” and we should always keep this in mind. If we don’t care about the team and the product, why should our clients? I am eager to hear your opinion and welcome any comments on this topic. 
So long - stay tuned and healthy!]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://juergenpointinger.github.io/assets/images/blog/devops-frameworks.jpg" /><media:content medium="image" url="https://juergenpointinger.github.io/assets/images/blog/devops-frameworks.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Move from Jira to GitLab</title><link href="https://juergenpointinger.github.io/jira-gitlab-migration/" rel="alternate" type="text/html" title="Move from Jira to GitLab" /><published>2020-05-14T10:00:00+00:00</published><updated>2020-05-14T10:00:00+00:00</updated><id>https://juergenpointinger.github.io/jira-gitlab-migration</id><content type="html" xml:base="https://juergenpointinger.github.io/jira-gitlab-migration/"><![CDATA[<h2 id="motivation">Motivation</h2>

<p>In some of my projects, including my last one, I have to deal with inhomogeneous toolstacks.</p>

<p>Currently we work with tools like Atlassian Confluence, Jira Software, GitLab (CI), AWS and many more. Basically this works fine, but little by little the idea arose to bring the planning instrument that supports our agile methodology closer to the development itself.</p>

<p>The team had problems with the traceability of changes in the source code; there was no, or only a rudimentary, connection between Jira and GitLab. Management lacked the means to react to obstacles, and little feedback about development or the status of the product made it back to the business.</p>

<p>In general I am a fan of keeping documentation right next to the source code, so I often suggest Markdown as a basis - similar to my blog. This led to the idea of at least putting the product-specific documentation into GitLab. The thought spread further, and the idea to use GitLab completely was born.</p>

<p>GitLab (CI) is one of the best tools available today for continuous integration and continuous delivery/deployment, but is that true for Agile Software Development? Can it stand up to a giant like Jira Software? If I compare the <a href="https://about.gitlab.com/blog/2018/03/05/gitlab-for-agile-software-development/">Agile artifacts</a> (Epic, User Story, Task, Points and Estimation, Product backlog, Sprint/iteration, Charts, Agile board), GitLab has a solution for all artifacts. So the answer for me is YES - but a long-term study is still pending.</p>

<p>We made the decision to switch from Jira to GitLab for our product. But how do I get the artifacts we collected in Jira over time into it?</p>

<p>Well, starting with version 12.10, GitLab offers a rudimentary <a href="https://docs.gitlab.com/ee/user/project/import/jira.html">import for Jira tickets</a>. The description seems simple, the connection between Jira and GitLab was quickly established, and the import was started with just over 2000 tickets …</p>

<p><img src="/assets/images/blog/jira-import-in-progress.png" alt="Import in Progress" /></p>

<p>What can I say - it seems that GitLab still has some problems with the import: ours never finished, and there is no way to restart it or stop the old import. According to GitLab support, the feature is fairly new and an MVP. The following issues outline where the feature is headed:</p>

<ul>
  <li><a href="https://gitlab.com/gitlab-org/gitlab/-/issues/2780">https://gitlab.com/gitlab-org/gitlab/-/issues/2780</a></li>
  <li><a href="https://gitlab.com/gitlab-org/gitlab/-/issues/217395">https://gitlab.com/gitlab-org/gitlab/-/issues/217395</a></li>
  <li><a href="https://gitlab.com/gitlab-org/gitlab/-/issues/214812">https://gitlab.com/gitlab-org/gitlab/-/issues/214812</a></li>
  <li><a href="https://gitlab.com/gitlab-org/gitlab/-/issues/214810">https://gitlab.com/gitlab-org/gitlab/-/issues/214810</a></li>
  <li><a href="https://gitlab.com/gitlab-org/gitlab/-/issues/210580">https://gitlab.com/gitlab-org/gitlab/-/issues/210580</a></li>
</ul>

<p>But what can I do in this situation? Of course, I could hope that the issues get fixed relatively quickly so that we can continue the import in a timely manner. But since I still have some developer genes in me, I decided to write my own importer.</p>

<h2 id="jira-2-gitlab">Jira 2 GitLab</h2>

<p>Jira and GitLab each offer good RESTful APIs that don’t require much setup. For more details about the preconditions, I refer you directly to the corresponding developer documentation.</p>

<p>Jira:</p>
<ul>
  <li><a href="https://developer.atlassian.com/cloud/jira/software/rest/">https://developer.atlassian.com/cloud/jira/software/rest/</a></li>
</ul>

<p>GitLab:</p>
<ul>
  <li><a href="https://docs.gitlab.com/ee/api/">https://docs.gitlab.com/ee/api/</a></li>
  <li><a href="https://docs.gitlab.com/ee/api/api_resources.html">https://docs.gitlab.com/ee/api/api_resources.html</a></li>
</ul>

<p>The following scripts were tested with Jira Cloud and GitLab.com (Cloud) 12.10. They are written in Python 3; locally I used Python 3.7.3.</p>

<h2 id="epics">Epics</h2>

<p>In GitLab, epics are managed on a group level and should be viewed separately. Epics are used at this level for roadmap planning and can be planned across multiple projects.</p>

<blockquote>
  <p><strong>NOTE</strong> Epics are only available with a <code class="language-plaintext highlighter-rouge">GitLab.com Gold</code> Subscription.</p>
</blockquote>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># import_epics.py
</span><span class="kn">import</span> <span class="nn">requests</span> 
<span class="kn">from</span> <span class="nn">requests.auth</span> <span class="kn">import</span> <span class="n">HTTPBasicAuth</span>
<span class="kn">import</span> <span class="nn">json</span>

<span class="c1">## Jira specifics
# Jira URL
</span><span class="n">JIRA_URL</span> <span class="o">=</span> <span class="s">'https://your-jira-instance.com/'</span>
<span class="c1"># Jira user credentials (incl. API token)
</span><span class="n">JIRA_ACCOUNT</span> <span class="o">=</span> <span class="p">(</span><span class="s">'your-jira-username'</span><span class="p">,</span> <span class="s">'your-jira-api-token'</span><span class="p">)</span>
<span class="c1"># Jira project ID (short)
</span><span class="n">JIRA_PROJECT</span> <span class="o">=</span> <span class="s">'PRJ'</span>
<span class="c1"># Jira Query (JQL)
</span><span class="n">JQL</span> <span class="o">=</span> <span class="s">'project=%s+AND+issueType=Epic+AND+resolution=Unresolved+ORDER+BY+createdDate+ASC&amp;maxResults=100'</span> <span class="o">%</span> <span class="n">JIRA_PROJECT</span>

<span class="c1"># *False* if Jira / GitLab is using self-signed certificates, otherwhise *True*
</span><span class="n">VERIFY_SSL_CERTIFICATE</span> <span class="o">=</span> <span class="bp">True</span>

<span class="c1"># Read Jira Epics
</span><span class="n">response</span> <span class="o">=</span> <span class="n">requests</span><span class="p">.</span><span class="n">get</span><span class="p">(</span>
  <span class="n">JIRA_URL</span> <span class="o">+</span> <span class="s">'rest/api/latest/search?jql='</span> <span class="o">+</span> <span class="n">JQL</span><span class="p">,</span>
  <span class="n">auth</span><span class="o">=</span><span class="n">HTTPBasicAuth</span><span class="p">(</span><span class="o">*</span><span class="n">JIRA_ACCOUNT</span><span class="p">),</span>
  <span class="n">verify</span><span class="o">=</span><span class="n">VERIFY_SSL_CERTIFICATE</span><span class="p">,</span>
  <span class="n">headers</span><span class="o">=</span><span class="p">{</span><span class="s">'Content-Type'</span><span class="p">:</span> <span class="s">'application/json'</span><span class="p">}</span>
<span class="p">)</span>

<span class="k">if</span> <span class="n">response</span><span class="p">.</span><span class="n">status_code</span> <span class="o">!=</span> <span class="mi">200</span><span class="p">:</span>
  <span class="k">raise</span> <span class="nb">Exception</span><span class="p">(</span><span class="s">"Unable to read Epics from %s!"</span> <span class="o">%</span> <span class="n">JIRA_PROJECT</span><span class="p">)</span>

<span class="n">jira_issues</span> <span class="o">=</span> <span class="n">response</span><span class="p">.</span><span class="n">json</span><span class="p">()</span>

<span class="k">for</span> <span class="n">issue</span> <span class="ow">in</span> <span class="n">jira_issues</span><span class="p">[</span><span class="s">'issues'</span><span class="p">]:</span>
  <span class="k">print</span><span class="p">(</span><span class="s">"Import Epic with Jira-Key "</span> <span class="o">+</span> <span class="n">issue</span><span class="p">[</span><span class="s">'key'</span><span class="p">])</span>
</code></pre></div></div>

<p>Once you run the script, you should see output similar to this:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Import Epic with Jira-Key PRJ-123
Import Epic with Jira-Key PRJ-456
</code></pre></div></div>

<p>Now we can start transferring the Jira Epics using the GitLab API. We need a few more variables for that. The easiest way is to carry over only the title and description from Jira; for a more complete variant you could also add labels, start and due dates, or more. See the <a href="https://docs.gitlab.com/ee/api/epics.html#new-epic">GitLab API</a>.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
<span class="c1">## GitLab specifics
</span><span class="n">GITLAB_URL</span> <span class="o">=</span> <span class="s">'https://gitlab.com/'</span>
<span class="c1"># GitLab token will be used whenever the API is invoked
</span><span class="n">GITLAB_TOKEN</span> <span class="o">=</span> <span class="s">'your-private-gitlab-token'</span>
<span class="c1"># GitLab group that you are importing to
</span><span class="n">GITLAB_GROUP</span> <span class="o">=</span> <span class="s">'your-group-name'</span>
<span class="c1"># GitLab group id.
</span><span class="n">GITLAB_GROUP_ID</span> <span class="o">=</span> <span class="s">'your-group-id'</span>

<span class="p">...</span>

<span class="k">for</span> <span class="n">issue</span> <span class="ow">in</span> <span class="n">jira_issues</span><span class="p">[</span><span class="s">'issues'</span><span class="p">]:</span>
  <span class="k">print</span><span class="p">(</span><span class="s">"Import Epic with Jira-Key "</span> <span class="o">+</span> <span class="n">issue</span><span class="p">[</span><span class="s">'key'</span><span class="p">])</span>

  <span class="n">title</span> <span class="o">=</span> <span class="n">issue</span><span class="p">[</span><span class="s">'fields'</span><span class="p">][</span><span class="s">'summary'</span><span class="p">]</span>
  <span class="n">description</span> <span class="o">=</span> <span class="n">issue</span><span class="p">[</span><span class="s">'fields'</span><span class="p">][</span><span class="s">'description'</span><span class="p">]</span>

  <span class="n">data</span> <span class="o">=</span> <span class="p">{</span> 
    <span class="s">'title'</span><span class="p">:</span> <span class="n">title</span><span class="p">,</span>
    <span class="s">'description'</span><span class="p">:</span> <span class="n">description</span>
  <span class="p">}</span>

  <span class="n">response</span> <span class="o">=</span> <span class="n">requests</span><span class="p">.</span><span class="n">post</span><span class="p">(</span>
    <span class="n">GITLAB_URL</span> <span class="o">+</span> <span class="s">'api/v4/groups/%s/epics'</span> <span class="o">%</span> <span class="n">GITLAB_GROUP_ID</span><span class="p">,</span>
    <span class="n">headers</span><span class="o">=</span><span class="p">{</span><span class="s">'PRIVATE-TOKEN'</span><span class="p">:</span> <span class="n">GITLAB_TOKEN</span><span class="p">},</span>
    <span class="n">verify</span><span class="o">=</span><span class="n">VERIFY_SSL_CERTIFICATE</span><span class="p">,</span>
    <span class="n">data</span><span class="o">=</span><span class="n">data</span>
  <span class="p">)</span>
</code></pre></div></div>

<blockquote>
  <p><strong>NOTE</strong> One more point should be mentioned: the Jira search API returns at most 100 results per request. If you have more than 100 epics, this has to be solved with pagination.</p>
</blockquote>
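<p>The pagination loop itself can be sketched as follows. The <code class="language-plaintext highlighter-rouge">startAt</code>, <code class="language-plaintext highlighter-rouge">maxResults</code> and <code class="language-plaintext highlighter-rouge">total</code> fields are part of the real Jira search response; the page fetcher is injected so the paging logic stays testable - in the importer it would wrap the <code class="language-plaintext highlighter-rouge">requests.get</code> call from the script above.</p>

```python
# Paginate the Jira search API: keep requesting pages until 'startAt'
# reaches the 'total' reported by the response.

def fetch_all_issues(fetch_page, page_size=100):
    """Collect all issues by requesting successive pages of `page_size`."""
    issues, start_at = [], 0
    while True:
        page = fetch_page(start_at, page_size)  # e.g. a requests.get wrapper
        issues.extend(page['issues'])
        start_at += page_size
        if start_at >= page['total']:
            return issues

# A fake page fetcher standing in for the Jira API (250 issues in total):
def fake_page(start_at, max_results):
    all_issues = [{'key': 'PRJ-%d' % i} for i in range(1, 251)]
    return {'total': len(all_issues),
            'issues': all_issues[start_at:start_at + max_results]}

print(len(fetch_all_issues(fake_page)))  # → 250
```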

<p>You can find the complete source code <a href="https://gist.github.com/juergenpointinger/6c6fa147439a2db1608775c2bc37b6b2">here</a>.</p>

<h2 id="issues">Issues</h2>

<p>If we take the script for the Epics as a basis, we can do the same with other ticket types like Stories, Sub-tasks, Tasks, or Spikes. For this I would like to share some code snippets that might be useful for your import.</p>

<h3 id="jira">Jira</h3>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Get Jira attachments and comments
</span><span class="n">requests</span><span class="p">.</span><span class="n">get</span><span class="p">(</span>
  <span class="n">JIRA_URL</span> <span class="o">+</span> <span class="s">'rest/api/latest/issue/%s/?fields=attachment,comment'</span> <span class="o">%</span> <span class="n">issue</span><span class="p">[</span><span class="s">'id'</span><span class="p">],</span>
  <span class="n">auth</span><span class="o">=</span><span class="n">HTTPBasicAuth</span><span class="p">(</span><span class="o">*</span><span class="n">JIRA_ACCOUNT</span><span class="p">),</span>
  <span class="n">verify</span><span class="o">=</span><span class="n">VERIFY_SSL_CERTIFICATE</span><span class="p">,</span>
  <span class="n">headers</span><span class="o">=</span><span class="p">{</span><span class="s">'Content-Type'</span><span class="p">:</span> <span class="s">'application/json'</span><span class="p">}</span>
<span class="p">)</span>
</code></pre></div></div>
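<p>The response above can then be unpacked. A minimal sketch, assuming the field layout of Jira's REST API (an <code class="language-plaintext highlighter-rouge">attachment</code> list and a <code class="language-plaintext highlighter-rouge">comment</code> object with a nested <code class="language-plaintext highlighter-rouge">comments</code> list):</p>

```python
# Sketch: pull attachment names/URLs and comment bodies out of the 'fields'
# part of the response above. The field layout is an assumption based on
# Jira's REST API; verify it against your instance.
def extract_details(fields):
    attachments = [(a['filename'], a['content']) for a in fields.get('attachment', [])]
    comments = [c['body'] for c in fields.get('comment', {}).get('comments', [])]
    return attachments, comments

# Illustrative payload only
sample_fields = {
    'attachment': [{'filename': 'spec.pdf',
                    'content': 'https://your-jira-instance.com/secure/attachment/10001/spec.pdf'}],
    'comment': {'comments': [{'body': 'Looks good to me.'}]},
}
attachments, comments = extract_details(sample_fields)
```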

<h3 id="gitlab">GitLab</h3>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Get GitLab milestones
</span><span class="n">requests</span><span class="p">.</span><span class="n">get</span><span class="p">(</span>
  <span class="n">GITLAB_URL</span> <span class="o">+</span> <span class="s">'api/v4/projects/%s/milestones'</span> <span class="o">%</span> <span class="n">GITLAB_PROJECT_ID</span><span class="p">,</span>
  <span class="n">headers</span><span class="o">=</span><span class="p">{</span><span class="s">'PRIVATE-TOKEN'</span><span class="p">:</span> <span class="n">GITLAB_TOKEN</span><span class="p">},</span>
  <span class="n">verify</span><span class="o">=</span><span class="n">VERIFY_SSL_CERTIFICATE</span>
<span class="p">)</span>
</code></pre></div></div>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Get GitLab users
</span><span class="n">requests</span><span class="p">.</span><span class="n">get</span><span class="p">(</span>
  <span class="n">GITLAB_URL</span> <span class="o">+</span> <span class="s">'api/v4/users'</span><span class="p">,</span>
  <span class="n">headers</span><span class="o">=</span><span class="p">{</span><span class="s">'PRIVATE-TOKEN'</span><span class="p">:</span> <span class="n">GITLAB_TOKEN</span><span class="p">},</span>
  <span class="n">verify</span><span class="o">=</span><span class="n">VERIFY_SSL_CERTIFICATE</span>
<span class="p">)</span>
</code></pre></div></div>
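<p>The user list is useful for assigning imported issues. Here is a sketch of a name-based mapping, with the caveat that matching on display names is an assumption; depending on your instances, username- or email-based matching may be more reliable:</p>

```python
# Sketch: map Jira assignee display names to GitLab user ids so that new
# issues can be assigned via 'assignee_ids'. Name-based matching is an
# assumption - adjust the key to 'username' or 'email' as needed.
def build_user_map(gitlab_users, jira_names):
    by_name = {u['name']: u['id'] for u in gitlab_users}
    return {name: by_name[name] for name in jira_names if name in by_name}

# Illustrative data only
gl_users = [{'id': 7, 'name': 'Jane Doe'}, {'id': 9, 'name': 'John Roe'}]
user_map = build_user_map(gl_users, ['Jane Doe', 'Unknown User'])  # {'Jane Doe': 7}
```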

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Create new GitLab issue
</span>
<span class="n">data</span> <span class="o">=</span> <span class="p">{</span> <span class="p">...</span> <span class="p">}</span>

<span class="n">response</span> <span class="o">=</span> <span class="n">requests</span><span class="p">.</span><span class="n">post</span><span class="p">(</span>
  <span class="n">GITLAB_URL</span> <span class="o">+</span> <span class="s">'api/v4/projects/%s/issues'</span> <span class="o">%</span> <span class="n">GITLAB_PROJECT_ID</span><span class="p">,</span>
  <span class="n">headers</span><span class="o">=</span><span class="p">{</span><span class="s">'PRIVATE-TOKEN'</span><span class="p">:</span> <span class="n">GITLAB_TOKEN</span><span class="p">},</span>
  <span class="n">verify</span><span class="o">=</span><span class="n">VERIFY_SSL_CERTIFICATE</span><span class="p">,</span>
  <span class="n">data</span><span class="o">=</span><span class="n">data</span>
<span class="p">)</span>

<span class="n">gl_issue</span> <span class="o">=</span> <span class="n">response</span><span class="p">.</span><span class="n">json</span><span class="p">()</span>
</code></pre></div></div>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Close GitLab issues that were already closed in Jira
</span><span class="k">if</span> <span class="n">issue</span><span class="p">[</span><span class="s">'fields'</span><span class="p">][</span><span class="s">'status'</span><span class="p">][</span><span class="s">'statusCategory'</span><span class="p">][</span><span class="s">'key'</span><span class="p">]</span> <span class="o">==</span> <span class="s">"done"</span><span class="p">:</span>
  <span class="n">requests</span><span class="p">.</span><span class="n">put</span><span class="p">(</span>
    <span class="n">GITLAB_URL</span> <span class="o">+</span> <span class="s">'api/v4/projects/%s/issues/%s'</span> <span class="o">%</span> <span class="p">(</span><span class="n">GITLAB_PROJECT_ID</span><span class="p">,</span> <span class="n">gl_issue</span><span class="p">[</span><span class="s">'iid'</span><span class="p">]),</span>
    <span class="n">headers</span><span class="o">=</span><span class="p">{</span><span class="s">'PRIVATE-TOKEN'</span><span class="p">:</span> <span class="n">GITLAB_TOKEN</span><span class="p">},</span>
    <span class="n">verify</span><span class="o">=</span><span class="n">VERIFY_SSL_CERTIFICATE</span><span class="p">,</span>
    <span class="n">data</span><span class="o">=</span><span class="p">{</span><span class="s">'state_event'</span><span class="p">:</span> <span class="s">'close'</span><span class="p">}</span>
  <span class="p">)</span>
</code></pre></div></div>
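<p>The status check above can be generalized into a small helper that derives the GitLab <code class="language-plaintext highlighter-rouge">state_event</code> payload from the Jira status category: <code class="language-plaintext highlighter-rouge">done</code> closes the issue, everything else leaves it open.</p>

```python
# Sketch generalizing the check above: derive the extra payload for the
# GitLab issue update from Jira's statusCategory key.
def state_event_for(issue):
    key = issue['fields']['status']['statusCategory']['key']
    return {'state_event': 'close'} if key == 'done' else None

# Illustrative issues only
done_issue = {'fields': {'status': {'statusCategory': {'key': 'done'}}}}
todo_issue = {'fields': {'status': {'statusCategory': {'key': 'new'}}}}
close_payload = state_event_for(done_issue)  # {'state_event': 'close'}
open_payload = state_event_for(todo_issue)   # None
```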

<p>See <a href="https://gist.github.com/juergenpointinger/c9f40b5a02e9b376a1fbc67b77c3d87a">Gist</a> for the whole source code.</p>

<h2 id="remove-unused-gitlab-labels">Remove unused GitLab labels</h2>

<p>I also created a script to remove unused labels in GitLab. You can find it <a href="https://gist.github.com/juergenpointinger/6b34a6def83470a196df7c6beb161858">here</a>.</p>

<h2 id="additional-notes">Additional Notes</h2>

<blockquote>
  <p><strong>NOTE</strong> With <code class="language-plaintext highlighter-rouge">GitLab.com</code> you currently do not have the ability to execute <code class="language-plaintext highlighter-rouge">sudo</code> requests, so no user mapping can be done. All data is created as the user whose private token is used for the import.</p>
</blockquote>]]></content><author><name>Jürgen Pointinger</name></author><category term="Code &amp; Snippets" /><summary type="html"><![CDATA[Motivation In some of my projects, including my last one, I have to deal with inhomogeneous toolstacks. Currently we work with tools like Atlassian Confluence, Jira Software, GitLab (CI), AWS and many more. Basically everything is no problem, but little by little the idea arose to combine the planning instrument, to support the agile methodology, with the development. The team had problems in the traceability of changes in the source code, there was no or only a rudimentary connection between Jira and GitLab. The important aspect for the management to react to obstacles was missing. There was little feedback from development or the status of the product back to business. In general I am a friend of documentation directly at the source code and so I often suggest Markdown as a basis - similar to my blog. And so I had the idea to at least put the product specific documentation into GitLab. This thought spread further and the idea to use GitLab completely was born. GitLab (CI) is one of the best tools available today for continuous integration and continuous delivery/deployment, but is that true for Agile Software Development? Can it stand up to a giant like Jira Software? If I compare the Agile artifacts (Epic, User Story, Task, Points and Estimation, Product backlog, Sprint/iteration, Charts, Agile board), GitLab has a solution for all artifacts. So the answer for me is YES - but a long-term study is still pending. We made the decision to switch from Jira to GitLab for our product. But how do I get the artifacts collected over time in Jira to it? Well, GitLab offers a rudimentary import for Jira Ticket starting with version 12.10. 
The description seems simple, the connection between Jira and GitLab was quickly established, and the import was started with just over 2000 tickets … What can I say, it seems that GitLab still has some problems with the import. That’s why the import was never finished and there’s no way to restart it or stop the old import. According to GitLab support, the feature is fairly new and an MVP. If you’re looking for more, you’ll find plans on where the feature should go. https://gitlab.com/gitlab-org/gitlab/-/issues/2780 https://gitlab.com/gitlab-org/gitlab/-/issues/217395 https://gitlab.com/gitlab-org/gitlab/-/issues/214812 https://gitlab.com/gitlab-org/gitlab/-/issues/214810 https://gitlab.com/gitlab-org/gitlab/-/issues/210580 But what can I do in this situation? Of course, I could hope that the ticket will be fixed relatively quickly and we can continue with the import in a timely manner. But since I still have some developer gene in me, I have decided to write my own importer. Jira 2 GitLab Jira and GitLab each offer good RESTful APIs that don’t require much. For more details about the preconditions I refer you directly to the corresponding developer documentation. Jira: https://developer.atlassian.com/cloud/jira/software/rest/ GitLab: https://docs.gitlab.com/ee/api/ https://docs.gitlab.com/ee/api/api_resources.html The following scripts were tested with Jira Cloud and GitLab.com (Cloud) 12.10. For the scripts I used Python3 and locally Python v3.7.3. Epics In GitLab, epics are managed on a group level and should be viewed separately. Epics are used at this level for roadmap planning and can be planned across multiple projects. NOTE Epics are only available with a GitLab.com Gold Subscription. # import_epics.py import requests from requests.auth import HTTPBasicAuth import json ## Jira specifics # Jira URL JIRA_URL = 'https://your-jira-instance.com/' # Jira user credentials (incl. 
API token) JIRA_ACCOUNT = ('your-jira-username', 'your-jira-api-token') # Jira project ID (short) JIRA_PROJECT = 'PRJ' # Jira Query (JQL) JQL = 'project=%s+AND+issueType=Epic+AND+resolution=Unresolved+ORDER+BY+createdDate+ASC&amp;maxResults=100' % JIRA_PROJECT # *False* if Jira / GitLab is using self-signed certificates, otherwhise *True* VERIFY_SSL_CERTIFICATE = True # Read Jira Epics response = requests.get( JIRA_URL + 'rest/api/latest/search?jql=' + JQL, auth=HTTPBasicAuth(*JIRA_ACCOUNT), verify=VERIFY_SSL_CERTIFICATE, headers={'Content-Type': 'application/json'} ) if response.status_code != 200: raise Exception("Unable to read Epics from %s!" % JIRA_PROJECT) jira_issues = response.json() for issue in jira_issues['issues']: print("Import Epic with Jira-Key " + issue['key']) Once you run the script, you should see a similar output: $ Import Epic with Jira-Key PRJ-123 $ Import Epic with Jira-Key PRJ-456 Now we can start with the transfer of the Jira Epics. For this we use the GitLab API. We need a few more variables for that. The easiest way is to take over title and description from Jira. But for a more complex variant you could also add labels, start and due date or more. See the GitLab API. ## GitLab specifics GITLAB_URL = 'https://gitlab.com/' # GitLab token will be used whenever the API is invoked GITLAB_TOKEN = 'your-private-gitlab-token' # GitLab group that you are importing to GITLAB_GROUP = 'your-group-name' # GitLab group id. GITLAB_GROUP_ID = 'your-group-id' ... for issue in jira_issues['issues']: print("Import Epic with Jira-Key " + issue['key']) title = issue['fields']['summary'] description = issue['fields']['description'] data = { 'title': title, 'description': description } response = requests.post( GITLAB_URL + 'api/v4/groups/%s/epics' % GITLAB_GROUP_ID, headers={'PRIVATE-TOKEN': GITLAB_TOKEN}, verify=VERIFY_SSL_CERTIFICATE, data=data ) NOTE One more point should be mentioned in addition. The Jira API restricts the data sets to a maximum of 100. 
This has to be solved by pagination if you have more than 100 epics. You can find the complete source code here. Issues If we take the script for the Epics as a basis, we can do the same with other ticket types like Stories, Sub-tasks, Tasks, or Spikes. For this I would like to share some code snippets that might be useful for your import. Jira # Get Jira attachments and comments requests.get( JIRA_URL + 'rest/api/latest/issue/%s/?fields=attachment,comment' % issue['id'], auth=HTTPBasicAuth(*JIRA_ACCOUNT), verify=VERIFY_SSL_CERTIFICATE, headers={'Content-Type': 'application/json'} ) GitLab # Get GitLab milestones requests.get( GITLAB_URL + 'api/v4/projects/%s/milestones' % GITLAB_PROJECT_ID, headers={'PRIVATE-TOKEN': GITLAB_TOKEN}, verify=VERIFY_SSL_CERTIFICATE ) # Get GitLab users requests.get( GITLAB_URL + 'api/v4/users', headers={'PRIVATE-TOKEN': GITLAB_TOKEN}, verify=VERIFY_SSL_CERTIFICATE ) # Create new GitLab issue data = { ... } repsonse = requests.post( GITLAB_URL + 'api/v4/projects/%s/issues' % GITLAB_PROJECT_ID, headers={'PRIVATE-TOKEN': GITLAB_TOKEN}, verify=VERIFY_SSL_CERTIFICATE, data=data ) gl_issue = response.json() # Close GitLab issues that were already closed in Jira if issue['fields']['status']['statusCategory']['key'] == "done": requests.put( GITLAB_URL + 'api/v4/projects/%s/issues/%s' % (GITLAB_PROJECT_ID, gl_issue['iid']), headers={'PRIVATE-TOKEN': GITLAB_TOKEN}, verify=VERIFY_SSL_CERTIFICATE, data={'state_event': 'close'} ) See Gist for the whole source code. Remove unused GitLab labels I also created a script to remove unused labels in GitLab. You can find it here. Additional Notes NOTE With GitLab.com you currently do not have the ability to execute SUDO commands. Therefore no user mapping can be done with it. 
All data is done with the user (private-token) who is also importing the data.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://juergenpointinger.github.io/assets/images/blog/jira-gitlab-migration.jpg" /><media:content medium="image" url="https://juergenpointinger.github.io/assets/images/blog/jira-gitlab-migration.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Why do we need DevOps</title><link href="https://juergenpointinger.github.io/why-do-we-need-devops/" rel="alternate" type="text/html" title="Why do we need DevOps" /><published>2020-04-18T00:00:00+00:00</published><updated>2020-04-18T00:00:00+00:00</updated><id>https://juergenpointinger.github.io/why-do-we-need-devops</id><content type="html" xml:base="https://juergenpointinger.github.io/why-do-we-need-devops/"><![CDATA[<h2 id="its-not-the-strongest-that-survives">It’s not the strongest that survives</h2>

<p>A long time ago, at the beginning of the 19th century, more precisely in 1809, Charles Darwin made a statement that was formative for that time, a statement that is still valid today and will probably survive for a long time.</p>

<p><img src="/assets/images/blog/darwin.png" alt="Charles Darwin" /></p>

<blockquote>
  <p>It is not the strongest of the species that survives, 
nor the most intelligent. It is the one most adaptable to change.</p>

  <p><strong>Charles Darwin</strong>, 1809</p>
</blockquote>

<p>The statement itself has been adapted somewhat over time, but its basic message has not changed, and in times of COVID-19 it has become more relevant than ever before.</p>

<p>It lays the foundation for something else, it’s part of an area in DevOps that is often simply called a mindset.</p>

<p>Of course, one could now claim that mindset is not the only thing that matters in the software development life cycle (SDLC). Principles and practices are just as important, and it also pays to think about harmonizing tools. Nevertheless, without the right mindset, all of the above are only short-lived painkillers.</p>

<p>So let me briefly explain why we need DevOps, with all its aspects, so desperately. I want to start with a look at anti-patterns, because it is always easier to spot things that DON’T work. And don’t worry, I will come up with solutions later.</p>

<h2 id="why-it-projects-fail">Why IT projects fail</h2>

<p>Gartner studies suggest that 75% of all IT projects in the USA are seen as failures by those who initiated them.</p>

<p>The total number of failed projects results from different areas: Project Initiation and Planning Issues, Technical and Requirements Issues, Project Management Issues, and Stakeholder Management and Team Issues.</p>

<p>Especially many problems can be found in the latter area, “Stakeholder Management and Team Issues”.</p>

<h3 id="stakeholder-management-and-team-issues">Stakeholder Management and Team Issues</h3>

<p>If we now look at some points why this area is so problematic, the cry for a fix becomes obvious:</p>

<ul>
  <li>Insufficient attention to stakeholders and their needs</li>
  <li>Failure to manage expectations</li>
  <li>Lack of senior management/executive support</li>
  <li>Inadequate visibility of project status</li>
  <li>Denial adopted in preference to hard truths</li>
  <li>People not dedicated to project</li>
  <li>Project team members lack experience and do not have the required skills</li>
  <li>Team lacks authority or decision making ability</li>
  <li>Poor collaboration, communication and teamwork</li>
</ul>

<blockquote>
  <p>It is the same old thinking that leads to the same old results.</p>
</blockquote>

<p>I don’t want to claim that the above statement is mine, but I think that this describes one of the biggest problems in today’s SDLC – the inability to adapt to change.</p>

<p>What I want to convey is that it is often not the tools or the technology that make a project fail. But there are other challenges in the SDLC that we will now look at.</p>

<p><img src="/assets/images/blog/the-sign.jpg" alt="You're not lost, you're here" /></p>

<h2 id="the-challenges">The Challenges</h2>

<p>As an IT consultant and DevOps advocate I have seen a lot of different customer scenarios in the past years, and even though I take on different roles in different projects, I also see a lot of similarities. I would like to share those similarities with you; maybe they feel familiar:</p>

<table>
  <thead>
    <tr>
      <th style="text-align: left">Topics</th>
      <th style="text-align: left">Challenges</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align: left"><strong>Culture</strong></td>
      <td style="text-align: left">- No “You build it, you run it” mentality<br />- Multiple barriers between departments</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Lack of shared tool</strong></td>
      <td style="text-align: left">- No integration between core components<br />- Inconsistent Toolchain<br />- Continuous Integration Maturity Dashboard missing</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Different Objectives</strong></td>
      <td style="text-align: left">- Lack of DevOps Mindset<br />- Coordination is hard and takes enormous efforts<br />- Different Focus area</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Manual Activities</strong></td>
      <td style="text-align: left">- Manual release process<br />- Irregular versioning<br />- No standardization</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Monitoring</strong></td>
      <td style="text-align: left">- Dashboard views missing<br />- Lack of centralized log management<br />- No application error management<br />- Poor incident management</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Security</strong></td>
      <td style="text-align: left">- Guidelines missing<br />- No security assessment<br />- No built-in security during development</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Environment</strong></td>
      <td style="text-align: left">- Infrastructure provisioning takes weeks<br />- Inconsistency in Continuous Delivery<br />- No centralized user management</td>
    </tr>
    <tr>
      <td style="text-align: left"><strong>Knowledge Gaps</strong></td>
      <td style="text-align: left">- Documentation missing<br />- Troublesome Onshore – Offshore coordination<br />- Knowledge islands<br />- Information hiding</td>
    </tr>
  </tbody>
</table>

<p>Now that we know some of the challenges in an SDLC, we should take a look at the idea of DevOps and how it can be a cure for the itch.</p>

<h2 id="the-idea">The Idea</h2>

<p>If you search the Internet for an explanation of what DevOps exactly means, you will find many different views. Tool vendors try to make it a tool problem, consultants try to make it a fluffy mindset thing. I think the solution lies somewhere in the middle and I have therefore come up with my very own explanation, which I have compiled for you below.</p>

<blockquote>
  <p>DevOps is organizational culture and leadership with lean and agile principles combined with technical practices to enable continuous delivery of value to end users.</p>

  <p>These drive organizational performance and technology performance.</p>
</blockquote>

<p>You could say that this is a high-level explanation, but it helped me a lot to quickly get into conversation with the C-level management.</p>

<p>The reason is that this summary covers most of the challenges mentioned before; on a very abstract level, of course, but in a compact and meaningful way, which is always a good way to start a conversation.</p>

<h3 id="whats-so-special-about-devops">What’s so special about DevOps?</h3>

<p>DevOps is based on agility and agile development. But it goes at least one step further: it includes the complete business side and operations. Using principles such as automation, for example, it attempts to gradually level out the overhead that has built up, or to turn it into something that adds value.</p>

<p>What DevOps can thus accomplish is:</p>

<ul>
  <li>Higher quality</li>
  <li>Faster delivery</li>
  <li>Lower costs</li>
  <li>More flexibility</li>
</ul>

<p><img src="/assets/images/blog/passion-led-us-here.jpg" alt="Passion led us here" /></p>

<p>So if DevOps manages to make only half of the above mentioned IT projects a success, then we should definitely invest in DevOps transitions and start sooner rather than later.</p>

<p>However, it should also be clearly mentioned that DevOps does not make sense for every project and every organization, but a lot of its principles and practices are always applicable and bring a quick ROI for almost every project and every organization.</p>

<p>How can DevOps be applied to real life scenarios and how can we take the idea of DevOps and turn the mentioned challenges into opportunities?</p>

<p>An approach that I personally found very useful in many situations is the conceptual framework CALMS. The acronym stands for Culture, Automation, Lean, Measurement and Sharing. You will read about it soon in an upcoming blog post of mine.</p>

<p>Until then I am very interested in your thoughts, experiences and feedback. Stay tuned and healthy!</p>]]></content><author><name>Jürgen Pointinger</name></author><category term="Leadership" /><summary type="html"><![CDATA[It’s not the strongest that survives]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://juergenpointinger.github.io/assets/images/blog/why-do-we-need-devops.jpg" /><media:content medium="image" url="https://juergenpointinger.github.io/assets/images/blog/why-do-we-need-devops.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">How to use Flyway with Docker</title><link href="https://juergenpointinger.github.io/use-flyway-with-docker/" rel="alternate" type="text/html" title="How to use Flyway with Docker" /><published>2020-04-16T00:00:00+00:00</published><updated>2020-04-16T00:00:00+00:00</updated><id>https://juergenpointinger.github.io/use-flyway-with-docker</id><content type="html" xml:base="https://juergenpointinger.github.io/use-flyway-with-docker/"><![CDATA[<h2 id="flyway-and-docker">Flyway and Docker</h2>

<p>If we think about a migration of databases, the analysis will probably not get past <a href="https://flywaydb.org/">Flyway</a>.</p>

<p>Flyway has focused on exactly one task and in this area I think they are doing a very good job.</p>

<p>The Community Edition comes with a good set of features. For development in the <a href="https://flywaydb.org/documentation/database/postgresql">Postgres</a> area this seems sufficient, especially in the beginning. It is important to note the “Guaranteed database support timeline”: the Community Edition comes with a 5-year guarantee. Currently this would mean the following for Postgres.</p>

<p>Supported versions: 12, 11, 10, 9.6, 9.5, 9.4. The Enterprise Edition would extend support to these versions: 9.3, 9.2, 9.1, 9.0.</p>

<p>As soon as you think about <a href="https://flywaydb.org/documentation/database/oracle#sqlplus-commands">Oracle SQL*Plus commands</a> and want to use them in your project, there is no way around the Pro or Enterprise Edition.</p>

<p>As soon as you have decided on a version, it is time to prepare your first “migration”.</p>

<p>Of course, you can download the cross-platform Flyway distribution to your system and use its CLI.</p>

<p>However, I would suggest using a <a href="https://hub.docker.com/r/flyway/flyway/">dockerized</a> version right away. The advantage is that you can reuse this approach later in a Continuous Integration / Delivery pipeline.</p>

<p>The <code class="language-plaintext highlighter-rouge">info</code> CLI command prints the details and status information about all the migrations.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker run <span class="nt">--rm</span> flyway/flyway info
</code></pre></div></div>

<p>Flyway also supports different <a href="https://flywaydb.org/documentation/commandline/#jdbc-drivers">JDBC drivers</a>, if these should not be sufficient, e.g. with Oracle (ojdbc8), then one can simply reinstall the drivers or tell Flyway where the drivers can be found.</p>

<p>A list of useful CLI commands can be found <a href="https://flywaydb.org/documentation/commandline/">here</a>.</p>

<p>Flyway can be used quite nicely as a build image as well and it is almost as easy as using the CLI directly.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker pull flyway/flyway:6-alpine
<span class="nv">$ </span>docker run <span class="nt">--rm</span> <span class="nt">-v</span> <span class="k">${</span><span class="nv">pwd</span><span class="k">}</span>/sql:/flyway/sql flyway/flyway:6-alpine <span class="nt">-url</span><span class="o">=</span>jdbc:postgresql://&lt;HOST&gt;/&lt;DBNAME&gt; <span class="nt">-user</span><span class="o">=</span>&lt;USER&gt; <span class="nt">-password</span><span class="o">=</span>&lt;PASSWORD&gt; info
</code></pre></div></div>

<p>Instead of downloading the Flyway executable manually, you pull the Docker image with <code class="language-plaintext highlighter-rouge">docker pull</code> as shown above. Of course you have to tell Flyway where to find the scripts, which you can easily do with volumes. It is also necessary to establish a connection to the database; after that, the respective CLI command can be used.</p>

<p>If we get along with the standard JDBC drivers, we can stop here. However, if other drivers are needed, I recommend to create your own Docker image with the appropriate driver pre-installed and possibly even a license defined.</p>

<p>The following Dockerfile shows how this works:</p>

<div class="language-dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">## Base: https://github.com/flyway/flyway-docker</span>
<span class="k">FROM</span><span class="s"> flyway/flyway:latest-alpine</span>

<span class="k">LABEL</span><span class="s"> maintainer="juergenpointinger" \</span>
      description="This is a build container image to interact with Flyway"

## Supported Volumes
<span class="c"># /flyway/conf: Directory containing a flyway.conf configuration file</span>
<span class="c"># /flyway/drivers: Directory containing the JDBC driver for your database</span>
<span class="c"># /flyway/sql: The SQL files that you want Flyway to use (for SQL-based migrations)</span>
<span class="c"># /flyway/jars: The jars files that you want Flyway to use (for Java-based migrations)</span>
<span class="k">VOLUME</span><span class="s"> [ "/flyway/conf", "/flyway/drivers", "/flyway/sql", "/flyway/jars" ]</span>

<span class="c"># Use specific Oracle JDBC driver</span>
<span class="k">COPY</span><span class="s"> ojdbc8.jar "/flyway/drivers"</span>

<span class="c">## Flyway Edition</span>
<span class="c"># community: Select the Flyway Community Edition (default)</span>
<span class="c"># pro: Select the Flyway Pro Edition</span>
<span class="c"># enterprise: Select the Flyway Enterprise Edition</span>
<span class="k">ENV</span><span class="s"> FLYWAY_EDITION=community</span>
</code></pre></div></div>

<p>Afterwards the newly created image can be used just like the original Flyway image. Have fun with Flyway.</p>]]></content><author><name>Jürgen Pointinger</name></author><category term="Code &amp; Snippets" /><summary type="html"><![CDATA[Flyway and Docker]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://juergenpointinger.github.io/assets/images/blog/use-flyway-with-docker.jpg" /><media:content medium="image" url="https://juergenpointinger.github.io/assets/images/blog/use-flyway-with-docker.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">How to use scoped NPM registry</title><link href="https://juergenpointinger.github.io/scoped-npm-registry/" rel="alternate" type="text/html" title="How to use scoped NPM registry" /><published>2020-04-15T14:30:00+00:00</published><updated>2020-04-15T14:30:00+00:00</updated><id>https://juergenpointinger.github.io/scoped-npm-registry</id><content type="html" xml:base="https://juergenpointinger.github.io/scoped-npm-registry/"><![CDATA[<h2 id="scoped-registry">Scoped registry</h2>

<p>In most projects it is recommended to use a <a href="https://docs.npmjs.com/misc/scope">scoped</a> (private) registry to share self-developed modules.</p>

<p>Sometimes it is not possible or makes no sense to use tools like <a href="https://jfrog.com/artifactory/">JFrog Artifactory</a> or <a href="https://www.sonatype.com/nexus-repository-oss">Sonatype Nexus Repository</a>.</p>

<p>In <a href="https://docs.gitlab.com/ee/user/packages/npm_registry/">GitLab</a>, such registries can now be used relatively easily.</p>

<p>Add the GitLab NPM Registry to your local or global NPM configuration. Replace <code class="language-plaintext highlighter-rouge">@your-scope</code> with your specific scope name (e.g. your organization name):</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>npm config <span class="nb">set</span> @your-scope:registry https://gitlab.com/api/v4/packages/npm/
</code></pre></div></div>

<p>Your config output should look like this (on Windows):</p>

<div class="language-text highlighter-rouge"><div class="highlight"><pre class="highlight"><code>; cli configs
metrics-registry = "https://registry.npmjs.org/"
...

; userconfig C:\Users\&lt;userprofile&gt;\.npmrc
@your-scope:registry = "https://gitlab.com/api/v4/packages/npm/"
</code></pre></div></div>

<p>Now you just need to authenticate with the newly created scoped registry. Replace <code class="language-plaintext highlighter-rouge">&lt;your_token&gt;</code> with your personal access token:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>npm config <span class="nb">set</span> <span class="s1">'//gitlab.com/api/v4/packages/npm/:_authToken'</span> <span class="s2">"&lt;your_token&gt;"</span>
</code></pre></div></div>

<p>The commands described above change your NPM userconfig, your user-specific .npmrc file. If you want to make the change on a global level, you have to append <code class="language-plaintext highlighter-rouge">--global</code> at the end.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>npm config <span class="nb">set</span> @your-scope:registry https://gitlab.com/api/v4/packages/npm/ <span class="nt">--global</span>
<span class="nv">$ </span>npm config <span class="nb">set</span> <span class="s1">'//gitlab.com/api/v4/packages/npm/:_authToken'</span> <span class="s2">"&lt;your_token&gt;"</span> <span class="nt">--global</span>
</code></pre></div></div>

<p>The .npmrc in a global context:</p>

<div class="language-text highlighter-rouge"><div class="highlight"><pre class="highlight"><code>; cli configs
metrics-registry = "https://registry.npmjs.org/"
...

; globalconfig C:\Program Files\nodejs\etc\npmrc
@your-scope:registry = "https://gitlab.com/api/v4/packages/npm/"
</code></pre></div></div>]]></content><author><name>Jürgen Pointinger</name></author><category term="Code &amp; Snippets" /><summary type="html"><![CDATA[Scoped registry]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://juergenpointinger.github.io/assets/images/blog/scoped-npm-registry.jpg" /><media:content medium="image" url="https://juergenpointinger.github.io/assets/images/blog/scoped-npm-registry.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Docker Cheat Sheet</title><link href="https://juergenpointinger.github.io/useful-docker-commands/" rel="alternate" type="text/html" title="Docker Cheat Sheet" /><published>2020-04-15T14:30:00+00:00</published><updated>2020-04-15T14:30:00+00:00</updated><id>https://juergenpointinger.github.io/useful-docker-commands</id><content type="html" xml:base="https://juergenpointinger.github.io/useful-docker-commands/"><![CDATA[<h2 id="cheat-sheet">Cheat Sheet</h2>

<p>To make my life easier, I started some time ago to create a cheat sheet for Docker. Since I don’t want to keep this knowledge only to myself, I am sharing some of the most important CLI commands I need in my daily work.</p>

<h2 id="login">Login</h2>

<p>When you start working with continuous integration tools, you quickly reach the point where you also need a private container registry.</p>

<p>Docker provides a very simple way to use a registry and log in with your credentials:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker login registry.gitlab.com
</code></pre></div></div>

<p>To make the <code class="language-plaintext highlighter-rouge">docker login</code> command non-interactive, as required for example in GitLab CI, you can set the <code class="language-plaintext highlighter-rouge">--password-stdin</code> flag to provide the password via <code class="language-plaintext highlighter-rouge">STDIN</code>.</p>

<p>Using <code class="language-plaintext highlighter-rouge">STDIN</code> prevents the password from ending up in the shell’s history or log files.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span><span class="nb">echo</span> <span class="s2">"</span><span class="k">${</span><span class="nv">CI_REGISTRY_PASSWORD</span><span class="k">}</span><span class="s2">"</span> | docker login <span class="nt">-u</span> <span class="s2">"</span><span class="k">${</span><span class="nv">CI_REGISTRY_USER</span><span class="k">}</span><span class="s2">"</span> <span class="s2">"</span><span class="k">${</span><span class="nv">CI_REGISTRY</span><span class="k">}</span><span class="s2">"</span> <span class="nt">--password-stdin</span>
</code></pre></div></div>

<h2 id="system-health">System health</h2>

<p>To check the current Docker disk usage, and thus how much space it takes up on the system, a single command suffices:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker system <span class="nb">df</span>
</code></pre></div></div>

<h2 id="purging-all-unused-or-dangling-images-containers-volumes-and-networks">Purging all unused or dangling Images, Containers, Volumes, and Networks</h2>

<p>Docker provides a single command that will clean up any resources — images, containers, volumes, and networks — that are dangling (not associated with a container):</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker system prune
</code></pre></div></div>

<p>To additionally remove any stopped containers and all unused images (not just dangling ones), add the <code class="language-plaintext highlighter-rouge">-a</code> flag to the command:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker system prune <span class="nt">-a</span>
</code></pre></div></div>

<h2 id="removing-docker-images">Removing Docker Images</h2>

<h3 id="removing-dangling-untagged-images">Removing dangling (untagged) Images</h3>

<p>Docker images consist of multiple layers. Dangling images are layers that have no relationship to any tagged image. They no longer serve a purpose and only consume disk space. They can be located by adding the filter flag <code class="language-plaintext highlighter-rouge">-f</code> with a value of <code class="language-plaintext highlighter-rouge">dangling=true</code> to the <code class="language-plaintext highlighter-rouge">docker images</code> command. When you’re sure you want to delete them, you can use one of the following commands:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker rmi <span class="si">$(</span>docker images <span class="nt">-f</span> <span class="s2">"dangling=true"</span> <span class="nt">-q</span><span class="si">)</span>
</code></pre></div></div>

<p>or</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker image prune
</code></pre></div></div>

<blockquote>
  <p><strong>NOTE:</strong> If you build an image without tagging it, the image will appear on the list of dangling images because it has no association with a tagged image.</p>
</blockquote>

<h3 id="removing-specific-images-by-pattern">Removing specific Images by Pattern</h3>

<p>Sometimes it is necessary to search for images matching a certain <code class="language-plaintext highlighter-rouge">&lt;&lt;pattern&gt;&gt;</code>. A combination of the Docker CLI and <code class="language-plaintext highlighter-rouge">grep</code> can help.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ docker images -a | grep "&lt;&lt;pattern&gt;&gt;"
</code></pre></div></div>

<p>If you are comfortable with the search/filter, you can use <code class="language-plaintext highlighter-rouge">awk</code> to pass the IDs to the Docker CLI:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker image <span class="nb">rm</span> <span class="si">$(</span>docker images <span class="nt">-a</span> | <span class="nb">grep</span> <span class="s2">"&lt;&lt;pattern&gt;&gt;"</span> | <span class="nb">awk</span> <span class="s1">'{print $3}'</span><span class="si">)</span>
</code></pre></div></div>

<p>or</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker images <span class="nt">-a</span> | <span class="nb">grep</span> <span class="s2">"&lt;&lt;pattern&gt;&gt;"</span> | <span class="nb">awk</span> <span class="s1">'{print $3}'</span> | xargs docker image <span class="nb">rm</span>
</code></pre></div></div>

<blockquote>
  <p><strong>NOTE:</strong> Instead of <code class="language-plaintext highlighter-rouge">image rm</code>, you can use the shorter alias <code class="language-plaintext highlighter-rouge">rmi</code>.</p>
</blockquote>
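<p>To see exactly what the <code class="language-plaintext highlighter-rouge">awk</code> step extracts, the pipeline can be sketched against fabricated <code class="language-plaintext highlighter-rouge">docker images</code> output (the repository names and IDs below are made up):</p>

```shell
# Fabricated `docker images` output; underscores keep each column a
# single whitespace-separated token.
sample='REPOSITORY TAG IMAGE_ID CREATED SIZE
registry.example.com/app latest 1a2b3c4d5e6f 2_days_ago 120MB
registry.example.com/db 9.6 6f5e4d3c2b1a 3_weeks_ago 300MB'

# grep narrows the rows, awk keeps only the third column -- the image ID.
# Prints: 1a2b3c4d5e6f and 6f5e4d3c2b1a, one per line.
echo "$sample" | grep "example" | awk '{print $3}'
```

<p>The real pipeline works the same way; only the sample data is replaced by the live <code class="language-plaintext highlighter-rouge">docker images -a</code> output.</p>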

<h3 id="removing-all-images">Removing all Images</h3>

<p>Deleting all images can also be handled by the Docker CLI. To do this, simply append <code class="language-plaintext highlighter-rouge">-a</code> to the command; <code class="language-plaintext highlighter-rouge">-q</code> returns only the image IDs, which can then be passed on to other commands.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker image <span class="nb">rm</span> <span class="si">$(</span>docker images <span class="nt">-a</span> <span class="nt">-q</span><span class="si">)</span>
</code></pre></div></div>

<p>or</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker image <span class="nb">rm</span> <span class="si">$(</span>docker image <span class="nb">ls</span> <span class="nt">-q</span><span class="si">)</span>
</code></pre></div></div>
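<p>As a side note on why this composes: <code class="language-plaintext highlighter-rouge">$(...)</code> splits the inner command’s output on whitespace and hands every ID to a single invocation. A minimal sketch, with a made-up helper standing in for <code class="language-plaintext highlighter-rouge">docker image ls -q</code> and <code class="language-plaintext highlighter-rouge">echo</code> standing in for the removal command:</p>

```shell
# Stand-in for `docker image ls -q`: one fabricated image ID per line.
fake_image_ls() {
  printf '%s\n' 1a2b3c4d5e6f 6f5e4d3c2b1a
}

# Command substitution turns both lines into arguments of one call.
# Prints: docker image rm 1a2b3c4d5e6f 6f5e4d3c2b1a
echo docker image rm $(fake_image_ls)
```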

<h2 id="removing-docker-containers">Removing Docker Containers</h2>

<p>Similar to the <code class="language-plaintext highlighter-rouge">docker images</code> command, you can also add the <code class="language-plaintext highlighter-rouge">-a</code> parameter to the <code class="language-plaintext highlighter-rouge">docker ps</code> command, which lists the running containers, so that all containers are shown, including those that are no longer running.</p>

<p>The displayed container ID or name can then be reused:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker container <span class="nb">rm</span> &lt;&lt;container_id/name&gt;&gt;
</code></pre></div></div>

<p>It is also possible to display only the exited containers:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker ps <span class="nt">-a</span> <span class="nt">-f</span> <span class="nv">status</span><span class="o">=</span>exited
</code></pre></div></div>

<h2 id="removing-specific-containers-by-pattern">Removing specific Containers by Pattern</h2>


<p>The same pattern-based approach that works for images also works for containers.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker container <span class="nb">rm</span> <span class="si">$(</span>docker ps <span class="nt">-a</span> | <span class="nb">grep</span> <span class="s2">"&lt;&lt;pattern&gt;&gt;"</span> | <span class="nb">awk</span> <span class="s1">'{print $1}'</span><span class="si">)</span>
</code></pre></div></div>

<p>or</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker ps <span class="nt">-a</span> | <span class="nb">grep</span> <span class="s2">"&lt;&lt;pattern&gt;&gt;"</span> | <span class="nb">awk</span> <span class="s1">'{print $1}'</span> | xargs docker container <span class="nb">rm</span>
</code></pre></div></div>

<blockquote>
  <p><strong>NOTE:</strong> In this case <code class="language-plaintext highlighter-rouge">{print $1}</code> returns the ID of the container.</p>
</blockquote>
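<p>The <code class="language-plaintext highlighter-rouge">xargs</code> variant achieves the same thing by collecting the piped lines into arguments; sketched here with fabricated IDs and <code class="language-plaintext highlighter-rouge">echo</code> standing in for the Docker CLI:</p>

```shell
# Two fabricated container IDs, one per line, as `docker ps -q`
# would print them; xargs bundles them into a single invocation.
# Prints: docker container rm abc123 def456
printf '%s\n' abc123 def456 | xargs echo docker container rm
```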

<h2 id="removing-all-stopped-containers">Removing all stopped Containers</h2>

<p>With the knowledge we have just gained, we can now simply delete all the containers that have exited.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Historical command</span>
<span class="nv">$ </span>docker <span class="nb">rm</span> <span class="nt">-f</span> <span class="si">$(</span>docker ps <span class="nt">-aq</span> <span class="nt">-f</span> <span class="nv">status</span><span class="o">=</span>exited<span class="si">)</span>
</code></pre></div></div>

<p>or</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker container prune
</code></pre></div></div>

<blockquote>
  <p><strong>NOTE:</strong> In the first variant, <code class="language-plaintext highlighter-rouge">-q</code> again passes only the container IDs to the Docker CLI.</p>
</blockquote>

<h2 id="removing-all-containers">Removing all Containers</h2>

<p>If we need to remove all containers, both the running and the stopped ones, we can use the following command.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker container <span class="nb">rm</span> <span class="nt">-f</span> <span class="si">$(</span>docker container <span class="nb">ls</span> <span class="nt">-aq</span><span class="si">)</span>
</code></pre></div></div>

<h2 id="removing-all-volumes">Removing all Volumes</h2>

<p>To remove volumes that are no longer in use, we can use one of the following commands. Note that <code class="language-plaintext highlighter-rouge">docker volume rm</code> will fail for volumes still attached to a container, while <code class="language-plaintext highlighter-rouge">docker volume prune</code> only removes unused volumes.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker volume <span class="nb">rm</span> <span class="si">$(</span>docker volume <span class="nb">ls</span> <span class="nt">-q</span><span class="si">)</span>
</code></pre></div></div>

<p>or</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker volume prune
</code></pre></div></div>

<h2 id="removing-all-builder-cache">Removing all Builder cache</h2>

<p>To remove the build cache, we can use:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker builder prune
</code></pre></div></div>

<h2 id="stats-for-all-running-containers">Stats for all running Containers</h2>

<p>The <code class="language-plaintext highlighter-rouge">docker stats</code> command is used to display a live stream of container resource usage statistics. This command can be extended to display the statistics of all containers simultaneously.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>docker ps <span class="nt">-q</span> | xargs docker stats
</code></pre></div></div>]]></content><author><name>Jürgen Pointinger</name></author><category term="Code &amp; Snippets" /><summary type="html"><![CDATA[Cheat Sheet]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://juergenpointinger.github.io/assets/images/blog/useful-docker-commands.jpg" /><media:content medium="image" url="https://juergenpointinger.github.io/assets/images/blog/useful-docker-commands.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry></feed>