
Knowledge spew on GitOps

In working with a handful of customers, the concept of GitOps continues to resonate more and more. Let’s dive into a brain dump of some of the conversations related to GitOps and how these customers tackled the task at hand.

The first thing to remember is that these customers are not massive; they are actually quite common: Gartner-defined “medium-sized” enterprises. Keep in mind these customers have the same issues as the giant enterprises, just at a different scale.

In every case there was a user story. At a high level, a common theme was the need to roll out updates to a specific application regularly enough to find ways to entice the consumer to purchase a widget of some sort. Ok, A/B testing. Simple enough.

Each of the customers was at a different maturity level when it came to development processes, Kubernetes knowledge, and DevOps methods. However, they all had one thing in common…the need to deliver an application to their customer base on a deadline and continuously improve it based on user feedback. All three of them were successful in meeting their self-imposed deadlines. How?

Simple. Every one of them came together, ironed out a plan, and implemented it. The interesting part: every one of them already knew how to get the product to market. All they needed was a bit of guidance on how to overcome obstacles and get shit done. How?

  • Step one. Define the top of the mountain, the finish line, the end result.
  • Step two. The project leads built out a high level timeline from end to beginning.
  • Step three. All of the team members came together to build out the task teams.
  • Step four. Each of the teams built out their respective timeline for contribution.
  • Step five. Build.

Now how does this relate to GitOps? GitOps was the pivotal methodology to get it done. The pipeline was built with all of the parts in mind. If you recall the DevOps “infinity loop”, the key is to combine it with the OODA loop decision model. The combination creates a very powerful decision-making framework facilitating agile development with constant improvement. Sound simple? In theory, yes, but the implementation is like a relationship: everything is great when dating, but the hard work begins when dating turns to marriage. The same goes for creating a product. Designing the product, what it needs to do, and all of the moving parts is fun. The real work comes in when the first working build is complete.

This is where GitOps shines. The developers build things, test locally, and commit. The pipelines move the work through the process, and all of the other teams contribute to each part of this machine. If one part breaks down, the other work stops to crowdsource the problem. The problem is fixed and the machine continues on. GitOps is the magical fairy dust. What about the technology?

The technology is rather mundane actually. Git. A code repository. A CI/CD pipeline. A build system. A test harness. A deployment platform. For Git, the tools of choice are GitHub or GitLab. GitHub is pretty slick, but GitLab allows for running locally in small environments building closed-source deliverables. Each has a pipeline mechanism, and there are many other tools such as Tekton, Argo, CircleCI, and others with various features depending on what is needed. For build systems, many exist, and again each has features as needed. However, the deployment platform consistently remains the same: Kubernetes.

Building deployable applications at scale is hard. There are many other moving parts, processes, tools, etc. in play. However, one thing stands out in all of these engagements: give the right people who have the will to succeed the skills needed to succeed, and the execution part will look easy.

It is always fun to be a part of something, but the most precious reward is being able to step away and watch the machine run on its own.

That’s the end of this spew. It went everywhere…maybe it’s more like a sneeze.

Peace out.

Declarative vs Imperative in Kubernetes

To be declarative or to be imperative?

Kubernetes is a powerful tool for orchestrating containerized applications across a cluster of nodes. It provides users with two methods for managing the desired state of their applications: the Declarative and Imperative approaches.

The Imperative approach

The Imperative approach requires users to manually issue commands to Kubernetes to manage the desired state of their applications. This approach gives users direct control over the state of their applications, but also requires more manual effort and expertise, as well as a more in-depth understanding of Kubernetes. Additionally, the Imperative approach does not provide any version control or rollback capabilities, meaning that users must be more mindful of any changes they make and take extra care to ensure they are not introducing any unintended consequences.

A simple set of imperative commands to create a deployment

To create a Kubernetes deployment using the Imperative approach, users must issue the following commands:

Create a new deployment named my-deployment and use the image my-image:

kubectl create deployment my-deployment --image=my-image

Scale the deployment to 3 pods:

kubectl scale deployment my-deployment --replicas=3
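
A few other day-to-day operations follow the same imperative pattern. For example (the container name here assumes it matches the declarative example later in this post):

kubectl set image deployment/my-deployment my-container=my-image:v2   # roll out a new image

kubectl rollout status deployment/my-deployment   # watch the rollout progress

kubectl delete deployment my-deployment   # tear the deployment down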

The Declarative approach

In the Declarative approach, users express their desired state in the form of Kubernetes objects such as Pods and Services. These objects are then managed by Kubernetes, which ensures that the actual state of the system matches the desired state without requiring users to manually issue commands. This approach also provides version control and rollback capabilities, allowing users to easily revert back to a previous state if necessary.

Below is an example Kubernetes deployment yaml (my-deployment.yaml) which can be used to create the same Kubernetes deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: my-image:latest
        ports:
        - containerPort: 80

To create or update the deployment using this yaml, use the following command:

kubectl apply -f my-deployment.yaml
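
Because the desired state lives in a file, a change can also be previewed before it is applied:

kubectl diff -f my-deployment.yaml   # show what would change without applying it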

Infrastructure as Code

The primary difference between the Declarative and Imperative approaches in Kubernetes is that the Declarative approach is a more automated and efficient way of managing applications, while the Imperative approach gives users more direct control over their applications. The Declarative approach also gives rise to managing Infrastructure as Code, which is the secret sauce for maintaining version control and rollback capabilities.

In general, the Declarative approach is the preferred way to manage applications on Kubernetes as it is more efficient and reliable, allowing users to easily define their desired state and have Kubernetes manage the actual state. However, the Imperative approach can still be useful in certain situations where direct control of the application state is needed. Ultimately, it is up to the user to decide which approach is best for their needs.

Creating a pipeline in GitLab

Creating a pipeline in GitLab is a great way to deploy applications to various environments. It allows you to automate the process of building, testing, and deploying your application. With GitLab, you can define a pipeline that consists of a series of stages, each containing jobs such as building the application, running tests, and deploying the application. By leveraging the power of a pipeline, you can ensure that your application is deployed quickly, efficiently, and with minimal hassle. This guide will walk you through the steps to create a pipeline in GitLab, with code examples.

Step 1: Creating a New Project

To start a new pipeline, you will first need to create a new project in Gitlab. To do this, you can go to Projects > New Project and fill out the form.

Step 2: Creating the Pipeline

Once the project is created, define the pipeline in a .gitlab-ci.yml file at the root of the repository. In this file you declare the stages and the jobs that run in each stage. Here is an example of a simple pipeline:

image: node:lts   # assumes a Node.js project; swap in the image your runtime needs

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - npm install
    - npm run build

test:
  stage: test
  script:
    - npm test

deploy:
  stage: deploy
  script:
    - npm run deploy   # assumes a "deploy" script is defined in package.json

Step 3: Running the Pipeline

Once you have created the pipeline, you can run it by going to CI/CD > Pipelines and clicking the “Run Pipeline” button. This will execute the pipeline, and you can check the progress of the pipeline in the pipeline view.

Step 4: Troubleshooting

If you encounter any issues while running the pipeline, you can click on the pipeline to open up the pipeline view. Here, you will be able to view the log output of each stage, and debug any issues that may be occurring.

Conclusion

Creating a pipeline in GitLab is a great way to easily deploy applications to various environments. With the stages, jobs, and troubleshooting steps covered above, you have everything needed to build, test, and deploy your application quickly, efficiently, and with minimal hassle.

Using CI/CD pipelines in Kubernetes explained

Building efficient development pipelines in Kubernetes requires the right tooling, a lot of planning, and a good understanding of your development processes.

To quickly build and deliver robust products and benefit from automation and efficient collaboration, the software team relies on continuous integration/continuous delivery (CI/CD) pipelines. Implementing CI/CD for cloud native applications makes delivery cycles more robust while streamlining the development and deployment workflow.

Let’s talk about the key components of a CI/CD pipeline, how to optimize these pipelines and some recommended best practices and tools.

What Makes an Efficient CI/CD Pipeline

The Kubernetes platform and CI/CD workflows both aim to improve software quality, as well as automate and boost development velocity. So companies benefit from having CI/CD pipelines to use with Kubernetes.

The following are some key components of a Kubernetes-based CI/CD pipeline:

  • Containers help achieve encapsulation of application components while enabling seamless integration through runtimes.
  • Operating clusters deploy the containers for your software build once the CI/CD tool approves the containers.
  • Configuration management stores all details related to the infrastructure setup and identifies any newly introduced change in the system.
  • A version control system (VCS) is a unified source code repository that maintains code changes. This generates the trigger for a CI/CD tool to start the pipeline whenever a new change is pushed into its repository.
  • Image registries store container images.
  • Security testing and audits maintain the equilibrium between rapid development and security of the application by ensuring the pipelines are free from potential security threats.
  • Continuous monitoring and observability allow developers to obtain actionable insights and metrics by providing complete visibility into the application life cycle.

Key Considerations to Make Your Pipeline Effective

CI/CD sits at the core of DevOps practice, enabling a sustainable model to streamline and accelerate production releases. A comprehensive understanding of the workflow is fundamental to building an effective CI/CD pipeline, along with evaluating the enterprise requirement to help choose the right framework.

Below are some key considerations for making your pipeline effective:

  • All-in-one CI/CD tool vs. case-specific solutions: Similar to the infrastructure setup, it is crucial to diligently assess the available CI/CD tools based on use cases, technical requirements and organizational goals.
  • On-premises vs. managed vs. hybrid CI/CD: Each CI/CD pipeline type has its own effectiveness, depending on your requirements and infrastructure. Factors that determine the type of CI/CD pipeline to choose include ease of use, ease of setup, infrastructure and operating system support.
  • Code testing and validation: An effective validation and automated testing framework is one of the core components of a CI/CD pipeline. This ensures a stable build with zero code-quality issues while highlighting potential failure scenarios.
  • Rollbacks: These help organizations redeploy the previous stable release of an application. Implementing a diligently planned rollback mechanism in CI/CD is vital to safeguarding the application in case of failure or security incidents (see the example rollout commands just after this list).
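
As one concrete rollback mechanism, Kubernetes itself keeps a revision history for each Deployment (complementing git-level reverts, covered later). A minimal sketch, assuming a Deployment named my-deployment:

kubectl rollout history deployment/my-deployment   # list recorded revisions

kubectl rollout undo deployment/my-deployment --to-revision=2   # redeploy a previous revision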

Defining a Kubernetes-Based CI/CD Pipeline

While defining a Kubernetes-based CI/CD pipeline, you can go with one of the two major paradigms below.

Push-Based Pipeline

In a push-based pipeline, an external system such as the CI pipeline generates build triggers to deploy changes to the Kubernetes cluster following a commit to the version control system repository. Note that in this model, Kubernetes cluster credentials are exposed outside the domain of the cluster.
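
A minimal sketch of a push-based deploy job in GitLab CI, assuming the runner receives cluster credentials through a KUBECONFIG CI/CD variable (this is exactly the credential exposure mentioned above):

deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest   # any image providing kubectl
    entrypoint: [""]               # override the kubectl entrypoint so the job's shell runs
  script:
    - kubectl apply -f k8s/        # push the manifests in the repo to the cluster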

Pull-Based Pipeline

In a pull-based pipeline, Kubernetes operators deploy the changes from inside the cluster whenever new images are pushed to the registry.
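
Argo CD (covered under CD Tools below) is a common way to implement the pull-based model. A minimal sketch of an Application resource; the repository URL and path are placeholders:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/my-group/my-app.git   # placeholder repo
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc   # deploy into the local cluster
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from git
      selfHeal: true   # revert manual drift in the cluster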

Some Best Practices

Here are some recommendations and useful best practices for building an effective Kubernetes CI/CD pipeline.

Avoid Hardcoding Secrets and Configurations in Containers

You should store configurations in ConfigMaps and not hardcode them in the containers. This provides the flexibility of deploying the same container in different environments without making environment-specific changes to it.

It’s also recommended to keep secrets out of containers and encrypt and store them in Kubernetes Secrets. This prevents credentials from getting exposed through a version control system in a CI/CD pipeline.
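
A minimal sketch of both with kubectl (the names and values are illustrative):

kubectl create configmap app-config --from-literal=LOG_LEVEL=info

kubectl create secret generic app-creds --from-literal=DB_PASSWORD='s3cr3t'

The container then references them in its pod spec instead of baking values into the image:

    spec:
      containers:
      - name: my-container
        image: my-image:latest
        envFrom:
        - configMapRef:
            name: app-config
        - secretRef:
            name: app-creds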

Use Helm for Deployments

Use the Helm package manager for Kubernetes application deployments to keep track of releases or logical groupings.
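
A minimal Helm workflow sketch; the chart path and release name are placeholders:

helm install my-app ./my-chart --namespace my-app --create-namespace

helm upgrade my-app ./my-chart   # roll out a new chart or values revision

helm history my-app    # list tracked revisions of the release

helm rollback my-app 1   # roll back to revision 1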

Enable Git-Based Workflows

To allow for all infrastructure configurations to be stored within git, CI/CD pipelines should follow a GitOps methodology. It makes infrastructure code more accessible to developers, letting them review the changes before they’re deployed.

Git also provides a unified source repository and snapshots of the cluster. These are easy for developers to refer to as needed and recover the application to the last stable state in the case of failure.
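
For example, recovering the last stable state can be as simple as reverting the offending commit and letting the pipeline (or the GitOps operator) redeploy it:

git revert HEAD   # undo the most recent change with a new commit

git push origin main   # the pipeline picks this up and redeploys the known-good state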

Use Canary/Blue-Green Deployment Patterns

Running a blue-green pattern of instances in parallel with the running production instances lets you test changes and switch traffic over when testing is complete, eliminating the need for downtime during deployment.
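
A minimal blue-green sketch on plain Kubernetes: run “blue” and “green” Deployments side by side and point the Service at one of them via labels (the labels here are illustrative). Switching traffic is then a one-line selector change:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue   # flip to "green" once testing of the new build passes
  ports:
  - port: 80
    targetPort: 80

To cut over:

kubectl patch service my-app -p '{"spec":{"selector":{"app":"my-app","version":"green"}}}'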

Cache and Reuse Container Images

Use caching and reuse features of Docker container images to minimize container build times and reduce the risk of introducing defects into the newly built container image.
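
Layer ordering is what makes image caching pay off: put the slow, rarely changing steps first so rebuilds only re-run the layers that actually changed. A sketch for a Node.js app (adjust to your stack):

FROM node:lts
WORKDIR /app
COPY package*.json ./   # dependency manifests change rarely...
RUN npm ci              # ...so this expensive layer usually comes from cache
COPY . .                # application code changes often; the cache busts from here down
RUN npm run build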

Tools for Kubernetes CI/CD Pipelines

All-in-One CI/CD Tools

GitHub Actions is a CI/CD platform built into GitHub that supports automated build, test and deployment pipelines. It is the preferred CI/CD platform when the source code repository is GitHub.

GitLab CI/CD facilitates the continuous build, test and deployment of software applications without the need for third-party integration. Check out our article on implementing a GitLab pipeline for your project.

Jenkins (including Jenkins X) is an open source automation server that promotes CI and CD in varying levels of cluster complexity, enabling developers to automate application build, test and deployment processes seamlessly across hybrid/multicloud setups. Jenkins X is a newer, related project that facilitates automated CI/CD for cloud native containerized applications and orchestration tools like Kubernetes or Docker.

Rancher Fleet is fundamentally a set of Kubernetes custom resource definitions (CRDs) and controllers that manage GitOps for a single Kubernetes cluster or a large scale deployment of Kubernetes clusters. It is a distributed initialization system that makes it easy to customize applications and manage HA clusters from a single point.

CI Tools

CircleCI is a cloud-based CI tool that uses an API to facilitate automatic Kubernetes deployments. It is intensely focused on testing each new commit before deploying, via various methods like unit testing, integration testing, etc. Because of its features for implementing complex pipelines with configurations like caching and resource classes, it is one of the most popular lightweight integration tools for a Kubernetes ecosystem.

Drone CI is an open source CI tool built entirely on Docker that uses a container-first approach. The plugins, components and pipeline stages of Drone are deployed and executed as Docker containers. The platform offers a wide range of flexibility for using different tools and environments for the build, but you have to integrate it with a git repository.

CD Tools

Spinnaker is an open source continuous delivery tool that integrates with multiple cloud providers. Since the platform does not rely on a GitOps model, config files can be stored in the cloud provider’s storage.

Argo CD is a declarative GitOps continuous delivery tool that is lightweight, easy to configure and purpose-built for Kubernetes. The platform considers git the source of truth, which enhances security, making access control and permission management easier to administer.

Automation and Infrastructure Configuration Tools

Terraform by Hashicorp is an open source Infrastructure as Code tool that facilitates DevOps teams’ ability to provision and manage infrastructure programmatically via configuration files.

Red Hat Ansible is an open source automation platform that enables automation for provisioning, configuration management and infrastructure deployment across cloud, virtual and on-premises environments.

Salt by SaltStack contains a robust and flexible configuration management framework built on a remote execution core. This framework executes on the minions, allowing effortless, simultaneous configuration of tens of thousands of hosts by rendering language-specific state files. Unlike Ansible, Salt is not agentless by default: it uses a master that pushes to its minion agents, though an agentless mode over secure shell (SSH) is also available. For a security architect, Salt is a gem!

Collaboration and Issue Management Tools

Jira is implemented by teams for software collaboration, defect tracking and work management. The tool offers customizable features like an intuitive dashboard, optimized workflows, efficient search, filtering and defect management. Jira is purpose-built to support various use cases of project management, such as capturing requirements, test case management and tracking tasks in real time.

Zendesk is a cloud-based customer support platform that enables an organization to engage with its client through different collaboration channels, including phone, email, chat and social media. Zendesk provides one easy-to-use platform for cross-functional collaboration and customer communications, thereby helping organizations to better manage customer queries and respond quickly.

Security

Open Policy Agent (OPA) is an open source policy engine that supports a high-level declarative language that lets developers specify Policy as Code. The platform is built to impose granular-level policies on different components, including CI/CD pipelines, microservices, Kubernetes clusters, etc.

Kubewarden is an open source policy engine simplifying the adoption of policy-as-code. It does not require any domain-specific knowledge or new language constructs; it can take existing policies, compile them into WebAssembly, and deploy them into existing pipelines using existing processes.

Kube-bench is an open source tool used to run the CIS Kubernetes Benchmark test on Kubernetes clusters. This ensures that the Kubernetes cluster is secure and deployed according to the security recommendations in the benchmark document.

SUSE NeuVector is a fully open source, end-to-end cloud native security platform for implementing zero-trust security in containerized environments. With full support for OpenShift, Kubernetes, and simple containerized workloads, SUSE NeuVector allows for complete visibility into your cloud native network and will prevent any communication not explicitly required for an application or workload to function.

Monitoring Tools

Foresight is an observability product for CI pipelines and tests that enables secure, real-time monitoring of CI/CD pipelines. In addition to tracking metrics, traces and logs, the platform offers live debugging capabilities to facilitate quicker resolution of failures.

Prometheus and Grafana are open source monitoring tools. Prometheus implements a high-dimensional data model, stores metrics along with timestamps in a time-series database, ships with a flexible query language, and is one of the most popular alerting systems for complex Kubernetes clusters. Based on the metrics Prometheus generates, Grafana offers built-in visualization support for efficient querying and analysis.

Summary

Delivering high-quality software at speed is not easy to sustain and scale. If you develop modern applications today, CI/CD sits at the heart of your software development process because it offers agility, reduces the risk of production regressions and ensures quality. Building an effective CI/CD pipeline is often considered critical for rapid workflow execution. Doing so requires diligent technical analysis, a generous amount of planning and choosing the right set of tools.


7 SRE tools to know today

As an SRE or platform engineer, you’re likely constantly looking for ways to streamline your workflow and make your day-to-day tasks more efficient. One of the best ways to do this is by utilizing popular SRE or DevOps tools. In this post, we’ll take a look at 7 of the most popular tools that are widely used in the industry today and explain their value in terms of how they can help make you more efficient in your day-to-day tasks.

  1. Prometheus: Prometheus is a popular open-source monitoring and alerting system that is widely used for monitoring distributed systems. It allows you to collect metrics from your services and set up alerts based on those metrics. Prometheus is known for its simple data model, easy-to-use query language, and powerful alerting capabilities. With Prometheus, you can quickly and easily identify issues within your systems and be alerted to them before they become a problem (see the example alert rule below).
  2. Grafana: Grafana is a popular open-source visualization tool that can be used to create interactive dashboards and charts based on the metrics collected by Prometheus. It allows you to easily view the health of your systems, identify trends, and spot outliers. With Grafana, you can quickly and easily identify patterns and trends within your data, which can help you optimize your systems and improve their performance.
  3. Kubernetes: Kubernetes is an open-source container orchestration system that allows you to automate the deployment, scaling, and management of containerized applications. It helps you to define, deploy, and manage your application at scale, and to ensure high availability and fault tolerance. With Kubernetes, you can automate many routine tasks associated with deploying and managing your applications, which frees up more time for you to focus on other important tasks.
  4. Ansible: Ansible is an open-source automation tool that can be used to automate the provisioning, configuration, and deployment of your infrastructure. Ansible is known for its simple, human-readable syntax and its ability to easily manage and automate complex tasks. With Ansible, you can automate the provisioning and configuration of your infrastructure, which can help you save time and reduce the risk of errors.
  5. Terraform: Terraform is a popular open-source tool for provisioning and managing infrastructure as code. It allows you to define your infrastructure as code and to use a simple, declarative language to provision and manage resources across multiple providers. With Terraform, you can automate the process of provisioning and managing your infrastructure, which can help you save time and reduce the risk of errors.
  6. Jenkins: Jenkins is an open-source automation server that can be used to automate the building, testing, and deployment of your software. It provides a powerful plugin system that allows you to easily integrate with other tools, such as Git, Ansible, and Kubernetes. With Jenkins, you can automate many routine tasks associated with building, testing, and deploying your software, which frees up more time for you to focus on other important tasks.
  7. GitLab: GitLab is a web-based Git repository manager that provides source code management (SCM), continuous integration, and more. It’s a full-featured platform that covers the entire software development life cycle and allows you to manage your code, collaborate with your team, and automate your pipeline. With GitLab, you can streamline your entire software development process, from code management to deployment, which can help you save time and reduce the risk of errors.

These are just a few examples of the many popular SRE and DevOps tools that are widely used in the industry today.
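
To make the Prometheus item above concrete, here is a minimal alerting rule. The metric name and threshold are illustrative; adapt them to the metrics your services actually expose:

groups:
- name: example-alerts
  rules:
  - alert: HighErrorRate
    expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.05   # sustained 5xx error rate
    for: 10m
    labels:
      severity: page
    annotations:
      summary: "5xx error rate above 5% for 10 minutes"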

DevOps Toolkit for Automation

In the DevOps methodology, automation is likely the most important concept. Use “automate everything” as a daily mantra.

Image by Michal Jarmoluk from Pixabay

As an “operator” working in a DevOps role, good tools are a necessity. Tools that allow for automating almost everything are crucial to keeping up with the vast amount of changes and updates created in an Agile development environment.

Using the same tools your counterparts on the team use will expedite the learning process. In a lot of cases developers use an IDE (Integrated Development Environment) of some sort. Visual Studio Code comes to the forefront, but some ‘hardcore’ or ‘old school’ developers still use Emacs or even Vim as their development tool of choice. There are many out there, and each has its pros and cons. Along with an IDE there will be the need for extensions to make things simpler. Let’s outline a few and focus on Visual Studio Code as the tool of choice.

Visual Studio Code is available for most of the commonly used platforms. It has a ton of extensions, but as a “DevOps Engineer” you’ll need a few to make your life easier. First and foremost, you’ll want extensions to make working with your favorite cloud provider easier. There are extensions for AWS, GKE, and AKS, as well as for YAML, Kubernetes, and GitHub.

Another extension necessary for container development is the Remote Development Extension Pack. This pack provides the Dev Containers extension, allowing for the opening of files and folders inside a container. It also provides an SSH extension to simplify access to remote machines. The Dev Containers extension will want to use Docker Desktop, but a better alternative is Rancher Desktop.

Rancher Desktop is another superb tool for several reasons.

  • 100% open source
  • Includes K3s as the Kubernetes distribution
  • Can use with dockerd (moby) or containerd
  • Basic dashboard
  • Easy to use

To get started with it, download Rancher Desktop and install on your favorite platform. Follow the installation instructions and once installed go to the preferences page and select “dockerd (moby)” as shown below.

Rancher Desktop Kubernetes Settings

Now that you have Rancher Desktop installed, as well as Visual Studio Code with all of the extensions, take some time to get familiar with them. It’s best to start with your GitHub account: create or fork a repository to work with inside Visual Studio Code. Reading through the various getting-started docs yields hours of things to try and learn.

To get started with your Rancher Desktop cluster, simply click on the Rancher Desktop icon. In most windowed environments there’s an icon in the “task bar”.

Click on the Dashboard link to get access to view the K3s cluster installed when Rancher Desktop started.

Another way to access the cluster is to use kubectl. A number of utilities were installed to ~/.rd/bin. Use kubectl get nodes to view the node(s) in your cluster or use kubectl get pods -A to view all of the pods in the cluster.
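
For example, assuming ~/.rd/bin is on your PATH:

export PATH="$HOME/.rd/bin:$PATH"   # make the Rancher Desktop utilities available

kubectl get nodes     # view the node(s) in the cluster
kubectl get pods -A   # view all pods across namespaces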

Many utilities exist to view/manage Kubernetes clusters. Great learning experiences come from experimentation.

A lot was accomplished in this post. From a bit of reading to manipulating a Kubernetes cluster, there is a lot of information to absorb. Visual Studio Code will be the foundation for a lot of the work done in the DevOps world, and containers and Kubernetes will be the foundation for executing that work. This post provided the building blocks to combine the Dev and the Ops with what’s needed to automate the process.

Next up…building a simple CI/CD pipeline.

Getting started in DevOps

Getting started in DevOps doesn’t have to be hard.

Image by Dirk Wouters from Pixabay

How do we get started? Let’s start with some assumptions.

  1. You understand how to install and manage a Kubernetes cluster.
  2. You understand how to ‘git’ around. (heh…like the pun?)
  3. You know how CI/CD pipelines work.
  4. You understand some development. Or at least you know how to get around tools like VSCode.

There’s plenty of knowledge to be found here so let’s get started.

In most cases, companies needing people who understand DevOps best practices are either starting on or already executing a Digital Transformation journey. These journeys are just that, a journey: grab a seat, buckle up, and enjoy the ride. This particular ride involves a lot of buzzword bingo; there will be plenty of opportunity to play that game later.

Getting started on the journey

The first part of every journey is preparing for it. It helps to learn a bit more about the destination before embarking, so watch for the buzzwords. The first thing to note is that a lot of enterprises carry a lot of technical debt, so suffice it to say there will be far more work than there are developers to do it. From ancient Microsoft .NET work to crufty Java, there’s plenty of history in those binaries. One of the goals may be to modernize these applications. The fabulous book “The Phoenix Project” describes how “Bill” takes an over-budget and behind-schedule modernization project to deployment utilizing effective collaboration and communication, crowdsourcing, and the “Three Ways”.

Hopefully “The Phoenix Project” helped to frame what is in store when embarking on the adventure into DevOps. The next step is to put into practice some of the constructs outlined. One of the key tenets of the book is to ensure the “pipeline” has no obstructions, as one single slowdown will slow the entire line of work. Slowdowns create bottlenecks which, in turn, ripple through the entire process. These “pipelines” in a cloud native development world are part of the CI/CD process: continuous integration/continuous delivery.

Other takeaways

Gene Kim outlined a few other takeaways in “The Phoenix Project” worth noting. The first one came from the need to work in smaller groups. Jeff Bezos is credited with creating the “Two Pizza Team”, where team size is limited to what two pizzas can feed. This is where a lot of the innovation within Amazon came from: small, competitive teams who communicated very well. This small-team concept leads to another concept: “microservices”.

Instead of monoliths where everything runs together, microservices break an application into functional units. Microservices are focused on putting services into the smallest possible unit of work. With smaller units of work come smaller changes, which can be committed and tested faster, and in most cases tested locally. Microservices are a key concept on this digital transformation journey. They also create the need for cross-functional teams where communication and collaboration are key. This is where the concept of DevOps comes into play; employing DevOps methodologies is crucial to the success of a transformative project.

Enterprises around the world have endured a massive sea of change in the years since the Covid-19 pandemic started. Even as companies were beginning to embrace the concepts of digital transformation, Covid-19 forced an acceleration of this transformation if the enterprise wanted to survive. Embracing remote work was key to survival.

This post was simply an introduction. With the key concepts outlined, subsequent posts will focus more on a technical guide to the technology underneath a DevOps methodology. Many tools exist to help with the many facets of DevOps, including creating a culture within an organization capable of embracing the change needed to adopt ongoing transformation and adapt to even the slightest shift in your organization’s market.

Next up…the introduction of a Podman setup to start down the path of using, managing, and orchestrating containers.

Simplify using Podman instead of Docker

At this point, most everyone using containers knows of Docker, but is Docker right for your workload? Maybe not. If you plan on using Kubernetes to run your “cloud native” workloads, it may be worthwhile to use a tool designed with Kubernetes workloads in mind from the start. Another reason is that Podman is daemonless, whereas Docker wants to control everything Docker. Podman does not need root access to run containers. One final reason: Docker is a “one-stop shop” while Podman is modular. With Podman you install and use what you need, like Buildah to build images. Podman ends up being lighter weight, leaving the heavy lifting to other tools while maintaining OCI compliance and being more secure overall.

Ok, so you are now convinced to move into the growing mainstream using Podman. Since you’ve been running Docker for a while, you realized that you can add yourself to the docker group and all is good. It’s not so easy with Podman, which is a REALLY good thing and makes Podman more secure. Let’s talk about the why, and we’ll get to the how momentarily.

Podman works a little differently than Docker (shocker). Podman uses subordinate user and group IDs, which are assigned to the user at runtime. That means Podman ends up using more UIDs and SUBUIDs than Docker (Docker uses the existing system for its UIDs). So we need to “pre-assign” a block of IDs for Podman to use, and we probably need to increase the defaults to support those additional UIDs and SUBUIDs.

Installing Podman is quite simple. Podman is available for most OSes and architectures. For SUSE Linux, simply ‘zypper in podman’ will install it. You will also want to add slirp4netns using ‘zypper in slirp4netns’ (you may need to add the container module using ‘SUSEConnect -p sle-module-containers/15.4/x86_64’, replacing 15.4 with your SLE version and x86_64 with your architecture).

With Podman installed, we now need to grant the user who will run Podman a block of SUBUIDs and SUBGIDs, preferably outside the range normally used. Let’s use 200000-265536. Run the command:

sudo usermod --add-subuids 200000-265536 --add-subgids 200000-265536 $USER

Where $USER is “your user” or the user you want to run Podman commands (remember we’re avoiding sudo or root here).
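
To confirm the allocation took effect (these files are standard on most distributions):

grep "$USER" /etc/subuid /etc/subgid   # show the subordinate ID ranges assigned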

Now you may need to increase the number of user namespaces, since the default may not be enough. Check the number available.

Use sysctl --all --pattern user_namespaces and if it is the default of 1000 you will want to increase that number.

Use sudo nano /etc/sysctl.d/userns.conf

Add user.max_user_namespaces=28633 to bump the available namespaces

Use sudo sysctl -p /etc/sysctl.d/userns.conf to load the new setting

And use sysctl --all --pattern user_namespaces to verify what you added.

Now it will be necessary to configure user networking. To do this we need to enable slirp4netns (it was installed earlier). To enable all of the default settings, reboot your node.
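
After the reboot, a quick smoke test confirms the rootless setup (the alpine image here is just an example):

podman run --rm docker.io/library/alpine:latest echo "rootless podman works"

podman unshare cat /proc/self/uid_map   # shows the subordinate UID mapping in effect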

That’s it! A little more involved than adding your user to the docker group to avoid sudo with Docker, but you are now using a more modern and secure tool for managing your containers!

New to DevOps? Start here…

New to the DevOps scene? Want to get started with a career supporting application operations, managing Kubernetes, or running Docker, or are you just browsing around? This site is designed to provide interesting anecdotes, entertaining articles, and how-tos for getting started with a career in the field of “devops”.

Ah yes…what is this “devops”? Everyone has an opinion for sure. Some call it a “paradigm”, though that word has negative connotations. Some actually feel it is an “engineering position”. Ok. Acceptable. Others just call it what it is: an operator who supports the development efforts of an enterprise. Dev-Ops. No matter, the idea here is to provide info for every opinion.

Want to know more about a topic, whether it is Kubernetes, application development, DevOps, or another enterprise datacenter topic? Speak up. Comments are welcome.

Next up…Getting Started in DevOps