Let's Talk DevOps

Real-World DevOps, Real Solutions

  • Running OpenAI in Kubernetes

Using the OpenAI API with Python is a powerful way to incorporate state-of-the-art natural language processing capabilities into your applications. This blog post provides a step-by-step walkthrough for creating an OpenAI account, obtaining an API key, and writing a program that performs queries using the OpenAI API. Additionally, it demonstrates how to create a Podman image to run the code on Kubernetes.

    Creating an OpenAI Account and API Key

    Before building the code, create an OpenAI account and obtain an API key. Follow these steps:

    1. Go to the OpenAI website.
    2. Click on the “Sign up for free” button in the top right corner of the page.
    3. Fill out the registration form and click “Create Account”.
    4. Once an account has been created, go to the OpenAI API page.
    5. Click on the “Get API Key” button.
6. Follow the prompts to obtain an API key.

    Installing Required Packages

To use the OpenAI API with Python, install the OpenAI package. Note that the examples below use the pre-1.0 openai SDK (the openai.Completion interface), so pin the version accordingly. Open a command prompt or terminal and run the following command:

pip install "openai<1.0"

    Using the OpenAI API with Python

With the OpenAI account and API key in hand, and the required package installed, write a simple Python program. In this example, the program generates a list of 10 potential article titles based on a given prompt.

    First, let’s import the OpenAI package and set our API key:

    import openai
    openai.api_key = "YOUR_API_KEY_HERE"

    Next, define the prompt:

    prompt = "10 potential article titles based on a given prompt"

    Now use the OpenAI API to generate the list of article titles:

    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt=prompt,
        max_tokens=50,
        n=10,
        stop=None,
        temperature=0.5,
    )
    titles = [choice.text for choice in response.choices]
    

    Let’s break this down:

• engine="text-davinci-002" specifies which OpenAI model to use. This example uses a “Davinci” model, the most capable general-purpose completion model family available at the time of writing.
    • prompt=prompt sets the prompt to our defined variable.
• max_tokens=50 limits each generated title to 50 tokens. Note that tokens are word fragments rather than whole words.
    • n=10 specifies that we want to generate 10 potential article titles.
    • stop=None specifies that we don’t want to include any stop sequences that would cause the generated text to end prematurely.
    • temperature=0.5 controls the randomness of the generated text. A lower temperature will result in more conservative and predictable output, while a higher temperature will result in more diverse and surprising output.

The response variable contains the API response, which includes a list of choices; each choice represents a generated title. The list comprehension extracts the text of each generated title from the choices list and stores the results in a separate titles list.

    Finally, print out the generated titles:

    for i, title in enumerate(titles):
        print(f"{i+1}. {title}")

    This will output something like:

    1. 10 Potential Article Titles Based on a Given Prompt
    2. The Top 10 Articles You Should Read Based on This Prompt
    3. How to Come Up with 10 Potential Article Titles in Minutes
    4. The Ultimate List of 10 Article Titles Based on Any Prompt
    5. 10 Articles That Will Change Your Perspective on This Topic
    6. How to Use This Prompt to Write 10 Articles Your Audience Will Love
    7. 10 Headlines That Will Instantly Hook Your Readers
    8. The 10 Most Compelling Article Titles You Can Write Based on This Prompt
    9. 10 Article Titles That Will Make You Stand Out from the Crowd
    10. The 10 Best Article Titles You Can Write Based on This Prompt

    And that’s it! You’ve successfully used the OpenAI API to generate a list of potential article titles based on a given prompt.
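
For reference, here is the whole program as a single file. This is a minimal sketch that assumes the pre-1.0 openai SDK used above and reads the key from an OPENAI_API_KEY environment variable rather than hardcoding it. Save it as your_program.py so it matches the container image built below:

import os
import openai

# Read the API key from the environment instead of hardcoding it
# (assumes OPENAI_API_KEY was exported before running the program).
openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = "10 potential article titles based on a given prompt"

# Ask for 10 completions of up to 50 tokens each, as described above.
response = openai.Completion.create(
    engine="text-davinci-002",
    prompt=prompt,
    max_tokens=50,
    n=10,
    stop=None,
    temperature=0.5,
)

titles = [choice.text for choice in response.choices]

for i, title in enumerate(titles):
    print(f"{i+1}. {title}")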

    Creating a Podman Image to Run on Kubernetes

To run the program on Kubernetes, create a container image with Podman containing the necessary dependencies and the Python program. Here are the steps to create the image:

    1. Create a new file called Dockerfile in a working directory.
    2. Add the following code to the Dockerfile:
    FROM python:3.8-slim-buster
RUN pip install "openai<1.0"
    WORKDIR /app
    COPY your_program.py .
    CMD ["python", "your_program.py"]

This file tells Podman (or Docker) to use the official Python 3.8 slim image as the base, install the openai package, set the working directory to /app, copy your Python program into the container, and run the program when the container starts.

    To build the image:

    1. Open a terminal or command prompt and navigate to a working directory.
    2. Build the image by running the following command:
    podman build -t your_image_name . 

Replace “your_image_name” with the name for the image.

    To run the image:

podman run your_image_name

    This will start a new container using the image created and subsequently run the program created above.
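
One note: the program needs the API key at runtime. Assuming the environment-variable sketch above, pass the key into the container with the -e flag:

podman run -e OPENAI_API_KEY=YOUR_API_KEY_HERE your_image_name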

Verify the image runs and produces the desired output, then run it on a Kubernetes cluster as a simple pod. There are two ways to accomplish this in Kubernetes: declaratively or imperatively.

    Imperative

    The imperative way is quite simple:

    kubectl run my-pod --image=my-image

This command will create a pod with the name “my-pod” and the image “my-image”.
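
To confirm the pod did its job, check its logs for the generated titles:

kubectl logs my-pod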

    Declarative

    The declarative way of creating a Kubernetes pod involves creating a YAML file that describes the desired state of the pod and using the kubectl apply command to apply the configuration to the cluster.

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-pod
    spec:
      containers:
        - name: my-container
          image: my-image
    

Save it as “my-pod.yaml”.

Outside of automation, running this on the Kubernetes cluster from the command line can be accomplished with:

    kubectl apply -f my-pod.yaml

This command will create a pod with the name “my-pod” and the image “my-image”. The -f option specifies the path to the YAML file containing the pod configuration.
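
One more consideration: if the image reads the API key from an environment variable as sketched earlier, the pod needs that variable too. One way to supply it is a Kubernetes Secret referenced from the pod spec (the names here are illustrative):

kubectl create secret generic openai-api-key --from-literal=OPENAI_API_KEY=YOUR_API_KEY_HERE

Then reference it in my-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: my-image
      env:
        # Inject the API key from the Secret created above
        - name: OPENAI_API_KEY
          valueFrom:
            secretKeyRef:
              name: openai-api-key
              key: OPENAI_API_KEY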

Obviously this is quite simple, and there are plenty of ways to run this code in Kubernetes as a Deployment, a ReplicaSet, or some other method.

Congratulations! Using the OpenAI API with Python and creating a Podman image to run the program on Kubernetes is quite straightforward. With these tools available, incorporating the power of natural language processing into your applications is both approachable and very powerful.

  • Securing cloud native containers

Security, in and of itself, is a broad topic, and container security adds yet another facet to an already nebulous subject. In a lot of enterprises today security is first and foremost, and the process for securing applications continues to shift left, meaning security is integrated earlier into the development process. This post will focus on some of the high-level tasks and automations developers and operators can implement to mitigate risk.

    The issues.

    Misconfiguration

The #1 security risk in any cloud native environment is misconfiguration. How do operators know whether what they are deploying is secured properly? In many cases, deployments are left insecure for long periods of time without anyone noticing. This is a massive problem, especially for newer technologies such as Kubernetes.

    Software Defects

Another security risk is software defects. Every day new vulnerabilities are found in software. Some of these vulnerabilities are minor, but increasingly the discoveries constitute potentially critical issues when deploying software to a public-facing system. Vulnerabilities are “what is known”: each known vulnerability has a signature which can be used to scan software.

However, “you don’t know what you don’t know”. Keep in mind that many defects exist which are not yet known. These are the zero-day vulnerabilities.

    Defense-in-depth

    Scanning

Scanning software for known vulnerabilities is an absolute must-have in any defense-in-depth strategy. However, even the best scanning tools miss unknown vulnerabilities (and have known limitations). The best defense is a good offense, so creating a system where your container images go through multiple scanners is always a good strategy.

It is important to scan at many different points in the development process, and to keep scanning continually in production: any change could potentially introduce a breach. It is equally important to have layers that back each other up when one layer proves permeable. Strong security requires layers, and your goal as a security architect is to make those layers as close to impervious as possible. Read on for other layers.
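
As a concrete sketch, an open-source scanner such as Trivy can gate a pipeline so an image carrying serious known vulnerabilities never ships (the image name is a placeholder; adapt the command to whichever scanners you layer together):

# Fail the build if the image has any high or critical findings
trivy image --severity HIGH,CRITICAL --exit-code 1 your_image_name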

    Network visualization

“It starts and ends with the network”. Kubernetes, the orchestrator of choice for cloud native deployments, attempts to keep things simple, which has led to a number of CNI (Container Network Interface) plugins giving platform engineering many choices when deploying workloads. Having something to visualize the network is important, especially when you can act on those connections. NeuVector provides these capabilities. Being able to quarantine a pod or take a packet capture is key to ensuring continuous protection against unknown attacks and to supporting any required forensics.
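
Acting on connections does not have to wait for a commercial tool. As a minimal sketch of the same idea using only the Kubernetes API, a NetworkPolicy with no ingress or egress rules quarantines any pod carrying a chosen label (this assumes the cluster’s CNI enforces NetworkPolicy):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: quarantine
spec:
  # Selects pods labeled quarantine=true and denies all traffic to and from them
  podSelector:
    matchLabels:
      quarantine: "true"
  policyTypes:
    - Ingress
    - Egress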

    Data protection

Many different data regulations apply to enterprises. Being able to provide audit reports for specific regulations such as HIPAA or PCI DSS is massively important, and reporting for SOC 2 compliance may be important as well. If your tool cannot “see” into the packet before it traverses the kernel, then it cannot prevent data from crossing a “domain” or prevent sensitive information from being leaked.

    WAF

A lot of cloud native security tools can block layer 3 packets. Very few have true layer 7 capabilities. Being able to manage traffic at layer 7 is critical for anyone running applications on Kubernetes. Layer 7 is where a lot of unknown vulnerabilities are stopped, but only if the tool can look into the application packet before it traverses the kernel. This is the important point: once the packet crosses the kernel, you are compromised. Use a tool which learns your workload’s behavior; that behavior is the workload’s signature and should be the ONLY traffic allowed to traverse the network.

    Wrapping it up

Security is the highest-scoring word in buzzword bingo these days. Everyone wants to ensure their environments are secure, and it takes specialized tools for specific platforms. Don’t use the perimeter firewall as a Kubernetes firewall…it simply will not suffice for complete security inside a Kubernetes cluster. Use a tool which can watch every packet, and the data inside every packet, ensuring that only packets matching your workload’s signature, and nothing else, traverse the network. Use one that allows for visualization of the network along with the traditional scanning, admission control, and runtime security capabilities every cloud native implementation requires.

  • Knowledge spew on GitOps

In working with a handful of customers, the concept of GitOps continues to resonate more and more. Let us dive into a brain dump of some of the conversations related to GitOps and how these customers tackled the task at hand.

First thing to remember: these customers are not massive. They are rather common, actually. A Gartner-defined “medium-sized” enterprise. Keep in mind these customers have the same issues as the giant enterprises, just at a different scale.

    In every case there was a user story. At a high level, a common theme was the need to roll out updates to a specific application regularly enough to find ways to entice the consumer to purchase a widget of some sort. Ok, A/B testing. Simple enough.

Each of the customers was at a different maturity level when it came to development processes, Kubernetes knowledge, and DevOps methods. However, they all had one thing in common…the need to deliver an application to their customer base on a deadline and continuously improve the application based on user feedback. All three of them were successful in meeting their self-imposed deadlines. How?

Simple. Every one of them came together, ironed out a plan, and implemented the plan. The interesting part: every one of them already knew how to get the product to market. All they needed was a bit of guidance on how to overcome obstacles and get shit done. How?

    • Step one. Define the top of the mountain, the finish line, the end result.
    • Step two. The project leads built out a high level timeline from end to beginning.
    • Step three. All of the team members came together to build out the task teams.
    • Step four. Each of the teams built out their respective timeline for contribution.
    • Step five. Build.

Now how does this relate to GitOps? GitOps was the pivotal methodology to get it done. The pipeline was built with all of the parts in mind. If you recall the DevOps “infinity loop”, the key is to combine it with the OODA loop decision model. The combination creates a very powerful decision-making framework facilitating agile development with constant improvement. Sound simple? It is in theory, but the implementation is like a relationship: everything is great when dating, but the hard work begins when dating turns to marriage. The same goes for creating a product. Designing the product, what it needs to do, and all of its moving parts is fun. The real work comes in when the first working build is complete.

    This is where GitOps shines. The developers build things, test locally, and commit. The pipelines move it through the process and all of the other teams contribute to each part in this machine. If one part breaks down, the other work stops to crowdsource the problem. The problem is fixed and the machine continues on. GitOps is the magical fairy dust. What about the technology?

The technology is rather mundane, actually. Git. A code repository. A CI/CD pipeline. A build system. A test harness. A deployment platform. For Git, the tools of choice are GitHub or GitLab. GitHub is pretty slick, but GitLab allows for running locally in small environments building closed-source deliverables. Each has a pipeline mechanism, and there are many other tools such as Tekton, Argo, CircleCI, and many others, with various features depending on what is needed. Many build systems exist, again each with features as needed. However, the deployment platform consistently remains the same: Kubernetes.
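
To show just how mundane, here is a minimal sketch of an Argo CD Application that keeps a cluster in sync with a Git repository (the repository URL, path, and namespaces are placeholders):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/your-repo.git
    targetRevision: main
    path: deploy
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      # Keep the cluster converged on what Git says, deleting drift
      prune: true
      selfHeal: true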

Building deployable applications at scale is hard. There are many other moving parts, processes, tools, and so on in play. However, one thing stands out in all of these engagements: give the right people, the ones with the will to succeed, the skills needed to succeed, and the execution part will look easy.

It is always fun to be a part of something, but the most precious reward is being able to step away and watch the machine run on its own.

    That’s the end of this spew. It went everywhere…maybe it’s more like a sneeze.

    Peace out.

  • A chat with the GPT about DevOps.

    The 1st prompt to set the stage

    Me: Tell me why the Devops methodology works the best for cloud native application development. Explain as if you know nothing about development.

    chat.openai.com: DevOps is a method of working where development and operations teams collaborate closely to deliver software and systems more efficiently. This approach works well for cloud native application development because it allows for faster and more frequent updates to be made to the application, which is important when working in a cloud environment. Additionally, the focus on automation and testing in DevOps helps to ensure that the application is stable and reliable, which is critical when running in a cloud environment where resources can be scaled up or down as needed. This can help to minimize downtime and ensure that the application is always available to users. Overall, DevOps helps to make the development and delivery of cloud native applications more efficient and reliable.

    The 2nd prompt

    Me: Explain why AI is important to devops

chat.openai.com: AI (Artificial Intelligence) can be beneficial to DevOps in several ways. One of the main ways that AI can help is by automating certain tasks and processes. For example, AI can be used to automatically detect and diagnose problems in the system, which can help to reduce downtime and improve overall system reliability. Additionally, AI can be used to perform predictive maintenance, which can help to identify potential issues before they occur, allowing them to be addressed before they cause problems.

    AI can also be used to improve the performance of the system. For example, it can be used to optimize resource allocation, such as scaling up or down the number of servers as needed to meet changing demand, which can help to improve the efficiency and cost-effectiveness of the system.

    AI can also be used to improve the security of the system. For example, it can be used to detect and prevent security threats, such as intrusion attempts, by using machine learning algorithms to identify and respond to abnormal behavior.

    Overall, AI can help to improve the speed, efficiency, reliability, performance, and security of DevOps processes, which can help organizations to deliver applications and services more quickly and effectively.

    The 3rd prompt

    Me: and why is automation a key concept in devops

    chat.openai.com: Automation is a key concept in DevOps because it allows for the efficient and consistent execution of repetitive tasks and processes. By automating these tasks, organizations can reduce the risk of human error, increase the speed and accuracy of their work, and free up human resources to focus on more strategic and value-added activities.

    In the context of software development, automation can be used to automate tasks such as testing, building, and deploying code. For example, automated testing can help to ensure that the code is working as expected and is free of bugs, while automated building and deployment can help to speed up the process of getting code into production.

    In the context of infrastructure management, automation can be used to automate tasks such as provisioning, scaling, and monitoring of resources. For example, automation can be used to automatically spin up new servers as needed to handle increased traffic, and automatically scale back resources when they are no longer needed. This can help to optimize resource utilization and reduce costs.

    Overall, automation is important in DevOps because it helps to increase the speed, efficiency, and reliability of the development and delivery process, which can help organizations to deliver applications and services more quickly and effectively.

    Wrapping up

Interesting conversation. Instead of a big instruction set, I literally chatted with chat.openai.com. It did manage to describe DevOps fairly well, and even though the prose was not exactly advanced, chat.openai.com got the point across based on my prompts. The prose appears to have been shaped by how the questions were asked: simple questions, simple responses. Had the questions been more eloquent, it seems the responses would have followed suit. Need to test out that theory next. What fun. Stay tuned.

  • Declarative vs Imperative in Kubernetes

    To be declarative or to be imperative?

    Kubernetes is a powerful tool for orchestrating containerized applications across a cluster of nodes. It provides users with two methods for managing the desired state of their applications: the Declarative and Imperative approaches.

    The imperative approach

The Imperative approach requires users to manually issue commands to Kubernetes to manage the desired state of their applications. This approach gives users direct control over the state of their applications, but it also requires more manual effort and expertise, as well as a more in-depth understanding of Kubernetes. Additionally, the Imperative approach does not provide any version control or rollback capabilities, meaning that users must be more mindful of any changes they make and take extra care to ensure they are not introducing unintended consequences.

    A simple set of imperative commands to create a deployment

    To create a Kubernetes deployment using the Imperative approach, users must issue the following commands:

    Create a new deployment named my-deployment and use the image my-image:

    kubectl create deployment my-deployment --image=my-image

    Scale the deployment to 3 pods:

    kubectl scale deployment my-deployment --replicas=3
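
Confirm the deployment scaled as expected:

kubectl get deployment my-deployment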

    Declarative approach

In the Declarative approach, users express their desired state in the form of Kubernetes objects such as Pods and Services. These objects are then managed by Kubernetes, which ensures that the actual state of the system matches the desired state without requiring users to manually issue commands. This approach also provides version control and rollback capabilities, allowing users to easily revert to a previous state if necessary.

Below is an example Kubernetes Deployment YAML (my-deployment.yaml) which can be used to create the same Kubernetes deployment declaratively:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-deployment
      labels:
        app: my-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-container
            image: my-image:latest
            ports:
            - containerPort: 80
    

To create or update the deployment using this YAML, use the following command:

    kubectl apply -f my-deployment.yaml

    Infrastructure as Code

The primary difference between the Declarative and Imperative approaches in Kubernetes is that the Declarative approach is a more automated and efficient way of managing applications, while the Imperative approach gives users more direct control over their applications. Using a Declarative approach also gives rise to managing Infrastructure as Code, which is the secret sauce behind maintaining version control and rollback capabilities.
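
As a sketch of what that looks like in practice, assuming my-deployment.yaml is tracked in Git, rolling back a bad change is just a Git operation followed by a re-apply:

# Revert the commit that introduced the bad change
git revert HEAD

# Re-apply the previous desired state to the cluster
kubectl apply -f my-deployment.yaml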

In general, the Declarative approach is the preferred way to manage applications on Kubernetes, as it is more efficient and reliable: users define their desired state and Kubernetes manages the actual state. However, the Imperative approach can still be useful in situations where direct control over the application state is needed. Ultimately, it is up to the user to decide which approach best fits their needs.