Let's Talk DevOps

Real-World DevOps, Real Solutions

Tag: Cloud-Native

  • Running OpenAI in Kubernetes

    Using the OpenAI API with Python is a powerful way to incorporate state-of-the-art natural language processing capabilities into your applications. This blog post provides a step-by-step walkthrough for creating an OpenAI account, obtaining an API key, and writing a program that performs queries using the OpenAI API. Additionally, an example demonstrates how to create a Podman image to run the code on Kubernetes.

    Creating an OpenAI Account and API Key

    Before building the code, create an OpenAI account and obtain an API key. Follow these steps:

    1. Go to the OpenAI website.
    2. Click on the “Sign up for free” button in the top right corner of the page.
    3. Fill out the registration form and click “Create Account”.
    4. Once an account has been created, go to the OpenAI API page.
    5. Click on the “Get API Key” button.
    6. Follow the prompts to obtain an API key.

    Installing Required Packages

    To use the OpenAI API with Python, install the OpenAI package. Open a command prompt or terminal and run the following command:

    pip install openai

    Using the OpenAI API with Python

    With an OpenAI account and API key, and the required package installed, write a simple Python program. In this example, create a program that generates a list of 10 potential article titles based on a given prompt.

    First, let’s import the OpenAI package and set our API key:

    import openai
    openai.api_key = "YOUR_API_KEY_HERE"
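    Hard-coding a key risks leaking it through version control. As an alternative (an assumption about your setup, not a requirement of the API), the key can be read from an environment variable:

```python
import os

# Read the key from the OPENAI_API_KEY environment variable,
# falling back to a placeholder if it is not set.
api_key = os.environ.get("OPENAI_API_KEY", "YOUR_API_KEY_HERE")
```

    Then assign it with openai.api_key = api_key before making any requests.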

    Next, define the prompt:

    prompt = "10 potential article titles based on a given prompt"

    Now use the OpenAI API to generate the list of article titles:

    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt=prompt,
        max_tokens=50,
        n=10,
        stop=None,
        temperature=0.5,
    )
    titles = [choice.text.strip() for choice in response.choices]  # trim leading/trailing whitespace
    

    Let’s break this down:

    • engine="text-davinci-002" specifies which OpenAI model to use. This example uses the “Davinci” model, which is the most capable and general-purpose model currently available.
    • prompt=prompt sets the prompt to our defined variable.
    • max_tokens=50 limits each generated title to 50 tokens. Tokens are chunks of text (roughly word fragments), not whole words.
    • n=10 specifies that we want to generate 10 potential article titles.
    • stop=None specifies that we don’t want to include any stop sequences that would cause the generated text to end prematurely.
    • temperature=0.5 controls the randomness of the generated text. A lower temperature will result in more conservative and predictable output, while a higher temperature will result in more diverse and surprising output.

    The response variable contains the API response, which includes a list of choices; each choice represents a generated title. The list comprehension extracts the generated titles from the choices list and stores them in a separate titles list.
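    To make the extraction concrete, here is the same list comprehension run against a mocked response with the shape the API returns (the SimpleNamespace objects are stand-ins for illustration only; completions often begin with whitespace, so stripping it is a sensible extra step):

```python
from types import SimpleNamespace

# A stand-in mirroring the response shape: an object with a
# .choices list, where each choice carries the generated .text.
response = SimpleNamespace(choices=[
    SimpleNamespace(text="\n\nFirst Title"),
    SimpleNamespace(text="\n\nSecond Title"),
])

# Same comprehension as above, with whitespace trimmed.
titles = [choice.text.strip() for choice in response.choices]
print(titles)
```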

    Finally, print out the generated titles:

    for i, title in enumerate(titles):
        print(f"{i+1}. {title}")

    This will output something like:

    1. 10 Potential Article Titles Based on a Given Prompt
    2. The Top 10 Articles You Should Read Based on This Prompt
    3. How to Come Up with 10 Potential Article Titles in Minutes
    4. The Ultimate List of 10 Article Titles Based on Any Prompt
    5. 10 Articles That Will Change Your Perspective on This Topic
    6. How to Use This Prompt to Write 10 Articles Your Audience Will Love
    7. 10 Headlines That Will Instantly Hook Your Readers
    8. The 10 Most Compelling Article Titles You Can Write Based on This Prompt
    9. 10 Article Titles That Will Make You Stand Out from the Crowd
    10. The 10 Best Article Titles You Can Write Based on This Prompt

    And that’s it! You’ve successfully used the OpenAI API to generate a list of potential article titles based on a given prompt.

    Creating a Podman Image to Run on Kubernetes

    To run the program on Kubernetes, create a Podman image containing the necessary dependencies and the Python program. Here are the steps to create the image:

    1. Create a new file called Dockerfile in a working directory.
    2. Add the following code to the Dockerfile:
    FROM python:3.8-slim-buster
    RUN pip install openai
    WORKDIR /app
    COPY your_program.py .
    CMD ["python", "your_program.py"]

    This file tells Podman to use the official Python 3.8 image as the base, install the openai package, set the working directory to /app, copy your Python program into the container, and run the program when the container starts.

    To build the image:

    1. Open a terminal or command prompt and navigate to a working directory.
    2. Build the image by running the following command:
    podman build -t your_image_name . 

    Replace “your_image_name” with the name for the image.

    To run the image:

    podman run your_image_name

    This will start a new container using the image created and subsequently run the program created above.

    Verify the image runs and produces the desired output, then run it on a Kubernetes cluster as a simple pod. There are two ways to accomplish this in Kubernetes: declaratively or imperatively.

    Imperative

    The imperative way is quite simple:

    kubectl run my-pod --image=my-image

    This command will create a pod with the name “my-pod” and the image “my-image”.

    Declarative

    The declarative way of creating a Kubernetes pod involves creating a YAML file that describes the desired state of the pod and using the kubectl apply command to apply the configuration to the cluster.

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-pod
    spec:
      containers:
        - name: my-container
          image: my-image
    

    Save it as “my-pod.yaml”.

    Outside of automation, this can be applied to the Kubernetes cluster from the command line with:

    kubectl apply -f my-pod.yaml

    This command will create a pod with the name “my-pod” and the image “my-image”. The -f option specifies the path to the YAML file containing the pod configuration.

    Obviously this is quite simple, and there are plenty of ways to run this code in Kubernetes as a Deployment, ReplicaSet, or via some other method.

    Congratulations! Using the OpenAI API with Python and creating a Podman image to run the program on Kubernetes is quite straightforward. With these tools, incorporating the power of natural language processing into your applications is both simple and powerful.

  • Securing cloud native containers

    Securing cloud native containers

    Security, in and of itself, is a broad topic. Container security adds yet another facet to the already nebulous subject. In many enterprises today security is first and foremost, and the process for securing applications continues to shift left, meaning security is integrated earlier into the development process. This post will focus on some of the high-level tasks and automations developers and operators can implement to mitigate risk.

    The issues.

    Misconfiguration

    The #1 security risk in any cloud native environment is misconfiguration. How do operators know if what they are deploying is secured properly? In a lot of cases, deployments are left insecure for long periods of time without anyone noticing. This is a massive problem, especially for new technologies such as Kubernetes.

    Software Defects

    Another security risk is software defects. Every day new vulnerabilities are found in software. Some are minor, but increasingly the discoveries constitute potentially critical issues when deploying software to a public-facing system. Vulnerabilities are “what is known”: there is a signature for each known vulnerability which can be used to scan software.

    However, “you don’t know what you don’t know”. Keep in mind many defects exist which are not known. These are the zero-day vulnerabilities.

    Defense-in-depth

    Scanning

    Scanning software for known vulnerabilities is an absolute must-have in any defense-in-depth strategy. However, even the best scanning tools have unknown vulnerabilities (or known limitations). The best defense is a good offense, so creating a system where your container images pass through multiple scanners is always a good strategy.

    It is important to scan at many different points in the development process, and to keep scanning continually in production; any change could potentially be a breach. It is equally important to have layers that support one another, so that if one layer proves permeable the others still hold. Impervious security requires layers, and your goal as a security architect is to build them. Read on for other layers.
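    As a sketch of the multi-scanner idea (the scanner names and CVE IDs below are illustrative, not real output), findings can be merged so that a vulnerability flagged by any one tool blocks the image:

```python
# Findings from each scanner, keyed by an illustrative tool name.
findings = {
    "scanner_a": {"CVE-2023-0001", "CVE-2023-0002"},
    "scanner_b": {"CVE-2023-0002", "CVE-2023-0003"},
}

# Union the results: a CVE reported by ANY scanner counts.
all_cves = set().union(*findings.values())

# Gate the build on the merged view, not a single tool's.
image_passes = len(all_cves) == 0
print(sorted(all_cves), "passes:", image_passes)
```

    Merging on the union rather than the intersection is what makes one scanner's blind spot another scanner's catch.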

    Network visualization

    “It starts and ends with the network.” Kubernetes, the orchestrator of choice for cloud native deployments, attempts to keep things simple, which has led to a number of CNIs (container network interfaces) giving platform engineers many choices when deploying workloads. Having something to visualize the network is important, especially when you can act upon those connections. NeuVector provides these capabilities. Being able to quarantine a pod or take a packet capture is key to ensuring continuous protection against unknown attacks and for any required forensics.

    Data protection

    Many different regulations apply to enterprise data. Being able to provide audit reports for specific data regulations such as HIPAA or PCI DSS is massively important, and reporting for SOC 2 compliance may be as well. If your tool cannot “see” into the packet before it traverses the kernel, then it cannot prevent data from crossing a “domain” or prevent sensitive information from being leaked.

    WAF

    A lot of cloud native security tools have the ability to block layer 3 packets, but very few have true layer 7 capabilities. Being able to manage traffic at layer 7 is critical to anyone running applications on Kubernetes. Layer 7 is where a lot of unknown vulnerabilities are stopped, but only if the tool can look into the application packet before it traverses the kernel. This is the important point: once the packet crosses the kernel, you are compromised. Use a tool which will learn your workload’s behavior. This behavior is the workload’s signature and should be the ONLY traffic allowed to traverse the network.

    Wrapping it up

    Security is the highest scoring word in buzzword bingo these days. Everyone wants to ensure their environments are secure, and it takes specialized tools for specific platforms. Don’t use the perimeter firewall as a Kubernetes firewall; it simply will not suffice for complete security inside a Kubernetes cluster. Use a tool which can watch every packet, and the data inside every packet, ensuring that only traffic matching your workload’s signature traverses the network. Choose one that allows for visualization of the network along with the traditional scanning, admission control, and runtime security capabilities every cloud native implementation requires.

  • Declarative vs Imperative in Kubernetes

    To be declarative or to be imperative?

    Kubernetes is a powerful tool for orchestrating containerized applications across a cluster of nodes. It provides users with two methods for managing the desired state of their applications: the Declarative and Imperative approaches.

    The imperative approach

    The Imperative approach requires users to manually issue commands to Kubernetes to manage the desired state of their applications. This approach gives users direct control over the state of their applications, but also requires more manual effort and expertise, as well as a more in-depth understanding of Kubernetes. Additionally, the Imperative approach does not provide any version control or rollback capabilities, meaning that users must be more mindful of any changes they make and take extra care to ensure they are not introducing any unintended consequences.

    A simple set of imperative commands to create a deployment

    To create a Kubernetes deployment using the Imperative approach, users must issue the following commands:

    Create a new deployment named my-deployment and use the image my-image:

    kubectl create deployment my-deployment --image=my-image

    Scale the deployment to 3 pods:

    kubectl scale deployment my-deployment --replicas=3

    Declarative approach

    In the Declarative approach, users express their desired state in the form of Kubernetes objects such as Pods and Services. These objects are then managed by Kubernetes, which ensures that the actual state of the system matches the desired state without requiring users to manually issue commands. This approach also provides version control and rollback capabilities, allowing users to easily revert to a previous state if necessary.

    Below is an example Kubernetes deployment yaml (my-deployment.yaml) which can be used to create the same Kubernetes deployment:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-deployment
      labels:
        app: my-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-container
            image: my-image:latest
            ports:
            - containerPort: 80
    

    To create or update the deployment using this yaml, use the following command:

    kubectl apply -f my-deployment.yaml

    Infrastructure as Code

    The primary difference between the Declarative and Imperative approaches in Kubernetes is that the Declarative approach is a more automated and efficient way of managing applications, while the Imperative approach gives users more direct control. Using a Declarative approach also gives rise to managing Infrastructure as Code, which is the secret sauce behind maintaining version control and rollback capabilities.

    In general, the Declarative approach is the preferred way to manage applications on Kubernetes as it is more efficient and reliable, allowing users to easily define their desired state and have Kubernetes manage the actual state. However, the Imperative approach can still be useful in certain situations where direct control of the application state is needed. Ultimately, it is up to the user to decide which approach is best for their needs.
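    A declarative manifest can also be generated programmatically, which helps when templating many similar deployments. Below is a minimal sketch in Python that emits JSON (kubectl apply accepts JSON as well as YAML), reusing the names from the example above:

```python
import json

# Build the same Deployment as the YAML example, as a Python dict.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "my-deployment", "labels": {"app": "my-app"}},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "my-app"}},
        "template": {
            "metadata": {"labels": {"app": "my-app"}},
            "spec": {
                "containers": [{
                    "name": "my-container",
                    "image": "my-image:latest",
                    "ports": [{"containerPort": 80}],
                }],
            },
        },
    },
}

manifest = json.dumps(deployment, indent=2)
print(manifest)  # save to a file, then: kubectl apply -f my-deployment.json
```

    Because the desired state is ordinary data, the same dict can be templated, validated, and checked into version control alongside the application.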

  • Using a dev container in VSCode

    How to Use Dev Containers in VSCode

    Dev containers are a powerful tool for developers to use when coding, testing, and debugging applications. VSCode provides an integrated development environment (IDE) for developers to use when working with dev containers. This guide will show you how to get started with dev containers in VSCode and how to use them to your best advantage.

    1. Install the Remote – Containers extension
    2. Create a dev container configuration file
    3. Launch the dev container
    4. Connect to the dev container
    5. Start coding!

    Installing the Remote – Containers Extension

    The first step to using dev containers is to install the Remote – Containers extension. This extension allows you to create dev container configurations and launch them from within VSCode. To install the extension, open the Extensions panel in VSCode and search for Remote – Containers. Click the ‘Install’ button to install the extension. After installation, you will need to restart VSCode for the extension to take effect.

    Creating a Dev Container Configuration File

    Once the Remote – Containers extension is installed, you can create a dev container configuration file. This file will define the environment for your dev container. For example, you can define the programming language, libraries, and other settings for your dev container. You can also specify a base image to be used by the dev container, such as a Linux or Windows image.

    Example Dev Container Configuration File

    Below is an example of a dev container configuration file (devcontainer.json). This configuration specifies a Python 3.7 base image, installs the VSCode Python extension, and installs the TensorFlow library once the container is created.

    {
        "name": "example-dev-container",
        "image": "python:3.7.5-stretch",
        "settings": {
            "terminal.integrated.shell.linux": "/bin/bash"
        },
        "remoteUser": "devuser",
        "forwardPorts": [],
        "mounts": [],
        "extensions": [
            "ms-python.python"
        ],
        "postCreateCommand": "pip install tensorflow"
    }
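    A quick way to catch malformed configuration before launching is to sanity-check the JSON in a short script. The required-field list here is a minimal assumption, not the full dev container schema:

```python
import json

# A trimmed-down config for illustration; in practice you would
# read the real file, e.g. .devcontainer/devcontainer.json.
raw = """
{
    "name": "example-dev-container",
    "image": "python:3.7.5-stretch",
    "extensions": ["ms-python.python"],
    "postCreateCommand": "pip install tensorflow"
}
"""

config = json.loads(raw)  # fails loudly on malformed JSON

# Minimal checks: a name, plus either an image or a Dockerfile.
ok = "name" in config and ("image" in config or "dockerFile" in config)
print("config OK:", ok)
```

    Note that dev container files may contain comments (JSONC), which the strict json module rejects; strip them or use a JSONC-aware parser for real files.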
    

    Launching the Dev Container

    Once your dev container configuration file is created, you can launch the dev container. To do this, open the Remote – Containers view in VSCode. You should see your dev container configuration file listed. Click the Launch button to start the dev container. Once the dev container is launched, you will be able to access a terminal window, allowing you to control the dev container.

    Connecting to the Dev Container

    Once the dev container is running, you can connect to it. VSCode typically attaches automatically when the container launches; if you need to reconnect later, open the Remote – Containers view, locate the running container, and attach to it. Once connected, you will be able to access the dev container’s terminal window and run commands.

    Start Coding!

    Now that you’ve connected to the dev container, you can start coding! You can use the integrated development environment (IDE) to write, debug, and test your code. This allows you to work on your project within the dev container, without the need for additional setup. Once you’re done, you can close the dev container and move on to the next project.

  • Creating a pipeline in GitLab

    Creating a pipeline in GitLab is a great way to deploy applications to various environments, automating the process of building, testing, and deploying your application. With GitLab, you can define a pipeline that consists of a series of stages, which can include tasks such as building the application, running tests, and deploying it. By leveraging the power of a pipeline, you can ensure that your application is deployed quickly, efficiently, and with minimal hassle. This guide will walk you through the steps to create a pipeline in GitLab, with code examples.

    Step 1: Creating a New Project

    To start a new pipeline, you will first need to create a new project in Gitlab. To do this, you can go to Projects > New Project and fill out the form.

    Step 2: Creating the Pipeline

    Once the project is created, you can go to CI/CD > Pipelines and create a new pipeline. The pipeline itself is defined in a .gitlab-ci.yml file at the root of the repository, where you describe the stages and the jobs that run in them. Here is an example of a simple pipeline:

    stages:
      - build
      - test
      - deploy
    
    build:
      stage: build
      script:
        - npm install
        - npm run build
    
    test:
      stage: test
      script:
        - npm test
    
    deploy:
      stage: deploy
      script:
        - npm run deploy
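
    Assuming the YAML above has been parsed into a Python dict (for instance with a YAML library), a quick sanity check that every job references a declared stage might look like:

```python
# The parsed form of the pipeline above (what a YAML parser returns).
pipeline = {
    "stages": ["build", "test", "deploy"],
    "build": {"stage": "build", "script": ["npm install", "npm run build"]},
    "test": {"stage": "test", "script": ["npm test"]},
    "deploy": {"stage": "deploy", "script": ["npm run deploy"]},
}

declared = set(pipeline["stages"])
jobs = {name: body for name, body in pipeline.items()
        if isinstance(body, dict)}

# Any job whose stage is not declared would fail CI validation.
undeclared = {name for name, job in jobs.items()
              if job.get("stage") not in declared}
print("jobs with undeclared stages:", undeclared)
```

    An empty result means every job maps to a declared stage, which is one of the first things GitLab validates when it picks up the file.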
    
    

    Step 3: Running the Pipeline

    Once you have created the pipeline, you can run it by going to CI/CD > Pipelines and clicking the “Run Pipeline” button. This will execute the pipeline, and you can check the progress of the pipeline in the pipeline view.

    Step 4: Troubleshooting

    If you encounter any issues while running the pipeline, you can click on the pipeline to open up the pipeline view. Here, you will be able to view the log output of each stage, and debug any issues that may be occurring.

    Conclusion

    Creating a pipeline in GitLab is a great way to easily deploy applications to various environments. By leveraging the power of a pipeline, you can ensure that your application is deployed quickly, efficiently, and with minimal hassle. This guide has walked you through the steps to create a pipeline in GitLab, with code examples.