Category: General

How to Create a Pull Request Using GitHub Through VSCode

Visual Studio Code (VSCode) has become a favorite among developers thanks to its extensibility and tight integration with many tools, including GitHub. In this tutorial, we’ll cover how to create a pull request (PR) on GitHub directly from VSCode. Since our audience is highly technical, we’ll provide detailed steps along with the necessary commands.

Prerequisites:

  • VSCode Installed: If you haven’t already, download and install it from VSCode’s official website.
  • GitHub Account: You’ll need a GitHub account to interact with repositories.
  • Git Installed: Ensure you have git installed on your machine.
  • GitHub Pull Requests and Issues Extension: Install it from the VSCode Marketplace.

Steps:

Clone Your Repository

First, ensure you have the target repository cloned on your local machine. If not:

git clone <repository-url>

Open Repository in VSCode

Navigate to the cloned directory:

cd <repository-name>

Launch VSCode in this directory:

code .

Create a New Branch

Before making any changes, it’s best practice to create a new branch. In the bottom-left corner of VSCode, click the current branch name (likely main or master). A branch picker will appear at the top of the window. Click + Create new branch…, enter a meaningful name related to your changes, and press Enter.
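If you prefer the terminal, the same step is a single command; the branch name below is illustrative, and the throwaway repository just keeps the snippet self-contained:

```shell
# Demo in a throwaway repository (substitute your real clone):
repo=$(mktemp -d) && cd "$repo" && git init -q

# Equivalent of "+ Create new branch" in the VSCode branch picker:
git checkout -b fix/update-docs
git branch --show-current   # prints fix/update-docs
```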

Make Your Changes

Once you’re on your new branch, make the necessary changes to the code or files. VSCode’s source control tab (represented by the branch icon on the sidebar) will list the changes made.

Stage and Commit Changes

Click on the + icon next to each changed file to stage the changes. Once all changes are staged, enter a commit message in the text box and click the checkmark at the top to commit.
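Under the hood, these two UI actions are just `git add` and `git commit`. A self-contained sketch, with an illustrative file name and commit message:

```shell
# Throwaway repo so the snippet runs anywhere:
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email "you@example.com" && git config user.name "Your Name"

echo "release notes" > notes.txt
git add notes.txt                  # the "+" icon: stage the change
git commit -m "Add release notes"  # the checkmark: commit staged changes
git log --oneline                  # one line per commit
```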

Push the Branch to GitHub

Click the cloud-shaped Publish Branch icon in the Status Bar (bottom-left, next to the branch name) to push your branch to GitHub. If the branch is already published, you can instead run Git: Push from the Command Palette.

Create a Pull Request

With the GitHub Pull Requests and Issues Extension installed, you’ll see a GitHub icon in the sidebar. Clicking on this will reveal a section titled GitHub Pull Requests.

Click on the + icon next to it. It’ll fetch the branch and present a UI to create a PR. Fill in the necessary details:

  • Title: Summarize the change in a short sentence.
  • Description: Provide a detailed description of what changes were made and why.
  • Base Repository: The repository to which you want to merge the changes.
  • Base: The branch (usually main or master) to which you want to merge the changes.
  • Head Repository: Your forked repository (if you’re working on a fork) or the original one.
  • Compare: Your feature/fix branch.

Once filled, click Create.

Review and Merge

Your PR is now on GitHub. It can be reviewed, commented upon, and eventually merged by maintainers.

Conclusion

VSCode’s deep integration with GitHub makes it a breeze to handle Git operations, including creating PRs. By following this guide, you can streamline your Git workflow without ever leaving your favorite editor!

7 things all devops practitioners need from Git

Git is a powerful tool for version control, enabling multiple developers to work together on the same codebase without stepping on each other’s toes. It’s a complex system with many features, and getting to grips with it can be daunting. Here are seven insights that I wish I had known when I started working with Git.

The Power of git log

The git log command is much more powerful than it first appears. It can show you the history of changes in a variety of formats, which can be extremely helpful for understanding the evolution of a project.

# Show the commit history in a single line per commit
git log --oneline

# Show the commit history with graph, date, and abbreviated commits
git log --graph --date=short --pretty=format:'%h - %s (%cd)'

Branching is Cheap

Branching in Git is incredibly lightweight, which means you should use branches liberally. Every new feature, bug fix, or experiment should have its own branch. This keeps changes organized and isolated from the main codebase until they’re ready to be merged.

# Create a new branch
git branch new-feature

# Switch to the new branch
git checkout new-feature

Or do both with:

# Create and switch to the new branch
git checkout -b new-feature
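If your Git is 2.23 or newer, `git switch` does the same job under a clearer name; the branch name is illustrative:

```shell
# Throwaway repo for the demo:
repo=$(mktemp -d) && cd "$repo" && git init -q

# Create and switch in one step (Git >= 2.23):
git switch -c new-feature
git branch --show-current   # prints new-feature
```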

git stash is Your Friend

When you need to quickly switch context but don’t want to commit half-done work, git stash is incredibly useful. It allows you to save your current changes away and reapply them later.

# Stash your current changes
git stash

# List all stashes
git stash list

# Apply the last stashed changes and remove it from the stash list
git stash pop
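One addition worth knowing: `git stash push -m` lets you label a stash, which helps once the list grows. A self-contained sketch with illustrative file names:

```shell
# Throwaway repo with one committed file:
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email "you@example.com" && git config user.name "Your Name"
echo "v1" > app.conf && git add app.conf && git commit -qm "initial"

# A half-done change we want to set aside:
echo "v2" > app.conf
git stash push -m "wip: tweak app.conf"
git stash list   # the label makes this stash easy to find later
```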

git rebase for a Clean History

While merging is the standard way to bring a feature branch up to date with the main branch, rebasing can often result in a cleaner project history. It’s like saying, “I want my branch to look as if it was based on the latest state of the main branch.”

# Rebase your current branch on top of the main branch
git checkout feature-branch
git rebase main

Note: Rebasing rewrites history, which can be problematic for shared branches.

The .gitignore File

The .gitignore file is crucial for keeping your repository clean of unnecessary files. Any file patterns listed in .gitignore will be ignored by Git.

# Ignore all .log files
*.log

# Ignore a specific file
config.env

# Ignore an entire directory
tmp/
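When a pattern doesn’t behave as expected, `git check-ignore -v` reports exactly which rule matched a given path. A quick self-contained demo:

```shell
repo=$(mktemp -d) && cd "$repo" && git init -q
printf '*.log\n' > .gitignore
touch debug.log

git check-ignore -v debug.log   # prints the matching .gitignore rule and line
```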

git diff Shows More Than Just Differences

git diff can be used in various scenarios, not just to show the differences between two commits. You can use it to see changes in the working directory, changes that are staged, and even differences between branches.

# Show changes in the working directory that are not yet staged
git diff

# Show changes that are staged but not yet committed
git diff --cached

# Show differences between two branches
git diff main..feature-branch
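Related, and easy to confuse: three dots (`main...feature-branch`) compare the branch against its merge-base with main, i.e. only what the branch itself changed. A self-contained sketch with illustrative names:

```shell
repo=$(mktemp -d) && cd "$repo" && git init -q -b main
git config user.email "you@example.com" && git config user.name "Your Name"
echo "base" > f.txt && git add f.txt && git commit -qm "base"

git checkout -q -b feature-branch
echo "change" > f.txt && git commit -qam "branch work"

# Only the branch's own changes, relative to the merge-base:
git diff main...feature-branch
```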

The Reflog Can Save You

The reflog is an advanced feature that records when the tips of branches and other references were updated in the local repository. It’s a lifesaver when you’ve done something wrong and need to go back to a previous state.

# Show the reflog
git reflog

# Reset to a specific entry in the reflog
git reset --hard HEAD@{1}

Remember: The reflog is a local log, so it only contains actions you’ve taken in your repository.
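A concrete rescue scenario: recovering a branch you deleted by mistake. The commit is gone from `git branch` but not from the reflog. Self-contained sketch; names are illustrative:

```shell
repo=$(mktemp -d) && cd "$repo" && git init -q -b main
git config user.email "you@example.com" && git config user.name "Your Name"
echo "a" > f.txt && git add f.txt && git commit -qm "keep me"

git checkout -q -b experiment
echo "b" >> f.txt && git commit -qam "experimental work"
git checkout -q main
git branch -D experiment                 # oops, branch gone

# The commit still lives in the reflog; find it and resurrect the branch:
sha=$(git reflog | grep "experimental work" | head -n1 | cut -d' ' -f1)
git branch experiment "$sha"
git log -1 --format=%s experiment        # prints: experimental work
```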


Understanding these seven aspects of Git can make your development workflow much more efficient and less error-prone. Git is a robust system with a steep learning curve, but with these tips in your arsenal, you’ll be better equipped to manage your projects effectively.

AI Workloads for Kubernetes


Introduction

In recent years, Kubernetes has emerged as the go-to solution for orchestrating containerized applications at scale. But when it comes to deploying AI workloads, does it offer the same level of efficiency and convenience? In this blog post, we delve into the types of AI workloads that are best suited for Kubernetes, and why you should consider it for your next AI project.

Model Training and Development

Batch Processing

When working with large datasets, batch processing becomes a necessity. Kubernetes can efficiently manage batch processing tasks, leveraging its abilities to orchestrate and scale workloads dynamically.

  • Example: A machine learning pipeline that processes terabytes of data overnight, utilizing idle resources to the fullest.
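A pipeline like that is typically expressed as a Kubernetes Job. A minimal sketch; the image name and resource figures are illustrative, not taken from a real pipeline:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: nightly-etl
spec:
  backoffLimit: 2            # retry a failed run up to twice
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: etl
        image: registry.example.com/etl:latest  # illustrative image
        resources:
          requests:
            cpu: "2"
            memory: 4Gi
```

Wrapped in a CronJob, the same pod template can run on a nightly schedule.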

Hyperparameter Tuning

Hyperparameter tuning involves running numerous training jobs with different parameters to find the optimal configuration. Kubernetes can streamline this process by managing multiple parallel jobs effortlessly.

  • Example: An AI application that automatically tunes hyperparameters over a grid of values, reducing the time required to find the best model.
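One way to sketch this in Kubernetes is an indexed Job: each pod receives a completion index it can map to one point in the parameter grid. Image and counts are illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: hp-grid-search
spec:
  completions: 9             # one run per grid point
  parallelism: 3             # three trials at a time
  completionMode: Indexed    # pods get JOB_COMPLETION_INDEX=0..8
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: trial
        image: registry.example.com/trainer:latest  # illustrative image
```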

Model Deployment

Rolling Updates and Rollbacks

Deploying AI models into production environments requires a system that supports rolling updates and rollbacks. Kubernetes excels in this area, helping teams to maintain high availability even during updates.

  • Example: A recommendation system that undergoes frequent updates without experiencing downtime, ensuring a seamless user experience.

Auto-Scaling

AI applications often face variable traffic, requiring a system that can automatically scale resources. Kubernetes’ auto-scaling feature ensures that your application can handle spikes in usage without manual intervention.

  • Example: A voice recognition service that scales up during peak hours, accommodating a large number of simultaneous users without compromising on performance.

Placeholder: diagram showing the auto-scaling feature of Kubernetes

Data Engineering

Data Pipeline Orchestration

Managing data pipelines efficiently is critical in AI projects. Kubernetes can orchestrate complex data pipelines, ensuring that each component interacts seamlessly.

  • Example: A data ingestion pipeline that collects, processes, and stores data from various sources, running smoothly with the help of Kubernetes orchestration.

Stream Processing

For real-time AI applications, stream processing is a crucial component. Kubernetes facilitates the deployment and management of stream processing workloads, ensuring high availability and fault tolerance.

  • Example: A fraud detection system that analyzes transactions in real-time, leveraging Kubernetes to maintain a steady flow of data processing.

Conclusion

Kubernetes offers a robust solution for deploying and managing AI workloads at scale. Its features like auto-scaling, rolling updates, and efficient batch processing make it an excellent choice for AI practitioners aiming to streamline their operations and bring their solutions to market swiftly and efficiently.

Whether you are working on model training, deployment, or data engineering, Kubernetes provides the tools to orchestrate your workloads effectively, saving time and reducing complexity.

To get started with Kubernetes for your AI projects, consider exploring the rich ecosystem of tools and communities available to support you on your journey.

Streamline Kubernetes Management through Automation

Automation in managing Kubernetes clusters has burgeoned into an essential practice that enhances efficiency, security, and the seamless deployment of applications. With the exponential growth in containerized applications, automation has facilitated streamlined operations, reducing the room for human error while significantly saving time. Let’s delve deeper into the crucial role automation plays in managing Kubernetes clusters.

Section 1: The Imperative of Automation in Kubernetes

1.1 The Kubernetes Landscape

Before delving into the nuances of automation, let’s briefly recapitulate the fundamental components of Kubernetes, encompassing pods, nodes, and clusters, and their symbiotic relationships facilitating a harmonious operational environment.

1.2 The Need for Automation

Automation emerges as a vanguard in managing complex environments effortlessly, fostering efficiency, reducing downtime, and ensuring the optimal utilization of resources.

1.2.1 Efficiency and Scalability

Automation in Kubernetes ensures that clusters can dynamically scale based on the workload, fostering efficiency, and resource optimization.

1.2.2 Reduced Human Error

Automating repetitive tasks curtails the scope of human error, facilitating seamless operations and mitigating security risks.

1.2.3 Cost Optimization

Through efficient resource management, automation aids in cost reduction by optimizing resource allocation dynamically.

Section 2: Automation Tools and Processes

2.1 CI/CD Pipelines

Continuous Integration and Continuous Deployment (CI/CD) pipelines are at the helm of automation, fostering swift and efficient deployment cycles.

pipeline:
  build:
    image: node:14
    commands:
      - npm install
      - npm test
  deploy:
    image: google/cloud-sdk
    commands:
      - gcloud container clusters get-credentials cluster-name --zone us-central1-a
      - kubectl apply -f k8s/

Code snippet 1: A simple CI/CD pipeline example.

2.2 Infrastructure as Code (IaC)

IaC facilitates the programmable infrastructure, rendering a platform where systems and devices can be managed through code.

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer
    image: nginx

Code snippet 2: Defining a Kubernetes pod using IaC.

2.3 Configuration Management

Tools like Ansible and Chef aid in configuration management, ensuring system uniformity and adherence to policies.

- hosts: kubernetes_nodes
  tasks:
    - name: Ensure Kubelet is installed
      apt: 
        name: kubelet
        state: present

Code snippet 3: Using Ansible for configuration management.

Section 3: Automation Use Cases in Kubernetes

3.1 Auto-scaling

Auto-scaling facilitates automatic adjustments to the system’s computational resources, optimizing performance and curtailing costs.

3.1.1 Horizontal Pod Autoscaler

Kubernetes’ Horizontal Pod Autoscaler automatically adjusts the number of pod replicas in a replication controller, deployment, or replica set based on observed CPU utilization.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50

Code snippet 4: Defining a Horizontal Pod Autoscaler in Kubernetes.

3.2 Automated Rollouts and Rollbacks

Kubernetes aids in automated rollouts and rollbacks, ensuring application uptime and facilitating seamless updates and reversions.

3.2.1 Deployment Strategies

Deployment strategies such as blue-green and canary releases can be automated in Kubernetes, facilitating controlled and safe deployments.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v2

Code snippet 5: Configuring a rolling update strategy in a Kubernetes deployment.

Conclusion: The Future of Kubernetes with Automation

As Kubernetes continues to be the frontrunner in orchestrating containerized applications, the automation integral to its ecosystem fosters efficiency, security, and scalability. Through a plethora of tools and evolving best practices, automation stands central in leveraging Kubernetes to its fullest potential, orchestrating seamless operations, and steering towards an era of self-healing systems and zero-downtime deployments.

In conclusion, the ever-evolving landscape of Kubernetes managed through automation guarantees a future where complex deployments are handled with increased efficiency and reduced manual intervention. Leveraging automation tools and practices ensures that Kubernetes clusters not only meet the current requirements but are also future-ready, paving the way for a robust, scalable, and secure operational environment.



DevOps and the Möbius Loop

Harnessing the Möbius Loop for a Revolutionary DevOps Process

In the world of DevOps, continual improvement and iteration are the name of the game. The Möbius loop, with its one-sided, one-boundary surface, can serve as a vivid metaphor and blueprint for establishing a DevOps process that is both unified and infinitely adaptable. Let’s delve into the Möbius loop concept and see how it beautifully intertwines with the principles of DevOps.

Understanding the Möbius Loop

The Möbius loop or Möbius strip is a remarkable mathematical concept — a surface with only one side and one boundary created through a half-twist of a strip of paper that then has its ends joined. This one-sided surface represents a continuous, never-ending cycle, illustrating an ever-continuous pathway that can epitomize the unceasing cycle of development in DevOps.

Reference: Möbius Strip – Wikipedia

The Möbius Loop and DevOps: A Perfect Harmony

In the ecosystem of DevOps, the Möbius loop signifies a continuous cycle where one phase naturally transitions into the next, establishing a seamless feedback loop that fosters continuous growth and development. This philosophy lies at the heart of DevOps, promoting an environment of collaboration and iterative progress.

Reference: DevOps and Möbius Loop — A Journey to Continuous Improvement

Crafting a Möbius Loop-Founded DevOps Process

Building a DevOps process based on the Möbius loop principle means initiating a workflow where each development phase fuels the next, constituting a feedback loop that constantly evolves. Here is a step-by-step guide to create this iterative and robust system:

1. Define Objectives

  • Business Objectives: Set clear business goals and metrics.
  • User Objectives: Align the goals with user expectations.

2. Identify Outcomes

  • Expected Outcomes: Envision the desired outcomes for business and users.
  • Metrics: Design metrics to measure the effectiveness of strategies.

3. Discovery and Framing

  • Research: Invest time in understanding user preferences and pain points.
  • Hypothesis: Develop hypotheses to meet business and user objectives.

4. Develop and Deliver

  • Build: Employ agile methodologies to build solutions incrementally.
  • Deploy: Use CI/CD pipelines for continuous deployment.

Reference: Utilizing Agile Methodologies in DevOps

5. Operate and Observe

  • Monitor: Utilize monitoring tools to collect data on system performance.
  • Feedback Loop: Establish channels to receive user feedback.

6. Learning and Iteration

  • Analyze: Scrutinize data and feedback from the operate and observe phase.
  • Learn: Adapt based on the insights acquired and enhance the solution.

7. Feedback and Adjust

  • Feedback: Facilitate feedback from all stakeholders.
  • Adjust: Revise goals, metrics, or the solution based on the feedback received.

8. Loop Back

  • Iterative Process: Reiterate the process, informed by the learning from previous cycles.
  • Continuous Improvement: Encourage a mindset of perpetual growth and improvement.

Tools to Embark on Your Möbius Loop Journey

Leveraging advanced tools and technologies is vital to facilitate this Möbius loop-founded DevOps process. Incorporate the following tools to set a strong foundation:

  • Version Control: Git for source code management.
  • CI/CD: Jenkins, Gitlab, or ArgoCD for automating deployment.
  • Containerization and Orchestration: Podman and Kubernetes to handle the orchestration of containers.
  • Monitoring and Logging: Tools like Prometheus for real-time monitoring.
  • Collaboration Tools: Slack or Rocket.Chat to foster communication and collaboration.

Reference: Top Tools for DevOps

Conclusion

Embracing the Möbius loop in DevOps unveils a path to continuous improvement, aligning with the inherent nature of the development-operations ecosystem. It not only represents a physical manifestation of the infinite loop of innovation but also fosters a system that is robust, adaptable, and user-centric. As you craft your DevOps process rooted in the Möbius loop principle, remember that you are promoting a culture characterized by unending evolution and growth, bringing closer to your objectives with each cycle.

Feel inspired to set your Möbius loop DevOps process in motion? Share your thoughts and experiences in the comments below!

A chat with the GPT about DevOps.

The 1st prompt to set the stage

Me: Tell me why the Devops methodology works the best for cloud native application development. Explain as if you know nothing about development.

chat.openai.com: DevOps is a method of working where development and operations teams collaborate closely to deliver software and systems more efficiently. This approach works well for cloud native application development because it allows for faster and more frequent updates to be made to the application, which is important when working in a cloud environment. Additionally, the focus on automation and testing in DevOps helps to ensure that the application is stable and reliable, which is critical when running in a cloud environment where resources can be scaled up or down as needed. This can help to minimize downtime and ensure that the application is always available to users. Overall, DevOps helps to make the development and delivery of cloud native applications more efficient and reliable.

The 2nd prompt

Me: Explain why AI is important to devops

chat.openai.com: AI (Artificial Intelligence) can be beneficial to DevOps in several ways. One of the main ways that AI can help is by automating certain tasks and processes. For example, AI can be used to automatically detect and diagnose problems in the system, which can help to reduce downtime and improve overall system reliability. Additionally, AI can be used to perform predictive maintenance, which can help to identify potential issues before they occur, allowing them to be addressed before they cause problems.

AI can also be used to improve the performance of the system. For example, it can be used to optimize resource allocation, such as scaling up or down the number of servers as needed to meet changing demand, which can help to improve the efficiency and cost-effectiveness of the system.

AI can also be used to improve the security of the system. For example, it can be used to detect and prevent security threats, such as intrusion attempts, by using machine learning algorithms to identify and respond to abnormal behavior.

Overall, AI can help to improve the speed, efficiency, reliability, performance, and security of DevOps processes, which can help organizations to deliver applications and services more quickly and effectively.

The 3rd prompt

Me: and why is automation a key concept in devops

chat.openai.com: Automation is a key concept in DevOps because it allows for the efficient and consistent execution of repetitive tasks and processes. By automating these tasks, organizations can reduce the risk of human error, increase the speed and accuracy of their work, and free up human resources to focus on more strategic and value-added activities.

In the context of software development, automation can be used to automate tasks such as testing, building, and deploying code. For example, automated testing can help to ensure that the code is working as expected and is free of bugs, while automated building and deployment can help to speed up the process of getting code into production.

In the context of infrastructure management, automation can be used to automate tasks such as provisioning, scaling, and monitoring of resources. For example, automation can be used to automatically spin up new servers as needed to handle increased traffic, and automatically scale back resources when they are no longer needed. This can help to optimize resource utilization and reduce costs.

Overall, automation is important in DevOps because it helps to increase the speed, efficiency, and reliability of the development and delivery process, which can help organizations to deliver applications and services more quickly and effectively.

Wrapping up

Interesting conversation. Instead of writing one big instruction set, I literally chatted with chat.openai.com. It managed to describe DevOps fairly well, and even though the prose was not exactly advanced, chat.openai.com got the point across based on my prompts. The prose appeared to be shaped by how the questions were asked: simple questions, simple responses. Had the questions been more eloquent, it seems the responses would have followed suit. Need to test out that theory next. What fun. Stay tuned.

Becoming Successful in DevOps: A Beginner’s Guide

Where to start

As a platform engineer, you will be tasked with ensuring that software projects run smoothly and efficiently. This includes managing all of the various components needed to develop and launch an app or system.

This may include choosing your development platform, setting up servers for hosting, finding vendors for integrations, and more. Beyond just technology, there are also time-management strategies to manage your day.

As you can imagine, being a part of this team comes with its own set of challenges and responsibilities. Being a senior member of the team takes some work — but it is totally worth it!

Being a senior developer means that you know what’s going on technologically, and you’re able to teach others that knowledge. You’ll have the opportunity to make an impact by helping other engineers get things done effectively.

It’s also important to remember that not everyone has a career in tech like you do, so don’t feel like you need to take over every task.

Plan your career

7 things to know as a devops engineer

As mentioned earlier, being a successful platform engineer is about more than knowing how to work in different environments, holding distinctly different conversations with various people about the tools you use for each environment, and managing your time effectively between projects and departments. It also means planning your career ahead of time.

Most experienced devs will tell you that it’s impossible to become a senior developer without first becoming a team lead or manager. This makes sense, because you need to establish relationships within the department and organization that depend on you for leadership, guidance, and resources.

And since most tech companies are moving towards agile project management systems like Scrum, you need to be able to facilitate those meetings, talk about logistics, and make sure everyone has everything they need to complete their assignments on schedule.

All of these things require someone who leads by example, sets appropriate priorities, and keeps people motivated so that they feel comfortable coming to you for help when needed. You can’t expect anyone further down the chain of command to do this if they don’t see you putting effort into developing your leadership skills.

It’s totally normal to want to move up the ladder, but making changes to your position requires proving that you’re capable of doing so while still keeping quality control over your tasks and setting realistic deadlines.

Create a good work-life balance


Being a platform engineer means having close relationships with other people in your organization that depend on you for success, so make sure you are not overworking yourself.

It is important to enjoy what you do for a living, and staying motivated requires finding time to unwind and relax. You can’t expect to be passionate about your job if you don’t take care of yourself outside of work.

You need to recognize that it takes a lot out of you, both physically and mentally, and give yourself breaks to keep up momentum. Take frequent vacations, organize social events outside of work, and try to limit how many hours you spend working every day.

In addition to taking care of yourself, use the same energy you put into your career on your personal life to promote healthy relationships and friendships.

Having friends who treat you well will help you feel more balanced and confident in your position, and they will help you stay happier at home. Make time each week, if possible, to keep strong ties with friends, family members, and colleagues.

Be a creative thinker


As a platform engineer, you will be tasked with many different projects and responsibilities. This can make it hard to know what your next job is! As such, being able to think outside of the box and brainstorm new ideas and solutions to current issues is important.

You will have to use your creativity to figure out how to best solve these problems.

A great way to hone this skill is by taking part in various tech events or masterclasses. You may even get a chance to present your own project or idea!

Events like these not only give you knowledge about the field but also plenty of opportunities to grow professionally.

Stay up-to-date with technology


As a platform engineer, you will be tasked with keeping up with new technologies. This is not only important for your career, but also to satisfy users’ demands for quality services they can rely on.

As more companies implement automation via software or robotics, there are always new tools being used for this process. You will need to know what these tools are and how to use them.

You will also have to stay up-to-date with the ever-evolving practices in both development and operations. For example, when developers adopt newer agile methodologies like Scrum, you must understand why that approach is better than the traditional waterfall model.

Likewise, understanding lean startup methodology means knowing why it works and if it applies to your organization. These concepts help bring down the overhead of launching an app by reducing the number of steps needed to get things going.

By staying informed about such developments, you will make sure apps launch as quickly as possible while still meeting standards.

Be a good leader


As a platform engineer, you will be in charge of many different departments within your organization. You can expect that not every department in your own company or others you work with have strong leadership qualities.

If this is something that worries you, don’t worry! It’s totally normal.

It takes time to develop leadership skills, but you are never too young or old to learn them.

You can always look to those around you for examples of how to lead and learn from their mistakes. There are several books and courses available online and through educational institutions that focus on developing leadership abilities.

There are also mentorship programs where experienced leaders share knowledge and resources with younger people. By being part of these groups, you could gain some valuable lessons yourself.

Be a good team member


As a platform engineer, your success will depend on how well you collaborate with others. You can’t have successful collaborations if one party is constantly trying to get their way or take over the process.

This isn’t helpful for anyone involved in the project, and it won’t sit well with leadership either. Make every effort to work with the people around you using sound reasoning and evidence, and keep discussions focused on topics that are productive and positive.

Avoid gossip unless you want to deal with a lot of resentment later, and be willing to go out of your way to help someone feel better about themselves or the situation.

Here’s to devops…a poem

In devops, we're constantly on call 
Our work is never done, no matter how small 
We're always ready to troubleshoot and fix 
Our skills are diverse, our knowledge is mixed
We're agile and flexible, always adapting 
We're proactive, we're never static 
We're experts in automation and efficiency 
We're the bridge between development and IT
We're passionate about our craft 
We strive for continuous improvement, it's what we're after 
We're the glue that holds everything together 
We're the unsung heroes, working in all kinds of weather
So here's to devops, the backbone of technology 
We may not always get the recognition, but we do it proudly 
We're a vital part of the team, and we know our worth 
We're the devops engineers, bringing stability to this earth

The journey

My journey into this space began a very long time ago so I’ll skip all the gory details and jump into how I managed to get to this point. The gist of it is I was a “sysadmin” running IT for what was at the time a very large systems integrator. We had ~200 or so people and processed roughly 300 orders per day. It was a “paperless” warehouse running on OS/2. Told you it was “long ago”. Let’s fast forward…

Started working for a company that was starting its transformation journey (this was well before Covid) and had been using large numbers of virtual machines. I was brought in because they had an outage due to a hardware failure, and new management decided it was best to take advantage of AWS. The existing AWS infrastructure had been created by the old-school datacenter admins, so it, too, was a catastrophe waiting to happen.

I had been working with AWS for a while, so I took my knowledge of puppet and applied it to help automate some of the mundane tasks. The old school devs were used to using mercurial, so I encouraged them to pull some of the tasks created in puppet into their code and got them started deploying their apps to AWS on their own. This was working.

Fast forward a bit and the dev teams had evolved to a really nice Agile based setup. I had spent a ton of time learning how to manage pipelines, transitioning from puppet to terraform, and had educated a lot of the newer operations folks on automating everything.

My journey is probably very similar to most. In my case I never really got into windows. I had started down the linux path very early in the big scheme of things and stuck with it. This really helped when it came to running and managing things in the cloud. Transitioning all of the various scripting languages was pretty straightforward because I had a good grasp on how to script. I had bash scripts for everything and that helped tremendously when I dove into Terraform and other tools. Plus I was very comfortable with the command line.

The one thing I suggest is read, read, read. O’Reilly is a great resource. No matter where you are coming from, the most important thing to remember is to look at the box and think INSIDE of it. Never try to apply knowledge from what you are currently doing directly to what you want to do. What I mean by this is that managing VMs is very different from managing container images, but they are also the same. Yea…I know, confusing, but so is “cloud-native”.

I did it. You can, too.