How to Create a Pull Request Using GitHub Through VSCode

Visual Studio Code (VSCode) has become a favorite among developers thanks to its extensibility and tight integration with many tools, including GitHub. In this tutorial, we'll cover how to create a pull request (PR) on GitHub directly from VSCode. Given that our audience is highly technical, we'll provide detailed steps along with the necessary commands.

Prerequisites:

  • VSCode Installed: If not already, download and install from VSCode’s official website.
  • GitHub Account: You’ll need a GitHub account to interact with repositories.
  • Git Installed: Ensure you have git installed on your machine.
  • GitHub Pull Requests and Issues Extension: Install it from the VSCode Marketplace.

Steps:

Clone Your Repository

First, ensure you have the target repository cloned on your local machine. If not:

git clone <repository-url>

Open Repository in VSCode

Navigate to the cloned directory:

cd <repository-name>

Launch VSCode in this directory:

code .

Create a New Branch

Before making any changes, it's best practice to create a new branch. In the bottom-left corner of VSCode, click on the current branch name (likely main or master). A branch picker will appear at the top of the window. Click on + Create New Branch and give it a meaningful name related to your changes.
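
If you prefer the integrated terminal, the same step works with plain git (the branch name below is only an illustrative placeholder):

# Create and switch to a new branch
git checkout -b fix/login-timeout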

Make Your Changes

Once you’re on your new branch, make the necessary changes to the code or files. VSCode’s source control tab (represented by the branch icon on the sidebar) will list the changes made.

Stage and Commit Changes

Click on the + icon next to each changed file to stage the changes. Once all changes are staged, enter a commit message in the text box and click the checkmark at the top to commit.
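
The same can be done from the terminal if you prefer (the commit message is illustrative):

# Stage all changes and commit them
git add .
git commit -m "Fix login timeout handling"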

Push the Branch to GitHub

Click on the cloud-upload (publish) icon in the status bar at the bottom-left of VSCode to push your branch to GitHub.
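
Or, from the terminal (reusing the illustrative branch name from above):

# Push the branch and set its upstream on GitHub
git push -u origin fix/login-timeout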

Create a Pull Request

With the GitHub Pull Requests and Issues Extension installed, you’ll see a GitHub icon in the sidebar. Clicking on this will reveal a section titled GitHub Pull Requests.

Click on the + icon next to it. It’ll fetch the branch and present a UI to create a PR. Fill in the necessary details:

  • Title: Summarize the change in a short sentence.
  • Description: Provide a detailed description of what changes were made and why.
  • Base Repository: The repository to which you want to merge the changes.
  • Base: The branch (usually main or master) to which you want to merge the changes.
  • Head Repository: Your forked repository (if you’re working on a fork) or the original one.
  • Compare: Your feature/fix branch.

Once filled, click Create.
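
As an aside, if you have the GitHub CLI (gh) installed, the same PR can also be opened from VSCode's integrated terminal (title and body are illustrative):

# Open a PR from the current branch against the default base branch
gh pr create --title "Fix login timeout handling" --body "Details of what changed and why"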

Review and Merge

Your PR is now on GitHub. It can be reviewed, commented upon, and eventually merged by maintainers.

Conclusion

VSCode’s deep integration with GitHub makes it a breeze to handle Git operations, including creating PRs. By following this guide, you can streamline your Git workflow without ever leaving your favorite editor!


7 things all devops practitioners need from Git

Git is a powerful tool for version control, enabling multiple developers to work together on the same codebase without stepping on each other’s toes. It’s a complex system with many features, and getting to grips with it can be daunting. Here are seven insights that I wish I had known when I started working with Git.

The Power of git log

The git log command is much more powerful than it first appears. It can show you the history of changes in a variety of formats, which can be extremely helpful for understanding the evolution of a project.

# Show the commit history in a single line per commit
git log --oneline

# Show the commit history with graph, date, and abbreviated commits
git log --graph --date=short --pretty=format:'%h - %s (%cd)'

Branching is Cheap

Branching in Git is incredibly lightweight, which means you should use branches liberally. Every new feature, bug fix, or experiment should have its own branch. This keeps changes organized and isolated from the main codebase until they’re ready to be merged.

# Create a new branch
git branch new-feature

# Switch to the new branch
git checkout new-feature

Or do both with:

# Create and switch to the new branch
git checkout -b new-feature

git stash is Your Friend

When you need to quickly switch context but don’t want to commit half-done work, git stash is incredibly useful. It allows you to save your current changes away and reapply them later.

# Stash your current changes
git stash

# List all stashes
git stash list

# Apply the last stashed changes and remove it from the stash list
git stash pop

git rebase for a Clean History

While merging is the standard way to bring a feature branch up to date with the main branch, rebasing can often result in a cleaner project history. It’s like saying, “I want my branch to look as if it was based on the latest state of the main branch.”

# Rebase your current branch on top of the main branch
git checkout feature-branch
git rebase main

Note: Rebasing rewrites history, which can be problematic for shared branches.

The .gitignore File

The .gitignore file is crucial for keeping your repository clean of unnecessary files. Any file patterns listed in .gitignore will be ignored by Git.

# Ignore all .log files
*.log

# Ignore a specific file
config.env

# Ignore everything in a directory
tmp/**

git diff Shows More Than Just Differences

git diff can be used in various scenarios, not just to show the differences between two commits. You can use it to see changes in the working directory, changes that are staged, and even differences between branches.

# Show changes in the working directory that are not yet staged
git diff

# Show changes that are staged but not yet committed
git diff --cached

# Show differences between two branches
git diff main..feature-branch

The Reflog Can Save You

The reflog is an advanced feature that records when the tips of branches and other references were updated in the local repository. It’s a lifesaver when you’ve done something wrong and need to go back to a previous state.

# Show the reflog
git reflog

# Reset to a specific entry in the reflog
git reset --hard HEAD@{1}

Remember: The reflog is a local log, so it only contains actions you’ve taken in your repository.


Understanding these seven aspects of Git can make your development workflow much more efficient and less error-prone. Git is a robust system with a steep learning curve, but with these tips in your arsenal, you’ll be better equipped to manage your projects effectively.


AI Workloads for Kubernetes


Introduction

In recent years, Kubernetes has emerged as the go-to solution for orchestrating containerized applications at scale. But when it comes to deploying AI workloads, does it offer the same level of efficiency and convenience? In this blog post, we delve into the types of AI workloads that are best suited for Kubernetes, and why you should consider it for your next AI project.

Model Training and Development

Batch Processing

When working with large datasets, batch processing becomes a necessity. Kubernetes can efficiently manage batch processing tasks, leveraging its abilities to orchestrate and scale workloads dynamically.

  • Example: A machine learning pipeline that processes terabytes of data overnight, utilizing idle resources to the fullest.
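
As a rough sketch of how such a task lands on Kubernetes, a batch stage typically maps onto a Job; the name, image, and command below are placeholders:

apiVersion: batch/v1
kind: Job
metadata:
  name: nightly-etl
spec:
  backoffLimit: 3          # retry a failed pod up to 3 times
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: etl
        image: myorg/etl:latest            # placeholder image
        command: ["python", "process.py"]  # placeholder command
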
Hyperparameter Tuning

Hyperparameter tuning involves running numerous training jobs with different parameters to find the optimal configuration. Kubernetes can streamline this process by managing multiple parallel jobs effortlessly.

  • Example: An AI application that automatically tunes hyperparameters over a grid of values, reducing the time required to find the best model.
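
One hedged way to express this on Kubernetes is an Indexed Job, where each pod derives its trial from the completion index (names and image are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: hp-search
spec:
  completions: 20            # total trials
  parallelism: 5             # trials running at once
  completionMode: Indexed    # injects JOB_COMPLETION_INDEX into each pod
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: trial
        image: myorg/train:latest   # placeholder image
        command: ["sh", "-c", "python train.py --trial-index $JOB_COMPLETION_INDEX"]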

Model Deployment

Rolling Updates and Rollbacks

Deploying AI models into production environments requires a system that supports rolling updates and rollbacks. Kubernetes excels in this area, helping teams to maintain high availability even during updates.

  • Example: A recommendation system that undergoes frequent updates without experiencing downtime, ensuring a seamless user experience.
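
A hedged sketch of that workflow with kubectl (the deployment, container, and image names are placeholders):

# Roll out a new image for the model-serving deployment
kubectl set image deployment/recommender server=myorg/recommender:v2

# Watch the rolling update progress
kubectl rollout status deployment/recommender

# Revert to the previous revision if the new model misbehaves
kubectl rollout undo deployment/recommender
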
Auto-Scaling

AI applications often face variable traffic, requiring a system that can automatically scale resources. Kubernetes’ auto-scaling feature ensures that your application can handle spikes in usage without manual intervention.

  • Example: A voice recognition service that scales up during peak hours, accommodating a large number of simultaneous users without compromising on performance.
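
As a quick illustration, an autoscaler can be attached to an existing deployment with a single command (the name and thresholds are placeholders):

# Scale between 2 and 20 replicas, targeting 70% average CPU utilization
kubectl autoscale deployment voice-recognizer --min=2 --max=20 --cpu-percent=70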

Data Engineering

Data Pipeline Orchestration

Managing data pipelines efficiently is critical in AI projects. Kubernetes can orchestrate complex data pipelines, ensuring that each component interacts seamlessly.

  • Example: A data ingestion pipeline that collects, processes, and stores data from various sources, running smoothly with the help of Kubernetes orchestration.
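
For scheduled pipeline stages, a CronJob is one common building block; a minimal sketch with placeholder names:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: ingest-hourly
spec:
  schedule: "0 * * * *"    # top of every hour
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: ingest
            image: myorg/ingest:latest   # placeholder image
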
Stream Processing

For real-time AI applications, stream processing is a crucial component. Kubernetes facilitates the deployment and management of stream processing workloads, ensuring high availability and fault tolerance.

  • Example: A fraud detection system that analyzes transactions in real-time, leveraging Kubernetes to maintain a steady flow of data processing.

Conclusion

Kubernetes offers a robust solution for deploying and managing AI workloads at scale. Its features like auto-scaling, rolling updates, and efficient batch processing make it an excellent choice for AI practitioners aiming to streamline their operations and bring their solutions to market swiftly and efficiently.

Whether you are working on model training, deployment, or data engineering, Kubernetes provides the tools to orchestrate your workloads effectively, saving time and reducing complexity.

To get started with Kubernetes for your AI projects, consider exploring the rich ecosystem of tools and communities available to support you on your journey.


Kubernetes quickstarts – AKS, EKS, GKE

There have been a lot of inquiries about how to get started quickly with what are commonly referred to as the hyperscalers. Let's dive in for a super quick primer!

All of these quickstarts assume the reader has an account with each service, with the appropriate rights, and in most cases has the relevant CLI client installed.

Starting with Google Kubernetes Engine (GKE)

export NAME="$(whoami)-$RANDOM"
export ZONE="us-west2-a"
gcloud container clusters create "${NAME}" --zone ${ZONE} --num-nodes=1
gcloud container clusters get-credentials "${NAME}" --zone ${ZONE}

Moving on to Azure Kubernetes Service (AKS)

export NAME="$(whoami)-$RANDOM"
export AZURE_RESOURCE_GROUP="${NAME}-group"
az group create --name "${AZURE_RESOURCE_GROUP}" -l westus2
az aks create --resource-group "${AZURE_RESOURCE_GROUP}" --name "${NAME}"
az aks get-credentials --resource-group "${AZURE_RESOURCE_GROUP}" --name "${NAME}"

For Elastic Kubernetes Service (EKS)

export NAME="$(whoami)-$RANDOM"
eksctl create cluster --name "${NAME}"

As you can see, setting up these clusters is very simple. Now that you have a cluster, what are you going to do with it? Ensure you've installed the tools needed to manage it. Each get-credentials command above merges the cluster's credentials into ~/.kube/config (eksctl does this automagically as part of cluster creation). To manipulate the cluster, install kubectl with your favorite package manager; the easiest way to install applications is via helm.
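
A quick sanity check once kubectl is pointed at the new cluster (the chart shown is just an example):

# Confirm the node registered successfully
kubectl get nodes

# Install an application via helm, e.g. nginx from the Bitnami repo
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-nginx bitnami/nginx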

As you can see, the setup of a Kubernetes cluster in one of the major hyperscalers is very easy. A few lines of code and you're up and running. Add those lines into a shell script and standing up clusters can be a single command…just don't forget to tear it down when you're done!
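
For reference, teardown mirrors creation (reusing the variables from each quickstart above):

# GKE
gcloud container clusters delete "${NAME}" --zone ${ZONE}

# AKS (deleting the resource group removes the cluster)
az group delete --name "${AZURE_RESOURCE_GROUP}"

# EKS
eksctl delete cluster --name "${NAME}"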


Streamline Kubernetes Management through Automation

Automation in managing Kubernetes clusters has burgeoned into an essential practice that enhances efficiency, security, and the seamless deployment of applications. With the exponential growth in containerized applications, automation has facilitated streamlined operations, reducing the room for human error while significantly saving time. Let’s delve deeper into the crucial role automation plays in managing Kubernetes clusters.

Section 1: The Imperative of Automation in Kubernetes

1.1 The Kubernetes Landscape

Before delving into the nuances of automation, let’s briefly recapitulate the fundamental components of Kubernetes, encompassing pods, nodes, and clusters, and their symbiotic relationships facilitating a harmonious operational environment.

1.2 The Need for Automation

Automation emerges as a vanguard in managing complex environments effortlessly, fostering efficiency, reducing downtime, and ensuring the optimal utilization of resources.

1.2.1 Efficiency and Scalability

Automation in Kubernetes ensures that clusters can dynamically scale based on the workload, fostering efficiency, and resource optimization.

1.2.2 Reduced Human Error

Automating repetitive tasks curtails the scope of human error, facilitating seamless operations and mitigating security risks.

1.2.3 Cost Optimization

Through efficient resource management, automation aids in cost reduction by optimizing resource allocation dynamically.

Section 2: Automation Tools and Processes

2.1 CI/CD Pipelines

Continuous Integration and Continuous Deployment (CI/CD) pipelines are at the helm of automation, fostering swift and efficient deployment cycles.

pipeline:
  build:
    image: node:14
    commands:
      - npm install
      - npm test
  deploy:
    image: google/cloud-sdk
    commands:
      - gcloud container clusters get-credentials cluster-name --zone us-central1-a
      - kubectl apply -f k8s/

Code snippet 1: A simple CI/CD pipeline example.

2.2 Infrastructure as Code (IaC)

IaC makes infrastructure programmable, providing a platform where systems and devices can be managed through code.

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer
    image: nginx

Code snippet 2: Defining a Kubernetes pod using IaC.

2.3 Configuration Management

Tools like Ansible and Chef aid in configuration management, ensuring system uniformity and adherence to policies.

- hosts: kubernetes_nodes
  tasks:
    - name: Ensure Kubelet is installed
      apt: 
        name: kubelet
        state: present

Code snippet 3: Using Ansible for configuration management.

Section 3: Automation Use Cases in Kubernetes

3.1 Auto-scaling

Auto-scaling facilitates automatic adjustments to the system’s computational resources, optimizing performance and curtailing costs.

3.1.1 Horizontal Pod Autoscaler

Kubernetes’ Horizontal Pod Autoscaler automatically adjusts the number of pod replicas in a replication controller, deployment, or replica set based on observed CPU utilization.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50

Code snippet 4: Defining a Horizontal Pod Autoscaler in Kubernetes.

3.2 Automated Rollouts and Rollbacks

Kubernetes aids in automated rollouts and rollbacks, ensuring application uptime and facilitating seamless updates and reversions.

3.2.1 Deployment Strategies

Deployment strategies such as blue-green and canary releases can be automated in Kubernetes, facilitating controlled and safe deployments.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:v2

Code snippet 5: Configuring a rolling update strategy in a Kubernetes deployment.

Conclusion: The Future of Kubernetes with Automation

As Kubernetes continues to be the frontrunner in orchestrating containerized applications, the automation integral to its ecosystem fosters efficiency, security, and scalability. Through a plethora of tools and evolving best practices, automation stands central in leveraging Kubernetes to its fullest potential, orchestrating seamless operations, and steering towards an era of self-healing systems and zero-downtime deployments.

In conclusion, the ever-evolving landscape of Kubernetes managed through automation guarantees a future where complex deployments are handled with increased efficiency and reduced manual intervention. Leveraging automation tools and practices ensures that Kubernetes clusters not only meet the current requirements but are also future-ready, paving the way for a robust, scalable, and secure operational environment.




DevOps and the Möbius Loop

Harnessing the Möbius Loop for a Revolutionary DevOps Process

In the world of DevOps, continual improvement and iteration are the name of the game. The Möbius loop, with its one-sided, one-boundary surface, can serve as a vivid metaphor and blueprint for establishing a DevOps process that is both unified and infinitely adaptable. Let’s delve into the Möbius loop concept and see how it beautifully intertwines with the principles of DevOps.

Understanding the Möbius Loop

The Möbius loop or Möbius strip is a remarkable mathematical concept — a surface with only one side and one boundary created through a half-twist of a strip of paper that then has its ends joined. This one-sided surface represents a continuous, never-ending cycle, illustrating an ever-continuous pathway that can epitomize the unceasing cycle of development in DevOps.

Reference: Möbius Strip – Wikipedia

The Möbius Loop and DevOps: A Perfect Harmony

In the ecosystem of DevOps, the Möbius loop signifies a continuous cycle where one phase naturally transitions into the next, establishing a seamless feedback loop that fosters continuous growth and development. This philosophy lies at the heart of DevOps, promoting an environment of collaboration and iterative progress.

Reference: DevOps and Möbius Loop — A Journey to Continuous Improvement

Crafting a Möbius Loop-Founded DevOps Process

Building a DevOps process based on the Möbius loop principle means initiating a workflow where each development phase fuels the next, constituting a feedback loop that constantly evolves. Here is a step-by-step guide to create this iterative and robust system:

1. Define Objectives

  • Business Objectives: Set clear business goals and metrics.
  • User Objectives: Align the goals with user expectations.

2. Identify Outcomes

  • Expected Outcomes: Envision the desired outcomes for business and users.
  • Metrics: Design metrics to measure the effectiveness of strategies.

3. Discovery and Framing

  • Research: Invest time in understanding user preferences and pain points.
  • Hypothesis: Develop hypotheses to meet business and user objectives.

4. Develop and Deliver

  • Build: Employ agile methodologies to build solutions incrementally.
  • Deploy: Use CI/CD pipelines for continuous deployment.

Reference: Utilizing Agile Methodologies in DevOps

5. Operate and Observe

  • Monitor: Utilize monitoring tools to collect data on system performance.
  • Feedback Loop: Establish channels to receive user feedback.

6. Learning and Iteration

  • Analyze: Scrutinize data and feedback from the operate and observe phase.
  • Learn: Adapt based on the insights acquired and enhance the solution.

7. Feedback and Adjust

  • Feedback: Facilitate feedback from all stakeholders.
  • Adjust: Revise goals, metrics, or the solution based on the feedback received.

8. Loop Back

  • Iterative Process: Reiterate the process, informed by the learning from previous cycles.
  • Continuous Improvement: Encourage a mindset of perpetual growth and improvement.

Tools to Embark on Your Möbius Loop Journey

Leveraging advanced tools and technologies is vital to facilitate this Möbius loop-founded DevOps process. Incorporate the following tools to set a strong foundation:

  • Version Control: Git for source code management.
  • CI/CD: Jenkins, GitLab, or ArgoCD for automating deployment.
  • Containerization and Orchestration: Podman and Kubernetes to handle the orchestration of containers.
  • Monitoring and Logging: Tools like Prometheus for real-time monitoring.
  • Collaboration Tools: Slack or Rocket.Chat to foster communication and collaboration.

Reference: Top Tools for DevOps

Conclusion

Embracing the Möbius loop in DevOps unveils a path to continuous improvement, aligning with the inherent nature of the development-operations ecosystem. It not only represents a physical manifestation of the infinite loop of innovation but also fosters a system that is robust, adaptable, and user-centric. As you craft your DevOps process rooted in the Möbius loop principle, remember that you are promoting a culture characterized by unending evolution and growth, bringing you closer to your objectives with each cycle.

Feel inspired to set your Möbius loop DevOps process in motion? Share your thoughts and experiences in the comments below!


A success for DevOps: Developing and Deploying Mission Critical Applications

In today’s digital age, businesses rely heavily on technology to remain competitive. This means that the development and deployment of business-critical applications must be efficient and seamless.

Enter DevOps, a methodology that integrates development and operations teams to improve communication, collaboration, and automation to streamline the development and delivery of software. In this success story, we’ll explore how adopting new methodologies helped a company develop and deploy a business-critical application with great success.

The Challenge

A company that provides financial services to small businesses needed to develop and deploy a web-based application that would provide their clients with access to their financial data in real-time. The application needed to be secure, scalable, and available 24/7 to ensure uninterrupted access to critical financial information. The company’s IT team had experience in developing applications but struggled with the deployment process, which was manual and error-prone.

The company was facing challenges in the deployment process, and that was affecting the time to market for their application. They were unable to release new features quickly and were also struggling to ensure the application was always available to clients.

The Solution

To overcome these challenges, the company adopted new methodologies to streamline the development and deployment process. The development team worked closely with the operations team to identify bottlenecks and streamline the process. They implemented continuous integration and delivery (CI/CD) pipelines to automate the build, test, and deployment process. They also used infrastructure as code (IaC) to manage infrastructure and reduce the risk of configuration errors.

The newly adopted methodology allowed the development team and operations team to work together seamlessly, identify the issues in the deployment process, and implement the necessary changes. They automated the process of building, testing, and deploying the application to ensure that it was reliable and scalable. They also used IaC to manage the infrastructure as code, which allowed them to make changes to the infrastructure quickly and efficiently.

The Results

The adoption of a different set of methodologies led to a significant improvement in the development and deployment process. The company was able to roll out new features and updates to the application much faster and with fewer errors. The continuous deployment process also allowed the company to respond quickly to any issues that arose, ensuring that the application was always available to clients. The company’s IT team was able to focus on developing new features and improving the user experience, rather than dealing with deployment issues.

The company was able to reduce its time to market, which allowed them to release new features faster and stay ahead of the competition. They were also able to ensure that the application was always available to clients, which helped to build trust and loyalty with their customers.

Interpretation of Success

The success of this project can be attributed to the effective implementation of DevOps or Platform Engineering methodologies. The integration of development and operations teams allowed for better communication and collaboration, resulting in a streamlined process that reduced errors and improved efficiency. Automation through CI/CD pipelines and IaC reduced the risk of human error and ensured that deployments were consistent and reliable.

The company was able to provide their clients with a secure, scalable, and highly available application that met their business needs. The success of this project demonstrates the effectiveness of DevOps in developing and deploying business-critical applications.

In conclusion, adopting DevOps methodologies can help companies streamline their development and deployment process, reduce errors, and improve efficiency. This can allow companies to release new features faster, reduce their time to market, and ensure that applications are always available to clients.


10x the DevEX!

Recently there has been a shift in language surrounding Site Reliability Engineering (SRE) and DevOps toward Platform Engineering. Granted, these terms have been used in various ways for a while, but how terms are used shapes how markets evolve. This post provides a few key areas of thought around ways to ultimately get products to production faster. Remember…code means nothing until it's in production.

No matter the title, anyone in the pipeline touching production code is part of the team that ensures the success of critical applications in an enterprise. This is an important concept: everyone is part of the larger team, and how teams work together ultimately determines the success of any project.

The focus here will be on the actual development team, who are primarily writing the code. The code in question would be delivered as microservices running on a K8S cluster. Keep in mind the use of microservices lends itself to multiple teams individually creating services for other teams to consume. Already there are significant dependencies, and a single line of code has yet to be written.

Each team ultimately needs to consume one or more code repositories, one or more “testing” systems, at least one pipeline for continuous integration, continuous delivery/deployment (CI/CD), and many other systems to get code to production.

The Platform Engineering team is ultimately responsible for ensuring the “platforms” are working in a way to support the developers. Ensuring a great experience is paramount.

The question is: how do Platform Engineers continually improve the developer experience? The answer many teams turn to is to create powerful systems with guardrails or opinions on how they are to be utilized, based on the collective understanding of the team's modus operandi, or how they work most effectively.

The key is reducing repetitive work: the mundane, menial tasks which take a toll on developers' cognitive workload. Removing those tasks lets developers focus on writing good, clean code.

Giving developers the power to consume what is needed in a self-service fashion is one major step, as is offering a limited set of choices in which toolsets to use. Make it easy for developers to build and deliver software without removing the useful capabilities of the core services.

In the ideal world, limit restrictions on the how, allowing choices such as GitOps or ClickOps, or an API vs. a CLI vs. a UI. Use an "as a service" approach to create a system built iteratively by the entirety of the team, based on direct feedback.

What it all comes down to is the fact that everyone has different ways they want to work. It's the platform engineering team who can help ensure all of the tools are available and functional to create a great developer experience, which in turn will increase productivity and get new, shiny things to market faster.


90 days to success in DevOps

In most enterprises, onboarding new talent is typically left to the new employee. This is very unfortunate because the first 90 days of a new role will impact not only the new employee, but also their immersion into the culture and their view of the company. Bottom line: in most cases it is up to the new employee to "learn the ropes" in navigating their new position.

Starting a new role? Maybe this is the first foray into DevOps or Platform Engineering? What is needed to “hit the ground running” in a new role? Leaders in high positions of a company typically have a “100 day rule” to prove themselves. Let’s round it out with 3 months of progress for success.

The first 30 days

This month is usually the most important for everyone. The first thing a new employee needs to do is find a good mentor, especially if they are not assigned one. Seek out those with institutional knowledge who know how to navigate the company politics. Find someone who knows how the systems work and how to gain the access needed to be successful in the role. The mentor will have knowledge of "how things work" and what is seen as best practice for accomplishing the tasks at hand.

Some things to know:

  • Who’s who in the organization? – an org chart
  • How mature are they as a development organization?
  • What are the processes to put code into production?
  • Are the processes manual or automated?
  • What is the expectation of you on a day to day basis?

There is plenty more to uncover, but this will help to get started. Once the processes are understood and access is granted to perform the role, find some quick wins. Listen closely to where the frustrations may lie within your organization. Maybe the previous employee in this role didn’t automate certain tasks…submit a small PR to help.

It’s important to find some quick wins for many reasons. First it helps “break the ice”. It also shows strengths. Maybe there’s a way to improve some docs. There may be some ideas brought in from previous experience to help with a particular pain point.

The first 30 days is important to uncover the expectations of the team. Talking to stakeholders and “the customer” is important to get a big picture of what works and what doesn’t in order to find quick wins to make an impact early.

Days 30-60

The first 4 weeks are usually greeted with firehose sessions daily. Take a bit to digest everything. Review notes, brainstorm ideas, understand how the team and the company works. Armed with the broader knowledge about the organization, the team, and how things work at a high level it’s time to dig deeper into where the biggest impacts can be achieved.

In this 30 day block uncover:

  • How mature is the team?
  • What is the approval process for delivering code to production?
  • What steps are needed to approve PRs?
  • How does code flow through the various systems?
  • What amount of QA is performed?

Find ways to help the team be more efficient. Listen to the complaints and see where improvements could be made. Again, quick wins are key at this stage. As a fresh face, gaining access to otherwise inaccessible groups within the organization is often fairly easy. Keep an ear to the ground to find ways to make impactful suggestions.

It is important to remember that as people get to know a new employee, the interactions have lasting impacts. Ensure there is adequate listening, and ask relevant questions to get underneath a complaint. Avoid making offhand suggestions; rather, find some common issues. Start to tackle the common issues and socialize improvements. The key here is to avoid "calling the baby ugly".

Days 60-90

This is where a new employee's impact can accelerate. By this stage, the access needed to be successful should be fully in place. Hopefully there have been a few quick wins, new co-workers are impressed, and there has been a positive impact on the team.

Regular interaction with your leader should be established by now, along with a solid understanding of what is expected, and the mentor has made an impact. Knowing where to go for answers when there is a roadblock, and how to avoid the "potholes in the road", is key.

This stage is where the “rubber hits the road”. Gaining traction in the day to day and making regular impact to the business is routine at this point. This is where all of the knowledge gained in the first 60 days can be parlayed into a winning hand.

What success looks like

The first 3 months of any new position set the stage for every new employee. Creating a positive impression on the team helps build credibility within the broader organization and is key to instilling the confidence needed to be successful overall.

It may take far more than 90 days to feel comfortable with the role and that is okay. As long as there is a consistent method for learning and mistakes are not repeated the impact new employees make is usually sustainable for a long time. Make the best of it and keep track of the wins and losses for the inevitable review with “the boss”.

You got this. Go.


What’s missing in Kubernetes

Kubernetes is an open-source container orchestration system that automates the deployment, scaling, and management of containerized applications. It is widely used for its ability to manage containers at scale and is the de facto standard for container orchestration. However, despite its broad adoption, there are still a few missing pieces that need to be addressed to make it fully functional.

Network Setup

One of the main missing pieces in Kubernetes is a proper network setup. Kubernetes allows for the creation of multiple clusters, each with its own set of nodes, but it requires a well-defined network setup to manage communication between these clusters.

Without proper network setup, nodes in the same cluster may not be able to communicate with each other, or there may be issues with cross-cluster communication. This can result in application downtime, loss of data, and other issues that can impact business operations.

One solution to this problem is to use a software-defined networking (SDN) approach that allows for the creation of a virtual network infrastructure. An SDN controller can be used to manage the virtual network infrastructure and provide network services such as load balancing, routing, and security. With SDN, Kubernetes clusters can be properly connected, and communication between clusters can be streamlined.

Security

Another missing piece in Kubernetes is security. Kubernetes provides some basic security features such as role-based access control (RBAC) and network policies, but these are not always enough to secure the entire system.

Security is a critical aspect of any container orchestration system, and Kubernetes is no exception. Kubernetes clusters are complex systems with many components, and securing them requires a multi-layered approach.

To enhance security, Kubernetes clusters should be set up with secure communication channels and encrypted data storage. Additionally, it is important to create and enforce security policies that prevent unauthorized access to the system. This includes implementing identity and access management (IAM) policies, network segmentation, and regular vulnerability scanning.
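
As one hedged example of network segmentation, a default-deny ingress NetworkPolicy is a common starting point (the namespace is a placeholder):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}    # applies to every pod in the namespace
  policyTypes:
  - Ingress          # no ingress rules listed, so all inbound traffic is denied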

Monitoring and Logging

Kubernetes also lacks built-in monitoring and logging capabilities. While it includes some basic monitoring features, such as health checks and resource usage metrics, it does not provide a comprehensive monitoring and logging stack.

In a production environment, it is essential to have comprehensive monitoring and logging capabilities to ensure the health and availability of the system. Kubernetes clusters should be set up with a logging and monitoring stack that can collect and analyze logs and metrics from all nodes in the cluster. This can provide insights into the health and performance of the system, as well as help identify and troubleshoot issues.
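
One common way to add such a stack is the community Prometheus chart; a minimal sketch, assuming Helm is installed:

# Install the kube-prometheus-stack (Prometheus, Alertmanager, Grafana)
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack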

Conclusion

Kubernetes is a powerful container orchestration system, but there are still a few missing pieces that need to be addressed to make it fully functional. A well-defined network setup, enhanced security, and proper monitoring and logging are all essential components of a fully functional Kubernetes environment.

With the increasing adoption of containers and cloud-native applications, Kubernetes is becoming more important than ever. As organizations continue to adopt Kubernetes, it is important to ensure that the missing pieces are addressed to provide a reliable and scalable platform for containerized applications. By addressing these missing pieces, Kubernetes can continue to evolve and improve, providing a robust and secure platform for developers and IT teams.
