Advanced Environment Management: Taking Your DevOps Skills to the Next Level

In our previous guide, you learned the basics of environment variables and shell profiles. Now that you’re comfortable with setting variables and organizing your ~/.bashrc file, let’s explore some powerful advanced techniques that will make your DevOps workflow much more efficient.

Think of this as graduating from using basic tools to having a fully customized workshop that adapts to whatever project you’re working on.

Creating Functions in Your Profile

Functions are like creating your own custom commands. Instead of typing the same series of commands over and over, you can bundle them into a function and call it with a simple name.

What Are Functions?

A function is a mini-program that you write once and use many times. It’s like creating a recipe - you write down the steps once, then you can follow that recipe whenever you need it.

Think about your daily routine. Instead of remembering “open terminal, navigate to projects, list files, check git status” every time, you could just think “do my project setup routine.” Functions work the same way - they bundle multiple commands into one easy-to-remember name.

Let’s start with a simple example. Add this to your ~/.bashrc:

# Function to quickly navigate to your projects folder
go_projects() {
    cd ~/projects
    ls -la
}

Now, whenever you type go_projects in your terminal, it will take you to your projects folder and show you what’s inside. Instead of typing two separate commands every time, you just type one word and both actions happen automatically.

The structure is simple:

  • go_projects() - This is the function name (what you’ll type to call it)
  • { - Opens the function (like opening a recipe book)
  • cd ~/projects and ls -la - These are the commands that run (the recipe steps)
  • } - Closes the function (like closing the recipe book)
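
Once a function like go_projects is in your ~/.bashrc, reload the profile with source ~/.bashrc and you can inspect it before relying on it. A quick sketch (the function body here just mirrors the example above):

```shell
# Define (or reload) the function in your current shell
go_projects() {
    cd ~/projects || return   # '|| return' stops early if the folder is missing
    ls -la
}

# 'type' reports whether a name resolves to a function, alias, or binary
type go_projects

# 'declare -f' prints the stored body, handy for spot-checking your edits
declare -f go_projects
```

The `|| return` is a small defensive addition: without it, a missing ~/projects folder would leave you running ls -la in whatever directory you started from.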

Real DevOps Functions

Here are some practical functions that DevOps engineers use daily. Each one solves a common problem - instead of running multiple commands and remembering complex syntax, you just call one function name.

The Problem: You constantly need to check if Docker containers are running, what images you have, and their status. Normally, you’d run several docker ps and docker images commands.

The Solution: Bundle them into one function:

Quick Docker Container Check:

docker_status() {
    echo "=== Docker Containers ==="
    docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
    echo
    echo "=== Docker Images ==="
    docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"
}

The Problem: You need to quickly check if your server is running out of disk space, memory, or if the CPU is overloaded. Normally you’d run df -h, free -h, and uptime separately.

The Solution: One function that checks everything:

health_check() {
    echo "=== System Health Check ==="
    echo "Disk Usage:"
    df -h | head -5
    echo
    echo "Memory Usage:"
    free -h
    echo
    echo "CPU Load:"
    uptime
}

The Problem: You’re working on a Git project and need to quickly see what branch you’re on, what files have changed, and what recent commits were made. This usually requires multiple Git commands.

The Solution: A comprehensive Git overview function:

git_overview() {
    echo "=== Git Repository Status ==="
    echo "Current branch: $(git branch --show-current)"
    echo "Status:"
    git status --short
    echo
    echo "Recent commits:"
    git log --oneline -5
}

Functions with Parameters

Functions can also accept inputs (called parameters). This is like giving your recipe different ingredients to work with.

Think of it like a coffee machine - you press one button (the function name), but you can choose different settings (parameters) like “espresso” or “latte” to get different results.

Here’s a function that helps you quickly switch between different environments. Instead of remembering and typing different environment variables every time, you just tell the function which environment you want:

switch_env() {
    local environment=$1
    
    if [ -z "$environment" ]; then
        echo "Usage: switch_env [dev|staging|prod]"
        echo "Current environment: ${CURRENT_ENV:-not set}"
        return
    fi
    
    case $environment in
        dev)
            export DATABASE_URL="localhost:5432/myapp_dev"
            export API_URL="http://localhost:3000"
            export LOG_LEVEL="debug"
            ;;
        staging)
            export DATABASE_URL="staging-db.company.com:5432/myapp"
            export API_URL="https://api-staging.company.com"
            export LOG_LEVEL="info"
            ;;
        prod)
            export DATABASE_URL="prod-db.company.com:5432/myapp"
            export API_URL="https://api.company.com"
            export LOG_LEVEL="error"
            ;;
        *)
            echo "Unknown environment: $environment"
            return 1
            ;;
    esac
    
    export CURRENT_ENV=$environment
    echo "Switched to $environment environment"
}

Now you can type switch_env dev or switch_env prod to quickly change your environment settings.

Let’s break down how this works:

  • local environment=$1 - This captures the first word you type after the function name
  • The case statement is like a multiple-choice question - “if they said ‘dev’, do this; if they said ‘prod’, do that”
  • Each environment sets up different variables automatically
  • If someone types an unknown environment, it shows an error message

This saves you from manually typing export DATABASE_URL=... and other variables every time you switch environments.
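
The reason switch_env uses export (rather than a plain assignment) is that exported variables are passed to child processes - your app, docker compose, kubectl, and so on - while plain variables stay local to the current shell. A quick demonstration of that behavior:

```shell
# 'export' makes a variable visible to child processes
export CURRENT_ENV="dev"
bash -c 'echo "child sees: $CURRENT_ENV"'    # prints: child sees: dev

# A plain assignment stays local to this shell
LOCAL_ONLY="x"
bash -c 'echo "child sees: ${LOCAL_ONLY:-nothing}"'    # prints: child sees: nothing
```

Either way, the variables only change in the shell where you ran switch_env - other open terminals keep their old settings.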

Using Different Profiles for Different Projects

As you work on multiple projects, you’ll find that each one has its own tools, environment variables, and shortcuts. Instead of cluttering your main ~/.bashrc file, you can create project-specific configurations.

This is like having different toolboxes for different jobs. A carpenter doesn’t carry plumbing tools, and a plumber doesn’t need woodworking tools. Similarly, your Node.js project doesn’t need Python-specific settings, and vice versa.

The Problem

Let’s say you’re juggling these projects:

  • Web Application: Needs Node.js, specific port settings, database connections
  • Data Analysis: Requires Python, Jupyter notebooks, data file paths
  • Infrastructure: Uses Docker, Kubernetes, cloud provider tools

If you put all the environment variables and shortcuts for these projects in your main ~/.bashrc file, it becomes a mess. Plus, variables from one project might conflict with another (like if both need different database URLs).

The Solution: Project-Specific Environments

Instead of one giant configuration file, we’ll create a system where your shell automatically loads the right configuration based on which project you’re working on.

It’s like having your computer automatically switch to “work mode” or “gaming mode” - everything adapts to what you’re trying to do.

First, let’s create a special folder for project configurations:

mkdir -p ~/.config/projects

This creates a dedicated place to store all your project-specific settings. The -p flag means “create parent directories if they don’t exist” - so if ~/.config doesn’t exist, it’ll create that too.

Now, for each project, create a configuration file. Think of each file as a “recipe” for setting up that project’s environment.

~/.config/projects/webapp.sh (for your Node.js project):

# Node.js web application environment
export PROJECT_NAME="webapp"
export NODE_ENV="development"
export PORT="3000"
export DATABASE_URL="postgresql://localhost:5432/webapp_dev"

# Project-specific aliases
alias start="npm run dev"
alias test="npm test"
alias build="npm run build"

echo "Loaded webapp environment"

What this does: When you load this project, it sets up everything you need for web development - the right database connection, port number, and creates shortcuts like start (instead of typing npm run dev).

~/.config/projects/datatools.sh (for your Python project):

# Python data processing environment  
export PROJECT_NAME="datatools"
export PYTHONPATH="$HOME/projects/datatools/src"
export DATA_DIR="$HOME/data"
export OUTPUT_DIR="$HOME/output"

# Project-specific aliases
alias run="python main.py"
alias test="pytest"
alias notebook="jupyter notebook"

echo "Loaded datatools environment"

What this does: This sets up Python-specific paths and creates shortcuts for data analysis work. When you type run, it automatically runs your main Python script. When you type notebook, it starts Jupyter.

Automatic Project Loading

Now comes the magic part - we’ll create a function that loads these project configurations automatically. Add this function to your ~/.bashrc:

load_project() {
    local project_name=$1
    local project_config="$HOME/.config/projects/${project_name}.sh"
    
    if [ -f "$project_config" ]; then
        source "$project_config"
    else
        echo "Project configuration not found: $project_name"
        echo "Available projects:"
        ls ~/.config/projects/*.sh 2>/dev/null | xargs -n1 basename | sed 's/\.sh$//' | sed 's/^/  - /'
    fi
}

# Quick aliases for common projects
alias work-webapp="load_project webapp"
alias work-data="load_project datatools"

How this works:

  • load_project webapp looks for a file called webapp.sh in your projects config folder
  • If it finds the file, it runs all the commands in it (that’s what source does)
  • If it doesn’t find the file, it shows you which projects are available
  • The aliases at the bottom create shortcuts - work-webapp is easier to type than load_project webapp

Now you can quickly switch between project environments by typing work-webapp or work-data.

Even Smarter: Auto-Detection

You can make your shell even smarter by automatically detecting which project you’re working on based on your current directory. This is like having your phone automatically switch to silent mode when you enter a movie theater.

Here’s how it works: When you navigate to a project folder, your shell checks if there’s a special file that tells it which project configuration to load. If it finds one, it automatically loads the right environment.

auto_load_project() {
    local current_dir=$(pwd)
    
    # Look for a .project file in current directory or parent directories
    while [ "$current_dir" != "/" ]; do
        if [ -f "$current_dir/.project" ]; then
            local project_name=$(cat "$current_dir/.project")
            load_project "$project_name"
            return
        fi
        current_dir=$(dirname "$current_dir")
    done
}

# Override the cd command to auto-load projects
cd() {
    builtin cd "$@"
    auto_load_project
}

What this does:

  • Every time you change directories (cd), it automatically checks for project configuration
  • It looks for a file called .project in your current folder and all parent folders
  • If it finds the file, it reads the project name from inside and loads that configuration
  • builtin cd "$@" means “do the normal cd command first, then do our extra stuff”

Then, in each project directory, create a .project file containing the project name:

# In your webapp project directory
echo "webapp" > .project

# In your datatools project directory  
echo "datatools" > .project

The magic moment: Now, whenever you navigate to a project directory (cd ~/projects/webapp), the right environment loads automatically! You’ll see “Loaded webapp environment” appear, and all your project-specific aliases and variables will be ready to use.

This means you can jump between projects and your shell automatically adapts to each one. It’s like having a smart assistant that prepares your workspace before you even ask.
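
One refinement worth considering: the cd override runs auto_load_project on every directory change, so the same config file gets re-sourced each time you move around inside a project (and you'll see "Loaded webapp environment" repeatedly). A sketch of a guard, assuming load_project exports a CURRENT_PROJECT variable (the complete setup at the end of this guide does exactly that):

```shell
auto_load_project() {
    local current_dir
    current_dir=$(pwd)

    while [ "$current_dir" != "/" ]; do
        if [ -f "$current_dir/.project" ]; then
            local project_name
            project_name=$(cat "$current_dir/.project")
            # Only re-source when we actually moved to a different project
            if [ "${CURRENT_PROJECT:-}" != "$project_name" ]; then
                load_project "$project_name"
            fi
            return
        fi
        current_dir=$(dirname "$current_dir")
    done
}
```

With the guard in place, moving between subdirectories of the same project is silent; the load message only appears when you cross into a different project.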

Learning About Other Shells: Meet Zsh

While bash is the default shell on most Linux systems, there’s another popular shell called Zsh (Z Shell) that many developers love. It’s like bash’s more user-friendly cousin.

Think of shells like different cars - they all get you from point A to point B, but some have better features, more comfortable seats, or advanced safety features. Bash is like a reliable sedan that works everywhere, while Zsh is like a luxury car with extra features that make the ride more pleasant.

Why Consider Zsh?

Zsh offers several advantages that can make your daily work easier:

  • Better auto-completion: It can complete commands, file names, and even command options. For example, if you type git ch and press Tab, it might complete to git checkout and then show you available branches.

  • Spelling correction: If you type cdd instead of cd, Zsh will ask “did you mean cd?” and let you press ‘y’ to fix it automatically.

  • Themes and plugins: Easy customization with frameworks like Oh My Zsh. You can change how your prompt looks, add colors, and install plugins for specific tools.

  • Better history: More powerful command history features. For example, you can search through your command history more intelligently.

The best part? Most of your bash knowledge transfers directly to Zsh. It’s like upgrading from a basic phone to a smartphone - you still know how to make calls, but now you have extra features available.

Installing Zsh

On Ubuntu/Debian:

sudo apt install zsh

On CentOS/RHEL:

sudo yum install zsh

To make zsh your default shell:

chsh -s $(which zsh)

Oh My Zsh: Zsh Made Easy

Oh My Zsh is a framework that makes configuring zsh much easier. Think of it like a starter kit - instead of having to configure everything from scratch, it gives you a beautiful, functional setup right out of the box.

It’s like buying a car that comes with GPS, heated seats, and a premium sound system already installed, versus installing each feature separately.

Install it with:

sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"

After installation, you’ll have a new configuration file: ~/.zshrc (similar to ~/.bashrc for bash). This is where you’ll customize your Zsh experience.

What you’ll notice immediately:

  • Your terminal prompt looks prettier with colors and useful information
  • Tab completion works better (try typing cd and pressing Tab twice)
  • You get helpful plugins for common tools

Useful Zsh Plugins for DevOps

Plugins are like apps for your shell - they add specific functionality for different tools. Here are some that make DevOps work much easier.

Add these to your ~/.zshrc file in the plugins section:

plugins=(
    git          # Git aliases and auto-completion
    docker       # Docker command completion  
    kubectl      # Kubernetes command completion
    terraform    # Terraform command completion
    aws          # AWS CLI completion
    node         # Node.js/npm completion
)

What each plugin does:

  • git: When you type git checkout and press Tab, it shows you available branches
  • docker: Auto-completes container names, image names, and Docker commands
  • kubectl: Helps complete Kubernetes resource names and commands (no more typing long pod names!)
  • terraform: Completes Terraform commands and resource types
  • aws: Auto-completes AWS services and parameters
  • node: Helps with npm commands and package names

Think of it like having a smart assistant that remembers all the complex names and commands for you.

Migrating Your Bash Configuration

The good news is that most of your bash configuration will work in zsh without changes. Your functions, aliases, and environment variables can be copied over.

Here’s how to migrate:

# Copy your bash functions and aliases to zsh
cat ~/.bashrc >> ~/.zshrc

Then edit ~/.zshrc to clean up any bash-specific parts.

What you might need to adjust:

  • Some advanced bash-specific features might work slightly differently
  • Zsh has different history settings (but usually better ones)
  • Prompt customization works differently (but Oh My Zsh handles most of this)

Pro tip: Keep your original ~/.bashrc as a backup. You can always switch back to bash if needed by changing your default shell.
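
An alternative to copying everything over: keep your shared aliases and functions in a single file and source it from both rc files, so a fix lands in one place. The ~/.shell_common name below is just a convention, not a standard - pick any filename you like:

```shell
# Add this to both ~/.bashrc and ~/.zshrc:
if [ -f "$HOME/.shell_common" ]; then
    . "$HOME/.shell_common"   # '.' is the POSIX spelling of 'source'
fi
```

Shell-specific settings (bash history options, Oh My Zsh configuration) then stay in their own rc file, while everything portable lives in the shared one.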

Integrating with Docker and Kubernetes

Now let’s see how environment variables and profiles integrate with the tools you’ll use most as a DevOps engineer. This is where all the concepts we’ve learned really pay off - instead of remembering complex commands and typing the same things repeatedly, your customized shell does the heavy lifting.

Docker Integration

Docker uses environment variables extensively. Here are some practical patterns that solve real problems you’ll face:

The Problem: Starting your development environment requires running multiple Docker commands, setting environment variables, and checking that everything started correctly. It’s easy to forget a step or make a typo.

The Solution: Create functions that handle entire workflows:

Development Environment with Docker:

# Function to start your development stack
dev_start() {
    export COMPOSE_PROJECT_NAME="myapp-dev"
    export DATABASE_PASSWORD="dev-password"
    export API_KEY="dev-api-key"
    
    docker-compose -f docker-compose.dev.yml up -d
    echo "Development environment started"
    docker-compose ps
}

# Function to stop and clean up
dev_stop() {
    docker-compose -f docker-compose.dev.yml down
    docker system prune -f
    echo "Development environment stopped and cleaned"
}

What this does: Instead of remembering multiple commands and environment variables, you just type dev_start and your entire development environment spins up. The function sets the right environment variables, starts the containers, and shows you the status. dev_stop cleans everything up.

Environment-Specific Docker Builds:

docker_build() {
    local environment=${1:-dev}
    local image_name="myapp:${environment}-$(date +%Y%m%d)"
    
    export BUILD_ENV=$environment
    export IMAGE_TAG=$image_name
    
    docker build -t "$image_name" --build-arg "ENV=$environment" .
    echo "Built image: $image_name"
}

What this does: This function builds Docker images with environment-specific settings. If you type docker_build prod, it creates a production-ready image with today’s date in the tag. If you don’t specify an environment, it defaults to ‘dev’. The function automatically generates unique image names so you don’t accidentally overwrite existing images.
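
The ${1:-dev} syntax that provides the default is bash's default-value parameter expansion, and it's worth knowing on its own. A small sketch of just the tag-building logic (build_tag is a hypothetical helper, not part of the function above):

```shell
build_tag() {
    # ${1:-dev}: use the first argument, or "dev" when none is given
    local environment=${1:-dev}
    echo "myapp:${environment}-$(date +%Y%m%d)"
}

build_tag          # -> myapp:dev-<today's date>
build_tag prod     # -> myapp:prod-<today's date>
```

The related form ${1:?message} goes one step further and aborts with an error message when the argument is missing - useful for functions where a default would be dangerous.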

Kubernetes Integration

Kubernetes also relies heavily on environment variables and configuration. The challenge with Kubernetes is that commands are often long and complex, with many options to remember.

The Problem: You’re working with multiple Kubernetes clusters (development, staging, production) and need to switch between them frequently. Each time you switch, you need to remember different cluster names, namespaces, and configurations.

The Solution: Functions that handle context switching and common operations:

Kubernetes Context Management:

k8s_switch() {
    local context=$1
    
    case $context in
        dev)
            kubectl config use-context development-cluster
            export KUBE_NAMESPACE="myapp-dev"
            ;;
        staging)
            kubectl config use-context staging-cluster  
            export KUBE_NAMESPACE="myapp-staging"
            ;;
        prod)
            kubectl config use-context production-cluster
            export KUBE_NAMESPACE="myapp-prod"
            ;;
        *)
            echo "Available contexts: dev, staging, prod"
            return
            ;;
    esac
    
    echo "Switched to $context cluster"
    kubectl get nodes
}

What this does: Instead of remembering complex cluster names and manually switching contexts, you just type k8s_switch dev or k8s_switch prod. The function automatically switches to the right cluster and sets the correct namespace, then shows you the cluster nodes to confirm you’re connected.

Deployment Functions:

k8s_deploy() {
    local app_name=$1
    local image_tag=${2:-latest}
    
    if [ -z "$app_name" ]; then
        echo "Usage: k8s_deploy <app-name> [image-tag]"
        return
    fi
    
    kubectl set image deployment/$app_name $app_name=myregistry/$app_name:$image_tag -n $KUBE_NAMESPACE
    kubectl rollout status deployment/$app_name -n $KUBE_NAMESPACE
}

k8s_logs() {
    local app_name=$1
    kubectl logs -f deployment/$app_name -n $KUBE_NAMESPACE
}

Putting It All Together: A Complete Workflow

Let’s see how all these pieces work together in a real DevOps workflow:

  1. Navigate to project: cd ~/projects/webapp
  2. Auto-load project environment: (happens automatically)
  3. Switch to development environment: switch_env dev
  4. Start local development stack: dev_start
  5. Switch Kubernetes context: k8s_switch dev
  6. Deploy to development cluster: k8s_deploy webapp v1.2.3
  7. Check application logs: k8s_logs webapp

All of this is possible because of the environment variables, functions, and profiles you’ve set up!
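
And if you find yourself running steps 3-6 together often, you can wrap them too. A sketch, assuming the switch_env, dev_start, k8s_switch, and k8s_deploy functions from this guide are already in your profile (the ship_dev name and its version argument are just illustrations):

```shell
ship_dev() {
    # ${1:?...} aborts with the usage message if no tag is given
    local version=${1:?Usage: ship_dev <image-tag>}

    switch_env dev               # exports DATABASE_URL, API_URL, LOG_LEVEL
    dev_start                    # brings up the local docker compose stack
    k8s_switch dev               # points kubectl at the development cluster
    k8s_deploy webapp "$version" # rolls out the requested image tag
}
```

Functions composing other functions like this is the natural endpoint of the approach: each building block stays small and testable, and the wrappers just sequence them.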

Best Practices for Advanced Setups

Keep It Organized

As your configuration grows, organization becomes crucial:

# ~/.bashrc structure
# ==========================================
# 1. Environment Variables
# 2. PATH modifications  
# 3. Aliases
# 4. Functions
# 5. Project configurations
# 6. Tool integrations

Test Your Functions

Always test new functions in a separate terminal before adding them to your profile:

# Test a function before adding to profile
my_test_function() {
    echo "This is a test"
}

# Call it to test
my_test_function

Backup Your Configuration

Your shell configuration becomes valuable over time. Back it up:

# Create a backup
cp ~/.bashrc ~/.bashrc.backup

# Or better yet, use version control
cd ~
git init
git add .bashrc .zshrc
git commit -m "Initial shell configuration"

Document Your Functions

Add comments to complex functions:

# Deploy application to Kubernetes
# Usage: k8s_deploy <app-name> [image-tag] [namespace]
# Example: k8s_deploy myapp v1.2.3 production
k8s_deploy() {
    # Function implementation here
}

Troubleshooting Common Issues

Function Not Found: Make sure you’ve reloaded your profile after adding functions:

source ~/.bashrc

Variables Not Set: Check if you’re in the right environment and the function ran successfully:

echo $CURRENT_ENV

Docker/Kubernetes Commands Failing: Verify your environment variables are set correctly:

env | grep -E "(DOCKER|KUBE|AWS)"

Ready-to-Use Advanced Configuration

Here’s a complete advanced setup you can add to your ~/.bashrc:

Project Management System

# Project configuration directory
export PROJECTS_CONFIG_DIR="$HOME/.config/projects"

# Create projects config directory if it doesn't exist
mkdir -p "$PROJECTS_CONFIG_DIR"

# Load project environment
load_project() {
    local project_name=$1
    local project_config="$PROJECTS_CONFIG_DIR/${project_name}.sh"
    
    if [ -f "$project_config" ]; then
        source "$project_config"
        export CURRENT_PROJECT=$project_name
    else
        echo "Project '$project_name' not found. Available projects:"
        ls "$PROJECTS_CONFIG_DIR"/*.sh 2>/dev/null | xargs -n1 basename | sed 's/\.sh$//' | sed 's/^/  - /'
    fi
}

# Auto-detect project based on .project file
auto_load_project() {
    local current_dir=$(pwd)
    
    while [ "$current_dir" != "/" ]; do
        if [ -f "$current_dir/.project" ]; then
            local project_name=$(cat "$current_dir/.project")
            load_project "$project_name"
            return
        fi
        current_dir=$(dirname "$current_dir")
    done
}

# Enhanced cd command
cd() {
    builtin cd "$@"
    auto_load_project
}

Environment Switching System

# Environment switching with validation
switch_env() {
    local env=$1
    
    if [ -z "$env" ]; then
        echo "Current environment: ${CURRENT_ENV:-not set}"
        echo "Usage: switch_env [dev|staging|prod]"
        return
    fi
    
    case $env in
        dev)
            export CURRENT_ENV="dev"
            export LOG_LEVEL="debug"
            export DATABASE_URL="localhost:5432/myapp_dev"
            export API_URL="http://localhost:3000"
            ;;
        staging)
            export CURRENT_ENV="staging"
            export LOG_LEVEL="info"  
            export DATABASE_URL="staging-db.company.com:5432/myapp"
            export API_URL="https://api-staging.company.com"
            ;;
        prod)
            export CURRENT_ENV="prod"
            export LOG_LEVEL="error"
            export DATABASE_URL="prod-db.company.com:5432/myapp"
            export API_URL="https://api.company.com"
            ;;
        *)
            echo "Unknown environment: $env"
            echo "Available: dev, staging, prod"
            return 1
            ;;
    esac
    
    echo "Environment switched to: $env"
}

# Quick environment aliases
alias env-dev="switch_env dev"
alias env-staging="switch_env staging"  
alias env-prod="switch_env prod"

Docker Integration Functions

# Docker development workflow
docker_dev() {
    local action=$1
    
    case $action in
        start)
            export COMPOSE_PROJECT_NAME="${CURRENT_PROJECT:-myapp}-dev"
            docker-compose -f docker-compose.dev.yml up -d
            docker-compose ps
            ;;
        stop)
            docker-compose -f docker-compose.dev.yml down
            ;;
        logs)
            docker-compose -f docker-compose.dev.yml logs -f
            ;;
        restart)
            docker-compose -f docker-compose.dev.yml restart
            ;;
        *)
            echo "Usage: docker_dev [start|stop|logs|restart]"
            ;;
    esac
}

# Docker cleanup function
docker_cleanup() {
    echo "Cleaning up Docker resources..."
    docker container prune -f
    docker image prune -f
    docker volume prune -f
    docker network prune -f
    echo "Docker cleanup complete"
}

Kubernetes Integration Functions

# Kubernetes context and namespace management
k8s_env() {
    local env=$1
    
    case $env in
        dev)
            kubectl config use-context dev-cluster
            export KUBE_NAMESPACE="myapp-dev"
            ;;
        staging)
            kubectl config use-context staging-cluster
            export KUBE_NAMESPACE="myapp-staging"
            ;;
        prod)
            kubectl config use-context prod-cluster
            export KUBE_NAMESPACE="myapp-prod"
            ;;
        *)
            echo "Current context: $(kubectl config current-context)"
            echo "Current namespace: ${KUBE_NAMESPACE:-default}"
            echo "Usage: k8s_env [dev|staging|prod]"
            return
            ;;
    esac
    
    echo "Kubernetes environment: $env"
    echo "Context: $(kubectl config current-context)"
    echo "Namespace: $KUBE_NAMESPACE"
}

# Quick deployment function
k8s_deploy() {
    local app=$1
    local image_tag=${2:-latest}
    local namespace=${KUBE_NAMESPACE:-default}
    
    if [ -z "$app" ]; then
        echo "Usage: k8s_deploy <app-name> [image-tag]"
        return
    fi
    
    kubectl set image deployment/$app $app=myregistry/$app:$image_tag -n $namespace
    kubectl rollout status deployment/$app -n $namespace
}

System Status Functions

# Comprehensive system check
system_status() {
    echo "=== System Status ==="
    echo "Date: $(date)"
    echo "Uptime: $(uptime -p)"
    echo "Load: $(uptime | awk -F'load average:' '{print $2}')"
    echo
    echo "=== Disk Usage ==="
    df -h | head -5
    echo
    echo "=== Memory Usage ==="
    free -h
    echo
    echo "=== Docker Status ==="
    if command -v docker >/dev/null 2>&1; then
        docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" | head -10
    else
        echo "Docker not installed"
    fi
}

# Development environment status
dev_status() {
    echo "=== Development Environment Status ==="
    echo "Current Project: ${CURRENT_PROJECT:-not set}"
    echo "Current Environment: ${CURRENT_ENV:-not set}"
    echo "Kubernetes Context: $(kubectl config current-context 2>/dev/null || echo 'not set')"
    echo "Kubernetes Namespace: ${KUBE_NAMESPACE:-not set}"
    echo "Docker Compose Project: ${COMPOSE_PROJECT_NAME:-not set}"
}

Copy these snippets into your ~/.bashrc file and restart your terminal to start using advanced environment management!