
Common Docker Workflow (common.docker.yaml)

The Common Docker Workflow is a reusable template that provides standardized Docker image building and pushing. It's designed to be called by other workflows (such as build.docker.yaml) to eliminate code duplication and ensure consistent build processes across all services.

🔧 Overview

  • File: .github/workflows/common.docker.yaml
  • Type: Reusable workflow (workflow_call)
  • Purpose: Standardized Docker build and push logic
  • Registry: Google Container Registry (GCR)
  • Features: Image caching, duplicate build prevention, multi-environment support

📥 Input Parameters

The workflow accepts the following required and optional inputs:

Required Inputs

ENVIRONMENT: # Target environment (development/staging/production)
CONTEXT: # Docker build context directory
SERVICE: # Service name for image tagging
SANDBOX_ID: # Unique identifier for sandbox deployments
DOCKERFILE: # Dockerfile name to use for building

Optional Inputs

BUILD_SCRIPT: # Pre-build script to execute (default: none)
WITH_NPMRC: # Whether to create .npmrc for GitHub Packages (default: false)
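For reference, the reusable trigger for these inputs would be declared along these lines at the top of common.docker.yaml. This is a minimal sketch; the types, defaults, and ordering shown here are assumptions rather than the exact definitions:

on:
  workflow_call:
    inputs:
      ENVIRONMENT:
        type: string
        required: true
      CONTEXT:
        type: string
        required: true
      SERVICE:
        type: string
        required: true
      SANDBOX_ID:
        type: string
        required: true
      DOCKERFILE:
        type: string
        required: true
      BUILD_SCRIPT:
        type: string
        required: false
        default: ''
      WITH_NPMRC:
        type: string
        required: false
        default: 'false'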

🏗️ Build Process

1. Environment Setup

# Generate dynamic variables
commit_tag=${{ github.sha }}
sandbox_tag=${{ inputs.SANDBOX_ID }}
image_name=gcr.io/dash-208313/dashclicks-${{ inputs.ENVIRONMENT }}-${{ inputs.SERVICE }}
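If later steps need these values, they are typically exported through the GITHUB_ENV file. The sketch below mirrors the variable names above, but the exact export mechanism is an assumption about the implementation:

# Make the generated values available to subsequent steps (illustrative)
echo "commit_tag=${{ github.sha }}" >> "$GITHUB_ENV"
echo "sandbox_tag=${{ inputs.SANDBOX_ID }}" >> "$GITHUB_ENV"
echo "image_name=gcr.io/dash-208313/dashclicks-${{ inputs.ENVIRONMENT }}-${{ inputs.SERVICE }}" >> "$GITHUB_ENV"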

2. Google Cloud Authentication

# Setup Google Cloud CLI
echo '${{ secrets.GKE_SA_KEY }}' > key.json
gcloud auth activate-service-account --key-file=key.json
gcloud --quiet config set project ${{ secrets.GKE_PROJECT }}
gcloud --quiet auth configure-docker gcr.io
rm key.json

3. Duplicate Build Prevention

# Check if image with commit hash already exists
if gcloud container images list-tags "$image_name" \
     --filter="tags=$commit_tag" \
     --format="get(tags)" | grep -q .; then
  echo "Image exists - skipping build"
else
  echo "Image does not exist - proceeding with build"
fi
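To let later steps react to this check, the result can be recorded as a step output. The sketch below is illustrative; the output name image_exists and the step wiring are assumptions, not necessarily what the workflow uses:

# Record the check result as a step output (hypothetical output name)
if gcloud container images list-tags "$image_name" \
     --filter="tags=$commit_tag" \
     --format="get(tags)" | grep -q .; then
  echo "image_exists=true" >> "$GITHUB_OUTPUT"
else
  echo "image_exists=false" >> "$GITHUB_OUTPUT"
fi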

4. Conditional Build Steps

The workflow builds a new image only if no image tagged with the current commit hash already exists:

Node.js Service Setup (if not api-ai)

# Setup Node.js environment
uses: actions/setup-node@v3
with:
  node-version: '18'
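In workflow terms, this step is skipped for the Deno-based api-ai service. A sketch of how that condition might be expressed (the exact expression is an assumption):

- name: Setup Node.js
  if: inputs.SERVICE != 'api-ai'
  uses: actions/setup-node@v3
  with:
    node-version: '18'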

Pre-build Script Execution (if BUILD_SCRIPT provided)

# Execute custom build script
${{ inputs.BUILD_SCRIPT }}

GitHub Packages Authentication (if WITH_NPMRC is true)

cd "${{ inputs.CONTEXT }}"
echo "@dashclicks-llc:registry=https://npm.pkg.github.com" > .npmrc
echo "//npm.pkg.github.com/:_authToken=${{ secrets.PACKAGE_TOKEN }}" >> .npmrc

Docker Build and Push

cd "${{ inputs.CONTEXT }}"
docker build --cache-from=type=registry,ref=$image_name \
  --cache-to=type=inline \
  -t $image_name:$commit_tag \
  -f ${{ inputs.DOCKERFILE }} .

docker push $image_name:$commit_tag

# Add sandbox tag to the built image
gcloud container images add-tag \
  $image_name:$commit_tag \
  $image_name:$sandbox_tag \
  --quiet

5. Tag Management for Existing Images

If the image already exists, only add the sandbox tag:

gcloud container images add-tag \
  $image_name:$commit_tag \
  $image_name:$sandbox_tag \
  --quiet
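You can confirm that the sandbox tag was applied by listing the image's tags. This is a standard gcloud query, not part of the workflow itself:

# Verify that the sandbox tag now points at the commit image
gcloud container images list-tags "$image_name" \
  --filter="tags=$sandbox_tag" \
  --format="table(digest, tags)"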

🎯 Optimization Features

Docker Layer Caching

The workflow uses Docker BuildKit's registry-based layer caching:

--cache-from=type=registry,ref=$image_name  # Pull existing layers
--cache-to=type=inline # Store cache in image

Benefits:

  • Faster builds by reusing unchanged layers
  • Reduced network bandwidth
  • Lower build costs and execution time

Duplicate Build Prevention

Prevents unnecessary rebuilds by checking existing images:

  1. Check: Query GCR for images with current commit hash
  2. Skip: If image exists, only update sandbox tag
  3. Build: If image doesn't exist, execute full build process
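Assuming the existence check exposes a step output as sketched earlier (step id check, output image_exists, both hypothetical) and the generated variables were exported via GITHUB_ENV, the two paths can be expressed as mutually exclusive steps:

- name: Build and push image
  if: steps.check.outputs.image_exists == 'false'
  run: |
    docker build -t "$image_name:$commit_tag" -f "${{ inputs.DOCKERFILE }}" "${{ inputs.CONTEXT }}"
    docker push "$image_name:$commit_tag"
    gcloud container images add-tag "$image_name:$commit_tag" "$image_name:$sandbox_tag" --quiet

- name: Re-tag existing image
  if: steps.check.outputs.image_exists == 'true'
  run: |
    gcloud container images add-tag "$image_name:$commit_tag" "$image_name:$sandbox_tag" --quiet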

Advantages:

  • Saves build time and resources
  • Prevents rate limiting issues
  • Ensures consistent image content across environments

Environment-Specific Images

Each service gets environment-specific image names:

gcr.io/dash-208313/dashclicks-development-api-internal
gcr.io/dash-208313/dashclicks-staging-api-internal
gcr.io/dash-208313/dashclicks-production-api-internal

🔐 Security & Authentication

Required Secrets

GKE_SA_KEY: # Google Cloud service account JSON key
GKE_PROJECT: # Google Cloud project ID
PACKAGE_TOKEN: # GitHub Packages authentication token (optional)

Service Account Permissions

The Google Cloud service account requires:

{
  "roles": [
    "roles/storage.admin",                  // Push/pull container images
    "roles/containerregistry.serviceAgent"  // Registry management
  ]
}
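These roles can be granted with gcloud's IAM binding command; the service account email below is a placeholder, and the same command would be repeated for the registry service-agent role listed above:

# Grant registry push/pull permissions (service account email is hypothetical)
gcloud projects add-iam-policy-binding dash-208313 \
  --member="serviceAccount:ci-builder@dash-208313.iam.gserviceaccount.com" \
  --role="roles/storage.admin"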

Temporary Credential Handling

# Secure credential handling
echo '${{ secrets.GKE_SA_KEY }}' > key.json # Create temp file
gcloud auth activate-service-account --key-file=key.json
rm key.json # Remove temp file
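If you want the key file removed even when an intermediate command fails, a shell trap is a common hardening step. This is a suggestion, not what the workflow currently does:

# Ensure the key file is deleted even if a later command fails
trap 'rm -f key.json' EXIT
echo '${{ secrets.GKE_SA_KEY }}' > key.json
gcloud auth activate-service-account --key-file=key.json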

🔄 Service Integration Patterns

Standard Node.js Service

uses: ./.github/workflows/common.docker.yaml
with:
  ENVIRONMENT: production
  SANDBOX_ID: v1.2.3
  CONTEXT: './internal'
  SERVICE: 'api-internal'
  BUILD_SCRIPT: 'cp -r shared/* internal/api/v1'
  DOCKERFILE: 'Dockerfile'
  WITH_NPMRC: 'false'
secrets: inherit

GitHub Packages Service

uses: ./.github/workflows/common.docker.yaml
with:
  ENVIRONMENT: production
  SANDBOX_ID: v1.2.3
  CONTEXT: './notifications'
  SERVICE: 'notifications'
  BUILD_SCRIPT: ''
  DOCKERFILE: 'Dockerfile'
  WITH_NPMRC: 'true'
secrets: inherit

Deno Service (AI)

uses: ./.github/workflows/common.docker.yaml
with:
  ENVIRONMENT: production
  SANDBOX_ID: v1.2.3
  CONTEXT: './ai-service'
  SERVICE: 'api-ai'
  BUILD_SCRIPT: ''
  DOCKERFILE: 'Dockerfile'
  WITH_NPMRC: 'false'
secrets: inherit

🐳 Docker Build Strategies

Multi-stage Dockerfile Support

The workflow supports various Dockerfile configurations:

# Standard multi-stage build
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

FROM node:18-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]

Custom Dockerfile Naming

Support for specialized Dockerfiles:

  • Dockerfile - Standard build
  • Dockerfile.general - General queue manager
  • Dockerfile.puppeteer - Puppeteer-enabled queue manager
  • Dockerfile.production - Production-optimized build
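A caller selects one of these simply by passing the file name through the DOCKERFILE input; the context and service values below are illustrative:

uses: ./.github/workflows/common.docker.yaml
with:
  ENVIRONMENT: staging
  SANDBOX_ID: v1.2.3
  CONTEXT: './queue-manager'
  SERVICE: 'queue-manager-puppeteer'
  DOCKERFILE: 'Dockerfile.puppeteer'
  WITH_NPMRC: 'false'
secrets: inherit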

📊 Build Monitoring

Output Information

Each build provides detailed information:

# Environment variables set
commit_tag: ${{ github.sha }}
sandbox_tag: ${{ inputs.SANDBOX_ID }}
image_name: gcr.io/dash-208313/dashclicks-[env]-[service]

Build Status Tracking

The workflow outputs:

  • Image existence check results
  • Build execution confirmation
  • Tag assignment success
  • Registry push verification

Build Performance

  • Layer caching: 60-80% faster rebuilds
  • Duplicate prevention: Skip unnecessary builds
  • Parallel execution: Multiple services build simultaneously
  • Optimized base images: Smaller, faster Alpine Linux images

Resource Efficiency

  • Minimal resource usage: Only build when necessary
  • Cache sharing: Reuse layers across builds
  • Cleanup: Automatic temporary file removal
  • Network optimization: Efficient layer pulling/pushing

📋 Implementation Requirements

Required Inputs

The workflow requires these inputs to function:

ENVIRONMENT: Target deployment environment
CONTEXT: Docker build context directory path
SERVICE: Service name for image naming
SANDBOX_ID: Tag identifier for deployment
DOCKERFILE: Dockerfile name to use

Optional Configuration

BUILD_SCRIPT: Pre-build commands (default: empty)
WITH_NPMRC: GitHub Packages authentication flag (default: false)

Authentication Dependencies

  • GKE_SA_KEY: Google Cloud service account JSON key
  • GKE_PROJECT: Google Cloud project identifier
  • PACKAGE_TOKEN: GitHub Packages token (when WITH_NPMRC is true)
