Part 4 of 6
    45 min

    Advanced Pipeline Patterns

    Master complex workflows with fan-in/fan-out, monorepo strategies, approval gates, and dynamic pipelines

    Real-world CI/CD often requires more than linear job sequences. This guide covers advanced patterns for handling complex scenarios: monorepos, multi-environment deployments, approval gates, and self-modifying pipelines.

    Fan-Out / Fan-In Patterns

    Fan-Out: Parallel Deployments

    Deploy to multiple environments simultaneously:

    Fan-out pattern
    resources:
    - name: app-image
      type: registry-image
      source:
        repository: myorg/app
        tag: latest
    
    jobs:
    - name: build
      plan:
      - task: build-image
        # ... build task
    
    # Fan-out: One image triggers three parallel deploys
    - name: deploy-us-east
      plan:
      - get: app-image
        trigger: true
        passed: [build]
      - task: deploy
        params:
          REGION: us-east-1
    
    - name: deploy-us-west
      plan:
      - get: app-image
        trigger: true
        passed: [build]
      - task: deploy
        params:
          REGION: us-west-2
    
    - name: deploy-eu-west
      plan:
      - get: app-image
        trigger: true
        passed: [build]
      - task: deploy
        params:
          REGION: eu-west-1
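    If the regions don't need independently retriggerable jobs, the same fan-out can live inside a single job as an in_parallel step. A minimal sketch, assuming the deploy task is defined in ci/tasks/deploy.yml and accepts a REGION parameter:

    Single-job fan-out (alternative)
    - name: deploy-all-regions
      plan:
      - get: app-image
        trigger: true
        passed: [build]
      - in_parallel:
        - task: deploy-us-east
          file: ci/tasks/deploy.yml
          params: { REGION: us-east-1 }
        - task: deploy-us-west
          file: ci/tasks/deploy.yml
          params: { REGION: us-west-2 }
        - task: deploy-eu-west
          file: ci/tasks/deploy.yml
          params: { REGION: eu-west-1 }

    The trade-off: the regions no longer appear as separate jobs in the UI, and a single failed region can only be retried by rerunning the whole job.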

    Fan-In: Aggregating Results

    Wait for multiple jobs before proceeding:

    Fan-in pattern
    jobs:
    - name: deploy-us-east
      # ...
    
    - name: deploy-us-west
      # ...
    
    - name: deploy-eu-west
      # ...
    
    # Fan-in: Only runs after ALL regional deploys succeed
    - name: run-global-smoke-tests
      plan:
      - get: app-image
        trigger: true
        passed:
        - deploy-us-east
        - deploy-us-west
        - deploy-eu-west
      - task: global-tests
        file: ci/tasks/smoke-tests.yml
    
    - name: update-cdn
      plan:
      - get: app-image
        trigger: true
        passed: [run-global-smoke-tests]
      - task: purge-cdn-cache
        # ...

    The visual flow looks like:

                           ┌─> [deploy-us-east]  ─┐
                           │                      │
    [build] ─> [app-image] ├─> [deploy-us-west]  ─┼─> [smoke-tests] ─> [update-cdn]
                           │                      │
                           └─> [deploy-eu-west]  ─┘

    Monorepo Pipelines

    Handle multiple services in a single repository efficiently.

    Path-Based Triggering

    Only trigger builds when relevant files change:

    Monorepo with path filters
    resources:
    - name: api-changes
      type: git
      source:
        uri: ((repo-uri))
        branch: main
        paths:
        - services/api/**
        - shared/**
    
    - name: frontend-changes
      type: git
      source:
        uri: ((repo-uri))
        branch: main
        paths:
        - services/frontend/**
        - shared/**
    
    - name: worker-changes
      type: git
      source:
        uri: ((repo-uri))
        branch: main
        paths:
        - services/worker/**
        - shared/**
    
    jobs:
    - name: build-api
      plan:
      - get: api-changes
        trigger: true
      - task: test-and-build
        file: api-changes/services/api/ci/tasks/build.yml
    
    - name: build-frontend
      plan:
      - get: frontend-changes
        trigger: true
      - task: test-and-build
        file: frontend-changes/services/frontend/ci/tasks/build.yml
    
    - name: build-worker
      plan:
      - get: worker-changes
        trigger: true
      - task: test-and-build
        file: worker-changes/services/worker/ci/tasks/build.yml
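    To confirm a path-filtered resource is picking up the commits you expect, you can force a check from the CLI and inspect the detected versions (assuming a fly target named main and a pipeline named monorepo):

    fly -t main check-resource -r monorepo/api-changes
    fly -t main resource-versions -r monorepo/api-changes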

    Shared Infrastructure Changes

    Since each service resource above also lists shared/**, a change to shared code already triggers every build. If you need different filtering for shared changes, such as ignoring documentation-only commits, scope the service resources to their own directories, define a dedicated shared resource, and add it as a second triggering get in each build job (Concourse has no step for triggering another job directly):

    resources:
    - name: shared-lib-changes
      type: git
      source:
        uri: ((repo-uri))
        branch: main
        paths:
        - shared/**
        ignore_paths:
        - shared/docs/**
    
    jobs:
    - name: build-api
      plan:
      - get: api-changes            # service-specific changes
        trigger: true
      - get: shared-lib-changes     # shared (non-docs) changes
        trigger: true
      - task: test-and-build
        file: api-changes/services/api/ci/tasks/build.yml
    # build-frontend and build-worker add the same shared-lib-changes get

    Dynamic Pipelines

    Pipelines that modify themselves based on code or configuration.

    Self-Updating Pipeline

    Keep your pipeline definition in sync with your repo:

    Self-updating pipeline
    resources:
    - name: ci-repo
      type: git
      source:
        uri: ((repo-uri))
        branch: main
        paths:
        - ci/**
    
    jobs:
    - name: update-pipeline
      plan:
      - get: ci-repo
        trigger: true
      
      - set_pipeline: self
        file: ci-repo/ci/pipeline.yml
        vars:
          repo-uri: ((repo-uri))
          # ... other vars
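    The pipeline has to exist before it can update itself, so it is bootstrapped once from a workstation; after that, any push touching ci/ keeps it in sync. A sketch, assuming a fly target named main, a pipeline named app, and the repo checked out locally:

    fly -t main set-pipeline -p app -c ci/pipeline.yml \
      -v repo-uri=git@github.com:org/app.git
    fly -t main unpause-pipeline -p app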

    Generating Pipelines from Configuration

    Create pipelines dynamically from a config file:

    services.yml in your repo
    services:
    - name: auth-service
      path: services/auth
      dockerfile: Dockerfile
    - name: billing-service
      path: services/billing
      dockerfile: Dockerfile.production
    - name: notification-service
      path: services/notifications
      dockerfile: Dockerfile
    meta-pipeline.yml
    resources:
    - name: repo
      type: git
      source:
        uri: ((repo-uri))
        branch: main
    
    jobs:
    - name: generate-pipelines
      plan:
      - get: repo
        trigger: true
      
      - task: generate-service-pipelines
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: { repository: python, tag: "3.11-alpine" }
          inputs:
          - name: repo
          outputs:
          - name: generated-pipelines
          run:
            path: sh
            args:
            - -ec
            - |
              # PyYAML is not bundled with the python:3.11-alpine image
              pip install --quiet pyyaml

              python - <<'PY'
              import yaml

              with open('repo/services.yml') as f:
                  config = yaml.safe_load(f)

              for service in config['services']:
                  pipeline = {
                      'resources': [{
                          'name': 'source',
                          'type': 'git',
                          'source': {
                              'uri': '((repo-uri))',
                              'branch': 'main',
                              'paths': [service['path'] + '/**']
                          }
                      }],
                      'jobs': [{
                          'name': f"build-{service['name']}",
                          'plan': [
                              {'get': 'source', 'trigger': True},
                              {'task': 'build', 'file': f"source/{service['path']}/ci/build.yml"}
                          ]
                      }]
                  }

                  # Concourse pre-creates the declared output directory
                  with open(f"generated-pipelines/{service['name']}.yml", 'w') as f:
                      yaml.dump(pipeline, f)
              PY
      
      - load_var: services
        file: repo/services.yml
        format: yaml
      
      # Set pipeline for each service
      - across:
        - var: service
          values: ((.:services.services))
        set_pipeline: ((.:service.name))
        file: generated-pipelines/((.:service.name)).yml

    Approval Gates and Manual Triggers

    Implement human-in-the-loop workflows.

    Manual Production Deployment

    Require explicit trigger for production:

    jobs:
    - name: deploy-staging
      plan:
      - get: app-image
        trigger: true
        passed: [build]
      - task: deploy-staging
        # ...
    
    # No automatic trigger - requires manual intervention
    - name: deploy-production
      plan:
      - get: app-image
        passed: [deploy-staging]
        # Note: NO trigger: true
      - task: deploy-production
        # ...
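    With no trigger: true on the get, deploy-production only runs when someone presses the + button on the job in the web UI or triggers it from the CLI (pipeline name assumed):

    fly -t main trigger-job -j my-app/deploy-production --watch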

    Time-Locked Deployments

    Limit automatic production deploys to business hours (a manual trigger can still run at any time):

    resources:
    - name: business-hours
      type: time
      source:
        start: 9:00 AM
        stop: 5:00 PM
        days: [Monday, Tuesday, Wednesday, Thursday, Friday]
        location: America/New_York
    
    jobs:
    - name: deploy-production
      plan:
      - get: business-hours
        trigger: true  # new versions only appear inside the window, so the job
                       # is scheduled automatically only during business hours
      
      - get: app-image
        passed: [deploy-staging]
      
      - task: deploy
        # ...

    Multi-Stage Approval

    Implement approval chains:

    resources:
    # Manual approval resource (version bumped by external system or webhook)
    - name: qa-approval
      type: semver
      source:
        driver: git
        uri: ((approvals-repo))
        branch: main
        file: qa-approved-version
    
    - name: security-approval
      type: semver
      source:
        driver: git
        uri: ((approvals-repo))
        branch: main
        file: security-approved-version
    
    jobs:
    - name: deploy-staging
      plan:
      - get: app-image
        trigger: true
        passed: [build]
      - task: deploy
        # ...
    
    # Runs only after a QA lead bumps qa-approved-version in the approvals repo
    - name: qa-verification
      plan:
      - get: qa-approval
        trigger: true
      - get: app-image
        passed: [deploy-staging]
      - task: run-qa-suite
        # ...
    
    # Runs only after the security team bumps security-approved-version
    - name: deploy-production
      plan:
      - get: security-approval
        trigger: true
      - get: app-image
        passed: [qa-verification]
      - task: deploy
        # ...
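    Approvers advance the chain by bumping the version file in the approvals repo; the semver resource's git driver reads the version from that file, so a new commit becomes a new version and triggers the next job. A sketch, assuming the layout above and an example version number:

    git clone git@github.com:org/approvals.git && cd approvals
    echo "1.4.0" > qa-approved-version        # approve the build under review
    git commit -am "QA approval for 1.4.0"
    git push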

    Resource Pools and Locking

    Manage limited resources like deployment slots or test environments.

    Pool Resource

    Control access to limited resources:

    resource_types:
    - name: pool
      type: registry-image
      source:
        repository: concourse/pool-resource
    
    resources:
    - name: test-environments
      type: pool
      source:
        uri: ((locks-repo))
        branch: main
        pool: test-envs
        private_key: ((git-key))
    
    jobs:
    - name: integration-tests
      plan:
      # Acquire a test environment
      - put: test-environments
        params: { acquire: true }
      
      - get: source-code
        trigger: true
      
      # Run tests (environment name in test-environments/name)
      - task: run-integration-tests
        file: source-code/ci/tasks/integration-tests.yml
        input_mapping:
          environment: test-environments
      
      ensure:
        # Always release the environment, even on failure
        put: test-environments
        params: { release: test-environments }

    Pool repo structure:

    locks-repo/
    └── test-envs/
        ├── unclaimed/
        │   ├── env-1
        │   ├── env-2
        │   └── env-3
        └── claimed/
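    Each file in the pool is one lock, and its contents travel with the claim (the acquired lock is exposed to later steps as name and metadata files), so lock files usually hold connection details for the environment they represent. A sketch of adding a fourth environment, assuming the layout above:

    cd locks-repo
    echo '{"hostname": "env-4.test.example.com"}' > test-envs/unclaimed/env-4
    git add test-envs/unclaimed/env-4
    git commit -m "Add env-4 to the test environment pool"
    git push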

    Deployment Locking

    Prevent concurrent deployments:

    resources:
    - name: deploy-lock
      type: pool
      source:
        uri: ((locks-repo))
        branch: main
        pool: deploy-locks
    
    jobs:
    - name: deploy-production
      serial: true  # Also prevents parallel builds of this job
      plan:
      - put: deploy-lock
        params: { claim: production }
      
      - get: app-image
        passed: [deploy-staging]
      
      - task: deploy
        file: ci/tasks/deploy.yml
      
      ensure:
        put: deploy-lock
        params: { release: deploy-lock }
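    When the contention is only between jobs in the same pipeline, the built-in serial_groups setting gives the same exclusion without a locks repo; jobs that share a group name never run concurrently:

    jobs:
    - name: deploy-staging
      serial_groups: [deploy]
      plan:
      # ...
    
    - name: deploy-production
      serial_groups: [deploy]
      plan:
      # ...

    The pool resource remains the right tool when the lock has to span multiple pipelines or teams.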

    Matrix Builds

    Test across multiple configurations efficiently.

    Using `across`

    jobs:
    - name: test-matrix
      plan:
      - get: source-code
        trigger: true
      
      - across:
        - var: node-version
          values: ["18", "20", "22"]
        - var: variant
          values: ["alpine", "slim"]
        task: run-tests
        config:
          platform: linux
          image_resource:
            type: registry-image
            source:
              repository: node
              tag: ((.:node-version))-((.:variant))
          inputs:
          - name: source-code
          run:
            path: sh
            args:
            - -c
            - |
              cd source-code
              npm ci
              npm test

    This creates 6 parallel test runs (3 Node versions × 2 image variants).
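    By default every combination runs to completion even if one fails. The across step also accepts fail_fast to cancel the remaining combinations after the first failure (note that on older Concourse releases the across step is experimental and must be enabled on the web node):

    - across:
      - var: node-version
        values: ["18", "20", "22"]
      - var: variant
        values: ["alpine", "slim"]
      fail_fast: true
      task: run-tests
      # ... same config as above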

    Limiting Parallelism

    Control resource usage:

    - across:
      - var: version
        values: ["18", "20", "22"]
        max_in_flight: 2  # Only 2 at a time
      task: heavy-test
      # ...

    Pipeline Instancing

    Create multiple instances of the same pipeline with different parameters.

    Instance Groups

    template-pipeline.yml
    resources:
    - name: source
      type: git
      source:
        uri: ((repo-uri))
        branch: ((branch))
    
    jobs:
    - name: build
      plan:
      - get: source
        trigger: true
      - task: build
        params:
          ENVIRONMENT: ((environment))
    Create instances
    # Create two instances of the same "app" pipeline, one per environment.
    # Instance vars also fill the ((branch)) var in the template.
    fly -t main set-pipeline -p app \
      -c template-pipeline.yml \
      -v repo-uri=git@github.com:org/app.git \
      -v environment=development \
      --instance-var branch=develop
    
    fly -t main set-pipeline -p app \
      -c template-pipeline.yml \
      -v repo-uri=git@github.com:org/app.git \
      -v environment=staging \
      --instance-var branch=main

    Pipelines that share a name but differ in instance vars form an instance group, so related pipelines can be viewed and managed as a unit.
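    Individual instances are addressed with a pipeline ref of the form name/instance-var:value. A sketch, assuming the instances created above:

    # Instances appear grouped under "app" in fly and in the web UI
    fly -t main pipelines
    
    # Operate on a single instance
    fly -t main pause-pipeline -p app/branch:develop
    fly -t main destroy-pipeline -p app/branch:main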

    Error Handling Patterns

    Retry Failed Steps

    - task: flaky-integration-test
      file: ci/tasks/integration.yml
      attempts: 3  # Retry up to 3 times

    Comprehensive Hooks

    jobs:
    - name: build-and-deploy
      plan:
      - get: source
        trigger: true
      
      - task: build
        file: ci/tasks/build.yml
      
      - task: deploy
        file: ci/tasks/deploy.yml
      
      on_success:
        in_parallel:
        - put: slack
          params:
            text: ":white_check_mark: Deploy successful"
        - put: metrics
          params:
            metric: deploy.success
            value: 1
      
      on_failure:
        in_parallel:
        - put: slack
          params:
            text: ":x: Deploy failed"
        - put: pagerduty
          params:
            event_type: trigger
      
      on_abort:
        put: slack
        params:
          text: ":warning: Deploy was aborted"
      
      on_error:
        put: slack
        params:
          text: ":boom: Pipeline error (check config)"
      
      ensure:
        # Always runs, regardless of outcome
        task: cleanup
        file: ci/tasks/cleanup.yml

    Timeouts and Build Log Retention

    - task: long-running-test
      file: ci/tasks/test.yml
      timeout: 30m  # Fail if exceeds 30 minutes
    
    Separately, limit how many build logs Concourse keeps for a job:

    jobs:
    - name: build
      build_log_retention:
        builds: 100   # keep at most the last 100 builds' logs
        days: 30      # and nothing older than 30 days

    Resource Optimization

    Caching Dependencies

    - task: build
      config:
        platform: linux
        image_resource:
          type: registry-image
          source: { repository: node, tag: "20" }
        inputs:
        - name: source
        caches:
        - path: source/node_modules      # npm cache
        - path: source/.npm              # npm global cache
        - path: /root/.cache/pip         # pip cache
        run:
          path: sh
          args:
          - -c
          - |
            cd source
            npm ci
            npm run build
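    Task caches are kept per worker and per task, so the first build on a fresh worker still pays the full install cost. Note also that npm ci deletes node_modules before installing, which makes the .npm download cache the one that actually saves time. A corrupted cache can be cleared from the CLI (pipeline, job, and step names assumed; check fly clear-task-cache --help for the exact flags on your version):

    fly -t main clear-task-cache --job my-app/build --step build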

    Worker Tags

    Route jobs to specific workers:

    jobs:
    - name: build-arm
      plan:
      - get: source
      - task: build
        tags: [arm64]  # Only runs on workers with this tag
        config:
          platform: linux
          # ...
    
    - name: gpu-training
      plan:
      - task: train-model
        tags: [gpu]  # Route to GPU-enabled workers
        # ...

    Configure workers with tags:

    concourse worker \
      --tag=gpu \
      --tag=high-memory

    Next Steps

    You now have the patterns needed for enterprise-grade CI/CD workflows. In Part 5, we'll secure your Concourse installation with authentication, authorization, and secrets management.