Part 2 of 6
    35 min

    Core Concepts & Architecture

    Understanding the building blocks of Concourse pipelines: Resources, Jobs, Tasks, and the ATC/Worker architecture

    Before diving into complex pipelines, it's essential to understand Concourse's core concepts. Unlike traditional CI tools that rely on plugins and global configuration, Concourse uses a small set of primitives that combine to create powerful workflows.

    Architecture Overview

    Concourse consists of three main components that work together:

    Web Node (ATC)

    • Pipeline scheduling
    • Web UI serving
    • API endpoints
    • User authentication
    • Resource version tracking

    Worker Nodes

    • Run task containers
    • Cache resource versions
    • Manage build artifacts
    • Report status to ATC
    • Stateless & scalable

    TSA (Gateway)

    • Worker registration via SSH
    • Key-based authentication
    • Secure ATC-worker tunnel

    In production, you typically run multiple ATC instances behind a load balancer for high availability. Workers are stateless—they can be added or removed without affecting running pipelines.

    Resources

    Resources represent external versioned artifacts that your pipeline interacts with. Every input and output flows through resources.

    Example resources
    resources:
    - name: my-repo
      type: git
      source:
        uri: https://github.com/myorg/myapp.git
        branch: main
    
    - name: docker-image
      type: registry-image
      source:
        repository: myorg/myapp
        username: ((docker-username))
        password: ((docker-password))
    
    - name: deployment-bucket
      type: s3
      source:
        bucket: my-deployment-artifacts
        access_key_id: ((aws-access-key))
        secret_access_key: ((aws-secret-key))
        region_name: us-east-1

    Common Resource Types

    Type              Purpose
    git               Git repositories
    registry-image    Docker/OCI images
    s3                AWS S3 buckets
    time              Periodic triggers
    semver            Semantic versioning
    pool              Resource locking
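    For example, the time resource type is how you drive periodic builds: it emits a new version on a schedule, and a job can trigger on it like any other resource. A minimal sketch (the nightly resource name and time window are illustrative):

    ```yaml
    resources:
    - name: nightly
      type: time
      source:
        start: "02:00"
        stop: "03:00"
        location: America/New_York

    jobs:
    - name: nightly-build
      plan:
      - get: nightly        # a new version appears once per window...
        trigger: true       # ...which triggers this job
      - get: my-repo        # the git resource from the earlier example
      - task: run-nightly-suite
        file: my-repo/ci/tasks/nightly.yml
    ```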

    Resources are versioned—Concourse tracks every version (commit, image digest, file) and displays them in the UI. This provides complete traceability for what version triggered which build.
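    You can inspect this version history from the CLI as well. A quick sketch, assuming the pipeline is named my-pipeline and uses the my-repo resource from above:

    ```shell
    # List the versions Concourse has detected for a resource
    fly -t main resource-versions -r my-pipeline/my-repo

    # Force an immediate check for new versions
    fly -t main check-resource -r my-pipeline/my-repo
    ```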

    Resource Types

    Resource types define how resources behave. Concourse ships with built-in types, but you can add custom ones:

    Custom resource types
    resource_types:
    - name: slack-notification
      type: registry-image
      source:
        repository: cfcommunity/slack-notification-resource
        tag: latest
    
    - name: pull-request
      type: registry-image
      source:
        repository: teliaoss/github-pr-resource
    
    resources:
    - name: slack-alert
      type: slack-notification
      source:
        url: ((slack-webhook-url))

    The resource type ecosystem is extensive—check the Concourse Resource Types catalog for community-contributed types.

    Jobs

    Jobs define what work gets done. Each job has a plan that specifies the sequence of steps:

    Job definition
    jobs:
    - name: build-and-test
      plan:
      - get: my-repo
        trigger: true
      - task: run-tests
        file: my-repo/ci/tasks/test.yml
      - put: docker-image
        params:
          image: image/image.tar

    Key Job Properties

    • plan: Ordered list of steps to execute
    • serial: Run builds one at a time (by default, builds may run in parallel)
    • max_in_flight: Limit concurrent builds
    • build_log_retention: How long to keep build logs
    • on_success / on_failure / on_abort: Hooks for notifications
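    A sketch combining several of these properties in one job (the slack-alert resource is the one defined in the custom resource types example above):

    ```yaml
    jobs:
    - name: deploy
      serial: true               # queue builds instead of running them in parallel
      build_log_retention:
        builds: 50               # keep logs for the 50 most recent builds
      plan:
      - get: my-repo
        trigger: true
      - task: deploy
        file: my-repo/ci/tasks/deploy.yml
      on_failure:
        put: slack-alert
        params:
          text: "Deploy of my-repo failed!"
    ```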

    Tasks

    Tasks are the units of work—containerized scripts that do the actual building, testing, and processing:

    Inline task definition
    - task: run-unit-tests
      config:
        platform: linux
        image_resource:
          type: registry-image
          source:
            repository: node
            tag: "20-alpine"
        inputs:
        - name: my-repo
        outputs:
        - name: test-results
        caches:
        - path: my-repo/node_modules
        run:
          path: sh
          args:
          - -exc
          - |
            cd my-repo
            npm ci
            npm test
            cp -r coverage ../test-results/

    Task Properties

    • platform: Usually linux; windows and darwin are also supported
    • image_resource: Container image to run in
    • inputs: Resources/outputs from previous steps
    • outputs: Directories to pass to subsequent steps
    • caches: Persistent directories across builds
    • run: The command to execute

    Pipelines

    Pipelines tie everything together—they're the complete definition of your CI/CD workflow:

    pipeline.yml
    resource_types:
    - name: slack-notification
      type: registry-image
      source:
        repository: cfcommunity/slack-notification-resource
    
    resources:
    - name: source-code
      type: git
      source:
        uri: https://github.com/myorg/myapp.git
        branch: main
    
    - name: slack
      type: slack-notification
      source:
        url: ((slack-webhook))
    
    jobs:
    - name: test
      plan:
      - get: source-code
        trigger: true
      - task: unit-tests
        file: source-code/ci/tasks/test.yml
      on_failure:
        put: slack
        params:
          text: "Build failed!"

    Step Types

    Jobs contain steps that execute in sequence. Here are the primary step types:

    Get Step

    Fetches a resource version:

    - get: my-repo
      trigger: true      # Start job when new versions appear
      passed: [test]     # Only versions that passed the "test" job
      params:
        depth: 1         # Shallow clone (resource-specific)

    Put Step

    Pushes to a resource:

    - put: docker-image
      params:
        image: build-output/image.tar
      get_params:
        skip_download: true  # Don't fetch the pushed version

    Task Step

    Executes a containerized task:

    - task: build
      file: my-repo/ci/tasks/build.yml  # External task file
      input_mapping:
        source: my-repo                  # Rename input
      output_mapping:
        artifacts: build-output          # Rename output
      params:
        ENVIRONMENT: production          # Environment variables

    Set Pipeline Step

    Dynamically update pipelines:

    - set_pipeline: deploy-pipeline
      file: my-repo/ci/pipelines/deploy.yml
      vars:
        environment: staging

    Load Var Step

    Load values from files into variables:

    - load_var: version
      file: version/version
      reveal: true  # Show in UI (default: hidden)
    
    - task: deploy
      params:
        VERSION: ((.:version))  # Use loaded variable

    The Inputs/Outputs Model

    Understanding how data flows between steps is crucial. Each step runs in an isolated container with only explicitly declared inputs available.

    How It Works

    1. Get steps fetch resources into named directories
    2. Task outputs become available for subsequent steps
    3. Put steps can use any available directory

    Data flow example
    jobs:
    - name: build-flow-example
      plan:
      # Step 1: Fetch code - creates "source" directory
      - get: source
        resource: my-repo
      
      # Step 2: Build - receives "source", produces "binary"
      - task: compile
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: { repository: golang, tag: "1.21" }
          inputs:
          - name: source
          outputs:
          - name: binary
          run:
            path: sh
            args:
            - -c
            - |
              cd source
              go build -o ../binary/app ./cmd/app
      
      # Step 3: Package - receives "source" AND "binary"
      - task: create-package
        config:
          platform: linux
          image_resource:
            type: registry-image
            source: { repository: alpine }
          inputs:
          - name: source
          - name: binary
          outputs:
          - name: package
          run:
            path: sh
            args:
            - -c
            - |
              cp binary/app package/
              cp source/config.yml package/
              tar -czf package/release.tar.gz -C package app config.yml
    
      # Step 4: Upload package
      - put: releases
        params:
          file: package/release.tar.gz

    Variables & Secrets

    Concourse supports several ways to parameterize pipelines.

    Static Variables

    Set when deploying the pipeline:

    pipeline.yml
    resources:
    - name: repo
      type: git
      source:
        uri: ((git-uri))
        branch: ((branch))
    Set with fly
    fly -t main set-pipeline -p my-pipeline -c pipeline.yml \
      -v git-uri=https://github.com/org/repo.git \
      -v branch=main
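
    When a pipeline takes many variables, passing each one with -v gets unwieldy; fly also accepts a vars file via -l (--load-vars-from). A sketch, where vars.yml is an illustrative filename:

    ```yaml
    # vars.yml - one key per ((var)) in the pipeline
    git-uri: https://github.com/org/repo.git
    branch: main
    ```

    Then apply it with: fly -t main set-pipeline -p my-pipeline -c pipeline.yml -l vars.yml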

    Credential Managers

    For secrets, integrate with a credential manager:

    Secrets fetched at runtime
    resources:
    - name: docker-image
      type: registry-image
      source:
        repository: myorg/app
        username: ((docker.username))
        password: ((docker.password))

    Supported credential managers:

    • Vault (HashiCorp)
    • AWS Secrets Manager
    • AWS SSM Parameter Store
    • Kubernetes Secrets
    • CredHub (Cloud Foundry)
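    With Vault, for example, a var like ((docker.username)) is resolved by looking up a secret named docker under a per-team path and reading its username field. A sketch assuming the default /concourse path prefix and a team named main:

    ```shell
    # Concourse looks for a pipeline-scoped secret first, then a team-scoped one:
    #   /concourse/main/<pipeline>/docker, then /concourse/main/docker
    vault kv put concourse/main/docker username=myuser password=mypass
    ```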

    Teams & RBAC

    Concourse uses teams for access control. Each team has its own pipelines, isolated from others.

    Managing Teams

    # Create a team with local users
    fly -t main set-team -n staging \
      --local-user=staging-admin \
      --local-user=developer1
    
    # Create team with GitHub auth
    fly -t main set-team -n production \
      --github-org=myorg \
      --github-team=myorg:platform-team
    
    # List teams
    fly -t main teams
    
    # Login to specific team
    fly -t staging login -n staging -c http://concourse.example.com

    Team Roles

    Role                Permissions
    owner               Full control (manage team, pipelines, builds)
    member              Manage pipelines and builds
    pipeline-operator   Trigger builds, pause/unpause
    viewer              Read-only access
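
    Roles are assigned through the team's auth configuration; with a config file you can bind each role to different users or groups. A sketch (team.yml is an illustrative filename):

    ```yaml
    # team.yml - per-role auth bindings
    roles:
    - name: owner
      local:
        users: ["admin"]
    - name: member
      github:
        teams: ["myorg:platform-team"]
    - name: viewer
      github:
        orgs: ["myorg"]
    ```

    Apply it with: fly -t main set-team -n production -c team.yml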

    Configuration Best Practices

    Externalize Task Definitions

    Keep task configs in your repository, not inline:

    ci/tasks/test.yml (in your repo)
    platform: linux
    image_resource:
      type: registry-image
      source:
        repository: node
        tag: "20"
    inputs:
    - name: source
    run:
      path: source/ci/scripts/test.sh
    pipeline.yml
    - task: test
      file: source/ci/tasks/test.yml

    Use YAML Anchors for Reuse

    YAML anchors reduce duplication:

    # Define once
    image-config: &node-image
      type: registry-image
      source:
        repository: node
        tag: "20-alpine"
    
    jobs:
    - name: test
      plan:
      - task: lint
        config:
          platform: linux
          image_resource: *node-image  # Reuse
          # ...
      - task: unit-test
        config:
          platform: linux
          image_resource: *node-image  # Reuse
          # ...

    Organize Pipeline Files

    For complex projects:

    my-repo/
    ├── ci/
    │   ├── pipeline.yml
    │   ├── tasks/
    │   │   ├── test.yml
    │   │   ├── build.yml
    │   │   └── deploy.yml
    │   └── scripts/
    │       ├── test.sh
    │       ├── build.sh
    │       └── deploy.sh

    Core Concepts Complete!

    You now understand Concourse's architecture and primitives

    In Part 3, we'll create a complete CI/CD pipeline from scratch—testing, building, and deploying an application.