CI/CD Integration

Run tests automatically; gate deployments; fail fast.

TL;DR

Continuous Integration (CI) runs tests automatically on every commit, providing rapid feedback before code reaches production. Build a staged pipeline: unit tests (< 5 min) on every commit, integration tests (5-15 min) on every PR, contract tests (2-5 min) on API changes, E2E tests (10-30 min) nightly or pre-release, security scanning (5-10 min) on every commit, and performance tests (15-60 min) on releases. Use GitHub Actions, GitLab CI, or Jenkins. Gate deployments: block the release if tests fail, coverage drops, or security issues are found. Monitor lead time (commit → production), deployment frequency, MTTR, and change failure rate. Fast feedback loops (< 15 min to know whether a commit is good) drive both quality and velocity.

Learning Objectives

After reading this article, you will understand:

  • How to design CI/CD pipelines for testing
  • Which tests run when (commit, PR, nightly, release)
  • Deployment gates and quality checks
  • How to balance speed vs. thoroughness
  • Key metrics: lead time, deployment frequency, MTTR
  • Best practices for CI/CD implementation

Motivating Scenario

Developers push code without running tests. A week later, integration fails and blocks the release. Another developer pushed a change that broke database compatibility; no one caught it. With CI, every commit runs tests immediately: breaking changes fail in CI, not in production, and developers get feedback in 15 minutes instead of a week.

Core Concepts

Test Pipeline Stages

Typical CI/CD pipeline: quick feedback early, thorough testing late
Stage         Tests                          Time       Runs
Commit        Unit, lint, static analysis    < 5 min    Every commit
PR            Integration, contracts         5-15 min   Every PR
Pre-release   E2E, perf, security (DAST)     30-60 min  Before release
Nightly       Full suite, chaos, soak        4+ hours   Off-hours
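
The nightly stage above is usually driven by a scheduled trigger rather than a commit. A minimal GitHub Actions sketch; the cron time is an arbitrary choice and test:all is a hypothetical npm script covering the full suite:

name: Nightly Full Suite
on:
  schedule:
    - cron: '0 2 * * *'   # every day at 02:00 UTC; time is arbitrary

jobs:
  full-suite:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install dependencies
        run: npm ci
      # test:all is a hypothetical script running E2E, chaos, and soak tests
      - name: Run Full Suite
        run: npm run test:all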

Key Metrics

  • Lead Time: Commit → production. Target: hours, not days.
  • Deployment Frequency: How often you release. Target: daily.
  • MTTR: Mean time to recover from failure. Target: < 15 min.
  • Change Failure Rate: % of deployments causing incidents. Target: < 15%.
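
A minimal sketch of computing two of these metrics in TypeScript. The Deployment record shape is an assumption; in practice the timestamps would come from your CI system and the incident flag from your incident tooling:

// Hypothetical deployment record; field names are illustrative, not a real API.
interface Deployment {
  commitAt: Date;          // when the change was committed
  deployedAt: Date;        // when it reached production
  causedIncident: boolean; // did this deployment trigger an incident?
}

// Median lead time (commit -> production), in hours.
function medianLeadTimeHours(deploys: Deployment[]): number {
  if (deploys.length === 0) return 0;
  const hours = deploys
    .map(d => (d.deployedAt.getTime() - d.commitAt.getTime()) / 3_600_000)
    .sort((a, b) => a - b);
  const mid = Math.floor(hours.length / 2);
  return hours.length % 2 ? hours[mid] : (hours[mid - 1] + hours[mid]) / 2;
}

// Change failure rate: fraction of deployments causing incidents
// (multiply by 100 for the percentage quoted above).
function changeFailureRate(deploys: Deployment[]): number {
  if (deploys.length === 0) return 0;
  return deploys.filter(d => d.causedIncident).length / deploys.length;
}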

Practical Example
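
A GitHub Actions workflow implementing the staged pipeline above. The npm scripts (coverage:check, test:integration, test:e2e) and the 80% threshold are project-specific choices: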

name: CI/CD Pipeline
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Install dependencies
        run: npm ci

      # Unit tests: fast, run first
      - name: Run Unit Tests
        run: npm test -- --coverage

      - name: Check Coverage
        run: npm run coverage:check -- --threshold=80

      # Lint and static analysis
      - name: Lint Code
        run: npm run lint

      # Security scanning (SonarCloud requires a SONAR_TOKEN secret)
      - name: SAST Scan
        uses: sonarsource/sonarcloud-github-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}

      # Integration tests (only on PRs)
      - name: Run Integration Tests
        if: github.event_name == 'pull_request'
        run: npm run test:integration

      # Build
      - name: Build
        run: npm run build

  # E2E tests: slower, run on PRs and on main. The main-branch run matters:
  # if e2e were skipped on push, the deploy job's `needs` would skip too.
  e2e:
    if: github.event_name == 'pull_request' || github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Install dependencies
        run: npm ci
      - name: Build
        run: npm run build
      - name: E2E Tests
        run: npm run test:e2e

  # Deploy to staging (only on main, after all tests pass)
  deploy-staging:
    if: github.ref == 'refs/heads/main'
    needs: [test, e2e]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Deploy to Staging
        run: ./scripts/deploy.sh staging
        env:
          DEPLOY_KEY: ${{ secrets.DEPLOY_KEY }}
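
The workflow above reinstalls dependencies in every job. A common optimization is actions/setup-node's built-in dependency caching; a sketch, assuming Node and a committed package-lock.json:

      - uses: actions/checkout@v3
      # setup-node caches the npm download cache keyed on package-lock.json,
      # so repeat runs skip most of the network work
      - uses: actions/setup-node@v3
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci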

When to Use / When Not to Use

Use CI/CD When:
  1. You have multiple developers/branches
  2. You deploy frequently (daily or more)
  3. You want rapid feedback (minutes, not hours)
  4. Quality is important (catch bugs early)
  5. You value deployment reliability
Avoid (or Simplify) When:
  1. You're a solo developer with no branching (CI is still good practice; just keep it lean)
  2. You deploy once a year (though automating the deployment is still worthwhile)
  3. You have no automated tests yet (set up tests first; CI has nothing to run without them)

Patterns and Pitfalls

CI/CD Best Practices and Anti-Patterns

Best practices:
  • Fast feedback: commits learn pass/fail in < 15 min.
  • Stage tests by speed: fast tests (unit) first, slow tests (E2E) last.
  • Gate deployments: fail the pipeline if tests fail, coverage drops, or security issues are found.
  • Monitor metrics: lead time, deployment frequency, MTTR.
  • Parallel runs: run independent stages concurrently.
  • Cache dependencies: don't re-download npm/pip packages on every run.
  • Logs & artifacts: save test reports for investigation.
  • Notifications: Slack/email when CI fails, so developers are unblocked quickly (see the sketch after these lists).

Anti-patterns:
  • Slow CI: 30+ minutes for feedback; developers ignore it.
  • No gates: tests fail but deployment proceeds anyway.
  • Flaky tests in CI: tests pass locally, fail in CI; pure noise.
  • No parallelization: tests run sequentially; the pipeline crawls.
  • Ignoring metrics: no visibility into lead time or MTTR.
  • No logging: CI fails with no idea why.
  • Manual deployment: no automation; error-prone.
  • Skipping security/perf: "too slow for CI; test later"; problems surface in production.
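
For the notification practice, a step that posts to a Slack incoming webhook whenever an earlier step in the job fails; SLACK_WEBHOOK_URL is an assumed repository secret, and the step would be appended to a job's steps list:

      # Runs only when a previous step in this job has failed
      - name: Notify Slack on failure
        if: failure()
        run: |
          curl -X POST -H 'Content-Type: application/json' \
            -d '{"text":"CI failed on ${{ github.repository }} (${{ github.ref }})"}' \
            "$SLACK_WEBHOOK_URL"
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}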

Design Review Checklist

  • Fast stage (unit, lint) runs on every commit, < 5 min
  • Medium stage (integration, contracts) runs on PR, < 15 min
  • Slow stage (E2E, perf) runs pre-release or nightly
  • Pipeline fails if tests fail (gates deployment)
  • Coverage gates enforced (e.g., > 80%)
  • Security scanning runs automatically
  • Flaky tests identified and quarantined
  • Parallelization used to speed up pipeline
  • Logs and artifacts saved for debugging
  • Notifications alert developers of failures
  • Deployment gates prevent broken code reaching production
  • Performance benchmarks tracked over time
  • Metrics monitored: lead time, deployment frequency, MTTR
  • Dependencies cached (npm, pip, etc.)
  • Manual approval required for production (see the sketch after this list)
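
For the last item, GitHub Actions supports manual approval through environment protection rules. A sketch of a production deploy job that would sit alongside the jobs in the earlier workflow; the 'production' environment and its required reviewers are assumed to be configured in the repository settings:

  deploy-production:
    needs: [deploy-staging]
    runs-on: ubuntu-latest
    # The run pauses here until a configured reviewer approves it
    environment: production
    steps:
      - uses: actions/checkout@v3
      - name: Deploy to Production
        run: ./scripts/deploy.sh production
        env:
          DEPLOY_KEY: ${{ secrets.DEPLOY_KEY }}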

Self-Check Questions

  • Q: How fast should CI be? A: < 15 min for feedback on every commit. Slow CI defeats the purpose (developers ignore it).

  • Q: Should E2E tests run on every commit? A: No, they're too slow. Run them on PR or nightly; run unit tests on every commit and integration tests on each PR.

  • Q: What does 'gate' mean? A: Deployment is blocked if tests fail. Can't deploy without passing gates.

  • Q: Why monitor lead time? A: It measures how fast you can ship. Lower lead time = faster iteration = competitive advantage.

  • Q: What if CI is flaky? A: Quarantine flaky tests immediately. Fix root cause. Re-enable. Flaky CI is worse than no CI.

Next Steps

  1. Set up CI/CD — GitHub Actions, GitLab CI, or Jenkins
  2. Stage tests — Fast tests (< 5 min) run first; slow tests run later
  3. Gate deployments — Tests must pass before deployment
  4. Monitor metrics — Lead time, deployment frequency, MTTR
  5. Improve pipeline — Cache, parallelize, optimize
  6. Notify developers — Slack/email on failures
  7. Fix flakiness — Quarantine and fix flaky tests
  8. Iterate — Measure improvements; keep optimizing

References

  1. GitHub Actions
  2. GitLab CI/CD
  3. Jenkins
  4. CircleCI
  5. Google Cloud CI/CD Solutions