Most CI/CD tools can run a build and ship a deployment. Where they diverge is what happens when your delivery needs get real: a monorepo with a dozen services, microservices spread across multiple repositories, deployments to dozens of environments, or a platform team trying to enforce standards without becoming a bottleneck.

GitLab's pipeline execution model was designed for that complexity. Parent-child pipelines, DAG execution, dynamic pipeline generation, multi-project triggers, merge request pipelines with merged results, and CI/CD Components each solve a distinct class of problems. Because they compose, understanding the full model unlocks something more than a faster pipeline. In this article, you'll learn about the five patterns where that model stands out, each mapped to a real engineering scenario with the configuration to match.

The configs below are illustrative. The scripts use `echo` commands to keep the signal-to-noise ratio low. Swap them out for your actual build, test, and deploy steps and they are ready to use.

## 1. Monorepos: Parent-child pipelines + DAG execution

**The problem:** Your monorepo has a frontend, a backend, and a docs site. Every commit triggers a full rebuild of everything, even when only a README changed.

GitLab solves this with two complementary features: parent-child pipelines (which let a top-level pipeline spawn isolated sub-pipelines) and DAG execution via `needs` (which breaks rigid stage-by-stage ordering and lets jobs start the moment their dependencies finish).

A parent pipeline detects what changed and triggers only the relevant child pipelines:

```yaml
# .gitlab-ci.yml
stages:
  - trigger

trigger-services:
  stage: trigger
  trigger:
    include:
      - local: '.gitlab/ci/api-service.yml'
      - local: '.gitlab/ci/web-service.yml'
      - local: '.gitlab/ci/worker-service.yml'
    strategy: depend
```

Each child pipeline is a fully independent pipeline with its own stages, jobs, and artifacts.
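To get the "only rebuild what changed" behavior at the parent level, you can gate each trigger job with `rules: changes`. A minimal sketch, assuming the services live under `api/` and `web/` directories (the paths and job names here are illustrative, not part of the original config):

```yaml
# .gitlab-ci.yml — per-service triggers gated on changed paths (sketch)
stages:
  - trigger

trigger-api:
  stage: trigger
  rules:
    # Trigger only when files under api/ changed in this push
    - changes:
        - api/**/*
  trigger:
    include:
      - local: '.gitlab/ci/api-service.yml'
    strategy: depend

trigger-web:
  stage: trigger
  rules:
    - changes:
        - web/**/*
  trigger:
    include:
      - local: '.gitlab/ci/web-service.yml'
    strategy: depend
```

The tradeoff: separate trigger jobs spawn separate, isolated child pipelines, so jobs in one service's pipeline can no longer reference jobs in another with `needs:`.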
The parent waits for all of them via `strategy: depend`, so you get a single green/red signal at the top level, with full drill-down into each service's pipeline. This organizational separation is the bigger win for large teams: each service owns its pipeline config, changes in one cannot break another, and the complexity stays manageable as the repo grows.

One thing worth knowing: when you pass multiple files to a single `trigger: include:`, GitLab merges them into a single child pipeline configuration. This means jobs defined across those files share the same pipeline context and can reference each other with `needs:`, which is what makes the DAG optimization possible. If you split them into separate trigger jobs instead, each would be its own isolated pipeline and cross-file `needs:` references would not work.

Combine this with `needs:` inside each child pipeline and you get DAG execution. Your integration tests can start the moment the build finishes, without waiting for other jobs in the same stage.

```yaml
# .gitlab/ci/api-service.yml
stages:
  - build
  - test

build-api:
  stage: build
  script:
    - echo "Building API service"

test-api:
  stage: test
  needs: [build-api]
  script:
    - echo "Running API tests"
```

**Why it matters:** Teams with large monorepos typically report significant reductions in pipeline runtime after switching to DAG execution, since jobs no longer wait on unrelated work in the same stage. Parent-child pipelines add the organizational layer that keeps the configuration maintainable as the repo and team grow.

## 2. Microservices: Cross-repo, multi-project pipelines

**The problem:** Your frontend lives in one repo, your backend in another. When the frontend team ships a change, they have no visibility into whether it broke the backend integration, and vice versa.

GitLab's multi-project pipelines let one project trigger a pipeline in a completely separate project and wait for the result.
The triggering project gets a linked downstream pipeline right in its own pipeline view.

The frontend pipeline builds an API contract artifact and publishes it, then triggers the backend pipeline. The backend fetches that artifact directly using the Jobs API and validates it before allowing anything to proceed. If a breaking change is detected, the backend pipeline fails and the frontend pipeline fails with it.

```yaml
# frontend repo: .gitlab-ci.yml
stages:
  - build
  - test
  - trigger-backend

build-frontend:
  stage: build
  script:
    - echo "Building frontend and generating API contract..."
    - mkdir -p dist
    - |
      echo '{ "api_version": "v2", "breaking_changes": false }' > dist/api-contract.json
    - cat dist/api-contract.json
  artifacts:
    paths:
      - dist/api-contract.json
    expire_in: 1 hour

test-frontend:
  stage: test
  script:
    - echo "All frontend tests passed!"

trigger-backend-pipeline:
  stage: trigger-backend
  trigger:
    project: my-org/backend-service
    branch: main
    strategy: depend
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
```

```yaml
# backend repo: .gitlab-ci.yml
stages:
  - build
  - test

build-backend:
  stage: build
  script:
    - echo "Building backend service..."

integration-test:
  stage: test
  rules:
    - if: $CI_PIPELINE_SOURCE == "pipeline"
  script:
    - echo "Fetching API contract from frontend..."
    - |
      curl --silent --fail \
        --header "JOB-TOKEN: $CI_JOB_TOKEN" \
        --output api-contract.json \
        "${CI_API_V4_URL}/projects/${FRONTEND_PROJECT_ID}/jobs/artifacts/main/raw/dist/api-contract.json?job=build-frontend"
    - cat api-contract.json
    - |
      if grep -q '"breaking_changes": true' api-contract.json; then
        echo "FAIL: Breaking API changes detected - backend integration blocked!"
        exit 1
      fi
      echo "PASS: API contract is compatible!"
```

A few things worth noting in this config. The `integration-test` job uses `$CI_PIPELINE_SOURCE == "pipeline"` to ensure it only runs when triggered by an upstream pipeline, not on a standalone push to the backend repo.
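As an aside, on GitLab Premium and Ultimate tiers there is an alternative to the `curl` call: a cross-project artifact dependency declared with `needs:project`, which has the runner download the artifact before the script runs. A sketch (the frontend project path `my-org/frontend-app` is an assumption; adjust it to your repo):

```yaml
# backend repo — cross-project artifact download (Premium/Ultimate, sketch)
integration-test:
  stage: test
  rules:
    - if: $CI_PIPELINE_SOURCE == "pipeline"
  needs:
    - project: my-org/frontend-app
      job: build-frontend
      ref: main
      artifacts: true
  script:
    # Artifact paths are preserved, so the contract lands at dist/api-contract.json
    - cat dist/api-contract.json
```

This trades an explicit API call for a declarative dependency, at the cost of requiring a paid tier.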
The frontend project ID is referenced via `$FRONTEND_PROJECT_ID`, which should be set as a CI/CD variable in the backend project settings to avoid hardcoding it.

**Why it matters:** Cross-service breakage that previously surfaced in production gets caught in the pipeline instead. The dependency between services stops being invisible and becomes something teams can see, track, and act on.

## 3. Multi-tenant / matrix deployments: Dynamic child pipelines

**The problem:** You deploy the same application to 15 customer environments, or three cloud regions, or dev/staging/prod. Updating a deploy stage across all of them one by one is the kind of work that leads to configuration drift. Writing a separate pipeline for each environment is unmaintainable from day one.

GitLab's dynamic child pipelines let you generate a pipeline at runtime. A job runs a script that produces a YAML file, and that YAML becomes the pipeline for the next stage. The pipeline structure itself becomes data.

```yaml
# .gitlab-ci.yml
stages:
  - generate
  - trigger-environments

generate-config:
  stage: generate
  script:
    - |
      # ENVIRONMENTS can be passed as a CI variable or read from a config file.
      # Default to dev, staging, prod if not set.
      ENVIRONMENTS=${ENVIRONMENTS:-"dev staging prod"}
      for ENV in $ENVIRONMENTS; do
        cat > ${ENV}-pipeline.yml
```