Introduction

The core feature of pipelines is building and deploying applications. They can also be used for recurring activities, such as website monitoring and data backups.


Building a pipeline

Pipelines consist of actions executed in a specific order. For example, you can create a pipeline that tests and compiles your PHP application and deploys it to the server. If something goes wrong (e.g. the tests fail), the pipeline sends a notification to your Slack channel:

Pipeline example
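
For orientation, here is a rough sketch of such a pipeline expressed as YAML. The field names are loosely modeled on Buddy's YAML definition and should be treated as assumptions rather than the exact schema:

    # Hypothetical sketch of a "test, deploy, notify on failure" pipeline.
    # Field names are illustrative assumptions, not the exact Buddy schema.
    - pipeline: "PHP: test & deploy"
      trigger_mode: "ON_EVERY_PUSH"
      ref_name: "master"
      actions:
      - action: "Run PHPUnit tests"
        type: "BUILD"
        docker_image_name: "library/php"
        docker_image_tag: "8.2"
        execute_commands:
        - "composer install"
        - "vendor/bin/phpunit"
      - action: "Upload to server"
        type: "SFTP"
      - action: "Notify the team"
        type: "SLACK"
        trigger_time: "ON_FAILURE"   # send the message only when something goes wrong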

Another common use case is building a Docker image of a Node.js application and pushing it to a registry:

Docker pipeline example
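
A similar sketch for the Docker scenario, again with illustrative action types that may not match the exact schema:

    # Hypothetical sketch of a "build image & push to registry" pipeline.
    - pipeline: "Node.js: build & push Docker image"
      trigger_mode: "ON_EVERY_PUSH"
      ref_name: "master"
      actions:
      - action: "Build & push Docker image"
        type: "DOCKERFILE"            # illustrative action type
        dockerfile_path: "Dockerfile"
        docker_image_tag: "latest"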

Triggering pipelines

A pipeline can be triggered in three different ways: on every push, recurrently (on a schedule), or manually:

Pipeline trigger conditions

You can also specify the branches, tags, or pull requests for which the pipeline will be triggered (see the sketch below):

  • a single branch – e.g. the master branch for a production pipeline
  • every push to the repository – wildcard *, e.g. for a pipeline running unit tests
  • tags matching a specific pattern – wildcard refs/tags/v*, e.g. for a pipeline releasing a new version of the app

Branch selection
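
The three ref patterns above could look roughly like this in YAML (the ref_name field and trigger_mode values are assumptions):

    # Illustrative ref filters for three separate pipelines.
    - pipeline: "Production deployment"
      trigger_mode: "ON_EVERY_PUSH"
      ref_name: "master"           # a single branch
    - pipeline: "Unit tests"
      trigger_mode: "ON_EVERY_PUSH"
      ref_name: "*"                # wildcard: every push to any branch
    - pipeline: "Release"
      trigger_mode: "ON_EVERY_PUSH"
      ref_name: "refs/tags/v*"     # tags matching v1.0.0, v2.3.1, etc.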

Trigger on every push

Selecting On push as the trigger mode will run the pipeline whenever a commit is pushed to the repository. For example, if you want to run unit tests after each push, choose On every push and set the wildcard to *:

Wildcard trigger conditions

You can also use it to automatically deploy changes from the dev branch to the STAGING server whenever that branch is updated:

Single branch trigger conditions

Trigger pipelines recurrently

You can set your pipeline to be triggered at a certain time of the day. For example, you can schedule a pipeline to run integration tests every day at 5 p.m.:

Setting recurrent pipeline execution

The time is set in the user's timezone and converted to UTC upon saving. Make sure to update the settings after daylight saving time changes (clocks moving 1 hour ahead or back).

Cron expressions let you define schedules with additional rules, e.g. run at 10:15 AM every Monday, Tuesday, Wednesday, Thursday, and Friday:

Advanced recurrence settings

Cron expressions are always interpreted in UTC.
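
As a reference point, the "10:15 AM, Monday to Friday" schedule corresponds to the cron expression below. The sketch assumes a cron field on a scheduled pipeline; whether the scheduler expects standard five-field cron or the Quartz dialect is something to verify in the UI:

    # Hypothetical scheduled pipeline; the "cron" field name is an assumption.
    - pipeline: "Weekday integration tests"
      trigger_mode: "SCHEDULED"
      cron: "15 10 * * 1-5"        # standard cron: 10:15 UTC, Monday-Friday
      # Quartz-style equivalent (leading seconds field): "0 15 10 ? * MON-FRI"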

Trigger pipelines manually

For production pipelines, it is best to set the trigger mode to manual and restrict project access to senior developers only.
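
A production pipeline locked to manual runs could be sketched like this (field names are assumptions):

    # Hypothetical manually triggered production pipeline.
    - pipeline: "Deploy to production"
      trigger_mode: "MANUAL"       # never starts on push or on schedule
      ref_name: "master"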

When a pipeline is triggered manually, you can set up the following options:

  • the revision for which the pipeline will be run
  • whether the cache should be cleared before the execution
  • whether the deployment should be based on changesets or made from scratch

Manual pipeline trigger

Pipeline history

The pipeline history is stored in the Executions tab. Here you can find information about who triggered the pipeline, when it was triggered, and for which revision.

Pipeline execution history

Clicking an execution will bring up its details:

  • activity logs
  • duration
  • trigger mode
  • whether the cache was cleared

Revision details

If you want to learn more about the performance of your builds, check out the Analytics tab. It allows you to quickly check build times, average execution time, and error frequency:

Performance details

Pipeline filesystem

Every pipeline has its own filesystem attached. The filesystem contains a clone of your repository in the newest revision, together with the artifacts generated in your pipeline. It serves as the primary cache for your pipeline: this way, you don't need to fetch the whole repository and dependencies on every execution.

Pipeline filesystem

Artifacts

All files created during the execution will land in the filesystem. You can browse and download them via the UI or with cURL (using a dedicated URL).
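
For example, a build action's output directory stays in the filesystem after the run and can be reused by later actions or downloaded afterwards. A hypothetical sketch (field names are assumptions):

    # Hypothetical build action; ./dist is kept in the pipeline filesystem as an artifact.
    - action: "Build frontend"
      type: "BUILD"
      docker_image_name: "library/node"
      docker_image_tag: "18"
      execute_commands:
      - "npm ci"
      - "npm run build"            # writes ./dist into the pipeline filesystem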

Configuration and static files

Not all files should be stored in the repository, for example configuration files for a specific environment (dev/stage/production) or files containing sensitive data. You can, however, upload them manually to the filesystem. This way they will be deployed together with the artifacts and repository files.

Environment variables

For each pipeline you can specify environment variables. These variables can be used during the configuration of an action and during builds.

Variables tab
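
A sketch of pipeline-level variables referenced in a build command; the variables field and its key/value layout are assumptions, and the values are made up:

    # Hypothetical pipeline variables, referenced in commands as $NAME.
    - pipeline: "Deploy to staging"
      trigger_mode: "ON_EVERY_PUSH"
      ref_name: "dev"
      variables:
      - key: "API_URL"
        value: "https://staging.example.com/api"
      actions:
      - action: "Build with variables"
        type: "BUILD"
        docker_image_name: "library/node"
        docker_image_tag: "18"
        execute_commands:
        - "echo \"Building against $API_URL\""
        - "npm run build"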

Visibility & permissions

Visibility settings let you restrict who can see a pipeline to individual users and groups.

Visibility settings

You can also impose certain permissions on pipelines and specify the rights for each member individually. The permissions can be restricted to:

  • view only – you can only see the history and configuration of a pipeline
  • run only – you can run a pipeline, but you cannot edit it in any way
  • manage – you can run, add, modify and delete the pipeline

Advanced pipeline settings

Switching to the Settings tab will reveal a couple of advanced features that will let you fine-tune your pipeline:

Target URL

The target URL adds a link to your pipeline that lets you quickly access the associated website, e.g. to review changes after a deployment.

Setting target URL

Trigger condition

Usually, your application is first built and then deployed to the server. However, not every change in the repository requires a build. In such cases, you can define the conditions under which the build will be triggered:

Setting trigger conditions
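
For instance, you might only want to rebuild when something under src/ changes. A hypothetical per-action condition (both condition fields are assumptions):

    # Hypothetical trigger condition: run the action only when files under src/ change.
    - action: "Build application"
      type: "BUILD"
      execute_commands:
      - "npm run build"
      trigger_condition: "ON_CHANGE_AT_PATH"
      trigger_condition_paths:
      - "src/"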

Clone depth

The clone depth specifies how many commits should be cloned to the filesystem on pipeline execution. Creating a shallow clone is useful if your .git/ directory takes up too much space.

Clone presets
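
Conceptually this is a shallow git clone. A hypothetical setting in YAML (the clone_depth field is an assumption):

    # Hypothetical clone-depth setting: fetch only the most recent commit.
    - pipeline: "Quick build"
      trigger_mode: "ON_EVERY_PUSH"
      ref_name: "master"
      clone_depth: 1               # in spirit, "git clone --depth 1"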

Clear cache before execution

The cache stores a clone of the repository and dependencies required by your build, which massively reduces build times. In some cases, however, you may need to fetch the dependencies on every build execution. To do that, select the option Automatically clear cache before running the pipeline. It will force Buddy to download the packages every time the pipeline is run.

Always deploy from scratch

Most deployment actions are based on changesets, which means only the files changed since the last execution are deployed. Checking Always deploy files from scratch will force Buddy to deploy all files from the repository on every execution.

Always run all queued executions

A pipeline cannot run more than one execution at a time. If another user triggers a pipeline that is already in progress, the execution is queued and won't start until the first one is over. If more executions pile up in the queue (for example, executions 2 through 5 while execution 1 is running), Buddy will only run the newest one (the 5th) and skip the rest (2-4).

If you check Always run all queued executions, Buddy will run every queued execution one by one. This feature is useful if you want to test every single commit.

Pipeline trigger configuration

Pipeline list

Each pipeline has its own configuration and a separate filesystem attached. You can create multiple pipelines that run different tasks within one repository. The pipeline view gives you quick access to the most important information:

  • execution status (passed, failed, in progress, on hold)
  • trigger mode (on push, manual, recurrent)
  • time of last execution
  • assigned branch
  • whether it's deployed to the newest revision or how many commits it is behind the branch

List of example pipelines
