The core feature of pipelines is building and deploying applications. They can also be used for recurrent activities, such as website monitoring and data backups.
Common use cases:
- Run tests after every push
- Deployment pipelines
- Daily integration tests
- Selenium tests
- Monitoring pipelines
- Manual deployment approval
Building a pipeline
Pipelines consist of actions executed in a specific order. For example, you can create a pipeline that will test and compile your PHP application and deploy it to the server. In case something goes wrong (e.g. the tests fail), it will send a message to your Slack channel:
Another use case involves building a Docker image of a Node.js application and pushing it to the registry:
A pipeline can be triggered in three different ways:
You can also specify for which branches, tags, or pull requests the pipeline will be triggered:
- a single branch – e.g. the master branch for a production pipeline
- on every push to the repository – wildcard *, e.g. for a pipeline running unit tests
- after pushing a tag that fulfils a specific pattern – wildcard refs/tags/v*, e.g. for a pipeline releasing a new version of the app
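These wildcard patterns behave like glob matches against the ref name. A minimal shell sketch of the matching logic (the ref values below are illustrative; Buddy evaluates the patterns internally):

```shell
# Glob-style matching of refs against the wildcard patterns above.
# The ref names are examples only; Buddy applies the patterns itself.
matches() {
  case "$2" in
    $1) echo "ref '$2' matches pattern '$1'" ;;
    *)  echo "ref '$2' does not match pattern '$1'" ;;
  esac
}

matches 'refs/tags/v*' 'refs/tags/v1.4.0'   # a release tag: matches
matches 'refs/tags/v*' 'refs/heads/master'  # a branch push: no match
matches '*' 'refs/heads/feature/login'      # the catch-all wildcard: matches
```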
Trigger on every push
Selecting On push as the trigger mode will run the pipeline whenever a commit is pushed to the repository. For example, if you want to run unit tests on every change, choose On push together with the wildcard * so that pushes to any branch trigger the pipeline:
You can also use this mode to automatically deploy every change pushed to the dev branch to the staging server:
Trigger pipelines recurrently
You can set your pipeline to be triggered at a certain time of the day. For example, you can schedule a pipeline to run integration tests every day at 5 p.m.:
You can use a cron expression to set when the pipeline should run and define any rule you need, e.g. fire at 10:15 AM every Monday, Tuesday, Wednesday, Thursday and Friday:
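For instance, the rule above ("fire at 10:15 AM every Monday through Friday") could be written as the following cron expression, assuming Quartz-style syntax with a leading seconds field (check the exact format your pipeline expects):

```
# sec  min  hour  day-of-month  month  day-of-week
  0    15   10    ?             *      MON-FRI
```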
Trigger pipelines manually
For production pipelines, it is best to set them to manual mode and restrict project access to senior devs only.
When a pipeline is triggered manually, you can set up the following options:
- for which revision the pipeline will be run
- if the cache should be cleared before execution
- if the deployments should be based on the changesets or be made from scratch
The pipeline history is stored in the Executions tab. Here you can find information about who triggered the pipeline, when it ran, and for which revision.
Clicking an execution will bring up its details:
- activity logs
- trigger mode
- whether the cache was cleared
If you want to learn more about the performance of your builds, check out the Analytics tab. It lets you quickly check build durations, average execution time, and error frequency:
Every pipeline has its own filesystem attached. The filesystem contains a clone of your repository in the newest revision together with artifacts generated in your pipeline. It serves as the primary cache for your pipeline: this way, you don't need to fetch the whole repository and dependencies on every execution.
All files created during the execution will land in the filesystem. You can browse and download them via the UI or with cURL (using a dedicated URL).
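A download with cURL could look roughly like this; note that the URL shape below is a placeholder, not the exact Buddy endpoint, so copy the real dedicated URL from the UI:

```shell
# Sketch: downloading a file from the pipeline filesystem with cURL.
# FILESYSTEM_URL stands in for the dedicated URL shown in the Buddy UI;
# its exact shape here is an assumption.
FILESYSTEM_URL="https://app.buddy.works/workspace/project/pipelines/pipeline/1/filesystem"
FILE_PATH="build/app.tar.gz"

# The command is printed rather than executed because the URL is a placeholder:
echo curl -fL -o "$(basename "$FILE_PATH")" "${FILESYSTEM_URL}/${FILE_PATH}"
```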
Configuration and static files
Not all files should be stored in the repository, for example configuration files for a specific environment (dev/stage/production) or files that contain sensitive data. You can, however, upload them manually to the filesystem. This way they will be available together with the artifacts and repository files.
For each pipeline you can specify environment variables. These variables can be used during the configuration of an action and during builds.
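For example, an action's build commands can reference such a variable. The variable name API_URL below is illustrative (you would define it in the pipeline's variable settings); a fallback value is included so the snippet runs standalone:

```shell
# Sketch: using a pipeline environment variable in an action's commands.
# API_URL is an illustrative variable assumed to be set in the pipeline
# settings; the fallback keeps the snippet self-contained.
API_URL="${API_URL:-https://api.example.com}"

echo "Building against ${API_URL}"
# e.g. bake the value into a config file consumed by the build:
printf 'api_url=%s\n' "$API_URL" > app.properties
cat app.properties
```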
Visibility & permissions
Visibility settings let you restrict a pipeline's visibility to individual users and groups.
You can also impose certain permissions on pipelines and specify the rights for each member individually. The permissions can be restricted to:
- view only – you can only see the history and configuration of a pipeline
- run only – you can run a pipeline, but you cannot edit it in any way
- manage – you can run, add, modify and delete the pipeline
Advanced pipeline settings
Switching to the Settings tab will reveal a couple of advanced features that will let you fine-tune your pipeline:
The target URL puts a label on your pipeline that lets you quickly access the associated website, e.g. to review changes after a deployment.
Usually, your application is first built and then deployed to the server. However, not every change in the repository requires a build. In such cases, you can define the conditions under which the build will be triggered.
The clone depth specifies how many commits should be cloned to the filesystem on pipeline execution. Creating a shallow clone is useful if your .git/ directory occupies too much space.
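You can reproduce the same effect locally with git's --depth option. A quick way to see the difference, using a throwaway repository:

```shell
# Demonstrate a shallow clone: only the newest commit is fetched.
set -e
rm -rf /tmp/depth-demo
mkdir -p /tmp/depth-demo/src
cd /tmp/depth-demo/src
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "first"
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "second"
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "third"

cd /tmp/depth-demo
# file:// is required: --depth is ignored for plain local-path clones
git clone -q --depth 1 "file:///tmp/depth-demo/src" shallow
git -C shallow rev-list --count HEAD   # prints 1: only the latest commit was cloned
```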
Clear cache before execution
The cache stores a clone of the repository and dependencies required by your build, which massively reduces build times. In some cases, however, you may need to fetch the dependencies on every build execution. To do that, select the option Automatically clear cache before running the pipeline. It will force Buddy to download the packages every time the pipeline is run.
Always deploy from scratch
Most deployment actions are based on changesets, which means only the files that have changed since the last deployment are uploaded. Checking Always deploy files from scratch will force Buddy to deploy all files from the repository on every execution.
Always run all queued executions
A pipeline can only run one execution at a time. If another user triggers a pipeline that is already in progress, the execution is queued and won't start until the first one is over. If more executions are queued (for example 5), Buddy will only run the newest one (the 5th) and skip the rest (2-4).
If you check Always run all queued executions, Buddy will run every queued execution one by one. This feature is useful if you want to test every single commit.
Each pipeline has its own configuration and a separate filesystem attached. You can create multiple pipelines that run different tasks within one repository. The pipeline view gives you quick access to the most important information:
- execution status (passed, failed, in progress, on hold)
- trigger mode (on push, manual, recurrent)
- time of last execution
- assigned branch
- whether it's deployed to the newest revision or how many commits behind the branch it is