Production-grade CI/CD for React.js apps

Hint
In this article, we will set up a Continuous Delivery pipeline for a React.js application. Enabling CI/CD in the delivery process reduces the risk of manual errors, provides standardized development feedback loops, and enables fast product iterations.

[Image: React.js delivery pipeline]

Tools and Techs used

  • GitHub: code hosting provider
  • Amazon Web Services
    • S3: hosting for the React.js application
    • CloudFront: content delivery network
    • Route 53: domain management service
    • AWS Certificate Manager (ACM): certificate provisioning for our domains
  • Node.js (version 10 or higher is preferred)
  • Buddy: a CI/CD tool that automates software delivery steps, such as initiating code builds, running tests, and deploying to the server
Warning
Familiarity with the tools and technologies used in this article is preferred but not necessary – we will show you how to set up everything step by step.

Staging and Production environment

In this tutorial, we'll create a pipeline that will test, build, and deploy a React.js app according to the following scheme:

[Image: CI/CD process in a nutshell]

We are going to create separate environments for staging and production, each with its own S3 bucket and CloudFront distribution:

  • The staging environment will use the 'develop' branch in our GitHub repository and deploy to the staging server. This is where changes are run against production-equivalent infrastructure and data to ensure that they will work properly when released.

  • The production environment will use the 'master' branch in our GitHub repository and deploy to the live server (the master branch should always be ready for a deployment to live).

Hint
The principle of Continuous Integration is that every change should be tested immediately before integrating it with the rest of the code. This is performed in a dedicated testing environment, either locally or in a QA or UAT environment. However, to make things simpler, in this tutorial we'll stick with staging and production only.

Step 1: Setting up React application

For the purposes of this guide, we'll use the demo application at github.com/daumie/buddy-demo-reactjs. Fork the repository and run the following commands:

bash
git clone https://github.com/daumie/buddy-demo-reactjs
cd buddy-demo-reactjs
npm run preinstall
npm install
npm start

This will spin up the application at http://localhost:3000.

Step 2: Configuring AWS

Before we get down to pipelines, let's start with setting up the required AWS services.

Required permissions: AWS S3

s3:ListAllMyBuckets
s3:GetObject
s3:PutObject
s3:PutObjectAcl
s3:DeleteObject
s3:ListBucket

Required permissions: CloudFront

cloudfront:ListDistributions
cloudfront:CreateInvalidation
Hint
If you need more specific instructions, check out our docs on setting up AWS integrations and permissions.
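
If you prefer to set these permissions up from the command line instead of the IAM console, the snippet below is a minimal sketch of an inline policy covering both lists. The user name, policy name, and bucket names are placeholders; replace them with your own.

bash
# Sketch only: grants the S3 and CloudFront permissions listed above to the
# IAM user Buddy authenticates as. User name and bucket ARNs are placeholders.
cat > buddy-deploy-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "cloudfront:ListDistributions",
        "cloudfront:CreateInvalidation"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::staging.example.com",
        "arn:aws:s3:::staging.example.com/*",
        "arn:aws:s3:::example.com",
        "arn:aws:s3:::example.com/*"
      ]
    }
  ]
}
EOF
aws iam put-user-policy \
  --user-name buddy-deployer \
  --policy-name buddy-deploy \
  --policy-document file://buddy-deploy-policy.json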

AWS S3

  1. Create a bucket for each of your production and staging deployments. For my project, I used the staging and production domain names as the bucket names.
  2. Next, configure each S3 bucket for Static Website Hosting (a CLI sketch for both steps follows below).
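
If you'd rather script the bucket setup than click through the console, a rough AWS CLI equivalent for the staging bucket looks like this (the bucket name and region are placeholders; repeat it for the production bucket, and remember that website hosting also requires public read access on the bucket):

bash
# Create the staging bucket (name and region are placeholders)
aws s3 mb s3://staging.example.com --region us-east-1

# Enable Static Website Hosting; using index.html as the error document
# lets the React app handle unknown routes on the client side
aws s3 website s3://staging.example.com/ \
  --index-document index.html \
  --error-document index.html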

Route 53 and CloudFront

  1. Create and configure a Route 53 hosted zone with your domain name that we'll use to route DNS traffic to your website bucket (you can also do this on your DNS provider e.g. GoDaddy). For Route 53, see the guide on routing traffic to a website hosted in an Amazon S3 bucket.

  2. The next step is setting up Amazon CloudFront to speed up the distribution of your static content (such as HTML, CSS, JS, and image files) to your users. There's a separate guide on distributing traffic from Amazon S3 with CloudFront, too.

Step 3: Creating Staging pipeline

  1. Log in to Buddy and select your GitHub repository from the list. Buddy should intelligently work out that our project is a React application. Then click Add a new pipeline.

  2. Set the trigger mode to On push and the branch to 'develop'. This will make Buddy trigger an execution on every change pushed to the development branch. You can enter any name you want; I picked "Staging Site: Test, Build and Deploy Sample React Frontend to AWS S3 and CloudFront".

Preparing environment

Now, we shall configure our first action that will prepare our application environment. We will use the Node.js action as it has the tools we need (npm).

[Image: Node action]

In the action details, add the following commands:

bash
npm run preinstall
npm install
npm run pretest

[Image: Node #1: Environment preparation]

  • The npm run preinstall command checks that the recommended version of npm is installed.
  • The npm install command installs the dependencies in the local node_modules folder. The installed packages are persisted across the remaining actions via the pipeline filesystem.
  • The npm run pretest command prepares the application for testing: it runs a linter that checks the source code for programmatic and stylistic errors, and removes any cached .coverage folder left over from previous builds (an illustrative scripts sketch follows below).
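
The actual scripts are defined in the demo repository's package.json; the snippet below is only an illustrative sketch of what such scripts commonly look like (the script bodies and tool choices here are assumptions, not the repo's exact code):

json
{
  "scripts": {
    "preinstall": "npx check-node-version --npm \">=6.4\"",
    "lint": "eslint app",
    "test:clean": "rimraf ./.coverage",
    "pretest": "npm run test:clean && npm run lint",
    "test": "cross-env NODE_ENV=test jest --coverage --coverageDirectory=.coverage"
  }
}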

Running tests

After successfully setting up our environment for testing, we will run the test suite using another Node.js action. This time, add these commands:

bash
npm run test
npm run test:clean

[Image: Node #2: Tests]

  • The npm run test command sets the variable NODE_ENV=test and uses the jest JavaScript test framework to ensure the correctness of the codebase. It also generates coverage reports in the .coverage folder which can be manipulated by other tools.
  • The npm run test:clean command removes the .coverage folder generated in the previous step.

Preparing deployment build

Once the specified test cases have passed, the application can be prepared for deployment to the staging/production environments using the last Node.js action with the appropriate build commands:

bash
npm run prebuild
npm run build

[Image: Node #3: Build preparation]

  • The npm run prebuild command runs the prebuild script described in our package.json file. This removes any existing ./build folder left over from past builds.
  • The npm run build command runs the build script from the package.json. This command sets NODE_ENV=production and creates a ./build directory with a minified production build of the app. If you’re benchmarking or experiencing performance problems in your React apps, make sure you’re testing with the minified production build (you can preview it locally, as shown below).
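
If you want to inspect the minified production build locally before it goes through the pipeline, you can serve the ./build folder with any static file server, for example with the generic serve package (not part of the demo app):

bash
npm run prebuild
npm run build
# Serve the minified build locally; -s rewrites unknown paths to index.html,
# which single-page apps need for client-side routing
npx serve -s build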

The build artifact (contents of the build folder) will be deployed to our staging and production environments.

Hint
In Buddy, dependencies and artifacts are cached in the pipeline filesystem and shared across actions. This means that the npm install, test, and build commands could also be run within one isolated container (one action). However, to make things clearer, we decided to split them into individual actions.

Optional: Deployment confirmation

The next action will allow us to choose between Continuous Deployment and Continuous Delivery. The difference between the two terms is the scope of automation employed:

  1. The Continuous Delivery process typically involves a mandatory time lag in the final release – a manual step of approving the initiation of a deploy to production.
  2. Continuous Deployment, on the other hand, is a process in which every change in the source code is deployed to production automatically, without explicit approval from a developer.

A developer’s job typically ends at reviewing a pull request from a teammate and merging it to the master branch. Buddy then takes over from there by running all tests and deploying the code to production, while keeping the team notified through channels like Slack (we shall discuss setting up Slack notifications later).

[Image: Manual confirmation action details]

Here's its location in the pipeline, just before the deployment:

[Image: Manual approval in a deployment pipeline]

Deployment to S3

Since we want to deploy our static site to AWS S3, we'll use the S3 action to transfer the contents of the ./build folder to our bucket:

[Image: S3 in the Amazon roster]

A modal will appear requesting the Access Key and Secret Access Key of the AWS user with the permissions listed earlier in this guide. Paste them into the inputs and click “Add integration”.

Once authenticated, specify the Source Path (the path in the filesystem with our artifact), and the bucket ID.

[Image: AWS S3 action configuration]

Since we enabled Static Website Hosting in the bucket’s settings, you can now access the site at the URL provided on the Static Website Hosting card under the S3 bucket’s Properties tab. Make sure to confirm that you can access the site on the domain.

[Image: S3 bucket endpoint]
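
If you ever need to reproduce this deployment step outside Buddy, the S3 action is roughly equivalent to syncing the build folder with the bucket via the AWS CLI (the bucket name is a placeholder):

bash
# Upload the build artifact and remove files that no longer exist locally
aws s3 sync ./build s3://staging.example.com --delete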

Cache invalidation

Next, we will use AWS CloudFront to speed up the distribution of the static and dynamic web content of our site:

[Image: CloudFront in the Amazon roster]

The CloudFront action invalidates existing S3 objects, removing them from the CloudFront distribution’s cache. AWS usually takes 10 to 15 minutes to complete an invalidation request, depending on its size. After the invalidation is complete, you will be able to access the latest changes to your site.

[Image: CloudFront action details]
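
If you want to trigger the same invalidation manually, the AWS CLI equivalent looks roughly like this (the distribution ID is a placeholder):

bash
# Invalidate every cached object in the distribution
aws cloudfront create-invalidation \
  --distribution-id EXXXXXXXXXXXXX \
  --paths "/*"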

Tip
If your browser caches the website contents, you can hard-refresh the page (hold Shift and click the Refresh icon) to get the latest changes.

Optional: Conditional notification

Buddy provides conditional actions that run depending on the pipeline's build status. A good use of these actions is integrating Buddy with Slack to keep your team updated with automatic notifications on finished builds and deployments. You can also use the integration to trigger pipelines and check their status with slash commands.

[Image: Conditional actions]

You can use different Slack channels for different pipelines; ideally, your staging and production pipelines should post notifications to separate channels. Check the Slack integration docs for more information.

[Image: Slack Notification]

Once configured, the complete pipeline should look like the one below. When you're ready, click Run pipeline and watch Buddy test, build, and deploy your application to the S3 bucket. Since the pipeline is set to automatic mode, the whole operation will be performed on every push to the selected branch.

[Image: Full pipeline]

Hint
You can use a YAML configuration file to configure Buddy pipelines. This keeps the history of configuration changes in the application’s repository and makes it easier to share the configuration.
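
As an illustration only, a staging pipeline defined in YAML could look roughly like the sketch below; treat the keys and values as an approximation and check Buddy's YAML documentation for the exact schema.

yaml
# Illustrative sketch of a buddy.yml pipeline definition; verify the exact
# schema against Buddy's YAML documentation before using it.
- pipeline: "Staging Site: Test, Build and Deploy"
  trigger_mode: "ON_EVERY_PUSH"
  ref_name: "develop"
  actions:
    - action: "Prepare environment"
      type: "BUILD"
      docker_image_name: "library/node"
      docker_image_tag: "12"
      execute_commands:
        - "npm run preinstall"
        - "npm install"
        - "npm run pretest"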

Step 4: Creating Production Pipeline

After a successful deployment to the staging environment, we want to replicate the setup for the production environment using the master branch. Repeat the procedure used for the staging pipeline, making sure to change:

  • Domain name
  • Git branch (master)
  • AWS S3 bucket
  • CloudFront distribution
Warning
Since this is deployment to production, we recommend setting the trigger mode to Manual.

The complete delivery environment should look like this:

[Image: Pipeline menu]

Conclusion

And there you have it! If you followed along successfully, you can now test, prepare, and deploy builds on every change to the code.

The automation of manual software delivery processes can significantly reduce the software development cycle time. By creating a deployment pipeline, teams can release software in a fast, repeatable, and reliable manner – a feat that no developer should disregard.

Dominic Motuka

DevOps Engineer

Experienced DevOps/Cloud Operations Engineer skilled in Google Cloud Platform (GCP) and Amazon Web Services (AWS) public clouds, Kubernetes Development and Administration, Infrastructure As Code and Site Reliability Engineering.