Production-grade CI/CD for React.js apps
Tools and technologies used
- GitHub: code hosting provider
- Amazon Web Services (AWS)
- S3: hosting for the React.js application
- CloudFront: content delivery network
- Route 53: domain management service
- AWS Certificate Manager (ACM): SSL/TLS certificate provisioning for our domains
- Node.js (version >= 10 is preferred)
- Buddy: a CI/CD tool that automates software delivery steps, such as initiating code builds, running tests and deployment to the server
Staging and production environments
In this tutorial, we'll create a pipeline that will test, build, and deploy a React.js app.
We are going to create separate environments for staging and production, each with its own S3 bucket and CloudFront distribution:
- The staging environment will use the 'develop' branch in our GitHub repository and deploy to the staging server. This is where changes are run against production-equivalent infrastructure and data to ensure that they will work properly when released.
- The production environment will use the 'master' branch in our GitHub repository and deploy to the live server (the master branch should always be ready for a deployment to live).
Step 1: Setting up React application
For the purposes of this guide, we'll use the demo application at github.com/daumie/buddy-demo-reactjs. Fork the repository and run the following commands:
```bash
git clone https://github.com/daumie/buddy-demo-reactjs
cd buddy-demo-reactjs
npm run preinstall
npm install
npm start
```

This will spin up the application at http://localhost:3000.
Step 2: Configuring AWS
Before we get down to pipelines, let's start with setting up the required AWS services.
Required permissions: AWS S3
- `s3:ListAllMyBuckets`
- `s3:GetObject`
- `s3:PutObject`
- `s3:PutObjectAcl`
- `s3:DeleteObject`
- `s3:ListBucket`

Required permissions: CloudFront
- `cloudfront:ListDistributions`
- `cloudfront:CreateInvalidation`
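These permissions can be attached to the Buddy user as an IAM policy along the following lines. This is a minimal sketch: the statement IDs and the wildcard `Resource` values are assumptions, and in practice you should scope the resources down to your own bucket and distribution ARNs.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BuddyS3Access",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": "*"
    },
    {
      "Sid": "BuddyCloudFrontAccess",
      "Effect": "Allow",
      "Action": [
        "cloudfront:ListDistributions",
        "cloudfront:CreateInvalidation"
      ],
      "Resource": "*"
    }
  ]
}
```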
AWS S3
- Create buckets with the desired names for your production and staging deployments, one per domain you intend to serve.
- Next, configure your S3 bucket for Static Website Hosting.
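Static Website Hosting also requires the site objects to be publicly readable. A minimal bucket policy sketch, assuming a hypothetical staging bucket named `staging.example.com`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::staging.example.com/*"
    }
  ]
}
```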
Route 53 and CloudFront
Create and configure a Route 53 hosted zone for your domain name; we'll use it to route DNS traffic to your website bucket (you can also do this with your DNS provider, e.g. GoDaddy). For Route 53, see the guide on routing traffic to a website hosted in an Amazon S3 bucket.
The next step is setting up Amazon CloudFront to speed up the distribution of your static content (such as HTML, CSS, JS, and image files) to your users. There's a separate guide on distributing traffic from Amazon S3 with CloudFront, too.
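As a sketch of the Route 53 side, an alias A record can point the domain at the CloudFront distribution. The domain name and distribution hostname below are placeholders; `Z2FDTNDATAQYW2` is the fixed hosted-zone ID AWS uses for all CloudFront alias targets.

```json
{
  "Comment": "Point staging.example.com at the CloudFront distribution",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "staging.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d111111abcdef8.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

A change batch like this can be applied with `aws route53 change-resource-record-sets --hosted-zone-id <your-zone-id> --change-batch file://record.json`.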
Step 3: Creating Staging pipeline
Log in to Buddy and select your GitHub repository from the list. Buddy should automatically detect that the project is a React application. Then click Add a new pipeline.
Set the trigger mode to On push and the branch to 'develop'. This tells Buddy to trigger an execution on every change to the development code. You can enter any name you want; I picked "Staging Site: Test, Build and Deploy Sample React Frontend to AWS S3 and CloudFront".
Preparing environment
Now, we shall configure our first action that will prepare our application environment. We will use the Node.js action as it has the tools we need (npm).
In the action details, add the following commands:
```bash
npm run preinstall
npm install
npm run pretest
```
- The `npm run preinstall` command checks and ensures that the recommended version of npm is installed.
- The `npm install` command installs the dependencies in the local `node_modules` folder. The installed packages are persisted through the rest of the actions by the pipeline filesystem.
- The `npm run pretest` command prepares the application for testing: it runs a linter that checks the source code for programmatic and stylistic errors, and removes any cached `.coverage` folder from previous builds.
Running tests
After successfully setting up our environment for testing, we will run the test suite using another Node.js action. This time, add these commands:
```bash
npm run test
npm run test:clean
```
- The `npm run test` command sets the variable `NODE_ENV=test` and uses the Jest JavaScript test framework to verify the correctness of the codebase. It also generates coverage reports in the `.coverage` folder, which can be consumed by other tools.
- The `npm run test:clean` command removes the `.coverage` folder generated in the previous step.
Preparing deployment build
Once the specified test cases have passed, the application can be prepared for deployment to the staging/production environments using the last Node.js action with the appropriate build commands:

```bash
npm run prebuild
npm run build
```
- The `npm run prebuild` command runs the prebuild script defined in our `package.json` file. This removes existing `./build` folders from past builds.
- The `npm run build` command runs the `build` script from `package.json`. It sets `NODE_ENV=production` and creates a `./build` directory with a minified production build of the app. If you're benchmarking or experiencing performance problems in your React apps, make sure you're testing with the minified production build.
The build artifact (contents of the build folder) will be deployed to our staging and production environments.
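For reference, the npm scripts the pipeline invokes map to a `scripts` section in `package.json` roughly like the one below. This is a hypothetical sketch of such a setup, not the demo repository's actual file; check the repository for the real definitions.

```json
{
  "scripts": {
    "preinstall": "node scripts/check-npm-version.js",
    "pretest": "eslint src && rimraf .coverage",
    "test": "cross-env NODE_ENV=test jest --coverage --coverageDirectory=.coverage",
    "test:clean": "rimraf .coverage",
    "prebuild": "rimraf build",
    "build": "cross-env NODE_ENV=production react-scripts build"
  }
}
```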
Optional: Deployment confirmation
The next action will allow us to choose between Continuous Deployment and Continuous Delivery. The difference between the two terms is the scope of automation employed:
- The Continuous Delivery process typically involves a mandatory time lag in the final release – a manual step of approving the initiation of a deploy to production.
- Continuous Deployment, on the other hand, is a process in which every change in the source code is deployed to production automatically, without explicit approval from a developer.
A developer’s job typically ends at reviewing a pull request from a teammate and merging it to the master branch. Buddy then takes over from there by running all tests and deploying the code to production, while keeping the team notified through channels like Slack (we shall discuss setting up Slack notifications later).
If you opt for Continuous Delivery, place the approval action in the pipeline just before the deployment step.
Deployment to S3
Since we want to deploy our static site to AWS S3, we'll use the S3 action to transfer the contents of the `./build` folder to our bucket.
A modal will appear requesting the Access Key and Secret Access Key of the AWS user we configured earlier in this guide. Paste them into the inputs and click “Add integration”.
Once authenticated, specify the Source Path (the path in the filesystem with our artifact), and the bucket ID.
Since we enabled Static Website Hosting in the bucket’s settings, you can now access the site from the URL provided on the Static Website Hosting card under the S3 bucket’s Properties tab. Make sure to confirm that you can access the site on the domain.
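For comparison, outside of Buddy the same transfer can be done with the AWS CLI; the bucket name here is a placeholder:

```bash
# Sync the build artifact to the staging bucket and delete stale objects.
aws s3 sync ./build s3://staging.example.com --delete
```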
Cache invalidation
Next, we will use AWS CloudFront to speed up the distribution of our site's static and dynamic web content.
The CloudFront action invalidates existing S3 objects, removing them from the CloudFront distribution’s cache. AWS usually takes 10 to 15 minutes to complete an invalidation request, depending on its size. After the invalidation is complete, you will be able to access the latest changes to your site.
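The action is roughly equivalent to the following AWS CLI call; the distribution ID is a placeholder:

```bash
# Invalidate all cached paths so viewers receive the newly deployed files.
aws cloudfront create-invalidation \
  --distribution-id E1ABCDEF2GHIJ3 \
  --paths "/*"
```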
Optional: Conditional notification
Buddy provides a conditional set of actions depending on the build status of the pipeline. An example of good use of these actions would be to integrate Buddy with Slack to keep your team updated with automatic notifications on finished builds and deployments. You can also use it to trigger and get the status of your pipelines with slash commands.
You can have different Slack channels for different pipelines' notifications; in particular, your staging and production pipelines should notify separate channels. Check the Slack integration for more information.
Once configured, the staging pipeline is complete. When you're ready, click Run pipeline and watch Buddy test, build, and deploy your application to the S3 bucket. Since the trigger mode is set to On push, the whole operation will be performed on every push to the selected branch.
Step 4: Creating Production Pipeline
After a successful deployment to the staging environment, we can replicate the setup for the production environment using the master branch. Rinse and repeat the procedure for the staging pipeline, making sure to change:
- Domain name
- Git branch (master branch)
- AWS S3 bucket
- CloudFront distribution
With both pipelines in place, the delivery setup covers staging and production end to end.
Conclusion
And there you have it! If you followed along successfully, you can now test, build, and deploy on every change to your code.
The automation of manual software delivery processes can significantly reduce the software development cycle time. By creating a deployment pipeline, teams can release software in a fast, repeatable, and reliable manner – a feat that no developer should disregard.
Dominic Motuka
DevOps Engineer
Experienced DevOps/Cloud Operations Engineer skilled in Google Cloud Platform (GCP) and Amazon Web Services (AWS) public clouds, Kubernetes Development and Administration, Infrastructure As Code and Site Reliability Engineering.