Webinar #15

April 6th, 2022

Length: 1h 8 min

How to automate CI checks for PRs in WordPress delivery

Learn how to trigger tests on pull requests in WordPress themes and plugins, enforce code quality standards with linters, and scale it up to an enterprise level.

Who is this webinar for:

  • Developers and DevOps enthusiasts interested in adding automated code quality checks to pull requests
  • CEOs of digital and WordPress agencies who want to shorten the feedback loop between their team and the client and improve the code quality of their product
  • CTOs and Software Engineers of mid-to-enterprise businesses assessing their CI/CD toolset

What you're going to learn:

  • How to use Buddy to run automated tests and linting checks against WordPress plugins and themes for front-end and back-end code
  • How to set up a pipeline for WordPress themes and plugins
  • How Continuous Integration helps improve code quality


00:00  Countdown

01:14  Introduction

03:50  What you’re going to learn

04:45  Peer review

09:20  Testing principles

11:40  Fixing issues

13:20  An example project

23:13  Testing pipeline configuration

25:09  Checking for gitignored files

25:34  Checking for merge conflicts

26:34  PHP tests

44:00  npm tests and building

49:00  Trigger conditions

50:00  Git hooks

01:01:24  Q&A





Hello, everyone, welcome to yet another Buddy webinar. Today we will talk a bit about how to automate continuous integration checks for pull requests in WordPress delivery. If you are one of those people who think that WordPress is just for simple blogs, today Kevin and Ben will show you that this is not true, and that WordPress is a really great platform for enterprise-grade websites and applications. So guys, tell us a few words about yourselves.



Yeah, my name's Ben Bolton, and my official title is Director of Development Operations here at Alley. I got my start as a Windows sysadmin (and, oh man, I'm old now) 20 years ago, moved over to Linux system administration, started a manual testing team and tier-two support at previous companies as automated testing was becoming a thing, moved into the cloud with a site reliability engineering team, and then came to Alley, and it's been awesome.



Yep, I'm Kevin Fodness. I'm the Vice President of Software Development at Alley, and also one of Alley's partners. Like Ben, I got my start in mostly Windows systems administration, with a web and database focus, as a federal government contractor down in Washington, DC. Then I had a detour where I went to grad school and got a PhD in science and technology studies with a focus on web accessibility, and then I went back to making websites, which has been a passion of mine for a really long time, basically as a full-stack developer. Ever since I've been with Alley, it's been a lot of JavaScript, a lot of WordPress, a lot of PHP, APIs and things like that, as well as configuring CI and CD and automated testing and all that kind of stuff.

What you're going to learn



And my name is Maciek Palmowski. I work here at Buddy as a WordPress ambassador, and I'm a regular webinar host. So first, let's go over what we'll talk about today. First of all, we'll talk about how enterprise-level WordPress companies do code review, because this is something a bit different when you compare it to small agencies; what can be automated in such a process; also what can be tested; and how to set up a pipeline for WordPress themes and plugins. Before we go forward, if you have any questions, don't forget to send them in the comments, either on YouTube, LinkedIn, or Facebook, depending on where you are watching us. Also, if you like the webinars we are doing, don't forget to subscribe to our channel. We also launched a small poll on YouTube, so feel free to post your answers; we will sum up everything at the end. Okay, I think we are ready to go, guys. The stage is yours.

Peer review



Awesome. Okay, Kevin, I'm sure we've got attendees, or viewers later, who are coming from a wide variety of engineering practices. Maybe they've got a small team, maybe a big team. At Alley, we have this practice of peer code review. What is it, and why do we do it?



Yeah, peer code review seeks to accomplish a couple of goals, the first of which is, of course, making sure that any code we ship is going to meet strict performance and security requirements. We want to make sure that we're not pushing a change that's going to overload the database or take some aspect of the infrastructure down, and we want to make sure that everything we write is not going to be exploited by bad actors. And so every line of code that we write gets peer reviewed by someone else at the company in order to ensure that all of those things are correct. The other thing is, we work on a relatively large team, there are over 50 developers at Alley, and we want to make sure that across all of those individuals, the code that we are writing looks and feels similar. So it's not going to be a shock moving from one project to another where all of a sudden the code style changes on you based on who wrote it. And so we also take advantage of linters to enforce code style. Things like whether you put your opening brace on the same line as the function definition or the next line, spacing, etc., those are all covered by linting standards, and so we'll pay attention to those in code review as well. We should be writing code that feels and looks similar as a group, as opposed to being very different depending on the individual who wrote it.



Alright, so we're stepping into this code review, a pull request, typically on GitHub. And it takes some time; you're going through these files, and in the back of your head there's probably a delineation. We're going to talk about things that humans can do and look for versus automating. Yep. But where does that line fall for you, and for Alley in particular? If you're just getting started with pull request reviews, how can it not seem so daunting?



Yeah, you know, you want to focus on what's most important, what's going to provide the most value in doing the pull request review. And the performance and security concerns, I think, are the two that are the most important, most valuable. So you want to look at the ways the database is being used, where you might have a query that's going to be really expensive. Are there ways to optimise that query? Can you offload that query to a caching service, or an external system like Elasticsearch? Security, especially: make sure you're escaping all of your output, don't trust user input, those kinds of things. Those are going to have the biggest impact. And then for other things like code style, if you decide to adopt a code style, there are linting tools that will check that for you, and oftentimes they can auto-format a lot of those things for you. So really lean on the tooling in those cases, as opposed to making someone check all of the spacing, in all of the places, in a manual pull request review.



Yeah, I can think of a couple of massive-looking PRs where, you know, Airbnb changed the recommendation, and now it looks like this. Yeah, absolutely. Okay, say you're a shop looking into this: your code review is going to take you more time than before, and automated testing, unit testing, that's all going to take you more time than before. What's the sell? Why would you want to slow things down?

Testing principles



Yeah, it might, but it should accelerate you over time. It's sort of an investment that you're putting in place up front in order to reap more rewards later. One of those rewards, for example, is catching bugs before anybody else does. You don't want a situation where you've pushed something that has introduced a performance problem and everyone's scrambling to resolve it; you want to catch that before it becomes an issue, because you'll certainly end up spending more time resolving that problem once it's out in the wild than you will by catching and addressing it ahead of time. The same is definitely true of security: you don't want a situation where you have a corruption in your database that you now have to try to root out. It's much better to be really rigorous about doing those checks ahead of time in the source code. And the same is true with testing as well: writing any sort of automated test, you're going to be able to catch issues before they become problems that are noticed by users or by clients or anyone else. So yeah, it can take a little more time, or feel like it takes more time, up front, but you are saving yourself time on the back end. Instead of spending your time fixing bugs, you're spending your time developing new features and adding more value to what you're providing to the folks you're building the site for.



Also, I see it as: when you have all your tests ready, it's easier to manage the time of the project, because there are fewer bugs that appear out of nowhere at some point. That's the moment when you start quick-fixing, adding to your technical debt, and things like this, and at some point it will come back to bite you even harder. So it's really great to always have those testing tools, to always test stuff. Thanks to this, you can just manage your time more easily. Of course, you will spend those extra, I don't know, four to six hours to write the tests, but you know that it will take only four to six hours, and not some random number that appears out of nowhere.



Yeah, that's a great point. The thing that I always come back to is that it's really easy to use test fixtures in a unit test context to set things up in a very specific way, and then test for those particular situations. So I can write a function that formats a value, and I can throw all sorts of edge cases at it really easily: what happens if it's zero, what happens if it's negative, what happens if it's null, what happens if it's undefined. And then I can make sure that it's durable enough to handle all of those circumstances. Versus trying to create a situation in which that would actually happen inside of a running application, which is much more difficult to do. Writing a unit test, especially test-driven, that allows you to throw a bunch of different values into a function really lets you check all of those edge cases much faster than you would be able to using manual QA.

Fixing issues



We want to get into more of the practical side of things, but before we do: say a shop is starting this out. They bought into the fact that it's going to cost them a little bit more time up front, and they bought into the value of peer code review. But now they're in a peer code review, and either the automated tests surface something, or the human reviewer says, you know what, there's another issue here. How do you approach that? Do we carry the tech debt? Do we fix it now, versus backlogging it and fixing it later?



Yeah, it's a judgement call. For me, it comes down to whether the issue was introduced in the pull request or discovered in the pull request; that's an important distinction for me. So if there's a change introduced in the pull request that causes a performance problem or a security issue, then that definitely should be addressed before the pull request is merged. But if you're just discovering something else that's already in the code, where, hey, this was never caught before and we should really address it, I'm a big fan of just ticketing that and then prioritising it as a fix after the current work goes up. Unless it's really such a hot-button issue that it must be addressed before anything else. But that just comes back to prioritisation: how significant and serious is the issue that we've discovered? Does it need to be addressed immediately, or can it be prioritised alongside other work?



Love it. Show us how we can actually make these sorts of things happen.

An example project



Sure. So I'm here in the Buddy interface, and there are a couple of things you have to do in order to get started with hooking all of this up. The first thing is, within your organisation here, there's a button to create a new project. When you do this, you pick your Git provider; we use GitHub. This is an example account, so we don't have this hooked up, but you'll be able to choose your Git provider, then choose your repository, name your project, and hit create. At that point it's going to give you your project view, and then from within your project settings you do have to make one change in order for pull requests to work: you have to go down to this little box that says "enable support for pull requests" and check it. The reason that's not enabled by default is that if you have, let's say, a public repository, then anybody can create a pull request against it, and if you have a popular project that a lot of people are creating pull requests for, that can create quite a lot of churn in your Buddy account. So it's an opt-in feature; you have to check this box to actually have pipelines run on pull requests. Then you hit save changes, and then you can start building out the particular checks that you're going to use. Before I go into the specific checks, I just want to talk about the example repository that we created for this; this is going to get shared out as well, if it hasn't been already. It's a public repository that we've created in our GitHub organisation for demo purposes. It's a relatively bare-bones, stripped-down example repository for how we might have a WordPress site that has a theme and a plugin. And really quickly, for those of you who are WordPress developers who may not be familiar with this pattern, it's essentially a separation of concerns.
And so the plugin is going to contain anything that is related to the way that data is stored and accessed in the database. That will be things like custom post type registrations, taxonomy registrations, anything related to building Gutenberg blocks, slot fills, those kinds of things that you're going to want to persist even if you change the look and feel of the front end of the site. And then the theme, that's all of your UX-layer stuff: styles, front-end JavaScript, PHP templates for the front end of the site, and all of that. It just makes it much easier if you go through, let's say, a brand refresh, and you want to change how things are displayed on the front end, but you still need to make sure all those data structures still exist. That's why we've got a plugin and a theme that are separate. It's also the case that if you have a multisite installation in WordPress, you're going to have multiple themes most likely; even if it's a parent-child relationship, typically you're dealing with multiple themes, and likewise you might be dealing with multiple plugins. So the structure of this repository is what we call wp-content-rooted. In a WordPress installation, you would just install this repository instead of the normal wp-content folder. So you basically install WordPress, delete wp-content, then clone down this Git repo, and this is where you have all of your plugins and themes and configuration, and so on. So that's the general structure of the repo. And then from within here, there's a Buddy folder that contains a YAML configuration, which we'll talk about later. I know a lot of folks use the GUI to build out the workflow in Buddy; we tend to save ours as YAML files, because then it's able to be version controlled. We have a pull request template here in this GitHub folder.
This is really nice, because it encourages folks to write good pull requests. Oftentimes, without this, you'll see people write a pull request that has a vague title and no description, so this just gives people a prompt. We have a cache folder for phpcs; this enables phpcs to run a bit faster, and this cache folder then gets preserved in Buddy's cache, which is nice. And then inside of this plugin folder I have an example plugin, and in the theme, an example theme. These have been configured so that, in the plugin, we are building a slot fill using the Gutenberg interface. We've got some tests set up here for PHPUnit, we're running phpcs against this, and we have ESLint and stylelint set up for checking the quality of our JavaScript and our SCSS as well. The same is true for the theme. The particular ways in which these are used are going to be a little bit different, but a lot of the checks are going to be very similar. Then there's the normal boilerplate stuff: editorconfig, gitignore. We have a phpcs configuration for linting our PHP code, and this relies on a few rulesets that are publicly available. There's a WordPress one for styles and formatting. It also includes some security and performance stuff; a lot of our sites, because they're enterprise-grade WordPress sites, are hosted on WordPress VIP, which is the enterprise hosting arm of Automattic, the company created by Matt Mullenweg, the co-creator of WordPress. This also lets us do things like testing for compatibility against different versions of WordPress and different versions of PHP. For the most part we just use the rulesets unmodified, but we have a couple of opinions in here, like short array syntax and things like that.
And a few exclusions down here for things that we don't want it to look at. So this is more or less the right balance for us: the things we care about, while not having it scan files we don't want it to look at.
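For readers following along, the repository layout Kevin describes might look roughly like this; folder and file names here are illustrative, not copied from the demo repo:

```
wp-content/                  # repo root replaces WordPress's normal wp-content
├── .buddy/                  # exported Buddy pipeline YAML
├── .github/                 # pull request template
├── .phpcs-cache/            # phpcs cache, preserved via Buddy's cache
├── plugins/
│   └── example-plugin/      # data layer: post types, taxonomies, blocks, slot fills
├── themes/
│   └── example-theme/       # UX layer: styles, templates, front-end JS
├── .editorconfig
├── .gitignore
├── composer.json            # PHP dev dependencies plus phpcs/phpunit scripts
└── phpcs.xml                # WordPress and VIP linting rulesets
```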



Okay, I just want to add, for shops that are not used to a wp-content-rooted repository like this, or are coming into this thinking, hey, I'll just use this like a React starter: a couple of things are worth noting. First, for reasons we'll get to, you will not see all of WordPress source-controlled here, and you won't see built files in this repository either. This repository also includes, effectively, some development folders, some dotfiles and folders, that we would not ultimately deploy. Tell us a little bit about where that line is for you, where deployment happens versus where software development happens.

Testing pipeline configuration



Yeah, that's a great point. We don't include any of those built assets because we basically don't want to junk up Git history, and building them can be part of a CI/CD pipeline. We actually use Buddy for some of our CI/CD pipelines, and in some cases the hosting provider has a CI/CD pipeline of their own that we hook into. But in any case, we are running those builds when code gets merged. A pull request is required in order to merge code, and once that pull request is approved and merged, that kicks off the CI/CD process that builds those assets. That will also do things like installing Composer dependencies if we have them, installing any Node dependencies for npm packages, and running those build processes to generate the JavaScript and CSS files, and then those get copied up to the servers. At that point we can also exclude files that we don't want to make it up, so we won't deploy our Buddy configuration to the production server, because it doesn't need to be there. The specifics of how that works are more or less outside the scope of what we're talking about today, but it is important, as Ben said, to understand the rationale behind what we include in the Git repo versus not. Another notable thing: we run all of our commands through npm or through Composer. In this Composer file, and this is going to be important because we'll see these commands in what I'm going to set up in Buddy, there's a scripts section. This lets me say: I want to run phpcs, the PHP CodeSniffer, which is the linter for PHP files, and I can run it against the plugin or the theme.
It's basically just a shortcut to specify the paths and what I'm running and where. We also have a helper down here that does initial setup for local development, which is nice. And then really quickly: when you get this set up, you can go to branches and set up branch protection rules. This is where you would make sure that somebody has to do a pull request review before being able to merge, and require that certain status checks pass. The important things here are to check the box to require a pull request and require approval. We just require one approval; depending on the significance of the changes or the sensitivity of the repo, you may require more than that, or require review from code owners, which is a specific list of people. And then requiring status checks to pass before merging: once you've created your system of tests within Buddy, you can require that they pass before the pull request gets merged. So that's what we have set up over here in terms of branch protections, and it's a pretty good workflow. Alright, so I'll jump back over into the Buddy configuration now. I can look through this in the GUI; like I said, we have it saved as a YAML file, and you can switch it back from YAML to the GUI, which is sometimes nice if you want to work within the GUI to build out certain actions and then just export them as YAML. You can also edit what's here. So if I want to make a change to this, I can make the change and then hit this generate YAML button, and it'll give me the text to paste back into the YAML file that's in my source code. It doesn't really matter; you can certainly do whatever works best for your shop. But as with anything else, we want to be careful, because we're code-reviewing every line of code that goes up.
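As a sketch, the composer.json scripts section Kevin mentioned a moment ago might look something like this; the package versions, paths, and script names are illustrative assumptions, not copied from the demo repo:

```json
{
  "require-dev": {
    "phpunit/phpunit": "^9",
    "squizlabs/php_codesniffer": "^3.6",
    "wp-coding-standards/wpcs": "^2.3"
  },
  "scripts": {
    "phpcs:plugin": "phpcs --standard=phpcs.xml plugins/example-plugin",
    "phpcs:theme": "phpcs --standard=phpcs.xml themes/example-theme",
    "test:plugin": "phpunit --configuration plugins/example-plugin/phpunit.xml",
    "test:theme": "phpunit --configuration themes/example-theme/phpunit.xml"
  }
}
```

With something like this in place, `composer run phpcs:plugin` executes the project-local phpcs from `vendor/bin` (Composer puts it on the PATH for scripts) rather than relying on a global install.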
And so we don't want something to slip in, like where someone could say, I'm going to turn off one of these checks, and just go into the Buddy UI and disable one of the required checks. We want to make sure that's all in version control and handled through source control there. We do a number of checks against our WordPress repositories; there are slight differences from project to project, but this is pretty representative of what we're going to be checking across the board. We have a philosophy here that is essentially fail fast: if there's going to be an issue, we want to know what that issue is and stop the build as quickly as possible. One of the reasons for that is the way that the plans on Buddy work; it has to do with how many concurrent pipelines and concurrent actions you can run. So if you have a pull request check that's going to fail, you want it to fail as quickly as possible, so that it's not using up that concurrency and you can leave it for other checks. We found that there are a few Git-related things that are important to check really quickly. One of them is making sure that no files that have been gitignored have been committed to the repository, because even if you specify that something should be gitignored, it's still possible to add it manually to Git and have Git keep track of it. So this is just a really lightweight check that asks: have any files been committed to this branch that should be gitignored? And then it just halts the pull request check at that point. The same thing for Git merge conflicts: you don't want a situation where someone has merged in something from upstream and has inadvertently left some of the Git merge markers in place.
So this is a lightweight check that will just make sure that there are no Git merge markers in the code that you're trying to merge in. And then after that, we can get started for real.

Checking for gitignored files



When we did a run-through before and I saw those two simple checks for the Git files, it was really amazing. I mean, I hadn't used those tests before, and now I'm sure that they will be in my default pipeline all the time, because they're really great for this fail-fast methodology, let's call it. Really simple, amazing, and powerful.
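Those two Git checks can be approximated with standard Git commands. Here is a minimal sketch, run against a throwaway repository so it is safe to execute anywhere; `git ls-files -i -c --exclude-standard` and `git grep` are stock Git, everything else is demo scaffolding:

```shell
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q .
echo "build/" > .gitignore
mkdir build && echo "bundle" > build/app.js
git add .gitignore
git add -f build/app.js   # force-add a file that .gitignore excludes
git -c user.email=ci@example.com -c user.name=ci commit -qm "demo"

# Check 1: tracked files that the ignore rules say should be excluded.
# Non-empty output means a gitignored file was committed; fail fast.
ignored=$(git ls-files -i -c --exclude-standard)
echo "ignored but tracked: $ignored"

# Check 2: leftover merge-conflict markers in tracked files.
if git grep -qE '^(<{7}|={7}|>{7})' -- .; then conflicts=yes; else conflicts=no; fi
echo "conflict markers: $conflicts"
```

In Buddy these can run as tiny one-second actions at the top of the pipeline, so a bad commit halts the run before any expensive PHP or npm steps start.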

PHP tests



Yep. Yeah, these typically only take one second to run, so super fast, and they just make sure that you're not inadvertently committing something you ought not to be. Then we move into the PHP checks. Basically, we're doing all of our PHP checks first and then moving on to our JavaScript checks, and we're going to install our Composer dependencies first. This is running using the wordpress:latest environment, a public Docker image that has a lot of what you need to work with WordPress pre-installed. If you go to the environment tab, there are a couple of things we're doing in addition to just running our checks; we have to do some table-setting to make sure that dependencies are installed. Composer does not exist in the wordpress:latest image by default, so we have to install it; there's just a line here to install Composer before this runs. Something that we do at Alley, as a company, is we have actually made our own Docker image that has this stuff installed already, and that speeds things up for us, because it doesn't need to run this on every execution; it's pulling an image that already has it installed. But I wanted to make this demo as general-purpose as possible, so we're not using our custom image, we're using the public WordPress image. Yeah, you have to install Composer.
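In exported Buddy YAML, an action along those lines might look roughly like the sketch below. The key names follow Buddy's YAML export format as I understand it, so treat them as an approximation and compare against your own export; the Composer bootstrap line is the standard installer one-liner from getcomposer.org:

```yaml
- action: "Install Composer dependencies"
  type: "BUILD"
  docker_image_name: "library/wordpress"
  docker_image_tag: "latest"
  setup_commands:
    # wordpress:latest ships PHP but not Composer, so fetch it first.
    # Buddy caches the container state after setup commands run.
    - "curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer"
  execute_commands:
    - "composer install -q"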



To Buddy's credit, if you add those kinds of preparatory steps in the environment tab that you're seeing there, and you don't have any other options selected, it will attempt to cache the container state after those steps run. So it's a great way to extend containers if you don't want to go the full route like Alley's done and set up your own Dockerfile and build it your way; we include WP-CLI and Composer and several libraries that we know we're going to use. But the environment tab is like a little relief valve if you just want to start from a different base.



This is also great when you just want to try to test something. Personally, as a person who isn't the biggest expert when it comes to Docker, the environment tab is something that has really helped me a lot, many times.



Yeah, and then from here, once we've got Composer available, we just run composer install with the -q flag so it doesn't generate a bunch of output. Once we have our dependencies installed, we can use Composer to run phpcs and PHPUnit. Like I said, we've got that composer.json file that installs PHPUnit as a dependency, and installs phpcs and the checks that we're using as dependencies. So then we can just run phpcs from Composer, against the plugin and the theme. You'll see this little purple icon over here is lit up, and this means that these two actions will run in parallel. So this is an important time to think about how the cache works on Buddy. Essentially, what's happening is you have a Docker container that's running, and then there's a filesystem that gets attached to it, and that filesystem will persist. When you have something like this, where these two actions depend on composer install being run first, I run that, and it's not in parallel with anything else. But then for these two phpcs checks, the only dependency is that composer install, which is already done. So now I can run these two checks in parallel, and that saves you time, because it's able to do both checks at the same time. It's just thinking through, when you're building out your pipeline, what is actually a requirement for something else, and what can I run in parallel. You'll see this as well when we're preparing our WordPress test environment. But then we can run PHPUnit against the plugin and the theme, again in parallel, and that helps to speed up your pipelines overall.
And then inside of the action for this, again, in this environment we have to make sure that Composer is installed, but after that we can just run the shortcut commands, which, again, are defined in composer.json as custom scripts, and this will run phpcs against the plugin. The nice thing about doing it this way, and it's the same with npm, is that if you prefix the command with composer, it's able to run phpcs from inside of your vendor folder, and likewise, if you're running something through npm, it's running it from inside of your node_modules folder. So it's local to the project; it doesn't require a global installation of any of these tools. That also lets you keep your versions different. If you have a particular project that needs to support older versions of WordPress, you might want to run an older version of PHPUnit; but if it's for a site you're building, and you control the versions of WordPress and PHP in use, you can use a newer version of PHPUnit. Having that be configurable on a per-project basis is really useful for us, and that's why we tend to run things through Composer or through npm, as opposed to relying on them being installed globally.
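The scheduling idea, serialise the shared setup and then fan out the independent checks, can be sketched in plain shell. Buddy's pipeline engine does this for you based on action ordering; this toy script just demonstrates the shape, with echo statements standing in for the real commands:

```shell
set -e
results=$(mktemp -d)

setup() { echo "composer install done" > "$results/setup"; }
lint()  { echo "phpcs $1 passed" > "$results/$1"; }  # stands in for 'composer run phpcs:...'

setup                 # shared dependency: must finish before any linter runs
lint plugin & p1=$!   # the plugin and theme checks share nothing further,
lint theme  & p2=$!   # so they can run concurrently
wait "$p1" && wait "$p2"
cat "$results/plugin" "$results/theme"
```

The point of the structure is that only true dependencies are serialised; everything else runs side by side, which is what the purple parallel icon in Buddy represents.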



Kevin, for some of these larger enterprise organisations that are multisite, is it ever the case that we would have a different, let's say, PHPUnit version per theme?



Yeah, typically we try not to. There may be cases where you have a theme that was built a few years ago, and it's really not being updated, it just needs to be maintained. You may not prioritise updating the version of PHPUnit or anything else you're using there, as long as it's continuing to work well and you're not concerned about security or performance being degraded by updates to WordPress and that kind of thing. So yeah, you can sometimes make a case for having that be on a per-theme or per-plugin basis, but wherever possible we try to maintain it at the project level, so it would be the same version of PHPUnit and phpcs used for all themes and plugins within that project. The next thing we're going to do is prepare the WordPress test environment. Until this point, these checks have been pretty straightforward; this becomes a bit more complicated. Buddy has a doc about how to do this; we set ours up a little differently than the official documentation for a particular reason, and it has to do with being able to run these things in parallel. We separate the steps that actually set up WordPress from the steps that run PHPUnit. What this lets us do is run PHPUnit against multiple themes and multiple plugins in parallel, while only doing the setup step once. We're storing some information in this Buddy test folder, and we're using an object cache, connecting this to Memcached, because in the hosting environments that we use there's always caching available, and we want to make sure that's reflected here as well and used in our tests.
And before I go into the specific steps, there are some additional things that need to be configured for this, beyond what we were doing in just the composer install or phpcs. Because we're working with WordPress itself at this point, we need to actually have a database and a caching system installed. The Services tab is where you'd be able to see this, and in the next step, where we have PHPUnit, you'll see how these things actually connect; in this one we're preparing our files to be able to do that. So we're pulling down a script from WP-CLI — WP-CLI has a script that lets you install the dependencies to run WordPress tests, which is the WordPress developer environment — and we're running it. Then we're configuring our Memcached server address and just dropping that into the tests config file. At this point we're actually copying items from the local wp-content folder into this WordPress test environment. This requires a bit of understanding of how the Buddy cache works and how the file system works. When you're installing the WordPress test environment, it's putting it in a different folder than the default folder used when the check is running — and the default folder when the check is running is the root of your Git repository. So we're inside of this wp-content-rooted Git repository, but we're not inside of a WordPress installation. What this is doing here is copying these files from what Buddy has pulled down from GitHub into the WordPress folder that's just been set up by that script. And then we're grabbing the object cache file and putting that in there as well.
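For readers following along, a hedged sketch of what a setup step like this could look like as shell commands — the paths, database credentials, and the `memcached` hostname are illustrative assumptions, not Alley's exact configuration:

```shell
# Illustrative CI setup step (not Alley's exact pipeline).
# install-wp-tests.sh is the helper generated by `wp scaffold plugin-tests`;
# its arguments are: db-name db-user db-pass db-host wp-version.
bash bin/install-wp-tests.sh wordpress_test root root mariadb latest

# Drop the Memcached server address into the tests config file
# (exact mechanism varies by object-cache drop-in; shown as a simple append).
echo "\$memcached_servers = [ [ 'memcached', 11211 ] ];" \
  >> /tmp/wordpress-tests-lib/wp-tests-config.php

# Copy the wp-content-rooted repository (the CI working directory) into the
# WordPress install the script just created, plus the object-cache drop-in.
cp -R themes plugins /tmp/wordpress/wp-content/
cp object-cache.php /tmp/wordpress/wp-content/
```

The point of keeping this as its own action is that every parallel PHPUnit action afterwards can reuse the prepared files instead of repeating the download.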



There are some trade-offs we're making here that I think other shops may not want to make. In an effort to ensure that we're always grabbing and really working from the latest WordPress, we're doing this every time. And it's overhead, because you've got to curl down the WordPress install and effectively copy this wp-content folder over on top of it. Any advice for a shop that's considering version-locking this and shipping a container with their WordPress version, versus always pulling the latest? How do you make that decision?



Yeah, I mean, you might be able to take advantage of environment caching for that, right? Like you said, some of the stuff you put in an environment isn't necessarily run every time. That's also something you could bake into a custom Docker image. Let's say, for example, you wanted to create a Docker image that's locked to the version of WordPress when it's publicly released: you can create an image there and say, okay, we're testing this against 5.9, and then when 5.9.1 comes out, you can update it to 5.9.1. So you can take advantage of caching in that way as well. There are a number of different strategies for it; we try to get alerted to things as quickly as they become issues. It helps us get out ahead of a few things, but it can also be disruptive. A good example is when WordPress 5.9 was under development, they were building in support for newer versions of PHPUnit. The way they did that was through the PHPUnit Polyfills package developed by Yoast, which essentially backported functionality from newer versions of PHPUnit to older ones. It became a dependency for running tests in anything developed for WordPress 5.9. So there was a day when, all of a sudden, all of our pull request tests were failing, because none of us had included this package, and it had become a dependency. There was a mild scramble to go and add that line into our projects and get our PR tests passing again. And that's a trade-off: either you're learning about this, trying to put it into place, and rolling it out in a non-disruptive way — but maybe later than you otherwise would have wanted — or you're being alerted to it as soon as it becomes a problem, and it causes a bit of a scramble.
I'm not sure that one is necessarily better than the other, because they definitely have their upsides and downsides. For us, we try to be aware of these issues as quickly as we can so we can address them, even if it's a bit disruptive at times.



Yeah, I remember this problem with the polyfills. We have an article about WordPress unit testing and how to set it up on Buddy, and the moment the 5.9 version shipped, the article broke. We also had to fix it, because I missed that something like this would be shipped — and exactly the same situation happened as you mentioned.



Right. Yeah, so once we've got our files installed and we're ready to go with our testing, then we run our plugin and our theme tests in parallel. This is where the services come into play. You can use MySQL, you can use MariaDB; we're personally using MariaDB, mostly because a lot of our sites are on WordPress VIP, and WordPress VIP has official containers that they release for local development, and they've standardised on MariaDB as well. So we're actually version-locked to those particular versions of the software they released, to be as close as we can to the production environment. And so we've configured MariaDB and Memcached. For MariaDB, it's just the version that's going to be running on your production environment, and then this hostname, which will be important later; standard port; login and password root/root — no one else is going to connect to this; and wordpress_test is a pretty standard name for a test database. Memcached is the same thing: whatever version is being run in production, give it a hostname of memcached, and then the standard port. Then there are a couple of things we have to configure in the environment. We need to make sure we have libzip and libmemcached, because then we're going to pecl install memcached and redis, enable the memcached extension, and install Composer. All of those things are required to make these pieces connect up to each other. And then in our run step, because we already have our WordPress files and our object cache file, we're making a determination of whether we're in multisite or not — this is in the environment variables, and I'll take a look at that here in a bit. Then we're just running the installer manually and telling it whether we want it to be multisite or not.
And then that's going to connect up to our database server, that's going to connect to Memcached, that's going to install WordPress. Then we're going to switch to this wp-content directory in the WordPress test folder, and we're going to run our tests. So at this point, this is very similar to what you'd be doing locally, running tests and verifying. This runs the plugin tests; the configuration for the theme is exactly the same. The only difference is we're running composer phpunit:theme instead of phpunit:plugin. Otherwise the environment settings are the same, the services settings are the same.
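A similarly hedged sketch of the environment prep and run step for one of the parallel PHPUnit actions — package names vary by base image, and the `phpunit:plugin` Composer script name is taken from the description above rather than a real file:

```shell
# Illustrative environment customisation for the PHPUnit action.
# libzip/libmemcached headers are needed to build the PECL extensions;
# pecl may prompt for build options in a real pipeline.
apt-get update && apt-get install -y libzip-dev libmemcached-dev zlib1g-dev
pecl install memcached redis
docker-php-ext-enable memcached redis

# Run step: the files were prepared by the earlier setup action, so this
# just switches into the test install and runs the suite. WP_MULTISITE
# comes from pipeline environment variables (0 or 1).
cd /tmp/wordpress/wp-content
WP_MULTISITE="${WP_MULTISITE:-0}" composer phpunit:plugin   # or phpunit:theme
```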



And all of that, Kevin — if I'm a shop and I'm asking myself, "why do you have Memcached at all? I've never heard of an object cache" — where do I go? Where do I start? Just knowing that if you haven't wired up an object cache for WordPress, you probably ought to do that.



Right. And that really goes to your hosting. So depending on where you're hosting: is there an object cache available, a persistent in-memory cache — Memcached, or Redis, or any of those? You can drop an object-cache.php file in, but that's only going to cache things for the lifetime of the request, and it's of limited utility — I wouldn't say no utility, but limited. If you use transients, those are going to cache in the database. But an in-memory cache is really valuable, especially on high-traffic sites and enterprise sites where you're performing an expensive operation and you want to hang on to that information. And those WP cache functions — wp_cache_set and wp_cache_get, that kind of thing — that's what this is going to interact with. So if you have Memcached, or Redis, or whatever, you can use that to connect there. That's available on WordPress VIP, it's also available on Pantheon, and I'm sure a number of other hosts have it. If you're self-hosting, you would have to create and configure that service, but then it will be available for use as well. Generally, my recommendation is that your test environment should mirror your production environment to the extent possible: match the versions of whatever software you use — the database software, the in-memory caching software, the version of WordPress, the version of PHP — all of those things, make it as even as possible. Because you don't want to run your automated tests and then have your code go into an environment that's different and fail because of the fact that it's different.



Yeah, it would be "it was working on my machine", rather than — yes.



Right. Right.



"It's working on Buddy" is, yeah, yeah, that's



the new — yeah, the new statement.



Yeah, the new version of "it's working on my machine".

npm tests and building



Right. So yeah, that's kind of the most difficult piece of it — configuring WordPress, configuring the services it connects to, and running PHPUnit. Because really, testing WordPress is, by definition, an integration test: you're testing how your code interacts with the CMS and the database and the cache and all of those things. Just about any test you're going to write in WordPress is going to have some dependency on that ecosystem. Either you have to set up things in the database — create posts, create the conditions under which you're going to test — or you're relying on WordPress-specific functions, or action hooks and filters. Yes, you can write a pure functional test in PHPUnit for WordPress, but a lot of it is integrating with WordPress as a CMS and the way we're storing data. Beyond this, these are all of our npm tests, which I'll go through fairly quickly, because they're all pretty straightforward. We're running an npm audit. We specify that we only care about things that are high or critical, and we only care about things that are in the main dependencies, not dev dependencies. If there's a vulnerability in a package that's used in producing JavaScript but that's not actually going to end up running in a user's browser — right, it would be a problem if it was on a Node server running on a public port, but it's not a problem if it's running on my machine or a CI/CD machine that's building production JavaScript assets — we're not going to worry about it, we're not going to let that block us. And that's how we handle npm audit.
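The audit policy described here maps onto npm's flags roughly like this — note that `--omit=dev` is the npm 8+ spelling, and older clients used `--production` instead:

```shell
# Fail only on advisories of severity "high" or above, and ignore
# devDependencies that never ship to a user's browser.
npm audit --audit-level=high --omit=dev
```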
And then we're running npm ci. If any of you are still running npm install instead of npm ci, I would strongly recommend that you switch. npm ci will install exactly the versions that are specified in your package-lock, while npm install can actually update those versions — it can actually change your package-lock. So we always use npm ci; even locally we use npm ci, so everyone is working with the same versions of the dependencies, unless we're intentionally modifying a version, in which case we'll actually use npm install. npm ci also tends to be a bit faster, it plays nicely with the cache, and all of that. Then we're running our linter: npm run lint is hooked up to ESLint — we use the Airbnb standards — and that runs against all of the JavaScript files. Then we're running Stylelint against our SCSS files; again, this is just an npm command we've configured. Then we're running our tests — our test command runs Jest, which is configured in our package.json — and then we're running our build. And again, it's fail-fast: npm audit runs really quickly, so if there's a problem we're going to know about it really, really fast. Then we install our dependencies, which of course are necessary for everything that follows. At that point we run ESLint, which runs fairly quickly; Stylelint runs fairly quickly; our Jest tests run a bit slower, but they'll verify that everything's working correctly before we actually attempt the build. The build tends to take the longest and is just the final step to make sure everything's correct. The other important thing, just as a matter of developer habit, is that you should really encourage everybody to run all of these things locally before they create their pull request. Otherwise it just creates churn on your Buddy account, whereas you could check these things yourself before you ever create the PR.
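As a sketch of how those commands are usually wired up, here's a minimal package.json with the script names used in the talk — the exact ESLint/Stylelint/Jest invocations are conventional assumptions, not Alley's actual file:

```shell
# Write a minimal package.json wiring up lint/stylelint/test/build.
mkdir -p /tmp/npm-demo
cat > /tmp/npm-demo/package.json <<'EOF'
{
  "name": "demo-theme",
  "private": true,
  "scripts": {
    "lint": "eslint . --ext .js,.jsx",
    "stylelint": "stylelint \"**/*.scss\"",
    "test": "jest",
    "build": "webpack --mode production"
  }
}
EOF
# Fail-fast order in CI would then be:
#   npm ci && npm run lint && npm run stylelint && npm test && npm run build
echo "package.json written"
```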



Yeah, I want to talk just a little bit about the churn on the Buddy account. It's kind of funny how gun-shy we are about how long these things will take, because the reality is this could have been two pipelines, effectively: one that handled the WordPress side of things, and one, also triggered on every pull request, that handled the front-end side of things. And you may be a shop where you need those tests to get done fast, so you're going for a high pipeline count and high parallelisation. But those are questions you'll have to answer. Yeah.



And then what this looks like on the GitHub side is: if you have a pull request — which I did; I created a branch to convert the plugin over to TypeScript — you'll see it in this checks section, and it looks like this. You can go to the details, it will open up in Buddy and show you what succeeded or what failed. And this runs quickly. I mean, we've done all of our WordPress unit tests, and all of our linting — front-end stuff and back-end stuff, on a plugin and a theme — and the whole thing ran in two minutes and 34 seconds.



And that's probably with at least some uncached containers. Absolutely. Yeah.






Also, if I remember, at some point you used conditions, because this is also a great way to speed up that pipeline. And I know that many developers don't use them — I don't know why — and this is a really great option.

Trigger conditions



Yeah, that's a good point. So in this condition tab, you can specify that you only want this to run if there are files changed at a particular path. It's a quasi-naive check — you could change the contents of a readme file and that would cause this to execute — but at a minimum, let's say you have multisite with multiple themes, and you're making an update to one theme: you can lock down the checks being run to just that one theme. But it's a judgement call. You might have a theme that you haven't touched in a long time, but you also don't want it to break when a new version of WordPress comes out. So I think there's still some benefit in running things like the unit tests, and to some extent the linter, against all of your files, because that will guard you against updates to WordPress breaking something you weren't expecting to break in one of your older themes or plugins.
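The path condition can be reproduced with plain git, which is handy for reasoning about what Buddy's condition tab is doing. This self-contained sketch builds a throwaway repo (paths are illustrative) and decides whether theme checks should run:

```shell
# Simulate a path-based trigger condition: run theme checks only if
# files under themes/my-theme/ changed in the last commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name ci
mkdir -p themes/my-theme plugins/my-plugin
echo "<?php // theme"  > themes/my-theme/functions.php
echo "<?php // plugin" > plugins/my-plugin/plugin.php
git add -A && git commit -qm "initial"

echo "// tweak" >> themes/my-theme/functions.php
git commit -qam "theme change"

# The "condition": did anything under themes/my-theme/ change?
if git diff --name-only HEAD~1 HEAD | grep -q '^themes/my-theme/'; then
  echo "run theme checks"
else
  echo "skip theme checks"
fi | tee /tmp/trigger-decision
```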

Git Hooks



Kevin, can you talk a little bit about where, for us, the line is between what stops us and what's more of an alert behaviour? I'm thinking specifically of Webpack loaders, and of several of the hooks that you can install, like Husky or similar.



Yeah, and that's another judgement call, where you could configure, let's say, pre-commit or pre-push checks that would run through a lot of these things. But like I was showing on the checks tab in the pull request, where it took about two and a half minutes to run — waiting two and a half minutes to be able to commit something locally can slow developers down. And I'm very much of the mind that we should be encouraging folks to commit early and commit often, and anything that stands in the way of that is problematic, in my view. I would rather have people commit things that are works in progress, for the sake of getting their code to the remote server and having restore points and things like that. It's especially important for things like test-driven development: I think it's valuable to write the failing test and then commit and push that, even though technically you've broken the build, because you have a failing test — but that's the point. There's value in creating that failing test; then you write the code and make the test pass, but committing the failing test when you're done writing it is valuable. With things like Husky, you can pass flags to skip the checks, but if you're using a GUI to do your commits, that can sometimes be more difficult. I personally prefer not to have the pre-commit or pre-push checks, and to rely on people to do those things manually. And then, of course, it's all going to get run in Buddy or your CI/CD system when you're doing your pull request tests, and it will block a pull request from being merged if it doesn't pass.



Awesome. Yeah. I mean, maybe some of the lightweight checks — the gitignored-file checks or things like those — could be run, say, pre-push. Right, right. Maybe not pre-commit. But yeah,



yeah, a subset might be useful, right, as opposed to the entire suite.
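A lightweight subset like that could be a one-liner around `git ls-files`. This self-contained sketch builds a throwaway repo with an accidentally committed, gitignored file and detects it — the two lines at the bottom are the actual check you'd put in a pre-push hook:

```shell
# Pre-push-style check: flag tracked files that .gitignore says to ignore.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name ci
echo "node_modules/" > .gitignore
mkdir node_modules
echo "module.exports = {};" > node_modules/oops.js
git add -A -f && git commit -qm "accidentally committed node_modules"

# -c = tracked files, -i = limit to ignored ones, per .gitignore rules.
bad=$(git ls-files -ci --exclude-standard)
if [ -n "$bad" ]; then
  printf 'gitignored files are tracked:\n%s\n' "$bad"
  echo "fail" > /tmp/prepush-result
else
  echo "ok" > /tmp/prepush-result
fi
```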



Yeah, but still, running the whole package would just be annoying. On the other hand, there's the problem that at some point a developer could start constantly using a flag just to push their code. It's exactly the same case as code sniffers in our code editors: they're great because they will underline all the wrong syntax and show the right way to write it. But I've seen cases of developers who got so used to the fact that their code is broken, in terms of coding standards, that they just stopped noticing the underlining across their whole file. So this would be exactly the same: constantly pushing, even with Husky installed, using the flag that skips it. That's why CI/CD matters here too, because it's something you can't skip — it's required. And I know it can be stressful, especially when you're trying to push a quick fix because something is wrong on production, and you see that counter and you know all the tests will take even five minutes. But still, it's better to push the correct code once than to do, I don't know, 10 or 20 quick fixes just to fix a previous quick fix.



Yeah. And that goes to the way we've thought about how we incorporate these tasks as different things. There was a point in our history where the linter was part of the build process. You can get Webpack loaders for ESLint and for Stylelint and so on, and you can actually have that run as part of Webpack; it'll check the code before it runs the build, and it'll check it during the dev task as well. And that was ridiculously annoying, because I'd be in the middle of development — say I need to work on a React project: you import React, you need to import PropTypes, and you have to define prop types. And then my linter would be like, well, you've imported PropTypes, but you haven't actually used it yet. So then I'd go and put my prop types on my component, and you have to write a whole bunch of code to satisfy the linter before anything actually happens in the browser. Yeah,



it's like the modern-day version of Clippy: "It looks like you're trying to—"



Yes — "create a React component." And then what that leads to is developers going to the top of their file and adding eslint-disable. That has a tendency to make it into production code, because you forget to delete it, and that becomes a problem too. So I would rather get out of the developers' way — let them develop, running a dev flag or a watch flag, without the linter yelling at them; let them configure the linter in their code editor; and make sure all these things are addressed before the code gets merged. The other thing is that our build process then doesn't rely on running the linter on the separate system that's actually building the assets. That means our actual production deploys go faster, because they're not running a linter — that's saved for the pull request check. So in our case, we've determined that separating these concerns helps us out immensely.
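A cheap guard against leftover disable comments is a PR-time grep — a sketch, not a replacement for linting, and real projects often allow-list specific rules instead of banning the comment outright:

```shell
# PR-time guard: fail if any "eslint-disable" comment survives in source.
# The demo creates one offending file; in CI you'd grep the real src/ tree.
set -e
src=$(mktemp -d)
cat > "$src/widget.js" <<'EOF'
/* eslint-disable */
export const widget = () => null;
EOF

if grep -rn "eslint-disable" "$src" > /tmp/eslint-disable-hits; then
  echo "found eslint-disable comments, failing the check:"
  cat /tmp/eslint-disable-hits
fi
```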



Yeah, I mean, I was one of those people who disabled linters. I really understand — it was terribly annoying, especially when you're a back-end developer and you just have to do a small fix in JavaScript, and you don't feel great about even changing something there, right? And then there's this linter that just annoys you more and more.



Right. Yeah, even in the case of a failure — the new-WordPress-version polyfill issue, or what have you — going back to Kevin's comment that we store these things in YAML: in that same pull request you could see the committer move that check to an optional pass, temporarily or whatever, still move through the pipeline, and then see it restored in a future pull request as well. So our recommendation is hopefully to steer toward more of this: infrastructure as code is a thing, and testing as code is a thing, especially with the advent of GitHub Actions. Being able to store those here in Buddy is a big benefit for auditability. I've heard it said that good brakes allow you to go fast, and your teams are going to decide what makes the best brakes — what makes the full-stop assembly line: no, it's got to pass this before it gets through here. Yeah.



Yeah. So, in general, really taking those tests seriously is one of the most important things when you are a serious company. I think that having such tests, such a pipeline — by the way, I think this is the longest pipeline we've ever presented in the webinars, so for now you are the record holders — is the thing that differentiates those enterprise-grade companies, not only in WordPress but in every language, from those who are merely trying to be. And some of them will never learn, because their mission is to be, for example, a cheap company, so they just try to push everything out as fast as possible, without any testing. At some point the client will probably come back to an agency like yours, because he'll be annoyed with constant bugs, constant fixing, and constantly paying for things that don't work.



Yeah, that's true on the development side of things as well, because there you are as a developer, and you step into that code base from the other project, and oops — it's got a totally different coding style, and maybe it does or doesn't have tests. So if you are one of those developers out there who's frustrated with the level of code quality or whatever — Alley is hiring. Maybe Kevin can explain it more, but



yeah, yeah, we are. We're hiring for a number of folks right now. So if you like what you heard today and you'd like to work with us, we'd be interested in working with you. It's alley.co, and then our careers tab, and we're hiring developers of all different types, as well as a designer and a marketing manager. That link is also at the bottom of the readme on the demo repo, if you'd like. We shared




this repository in the comments, so you should be able to find it. So I think we covered everything — like I said, really the longest pipeline, and you explained everything perfectly, apart from those two small tests at the beginning that, like I said before, I really fell in love with, because of their simplicity and power. And on the other hand, I really like your approach when it comes to those pre-commit and pre-push hooks, because it's interesting. Many people look at these only as a guarantee that all the tests will run on the local machine, rather than pushing and blocking the CI/CD. But there's also the part about sometimes needing to push a failing commit that we won't be able to merge — because, at the beginning, you showed how to block something like this in GitHub, and we all should do this if we're relying on pull requests. So yeah, I really hope that everyone learned a lot. And we have some questions. We got one from Alexander: do you manage all plugins via the repo, or only the private custom ones? Is it part of composer.json? Also, when and how do you update WordPress core?



That's a great question. We kind of use a mix of approaches. The approach we've historically used for managing plugins is submodules, where we can. I'm not a great fan of this approach, because submodules are kind of a pain to work with, and we've recently started using composer.json to manage our plugin dependencies on some of our newer sites, which generally is working a lot better. You can just composer install; you don't have to deal with accidentally reverting to an older version of a submodule, because submodules require the developer to pay a lot of attention when something changes. Especially if you've got frequent updates to plugins, it becomes really difficult to work with, and it's actually surprisingly easy to push a revert on a plugin version. So the Composer variant ends up working a lot better. And there's some tooling available — I think it's called WPackagist — which is essentially a mirror of the wordpress.org plugins repository made available as Composer packages, so you can specify: these are the plugins I want, and the versions I want, and it will pull those down essentially from wordpress.org. It's the equivalent of going out and installing those plugins; it just happens programmatically, which is great. That's an approach we're taking on more and more of our repos, because it tends to work quite a bit better and is less error-prone. And for the second part of the question, about how we manage WordPress core version updates: that depends on the hosting provider. WordPress VIP will handle that for us — WordPress VIP will automatically apply WordPress version updates.
And that's actually an important point, because it's also part of why we tend to test new versions of WordPress really proactively in all of our unit tests: we want to make sure that nothing is going to break when the new version comes out. We'll actually oftentimes request that WordPress VIP set up a separate environment for us, or update an existing pre-production environment, to track the release candidate branch. That way we can test all of that out both in an automated way, via unit tests and integration tests, and via QA, to make sure nothing's going to break when that new version comes out. On platforms like Pantheon, we have to apply those updates ourselves by pulling in the upstream. So it does differ from environment to environment.
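The WPackagist approach Kevin describes might look roughly like this composer.json — the plugin name, version, and the `installer-paths` mapping (which relies on the composer/installers package) are illustrative examples, not Alley's actual configuration:

```shell
# Write an illustrative composer.json pinning a wordpress.org plugin
# via the WPackagist mirror.
mkdir -p /tmp/composer-demo
cat > /tmp/composer-demo/composer.json <<'EOF'
{
  "repositories": [
    { "type": "composer", "url": "https://wpackagist.org" }
  ],
  "require": {
    "composer/installers": "^2.0",
    "wpackagist-plugin/safe-svg": "^2.0"
  },
  "extra": {
    "installer-paths": {
      "plugins/{$name}/": ["type:wordpress-plugin"]
    }
  }
}
EOF
echo "composer.json written"
```

Running `composer install` against a file like this pulls the plugin down programmatically, which is the "equivalent of installing from wordpress.org" behaviour described above.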



Yeah, and if you're self-hosted — well, if you're self-hosted, there's a large possibility you've custom-built your own containers, and you'd want to integrate them into a test suite before you start using them. So you might say: I'm testing PHP 7.4 and PHP 8 side by side, and yeah, I'm only live right now on 7.4, but I've got a much higher confidence level that my site is going to be happy on 8.



it's also possible to install WordPress as a Composer dependency. That's not something I've personally done, because it's not how our hosting providers are set up, but it is also a possibility.
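For completeness, requiring core itself through Composer might look like this; `roots/wordpress` is the package that Bedrock-style setups commonly use, shown here as an assumption rather than a recommendation:

```shell
# Write an illustrative composer.json requiring WordPress core itself.
mkdir -p /tmp/core-demo
cat > /tmp/core-demo/composer.json <<'EOF'
{
  "require": {
    "roots/wordpress": "^6.0"
  }
}
EOF
echo "core pinned via Composer"
```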



Yeah, Bedrock is based on this solution, and it works really nicely. I use Bedrock, and I always have WordPress core as a dependency itself, and it just works. I have everything in one place; I just run composer install, the only thing I have to prepare is the environment file, and my website is working just like that. Also, Jared mentioned: "I can attest to this — I once wrote some very bad WP-Cron code that, without code review, eventually led to huge problems." That's the thing we discussed at the beginning. Okay, so I think it's time to wrap everything up. Thank you, guys — you really showed us how to do WordPress the proper way. I really hope that many of you learned something from this and will try to start using practices like Alley's. We also did a poll at the beginning — we asked, "Do you automate your pull request checks?" — and it turns out that 67% automate their pull request checks. I hope the rest, after today, will also start; with access to this Alley repo, they should have a bit easier task, especially to get started. Our next webinar will be on April 29, and we will have Henri, who will talk with me about why website performance matters and how to improve it — a very important topic. Apart from this, remember to sign up to our Buddy CI/CD meetup group; it's a great place to get informed about all the upcoming webinars. And don't forget to subscribe to our YouTube channel — you'll get notified about every upcoming webinar, and you'll have a chance to watch all those webinars again, so, for example, you could once again listen to Kevin and Ben saying these very smart things about WordPress automation. Also, I hope that you will all join our Discord channel, so we'll have a chance to discuss today's topic a bit more.
You can find the link in the comments. So, guys, thank you once more — it was really a pleasure listening to you — and see you all in two weeks. Have a nice day or night. Thanks for having us. Bye, everyone.
