How to Audit and Boost Your Nonprofit’s Web Performance
- May 2, 2022
Google's research tells us you have less than four seconds to load your web content before you start experiencing significant bounce rates. Making sure your site performs at a high level is crucial to delivering a good user experience and bolstering conversion rates.
Metrics such as the Core Web Vitals can help you measure how your site is doing. They can tell you not only how fast your content is loading but also how users perceive content loading on their screens.
Learn about a variety of free and paid tools to test and measure these metrics, as well as ways to improve your site’s performance.
We walk through a few ways to test your website and review what metrics you should be paying attention to and what they mean. Finally, we review test results and spotlight actionable ways to improve site performance.
Transcript
Mark: Hi. Welcome to How to Audit and Boost Your Web Performance. We’ll discuss the importance of performance on your website and ways to improve it. My name is Mark Leta, and I am the director of business analysis and quality assurance at the Allegiance Group. And today, I’m joined by my friend, Rob.
Rob: Hey folks, how’s it going?
I'm Rob, director of marketing analytics at Truth Initiative. Just a quick background on who we are: we're a nonprofit public health organization whose mission is to make tobacco use and nicotine addiction a thing of the past. And so you know how Mark and I are related, if you will: at Truth, we have a couple of brands, but I work mostly on the thetruth.com campaign, which is our organization's flagship brand and probably what most people know us for. Those marketing efforts are specifically targeted at young people in America to ensure they're educated on the harmful effects of nicotine addiction, and it's on those efforts that Mark and I collaborate quite a bit.
So I will kick it back over to you, Mark.
Mark: Sure. So what are we covering today? We have three areas, the first of which is what good performance is. What do we mean by that?
Particularly, what do we care about for digital products or websites? What are the user expectations around performance? Where are they at, and what do they think they should be getting out of their experience? And then how do we know if performance is really good?
What metrics should we be paying attention to? What do we care about? And then finally, as part of this first section, we want to talk about the relationship between performance and user experience because it’s really an important one. From there, we’ll move into more of the case study information, where we’ll cover the work that Rob and I have done on thetruth.com in terms of redesigning and subsequent performance enhancements. And then also on related campaign work where tuning performance and ensuring that things are working well from that standpoint were important. We’ll talk about what affected Truth specifically in terms of performance, some of the challenges we faced, the testing and the auditing processes we used, and some of the recommendations that we came up with, both the ones that were quick wins and some of the harder ones to get into.
And we’ll close out that section by talking about some of the results. In the last part of our session today, we’re really going to get into some of the tools and talk about what you can do now to improve your own site’s performance and take a look at some of what’s out there to help you out.
What is good website performance?
So what do we really mean by good performance? There are really three aspects to consider here: how fast things actually load within the browser for a user, how fast users perceive things to load, and how quickly they can interact with the page. Those three aspects together make up a user's sense of how well a user interface is performing.
What about those user expectations? Unfortunately, they're shaped by users' experiences across the entire web, not just by what they experience on your own site or even competitor sites. Users are constantly browsing highly performant applications and sites built by tech giants with deep pockets and endless resources to make sure performance is exceptional.
Their expectations are really set by Google, Twitter, Amazon, and Pinterest. As such, they've come to expect load times of around three seconds or less. Their expectations are really high, and after that point, frustration starts to set in on some level. As load times increase, frustration sets in even more.
In fact, Google research found that for e-commerce sites, around 53% of mobile site visitors will leave a page that takes over three seconds to load. On the slide here, we've got a stat that gets trotted out quite a lot when people talk about performance, and it shows the relationship between each additional second of load time beyond one second and what happens to bounce rates.
In this case, they're showing the probability that a user will bounce. You can see that as load time goes from about three seconds to six seconds, there's a steep rise in the probability that your users are going to bounce, and it levels off some, but by the time you get up to 10 seconds, the probability that the user will bounce has increased by over 120%.
So expectations are clearly really high, and we have to figure out what we can do to meet them as best we can.
How do we know if performance is good?
It's simple: we test. First, we need to understand what our users are going to experience. So we need to research who they are and what their end-user experience is like, understand what kinds of devices, operating systems, and browsers they're using, and learn whatever we can about their network connections.
In most cases, we won't have analytics that tell us their connection speeds, but we can make some inferences, try to simulate what those conditions might be, and run tests. In doing our testing, we also want to use field data as much as possible. Google provides a really nice resource in the Chrome UX Report, which is data gathered from real users browsing the web in Chrome and can give you real-world feedback on what visitors actually experience on your site.
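If you want to pull that field data programmatically, the Chrome UX Report is also exposed through an API. Here's a minimal TypeScript sketch, assuming you've obtained a (free) CrUX API key; the CRUX_API_KEY variable, the origin, and the PHONE form factor are just illustrative values, not something from this session.

```ts
// Minimal sketch: query the Chrome UX Report API for an origin's field data.
const CRUX_API_KEY = process.env.CRUX_API_KEY; // placeholder: your own key

async function getFieldData(origin: string): Promise<void> {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${CRUX_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ origin, formFactor: "PHONE" }),
    }
  );
  const { record } = await res.json();
  // Each metric reports percentile values aggregated from real Chrome users.
  for (const [name, data] of Object.entries<any>(record.metrics)) {
    console.log(name, "p75:", data.percentiles?.p75);
  }
}

getFieldData("https://www.thetruth.com");
```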
That field data tells us a lot about the real world and what's happening on your site. We also want to use simulated experiences as well, that is, lab-type data that we create ourselves. And when you're doing your testing, you should focus on the three areas we started discussing:
the actual load time; the perception of loading, where visual stability is often the focus; and the interactivity the user experiences with the site.
What performance metrics are most important?
Core Web Vitals
So what metrics do we really care about? Certainly the Core Web Vitals are probably the first thing that comes to mind for most people. These have gotten the most attention, particularly in recent years. They were developed by Google, and they're really a subset of a larger group of metrics.
For now, these are the ones Google has settled on as the most important for judging loading, perception of load, and interactivity. And this is after having focused on other metrics previously and doing a lot of testing and studying the results to try and find the best way to represent these different ideas.
LCP is one of these, and this is the Largest Contentful Paint: the time needed to render the largest image or block of text on a page after the page starts to load. It's a simpler way to judge and capture the load time from the user's perspective. Instead of focusing on hard load times for things like the DOM, or a speed index, or even the time it takes to start painting content on the page, Google feels that the Largest Contentful Paint, even though it's simpler, tells us more about what's happening in terms of the raw load.
CLS is the Cumulative Layout Shift, which gets at the user's perception of loading. It's a score that measures how much content shifts, redraws, or adjusts on the page as the page is coming in. And it's really about that perception of what's happening. For example, it can give the user a sense of instability if they see the page redrawing or shifting as it draws in.
FID is the First Input Delay. This is a measure of the time between when the user first interacts with something on the page and when the browser can start processing that event. So it's the time between when somebody clicks, taps, or uses some control and when the browser can respond. In this way, you're really measuring the first impression of how the user perceives your site's interactivity.
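For the curious, all three of these metrics are observable in the browser itself. Below is a rough TypeScript sketch using the standard PerformanceObserver API. It's simplified; for instance, the official CLS definition groups shifts into session windows, and Google's web-vitals library handles those details for you. But it shows where the numbers come from.

```ts
// LCP: the last 'largest-contentful-paint' entry seen before user input is the
// page's final LCP value.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const latest = entries[entries.length - 1];
  console.log("LCP candidate (ms):", latest.startTime);
}).observe({ type: "largest-contentful-paint", buffered: true });

// CLS: accumulate layout-shift scores, skipping shifts caused by recent user
// input. (The real metric takes the worst "session window" of shifts, not a
// simple running sum; this is the simplified version.)
let clsScore = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const shift = entry as unknown as { hadRecentInput: boolean; value: number };
    if (!shift.hadRecentInput) clsScore += shift.value;
  }
  console.log("CLS so far:", clsScore);
}).observe({ type: "layout-shift", buffered: true });

// FID: the gap between the first interaction and when the browser could begin
// handling it.
new PerformanceObserver((list) => {
  const first = list.getEntries()[0] as PerformanceEventTiming;
  console.log("FID (ms):", first.processingStart - first.startTime);
}).observe({ type: "first-input", buffered: true });
```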
One of the reasons people have been so focused on Core Web Vitals is their SEO impact, or perceived SEO impact. And I wanted to take a moment to talk about that because it is such a concern when we're talking about performance testing. A lot has been written about it, and initially, Google had just said that your Core Web Vitals scores would start to be used in their algorithms for determining SERPs.
And this caused a lot of concern and hand-wringing, and people were worried that, based on their site's performance, they would lose a lot of Google juice or SERP placement that had been built up over the years. Initially, the deployment was delayed, and then we learned that it would really just be a soft ranking factor in the algorithm, one of a hundred or so factors that go into ranking pages.
The Core Web Vitals’ impact on SERPs wouldn’t be as great as first thought. And as the mobile ranking factors rolled out last year, that’s how it played out. Changes were observed for sure, but not many shifts in search results.
And as the desktop ranking factors roll out now, due to be done by the end of this month, it should be similar: there shouldn't be a huge impact on SERPs based on the Core Web Vitals. So while we should be working to make these scores as good as they can be, there's not really a need to panic from an SEO perspective.
And, perhaps, if you're trying to figure out where to put your resources in terms of SEO, spending a lot of time on the Core Web Vitals may not be as effective as doing other traditional SEO activities: increasing your backlinks, improving your headlines or headers and the quality of your descriptions and what you're writing about, or consolidating short content into larger articles on the same topic. Those types of traditional SEO activities might yield better results than focusing on just the Core Web Vitals. Rob, I think you can echo how seeing some of those Core Web Vitals scores for the first time can be a little bit upsetting because, in a lot of cases, people aren't doing as great as they would hope to be.
Rob: I would say jarring is the word, actually.
And folks, if you're on the phone or in the meeting here and LCP, FID, and CLS don't mean anything to you, well, maybe they mean something to you guys, but they meant absolutely nothing to me. I've learned a lot since working with Mark, and they mean a little bit more now, but when Mark was first walking us through all of these metrics and how they're measured and their importance, it was, again, jarring. Overwhelming is another word. But Mark has walked me back from the ledge a little bit and said that some of those traditional, tried-and-true SEO practices, like making sure your meta descriptions and titles and things are in working order, are still very impactful. That's good to hear. You want to focus on those. But this world that Mark has uncovered for me is important too: as we start to move into bigger and better and more advanced analytics and figure out how it all works, these things will become more important. But, yeah, it was a very interesting meeting the first time Mark walked us through it.
Additional Metrics
Mark: So in addition to those Core Web Vitals, there are a number of additional metrics that we pay attention to when we're thinking about performance. On the left-hand side here, from the top down, these are ones that come into play progressively as the page loads.
First, we see the Time to First Byte, which is how quickly your site delivers data; then the Start Render Time, when the page starts to come together; and the First Contentful Paint, which is when we actually start to see pixels. The Speed Index is a more complicated metric that evaluates the completeness of a page's loading over time.
It's good for comparison, but taken out of context, it's really just a number and maybe not as meaningful. Total Blocking Time is interesting because it captures how much of the time between the First Contentful Paint, when you're first seeing pixels, and the time to being interactive was spent blocked, with scripts or files coming down and tying up the main thread. And then Time to Interactive is the time from when the page starts loading to when it can reliably respond to interactivity. So these six metrics are ones we pay attention to quite a bit.
On the right are other metrics that are interesting but that we really don't focus a whole lot on. These are the ones around completion times, things like completing the document load or the DOM load, or the total number of bytes. These really are not that important, particularly in this day and age when most pages are dynamic and things are coming in on the side. Ultimately, we don't care where the process ends as much as we care that the page initially loads in such a way that the user perceives it to be done and can interact with it.
Sometimes things like the number of requests are interesting and important, because we can go back and look at those requests and either reduce some of them to improve performance or at least make sure we're making the right requests and not doing anything extraneous in terms of the load.
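Several of the load-phase metrics on the left can be read straight out of the browser. Here's a small TypeScript sketch you could run in a page's console or a script, assuming a browser that supports the Navigation Timing, Paint Timing, and Long Tasks APIs:

```ts
// Time to First Byte, from the Navigation Timing API (startTime is 0 for the
// navigation entry, so this is effectively just responseStart).
const [nav] = performance.getEntriesByType(
  "navigation"
) as PerformanceNavigationTiming[];
console.log("TTFB (ms):", nav.responseStart - nav.startTime);

// First Contentful Paint, from the Paint Timing API.
const fcp = performance.getEntriesByName("first-contentful-paint")[0];
if (fcp) console.log("FCP (ms):", fcp.startTime);

// Total Blocking Time is derived from long tasks (over 50 ms) between FCP and
// Time to Interactive; lab tools compute the final number, but you can watch
// the long tasks that feed it.
new PerformanceObserver((list) => {
  for (const task of list.getEntries()) {
    console.log("long task (ms):", task.duration, "blocking portion:", task.duration - 50);
  }
}).observe({ type: "longtask", buffered: true });
```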
Good performance is good UX
So for the last slide in this section of the session, we really wanted to talk about good performance in the context of UX, because it's really a key part of the user experience. We can't have a good user experience without good performance. We put a lot of work into research and design as part of our digital experiences, and we develop systems and interfaces and bring those designs to life on screen.
We create content and tie the user experience with functionality, creating these great optimal paths for users to achieve their goals and our business goals. We test everything, ensure it works, and try to meet all those design specs. However, we often do all of this without knowing how the interface is really going to perform for our audience.
And in some cases, we may not even measure performance until the final testing stages, or even after deployment, as an afterthought. And if the site doesn't perform well in the end, we really, by definition, won't have a good user experience. The two are inextricably linked. So when we're thinking about good performance, I think we really need to think about it in the context of the user experience.
And we really want to incorporate some of that thinking into the process. The idea of performance design is out there, and there are several tenets that you can bake into your work or think about as you go through a design process for UX. Certainly, thinking mobile-first is a given nowadays, but you want to go beyond that and think about not only how your designs will work on mobile but how they'll be super fast and efficient for the user.
You want to think about simplifying things wherever possible. You want to critically review what you have in terms of the imagery, the styles and the scripts, and the contents in play, and ask yourself, do we really need all of this? Does it serve the goals that we have for the user or the goals that we have as an organization?
Or are we just adding weight to the page? Can we pull some of it back and achieve the same goals, maybe with a simpler design or simpler concepts? Once you've done that, try to make it feel and perform fast. You want to do things like optimize vector graphics, sure. You want to optimize the images and media, both in terms of the resolutions you're using and the sizes you have out there, all to reduce the load. You want to defer the loading of images so that they get loaded on the side and are only shown when needed on the screen,
rather than all being included on the page as it comes in. You may also want to consider simplifying fonts. Surprisingly, fonts can cause a lot of problems in terms of performance. It takes a while for them to load in, and they tie up the threads. So maybe even choosing a system font over a downloaded font could be an option to simplify things.
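To make the image-deferral idea concrete, here's a small TypeScript sketch of lazy loading. It assumes a common (but non-standard) convention where images carry their real source in a data-src attribute; native loading="lazy" is used where available, with an IntersectionObserver fallback for older browsers.

```ts
const lazyImages = document.querySelectorAll<HTMLImageElement>("img[data-src]");

if ("loading" in HTMLImageElement.prototype) {
  // Native lazy loading: the browser defers offscreen images on its own.
  lazyImages.forEach((img) => {
    img.loading = "lazy";
    img.src = img.dataset.src!;
  });
} else {
  // Fallback: only assign src once the image approaches the viewport.
  const io = new IntersectionObserver(
    (entries) => {
      for (const entry of entries) {
        if (!entry.isIntersecting) continue;
        const img = entry.target as HTMLImageElement;
        img.src = img.dataset.src!;
        io.unobserve(img);
      }
    },
    { rootMargin: "200px" } // start fetching a bit before the image scrolls in
  );
  lazyImages.forEach((img) => io.observe(img));
}
```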
You also want to provide an indication of loading if you can. This helps with the perception that things are happening. So using something like a skeleton screen that sort of blocks out where the content will be will give the user the sense of load happening and content coming in, or even subtle loading animations that show a response.
And I'm not talking about the spinning circle, but really just a subtle animation that indicates, okay, the system is doing something. It all helps that perception of loading as the user is sitting there waiting. And then we want to think, too, maybe as part of our user experience work, about a speed budget. This is a great way to address performance in design (there's a small sketch of the idea a little further down).
You have to really set a goal for what you think will be acceptable in terms of performance. You can establish this early, and you may need to adjust as you go, but at least having that goalpost out there gives you something to strive and aim for.
It gives you a reason to trim back on your design and your concept as you go, to try and achieve better performance. We also want to incorporate testing early and testing often as part of our process. This may follow the user experience process: create the designs and build them into a UI.
But as soon as you have a testable UI with assets intact, you really want to start testing for performance, because making fundamental changes to the design or the code and functionality is much easier the sooner you find a problem than if you wait. The other thing I suggest is that you report on performance regularly, as part of your analytics if you can.
And you want to socialize those metrics and get decision-makers to pay attention to them. Certainly, if you need to get buy-in later on for a project or work to improve performance, having people understand its importance in providing a good user experience is worthwhile.
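As promised, here's a tiny TypeScript sketch of what a speed-budget check might look like in a build or CI step. The budget numbers are illustrative, and the commented-out measureKeyPage call is a hypothetical stand-in for however you collect lab metrics (WebPageTest, Lighthouse, etc.).

```ts
interface Metrics {
  lcpMs: number;      // Largest Contentful Paint, in milliseconds
  cls: number;        // Cumulative Layout Shift score
  totalBytes: number; // total page weight
}

// Illustrative budget: tune these numbers to your own goals.
const BUDGET: Metrics = { lcpMs: 2500, cls: 0.1, totalBytes: 1_500_000 };

function checkBudget(actual: Metrics): boolean {
  let ok = true;
  for (const key of Object.keys(BUDGET) as (keyof Metrics)[]) {
    if (actual[key] > BUDGET[key]) {
      console.error(`Over budget: ${key} = ${actual[key]} (limit ${BUDGET[key]})`);
      ok = false;
    }
  }
  return ok;
}

// Hypothetical usage: fail a CI build when a key page blows the budget.
// if (!checkBudget(await measureKeyPage())) process.exit(1);
```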
So that kind of wraps up our first section in terms of what is considered good performance. And now I’m going to pass it on to Rob to talk a little bit about some of the work we did for the Truth site.
Case Study
Rob: Cool. Thank you, Mark. As Mark mentioned, there's a lot to look at, and I consider myself a fairly technically savvy person.
I've been doing digital marketing and analytics for the better part of 15 years, and it still came as a shock when Mark started uncovering some of these things for us. So I want to share a real-life experience, our own experience with the thetruth.com website, just to show how this can play out when you have a partner, like Mark, who knows what they're talking about.
The challenge
The quick backstory here is that several years ago, we knew we were in desperate need of a design refresh of our website. There were a couple of other updates we were looking at as well.
We needed to get updated to Drupal 8; I don't even know what version we were on before that. And with a redesign of a website come new website analytics implications. What are we going to track? How are we going to track it? We have a pretty hefty marketing budget here to drive folks to thetruth.com, and on all of the platforms we operate on, we make use of the pixels they provide so that we can do optimizations in those platforms, places like Snapchat and TikTok, where you want to be able to optimize to a conversion or a video view or whatever the case may be. We had to QA all of those and make sure they were functioning.
All of this is to say that, with a redesign of this magnitude, and it was a large one, there are a lot of moving pieces, a ton of moving pieces. It took several months to really get it off the ground. And while we were doing that, the UX was thoroughly thought out. We validated what we believed was a good user experience using fairly modern techniques, like card sorting and others.
Solutions
We thought mobile-first, of course. We already knew, and I was very clear about the fact, that almost 90% of our traffic comes from a mobile device. Our target demographic is the young people of America, ages 13 to 24. Where are they? They're on mobile. But performance, and I hate to even say this now, looking back on it, was never even considered. It wasn't even something I thought of. It wasn't something we even brought to the table. Did I know that speed matters, in the broad sense that you want a fast website? Did I know then? Yes, of course. But the need for a modern, updated website really put blinders on me regarding speed.
And I mention all of this to highlight a point that I think is important. And that is the balance between managing content, design, UX, and all of the tangible stuff that people can see, the stuff we as digital marketers need to do our jobs, and then how all of that affects your performance.
Let's just be really blunt about it: performance is not public-facing at all. And I think, looking back on it and knowing what I know now, it's too often left by the wayside by marketing teams. So it was a learning process for us.
And in the end, what did we have? We had a really flashy-looking website. It's on the latest version of Drupal; that's a big win. Analytics is functioning, the pixels are working, and it's time to pop some champagne and celebrate: we have a new website.
Because it's a new site, we start paying closer attention to things like page load speed in a system that we utilize here, New Relic. Mark will get into that system later, but I'm looking at things like, all right, how much traffic are we driving? Are people engaging with the website?
But then I also really start to focus on a very basic metric: page load speed. And I see that we are averaging 10 seconds per page load. I knew enough to know that wasn't good. I panicked a little bit, to be frank with you guys. I emailed Mark and company right away and said we have to address this. What can we do? And Mark came back, and as you can probably tell, he knows his stuff, with a laundry list of improvements. Some of the things on that laundry list we managed in-house, things like minification of CSS and JS and optimizing image size and resolution. But even for someone like me, on the fairly technical side, the rest of what he was suggesting was just, I'll call it, deep in the performance-design weeds.
It was a lot. And so we said, Mark, let's go, and set him to the task. And here I'll pass it back, as this is exactly what I did in real life. I was like, all right, cool, I hear you, I appreciate all of that. Can you please go implement it and make us better? So I'll pass it back to him to talk about what those things were that he suggested and implemented.
Mark: Yeah, we started out looking at what was affecting performance. And we tend to think about these things in terms of the backend and the front end. When we think about the backend, it's the services, the application layer, the database, and the CMS.
And there, it’s interesting because we’re all using mature platforms and cloud services that have been optimized, like Drupal in a managed hosting environment. A lot of the traditional bottlenecks and roadblocks from the past have really been alleviated.
And we don't have to spend a whole lot of time on things like database query optimization or the like, and can move into the CMS to look for opportunities to improve performance. So that's what we did: our focus on the backend was more on the Drupal side of things and how we could either work with different modules or set up the system configuration in such a way that we could serve content to the front end more efficiently. There were really three areas that we worked on: caching within Drupal; how the CSS and JavaScript files were being served, because some of that was inefficient; and the templating going on in the system. We were using a pattern library that had helped speed up development, but doing so resulted in quite a lot of extra overhead in terms of markup and nodes in the markup.
And we had a lot of CSS and JavaScript included on most pages without easy ways to exclude them from pages where they weren’t necessarily needed. And then, on the front end, this is where we saw a lot of opportunity in creating performance improvements. And there were three areas here.
One was within the markup, the styles, and the scripts themselves: trying to reduce some of that to bring down page weight and then optimize the asset files in use. Some of the content was problematic. All of the images on a page were initially loading along with the page; we weren't doing lazy loading yet.
So we worked on that, and we knew that was an opportunity. The images and media were also not always as optimized as they could be, so we spent some time there. And then the CDN was an area where a lot of opportunity presented itself in particular, because of some of the tools that were available.
So we had some specific challenges that we faced in taking a look at the performance for Truth. Performance wasn't a key consideration during design and development, as Rob mentioned, and we were really looking at the testing and auditing post-launch. Plus, in a lot of cases, particularly recently, we had fast turnaround times for campaign work, so we didn't have a lot of time to spend on performance. As for development challenges specifically: as Rob mentioned, a lot of the design is very image-rich and aimed at a younger audience, so it's flashy. It's cool, but it has a lot of imagery to contend with. We also have these limitations within Drupal that we have to contend with for the templates.
And there's a lot of styling and scripting going on. There's FoUC, a flash of unstyled content, that happens because we have to load external things, and there are also third-party scripts and beacons.
Rob, do you want to talk about the FoUC example in particular?
Rob: Yeah. Yeah, again, new concept to me, but Mark’s been great about educating me about it along the way.
And here on the page, you can see some of the stuff I’m going to chat about real quick. It’s a little bit small, but one of our main website KPIs is on-page engagement. We are intentionally designing and incorporating elements to pique a young person’s curiosity.
We want to foster engagement on the website. We want them to go deeper on the site and learn more because, ultimately, it’s their decision whether or not they will embark down the path that big tobacco wants them to embark down. And so we want to ensure they’re informed and have the information.
And we do that, like I said, and as Mark mentioned, with flashy, cool things that pique their interest and get them to engage. One of our most effective methods of doing that is what we affectionately refer to as the right-side sticky. It's just a simple overlay that expands out from the right side of the page, but it captures your attention. And we load that onto the page using Adobe Target; we here at Truth are an Adobe marketing shop, and we have some custom code that loads that overlay using Adobe Target. Now, I can hear some of you saying, no, wrong, don't do it. In fact, Mark was one of them, and I'll touch on that in a second. I hear you on that, but using Target on our site at least allows someone like me to control it.
More importantly, it allows me to test different versions in a live environment. So for our recent Breath of Stress Air launch, a new campaign we just put out, we decided to leave it off, right? Because when Target needs to load on the page, it results in this brief flicker. So on the page here, you can see the Breath of Stress Air landing page is buried underneath that top image. And then you can see the overlay as it flies out. The overlay is there, again, just to capture users' attention and let them know that if they are stressed, they can click here and go get information about what to do under those circumstances.
And so, from a UX perspective, I have to be honest, I like it. It looks cool. It looks flashy. It's an effective tool to capture attention. But then Mark comes in and very politely says, hey, it will result in the flicker again. And we initially said, okay, you know what, Mark, that's right. We are trying to think about performance. I want to say performance-first; we're not there yet, but we are considering performance along the way. And so we said, all right, let's just leave it off. We launched the page and almost immediately saw less-than-desired on-page engagement, and we realized we wanted it back. A bit of a LOL moment. So we began the effort to reinstate it, and again, Mark was quick to point out: fine, but if you do that, you're going to see the flicker. So we launched the overlay in a test environment to get the full effect, and you can see that here.
This is one of Mark's very cool tools that he's introduced me to. I can't see the actual seconds there, but you can see it's black as it's loading, here comes the landing page, and here comes some content, and then boom, at around the three-and-a-half-second mark, Adobe Target loads, and it goes back to that black flicker. So is that an ideal user experience? No, it's not. But we decided that the potential increase in on-page engagement was worth the brief flicker. And I tell this sort of long-winded story to highlight my earlier point about managing the balance between performance design and front-end user experience, and that there will likely need to be tradeoffs between the two.
You have business goals versus front-end user expectations versus the back-end systems management of it all. And all of those, as I'm coming to learn very quickly, need to be weighed. There are going to be some tradeoffs between them. Another good example I mentioned earlier is the ad pixels. We rely heavily on third-party ad pixels to ensure we can effectively run, measure, and optimize our ad campaigns.
But the more pixels you have, the more connections you have to make to those services and platforms, which ties up those browser threads and can cause a backup that slows things down. This is a classic example of the trade-off. In this case, these pixels cannot go.
We absolutely need them. We know they slow our performance. This is where I lean on my partnership with Mark and say, Mark, where else can we improve to balance that out?
Mark: So, how did we try to address these challenges? The first thing we did was implement a testing and auditing process. It was important for us to get a process in place that we could rely on. We started by identifying the key pages. We wanted to focus on where the traffic really is, because improving performance for a larger part of your audience is all about getting the most for your effort.
So we looked at the analytics, and we focused on key pages. Then we created baselines, because we wanted to know where we were starting from; using the metrics we had and wanted to track, we established baselines. We also established which testing sources we were using: the tools, the simulated network speeds, and the devices.
Then, from that initial run through the testing process, we determined recommendations. These came primarily from the tools, but also from our own observations as we examined the waterfalls and looked at performance. We prioritized those recommendations based on the level of effort we felt each would take to implement and on its perceived effectiveness: what we were really going to get out of implementing it in terms of benefit to the performance of the page and site. And then we made the process cyclical. We implemented one or two recommendations, deployed them, retested and remeasured against the baselines to see how we might have improved (or, in some cases, not), and then repeated the process, working our way through the recommendations we had.
So we found some quick wins in going through this process, certainly image optimization. I'm sure we've said that quite a few times during the session, but it really turns out to be something to come back to because it is so important in terms of load. A lot of times, even after we think things have been optimized, we're able to find ways to optimize them further, whether in size and resolution or by using a different file type; maybe we can get away with using a JPEG instead of a PNG somewhere and save some weight on a file.
We were able to get rid of some excess scripts, styles, and fonts in some cases. Once or twice, I think we even found the same script being called twice, so we were able to remove calls that were unnecessary. The styles can add up, so the more you trim down, the better. And we found with the fonts that, in some cases, we were downloading the entire complement of site fonts unnecessarily. As mentioned, those add weight and tie up the threads, so you want to limit the number of fonts you're downloading if possible. There were some excess requests in there, too. Some of these were from old services that weren't being used, or from unnecessary services, and we were able to get rid of those. Within the head of the page, we implemented preloading of fonts, which helps.
And we also implemented preconnection to third-party domains. That helps you establish the connection to services or other domains you're using: it starts the DNS lookup and other processes at the front of the page-loading process, so when the call is eventually made, you're not waiting on the DNS lookup, the TLS negotiation, et cetera.
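For reference, both of these hints are just link tags in the document head. Here's a TypeScript sketch that injects them; in practice they usually live in the static HTML so the browser sees them as early as possible, and the font path and CDN domain below are placeholders, not Truth's actual assets.

```ts
// Hypothetical helper: adds a resource hint to the document head.
function addHint(
  rel: "preload" | "preconnect",
  href: string,
  extra: Partial<HTMLLinkElement> = {}
) {
  const link = document.createElement("link");
  link.rel = rel;
  link.href = href;
  Object.assign(link, extra);
  document.head.appendChild(link);
}

// Preload a font so it's fetched before the CSS that references it is parsed.
// Equivalent markup: <link rel="preload" href="/fonts/brand.woff2" as="font"
//                          type="font/woff2" crossorigin>
addHint("preload", "/fonts/brand.woff2", {
  as: "font",
  type: "font/woff2",
  crossOrigin: "anonymous",
});

// Preconnect to a third-party origin so the DNS lookup and TLS handshake
// happen up front. Equivalent markup:
// <link rel="preconnect" href="https://cdn.example.com">
addHint("preconnect", "https://cdn.example.com");
```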
Those are some of the quicker things we were able to do. And then, digging in a little deeper, there was some script refactoring that we did. We implemented the image lazy loading, which wasn't always straightforward or simple. We addressed some of the flashes of unstyled content that were happening, which helped with the perception of load.
We did a lot of tuning within Drupal to make it unnecessary for requests to go all the way back to Drupal in cases when they didn't have to. And then, on the CDN level, we did a lot of cache tuning, and we really saw improvements from that. Then we got into the performance tools that the CDN provides, and here we found a lot of opportunity and a lot of great stuff. There were three particular tools that we used. One was a load prioritization tool. It helps determine which parts of the JavaScript are needed as a page loads and prioritizes those parts of the script accordingly, so it allows the page to keep loading as the JavaScript is coming down. That helped quite a bit.
And then we shifted the aggregation and the minification of the JavaScript and CSS files to the CDN and had it handle those. That helped too, and we were able to cache a lot of that stuff up there. And then the third tool that really helped was an image optimization tool. The CDN we're using has a great tool that serves up next-generation file types on the fly.
So, for example, if a browser can handle a WebP, it’ll serve that up because it can download it faster. It also does things like progressively loading images into the page. So it starts with a low-res version, and it increases the resolution as the image comes down and handles some lazy loading activity as well.
Results
Those are some of the deeper recommendations that we were able to implement. And the results were pretty good. On our first go-round, we saw some of the first-byte, start-render, and speed-index metrics go down quite a bit. We had a follow-on effort on the main site where we looked more specifically at some of the Core Web Vitals, and we got our Largest Contentful Paint down by up to about 40% on some key pages.
We were able to reduce the Cumulative Layout Shift and the Total Blocking Time, which is really about how much time was spent blocking the main thread; we brought that down to improve loading. And then, more recently, on the campaign side, we were able to do similar activities and go through the process we now have in place for performance testing and optimization.
And we saw improvements, particularly around First Contentful Paint, Largest Contentful Paint, Total Blocking Time, total number of bytes, and total requests. We could do quite a bit here and see some good results. That being said, there were still places where we fell short, areas where we were not able to improve performance as much as we would have liked. One area in particular is the Drupal templates. Part of this is based on the development path we've taken, and part of it is based on how Drupal works.
We still have a lot of unused styles and scripts that are coming down into the page, and to prevent that from happening, we would have to do a lot of rearchitecting or a lot of rethinking in terms of how the modules are used within Drupal to build out the pages. And that kind of rearchitecting would be really expensive and/or difficult or impossible to do.
Some of that prevents us from doing things we know would improve performance, like inlining some of the critical JavaScript and CSS. Part of that, again, is the dev path we've chosen, but part of it is also the way that Drupal works. And we still have some flashes of unstyled content going on that we have to live with.
And then there are still a lot of third-party cookies and things that are happening just based on the business needs of the site. There are things there that could still be done that we haven’t been able to address. A lot of what’s left is a heavy lift. And so those are things that have been shelved for now, and we’re going forward and continuing to improve performance with new development efforts or with the campaign efforts as they come. But we haven’t circled back around to address some of those harder things yet.
So Rob, after having gone through that process, both with the site and the campaign work we’ve done, how would things be different in terms of the next go-round? What would be the one main thing we would do differently?
Rob: To me, it's a pretty easy answer, which is to ensure performance has a seat at the table from the beginning. The tools Mark will show here in a minute are fascinating. And I don't know that many people like me, or the outside agencies we've used for design in the past, are looking at that or even considering it; I know that for a fact. Again, it's this line between what you know will attract a user to your site and keep them engaged, and, on the back end, giving it the performance that will allow it to meet or exceed their expectations. So to me, it's just having that conversation upfront and showing them examples. And now we have plenty of examples and lots of data for the next time we do any sort of major launch. We did it for Breath of Stress Air: we decided, a small thing, that we weren't going to use the overlay. And unfortunately, we had to turn that one back on. But I think just having the conversation earlier in the design process is really critical, because you're going to get to a point where the design is loved by everyone, it's been worked and reworked, you get sign-off from the highest levels, and then all of a sudden you're going to have a Mark swoop in and be like, hang on just a second. That's a hard conversation to have once you've already gotten things approved. Having performance at the table much earlier in the process is probably my long-winded short answer to that question.
What can you do now to improve your site?
Mark: Okay, sounds good. So now we want to move into the third section of the session and talk about some of the tools you can use to improve your site. Before doing so, we have a slide here that we want to focus on: really, what can you do now to improve your site? What are the short-term things you could start doing today?
Test your website performance regularly
And I would say the number one thing is to start a performance testing program. You want to set yourself up so that you are able to run tests and monitor and audit performance regularly.
And that you take the time to figure out the pages or the UIs that you need to focus on, probably based on the top pages that you’re seeing in your analytics. You want to set that up, and then you want to create initial benchmarks. Draw a line in the sand. This is where we stand today.
And, going forward, we want to find ways to improve our performance, and we'll compare against those benchmarks as we go. You also want to make performance testing, if you can, part of every new significant development effort. And as Rob was just saying, get performance into the conversation, get that voice into the design effort to say: we also need to think about performance. What are the end users going to experience here? Who are they, and what will they use to look at our UIs? We want to make sure that performance is as optimal as possible so that the overall user experience really flies and is something we're proud of.
Adjust your CMS settings
Specifically, if you're running a CMS, the first thing to look at is probably compression. This is serving up assets compressed versus uncompressed, which just speeds up load time. Then there's minification, which we mentioned earlier: a similar concept where you're reducing the amount of space used by the CSS and the JavaScript in particular when serving up assets. And then caching: take a look at the caching that's going on in your CMS.
Not everything needs to be pulled from the CMS each time, and you can cache a lot of unchanging information out toward the end user to improve their performance.
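One quick way to sanity-check the compression and caching just described is to look at the response headers your site actually sends. Here's a small TypeScript sketch for Node 18+ with built-in fetch; the URL is a placeholder, and note that some runtimes decode compressed bodies automatically, though the headers the server sent are generally still visible.

```ts
// Audit whether a response is compressed and has a cache policy set.
async function auditHeaders(url: string): Promise<void> {
  const res = await fetch(url, { headers: { "Accept-Encoding": "gzip, br" } });
  const encoding = res.headers.get("content-encoding"); // e.g. "gzip" or "br"
  const cache = res.headers.get("cache-control"); // e.g. "max-age=31536000"
  console.log(url);
  console.log("  compressed:", encoding ?? "not reported / uncompressed");
  console.log("  cache policy:", cache ?? "none set");
}

auditHeaders("https://www.example.org/styles/main.css");
```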
Check your page content
You also want to focus on content, and again, images always seem to offer opportunity: resize them to make them smaller, or move to more modern file types if changing the file type improves performance. So take a hard look at your images, not only those that are part of the content but also those that may be part of the design. I don't know if you remember this, Rob, but one of the things we found on the thetruth.com site was a background image, which was just there to provide some interest in the background, a pattern type of thing, but it was unnecessarily large.
And we were able to shave off, I don't know, 400 or 500 kilobytes on that. That in and of itself helped things out quite a bit. You also want to socialize these things as you can. You want to make people aware that performance is important. As we mentioned, you want to get that back to the content creators and the decision-makers so that people are aware of these things and, going forward, you have some buy-in to address them.
Get a CDN (Content Delivery Network)
If you don’t have a CDN, I recommend getting one. They’re inexpensive in this day and age. And as I had mentioned about the tools that we’ve been able to use for Truth, they really provide a lot of great stuff for you guys to work with to improve performance.
Rob: If I could just add one thing here: again, Mark being the very technically savvy person, he's walked me back from the ledge a couple of times. As I said, my takeaway is don't try to boil the ocean and get hung up on all the very many available pieces of data. Mark's about to show some of the stuff you can see with these tools.
You guys are probably already using them. The funny thing with the tools, and Mark and I were doing this, is that you can run any website through them, and there are some big brands out there that require just as much work as the regular-Joe website. Mark has already implemented a lot of this stuff for us, but start small, work your way up, take that low-hanging fruit, grab it, and implement it, and you'll see immediate results. We keep coming back to it, but optimizing your images can make a big difference, and that's something many people can do right away. So don't try to boil the ocean; take your wins where you can, and then start working through some of the more complex practices, because it's a lot to look at.
Tools for performance monitoring
New Relic
Mark: So what are the tools? We talked about New Relic, and we wanted to start here because this is a pretty interesting one, particularly for what it provides to nonprofit organizations. The tool is really meant for continuous performance monitoring and troubleshooting. One of the things we get on the performance monitoring side is this cool report,
which Rob alluded to earlier, that gives us the load times week by week. This is an average load time across all the pages it's looking at. We use this as a metric, and because it's weighted by the volume of people going to the most-trafficked pages, we can see what the load time in seconds generally is across those pages.
And we've seen that go down over time, which is exciting for us. So we get happy when we get this email on a Monday and the number is low.
Rob: Yeah, remember that number on the second row used to be a 10 for us, folks. A ten. And that is where I had that moment of panic. Six is remarkably better, and three is pretty wicked, I have to say. So kudos to Mark.
Mark: And the tool does a whole lot of stuff other than performance monitoring. It does a lot of deep diagnostic troubleshooting, stack tracing, and things like that for people who need to diagnose stuff on the backend, and it has a great alerting system.
As for the cost, there's a free version of New Relic that's pretty wonderful. And they have this generous Observability for Good program: as a nonprofit organization, you can have TechSoup validate your credentials as an NPO and get the free account enhanced with additional users and storage space for data, which is a great deal.
So my pitch to any nonprofit is always to get a New Relic account. It’s not difficult to implement on your site or even on your mobile apps. And it’s something you should look into if you’re not using this one.
WebPageTest.org
So next, let's take a look at WebPageTest.org. This is another free tool that's out there; it grew out of an internal tool initially developed at AOL. It's great because it's easy to use and easy to configure for different network connections and devices. It delivers interesting top-line metrics, and they come close to the Core Web Vitals but don't include all of them, for some reason I'm not quite sure of. It's got a nice interface. The waterfall review is great: it's easy to take a look at everything that's coming down in the load, and it gives you a better view than you might get using the dev tools in your browser.
The image analysis is awesome as well. In the bottom left, you can see a screen capture of a tool called Cloudinary plugged into WebPageTest.org. And what it’ll do is it’ll take a look at all the images on your page, and it’ll suggest improvements that you could get from doing some optimization. And it’ll even perform that optimization for you to give you an optimized image to download right from the tool, which is pretty great.
The other thing I like about WebPageTest is the ability to do comparison tests. You can see the before and after of implementing a set of changes and how they affected your site, which is pretty nice. We use WebPageTest.org quite a bit.
Rob: Yeah, Walmart's doing that. I will say, those visuals are great any time you need somebody at the very top to have a basic baseline understanding, because otherwise this can be a difficult, technical conversation all the way through. That visual right there speaks for itself.
The visuals have been really helpful for us in grasping the concepts, and they've been helpful for me in educating the folks above me who ultimately make decisions on what's going to go onto the website.
Mark: Yeah, so we use these quite a bit. We look at the top-line metrics and the waterfalls. And then this is the visual comparison view, where you can run multiple tests, rerun tests, and compare them before and after.
And you can even store some of your tests on the site for a while, or you can export them, which is great.
Google Search Console
In addition to WebPageTest, we’ve got the Google Search Console, and I’m sure most of the people who are watching are very familiar with Google Search Console, but it’s worth taking time regularly to take a look at the opportunities you have in the performance section here.
And to just take a look at what they’re reporting in terms of what’s coming back as poor, or needs improvement, because you’ll be able to find some recommendations in here as they evaluate your pages and give you example URLs and point out some of the specific problems that they’re noticing.
And then you can use those to come back with your own recommendations for improving your site. So that’s a good one to look at regularly.
Google PageSpeed Insights
I think some people may be a little bit confused about the difference between this and Lighthouse and how they work. They're not exactly the same, and some key differences exist. I like Google PageSpeed Insights primarily because it gives you direct access to the Chrome UX Report.
And that's the field data we were talking about before: the ability to look at real-world data from users who have experienced your site and see what metrics Google is returning from that information. The other thing they do a good job of, as they do in Lighthouse, is providing a good list of metrics and recommendations for things you can work on. For both Google PageSpeed Insights and Lighthouse, the simulated network connection is a fast 3G, so it's something of a lowest common denominator for your users. It's probably not representative globally, but for most users in the US, a fast 3G connection is probably on the lower end of what people are experiencing when accessing your site.
So the numbers are scary when you’re looking at such a slow connection, but they give you a good idea of where you stand and a good way to look at a series of recommendations. The other thing that’s been great lately, and I’ll toggle over to a view there, is they’ve been including specific recommendations for platforms people use a lot.
So, for instance, a lot of people are using WordPress and Drupal, so what they do is include recommendations inline about actual plugins or modules, from WordPress or Drupal, that you can use to address some of their recommendations. That's what we're seeing here.
This is a list below the lab data I’ve run, and I’ve got a list of a number of different opportunities, as they call them, that I could look at to improve the performance on this page.
So this is a really useful tool. Lighthouse does something similar and gives you similar types of information. But what Lighthouse doesn't have is this section at the top here, which is based on the Chrome UX Report. These Core Web Vitals stats are based on your actual users' experience, not on lab data.
Use Browser Dev Tools
And then the final tool we wanted to highlight was the browser dev tools. Folks are probably used to using these a lot as well. The Elements (or Inspector) tab is great for reviewing your assets and looking at the markup quickly. You can find errors within the Console, and the Network waterfall is pretty good.
There's a Performance tab now where you can run the profiler and look at the specific elements loading into the page and investigate them. There's a filmstrip there, and they overlay the Core Web Vitals as well. So if you have a long task happening as part of the load, that's a good place to really get into the weeds and investigate what's going on.
And then, of course, Lighthouse is there as one of the tabs in the dev tools now, too. And everybody knows how to get access to their dev tools: F12 is the shortcut within Chrome, and I think it's the same in Firefox. That will open up your dev tools so you can start looking at what's happening under the hood.
Rob, do you have anything to add regarding what you’ve seen from the tools before we wrap up?
Rob: Nothing other than that I love the visual aspect of it. I'm a visual person, and I think a lot of people are, so those visuals are really helpful for me and, like I said, for raising things up through the ranks.
Mark: Okay, great. That’s what we have in terms of our talk today.
Rob: Thank you, Mark, for letting me join you on this and thank you everyone for listening.