
Google's research tells us we have less than four seconds to load web content before significant bounce rates set in. Making sure your site performs at a high level is crucial to delivering a good user experience and to bolstering conversion rates.

Metrics such as the Core Web Vitals can help you measure how your site is doing. They can tell you not only how fast your content is loading, but also how users perceive that loading on their screens.

Learn about a variety of free and paid tools to test and measure these metrics, as well as ways to improve your site’s performance.

We walk through a few ways to test your website and review what metrics you should be paying attention to and what they mean. Finally, we review test results and spotlight actionable ways to improve site performance.

Slide Deck Download

Transcript

Mark: Hi. Welcome to How to Audit and Boost Your Web Performance. We’ll be talking about the importance of performance on your website, and ways to improve it. My name is Mark Leta, and I am the director of business analysis and quality assurance at the Allegiance Group. And today I’m joined by my friend, Rob.

Rob: Hey folks, how’s it going?

I'm Rob, director of marketing analytics at Truth Initiative. Just a quick background on who we are: we're a nonprofit public health organization, and our mission is making tobacco use and nicotine addiction a thing of the past. And just so you know how Mark and I are related, if you will: at Truth, we have a couple of brands, but I work mostly on the thetruth.com campaign, which is our organization's flagship brand and probably what most people know us for. Those marketing efforts are specifically targeted at young people in America, to make sure they're educated on the harmful effects of nicotine addiction. It's on those efforts that Mark and I collaborate quite a bit.

So I will kick it back over to you Mark.

Mark: Sure. So what are we covering today? We have three areas, the first of which is to talk about what is good performance. What do we mean by that?

Particularly, what do we care about for digital products or for websites? What are the user expectations around performance? Where are they, and what do users think they should be getting out of their experience? And then, how do we know if performance is really good?

What metrics should we be paying attention to? What do we care about? And then finally, as part of this first section, we want to talk about the relationship between performance and user experience, because it's really an important one. From there we'll move into the case study, where we'll cover the work that Rob and I have done on thetruth.com in terms of a redesign and subsequent performance enhancements, and also related campaign work, where tuning performance and making sure things were working well from that standpoint were important. We'll talk about what affected Truth specifically in terms of performance, some of the challenges we faced, the testing and auditing processes we used, and some of the recommendations we came up with, both the quick wins and some of the harder ones to get into.

And we'll close out that section talking about some of the results. In the last part of our session today, we're really going to get into some of the tools, talk about what you can do now to improve your own site's performance, and take a look at some of what's out there to help you.

So what do we really mean by good performance? There are really three aspects to consider here: how fast things actually load in the browser for a user, how fast users perceive things to load, and how quickly they can interact with the page. Those three aspects together make up a user's sense of how well a user interface is performing.

What about those user expectations? Unfortunately, they're shaped by users' experiences across the entire web, not just by what they experience on your own site or even competitor sites, because they're constantly browsing highly performant applications and sites built by tech giants with deep pockets and endless resources to make sure performance is exceptional.

Their expectations are really set by Google, Twitter, Amazon, and Pinterest. As such, they've come to expect around a three-second load time. Their expectations are really high. After that point, frustration starts to set in on some level, and as load times increase, frustration sets in even more.

In fact, Google's research found that, for e-commerce sites, around 53% of mobile site visitors will leave a page that takes more than three seconds to load. On the slide here, we've got a stat that gets trotted out quite a lot when people talk about performance, and it shows the relationship between each additional second of load time beyond the first and what happens to bounce rates.

In this case, they're showing the increase in the probability that a user is going to bounce. You can see that as you go from about three seconds to six seconds, there's a sharp rise in that probability. It levels off some, but by the time you get to 10 seconds, the probability that the user will bounce has risen by more than 120% compared to a one-second load.

So we clearly have expectations that are really high, and we have to figure out what we can do to meet them as best we can.

How do we know if performance is good? It's simple: we test. First, we need to understand what our users are going to experience. So we need to do some research into who they are and what their end-user experience is like, understand what kinds of devices, operating systems, and browsers they're using, and learn whatever we can about the network connections they have.

In most cases, we won't have analytics that tell us specifically what their connections are, but we can make some inferences, try to simulate what those conditions might be, and run tests. In doing our testing, we also want to use field data as much as possible. Google provides a really nice thing in their Chrome UX report, which is data gathered from real users browsing the web in Chrome; it provides real-world feedback on what users are actually experiencing on your site.
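As an illustration (not something covered in the session), that Chrome UX report data is queryable through Google's CrUX API. A minimal sketch, with a placeholder API key and origin, might look like this:

    // Minimal sketch of querying the Chrome UX Report API for field data.
    // API_KEY and the origin below are placeholders.
    const API_KEY = 'YOUR_API_KEY';
    const resp = await fetch(
      'https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=' + API_KEY,
      {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ origin: 'https://www.example.org' }),
      }
    );
    const { record } = await resp.json();
    // Field distributions for metrics such as largest_contentful_paint,
    // cumulative_layout_shift, and first_input_delay live under record.metrics.
    console.log(record.metrics.largest_contentful_paint.percentiles.p75);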

So that tells us quite a bit about the real world and what's happening on your site. We also want to use simulated experiences, lab-type data that we create ourselves. And when you're doing your testing, you really want to focus on the same three areas we started talking about.

That's the load, the actual load time; the perception of loading, where visual stability is often the focus; and the interactivity the user experiences with the site.

So what metrics do we really care about? Certainly the Core Web Vitals are probably the first things that come to mind for most people. These have gotten the most attention, particularly in the last couple of years. They were developed by Google, and they're really a subset of a larger group of metrics.

For now, these are the ones that Google has settled on as being the most important to look at to judge loading, perception of load, and interactivity. And this is after having focused on other metrics previously and doing a lot of testing and studying of the results to try and find the best way to represent these different ideas.

LCP is one of these, the Largest Contentful Paint: the time needed to render the largest image or block of text on a page after the page starts to load. It's meant to be a simpler way to judge and capture load time from the user's perspective. Instead of focusing on hard load times for things like the DOM, or a Speed Index, or even the time it takes to start painting content on the page, Google feels that the Largest Contentful Paint, even though it's simpler, tells us more about what's happening in terms of the raw load.

CLS is the Cumulative Layout Shift, and this gets at the user's perception of how loading is going. It's a score that measures how content may be shifting, redrawing, or adjusting on the page as the page comes in. It's really about that perception of what's happening; it can give the user a sense of instability, for example, if they see the page redrawing or shifting as it loads.

FID is the First Input Delay. This is a measure of the time between when the user interacts with something on the page and when the browser can start processing that event, so the time between when somebody clicks or taps or uses some control and when the browser is able to respond. In this way, you're measuring the first impression the user gets of your site's interactivity.
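To make those three metrics concrete, here's a minimal sketch of how each can be observed in the browser with the standard PerformanceObserver API. (In practice you'd more likely use Google's web-vitals library, which handles the edge cases.)

    // LCP: the last entry reported before the user interacts is the final value.
    new PerformanceObserver((list) => {
      const entries = list.getEntries();
      console.log('LCP:', entries[entries.length - 1].startTime, 'ms');
    }).observe({ type: 'largest-contentful-paint', buffered: true });

    // CLS: sum the layout-shift scores not caused by recent user input.
    let cls = 0;
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        if (!entry.hadRecentInput) cls += entry.value;
      }
      console.log('CLS so far:', cls);
    }).observe({ type: 'layout-shift', buffered: true });

    // FID: delay between the first interaction and the browser handling it.
    new PerformanceObserver((list) => {
      const first = list.getEntries()[0];
      console.log('FID:', first.processingStart - first.startTime, 'ms');
    }).observe({ type: 'first-input', buffered: true });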

One of the reasons people have been so focused on the Core Web Vitals is the SEO impact, real or perceived, that they have. I wanted to take a moment to talk about that, because it is such a concern when we're talking about performance testing. A lot has been written about it, and initially Google had just said that your Core Web Vitals scores would start to be used in their algorithms for determining SERPs.

This caused a lot of concern and hand-wringing, and people worried that, based on the performance of their site, they were going to lose a lot of Google juice or SERP placement that had been built up over years. Initially the deployment was delayed, and then we learned that it would really just be a soft ranking factor in the algorithm, one of a hundred or so factors that go into ranking pages.

So the impact of the Core Web Vitals on SERPs wouldn't be as great as maybe first thought. And as the mobile ranking factors rolled out last year, that's really how it played out. There were changes observed, for sure, but not many shifts in terms of search results.

The desktop ranking factors are rolling out now, due to be done by the end of this month, and it should be similar: there shouldn't be a huge impact on SERPs based on the Core Web Vitals. So while we should be working to make these scores as good as they can be, there's not really a need to panic from an SEO perspective.

And, perhaps, if you're trying to figure out where to put your resources in terms of SEO, spending a lot of time on the Core Web Vitals may not be as effective as doing other traditional SEO activities: increasing your backlinks, improving your headlines or headers and the quality of your descriptions, or consolidating short content into larger articles on the same topic. Those types of traditional SEO activities might yield even better results than focusing only on the Core Web Vitals. Rob, I think you can echo how seeing some of those Core Web Vitals scores for the first time can be a little upsetting, because in a lot of cases people aren't doing as well as they would hope.

Rob: I would say jarring is the word, actually.

And folks, if you're on the phone or in the meeting here and LCP and FID and CLS don't mean anything to you, well, they meant absolutely nothing to me. I've learned a lot since working with Mark and they mean a little bit more now, but when Mark was first walking us through all of these metrics, how they're measured, and their importance, it was, again, jarring; overwhelming is another word. But Mark has walked me back from the ledge a little bit and said that some of those traditional, tried-and-true SEO practices, like making sure your meta descriptions and titles are in working order, are still very impactful. That's good to hear. You want to focus on those, and this world that Mark has uncovered for me is important as we start to move into bigger and better and more advanced analytics and figuring out how it all works; these things will become more important. But yeah, it was a very interesting meeting, to say the least, the first time Mark walked us through it.

Mark: So in addition to those Core Web Vitals, there are a number of other metrics that we pay attention to when we're thinking about performance. On the left-hand side here, from the top down, these are ones that come into play progressively as the page loads. First, we see the Time to First Byte, how quickly your site starts delivering data; then the start render time, which is when the page starts to come together; versus the First Contentful Paint, which is when we actually start to see pixels. Speed Index is a more complicated metric that evaluates the completeness of a page's loading over time.

It's good for comparison, but taken out of context it's really just a number, maybe not as meaningful. Total Blocking Time is interesting because it's the amount of time between the First Contentful Paint, when you're first seeing pixels, and the time to interactive; during that window, what's going on with assets or scripts or files coming down that are really blocking the load? And then Time to Interactive is the time from when the page starts loading to when it's able to respond to interaction. So these six metrics are ones we pay attention to quite a bit.
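For the curious, a couple of these load metrics can be read straight from the browser's own timing APIs. A quick sketch:

    // Time to First Byte, from the Navigation Timing API.
    const [nav] = performance.getEntriesByType('navigation');
    console.log('TTFB:', nav.responseStart, 'ms');

    // First Paint and First Contentful Paint, from the Paint Timing API.
    for (const entry of performance.getEntriesByType('paint')) {
      console.log(entry.name + ':', entry.startTime, 'ms');
    }

    // Speed Index, Total Blocking Time, and Time to Interactive are computed
    // metrics; lab tools like Lighthouse and WebPageTest calculate them for you.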

On the right are other metrics that are interesting, but ones we really don't focus on a whole lot. These are the ones around completion times, things like completing the document load or the DOM load, or the total number of bytes. These really are not that important, particularly in this day and age, when most pages are dynamic and things are coming in on the side; ultimately we don't care where the process ends as much as we care that the page initially loads in such a way that the user perceives it to be done and can interact with it.

Sometimes things like the number of requests are interesting and important because we can go back and look at those requests and either reduce some of those in order to improve the performance, or at least make sure that we’re getting the right requests in there so that we’re not doing anything extraneous in terms of the load.

So for the last slide in this section of the session, we really wanted to talk about good performance in the context of UX, because it's a key part of the user experience; we can't have a good user experience without good performance. We put a lot of work into research and design as part of our digital experiences: we develop systems and interfaces, and we bring those designs to life on screen.

We create content, we tie the user experience to functionality, and we create these great, optimal paths for users to achieve their goals and to achieve our business goals. We test everything, we make sure it works, and we try to meet all those design specs. However, we often do all of this without knowing how the interface is really going to perform for our audience.

In some cases, we may not even measure performance until the final testing stages, or even after deployment, as an afterthought. And if the site doesn't perform well in the end, we really, by definition, won't have a good user experience. The two are inextricably linked. So when we're thinking about good performance, I think we really need to think about it in the context of the user experience.

And we really want to incorporate some of that thinking into the process. There is the idea of performance design out there, and there are several tenets that you can bake into your work, or at least think about, as you go through a UX design process. Certainly, thinking mobile-first is a given nowadays, but you want to go beyond that and think about how your designs will not only work on mobile, but how they'll be super fast and efficient for the user.

You want to simplify things wherever possible. You want to critically review what you have in terms of the imagery, the styles, the scripts, and the content in play, and ask yourself: do we really need all of this? Is it really serving the goals that we have for the user, or the goals that we have as an organization?

Or are we just adding weight to the page that we could pull back? Can we achieve the same goals with a simpler design or simpler concepts? Once you've done that, try to make it feel and perform really fast. You want to optimize vector graphics, sure. You want to optimize images and media, both in terms of the resolutions you're using and the sizes you have out there, all in an effort to reduce the load.

You want to defer the loading of images so that they're only loaded and shown when needed on the screen, rather than all being included as the page first comes in. If possible, you may want to consider simplifying fonts. Surprisingly, fonts can cause a lot of performance problems: it takes a while for them to load, and they tie up the threads. Even choosing a system font over a downloaded font could be an option to make things simpler.
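As a sketch of those two ideas (file paths and values here are just for illustration), native lazy loading plus a system font stack might look like:

    <!-- Explicit width/height reserve space so the image doesn't shift the
         layout when it arrives; loading="lazy" defers it until it's needed. -->
    <img src="/img/photo.jpg" alt="A descriptive alt text"
         width="800" height="450" loading="lazy">

    <!-- A system font stack avoids downloading font files entirely. -->
    <style>
      body {
        font-family: system-ui, -apple-system, "Segoe UI", Roboto, Arial, sans-serif;
      }
    </style>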

You also want to provide an indication of loading if you can. This helps with the perception that things are happening. You could use something like a skeleton screen, which blocks out where the content is going to be and gives the user a sense of the load happening and content coming in, or even subtle loading animations that show a response.

And I'm not talking about the spinning circle, but a really subtle animation that indicates the system is doing something; it all helps the perception of loading while the user sits there waiting.
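A bare-bones skeleton placeholder, just to illustrate the idea (the class name is made up for the example):

    <div class="skeleton"></div>
    <style>
      /* A gray bar that pulses where content will eventually land. */
      .skeleton {
        height: 1rem;
        margin: 0.5rem 0;
        border-radius: 4px;
        background: #e2e2e2;
        animation: pulse 1.5s ease-in-out infinite;
      }
      @keyframes pulse {
        50% { opacity: 0.4; }
      }
    </style>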

Then, maybe as part of our user experience work, we want to think about a speed budget. This is a great way to address performance in design: you set a goal for what you think will be acceptable performance. It's something you can establish early, and you may certainly need to adjust it as you go, but at least having that goalpost out there gives you something to strive and aim for.
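One way to make a budget concrete, as an example rather than something discussed in the session, is Lighthouse's budget file, which flags pages that exceed the limits you set (numbers here are arbitrary; timings are in milliseconds, resource sizes in kilobytes):

    [
      {
        "path": "/*",
        "timings": [
          { "metric": "interactive", "budget": 5000 },
          { "metric": "first-contentful-paint", "budget": 2000 }
        ],
        "resourceSizes": [
          { "resourceType": "script", "budget": 150 },
          { "resourceType": "image", "budget": 300 }
        ]
      }
    ]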

A budget also gives you a reason, maybe, to trim back on your design and your concept as you go, in order to achieve better performance. We also want to incorporate testing early and testing often as part of our process. This may follow the user experience process, having created the designs and built them out into a UI.

But as soon as you can, as soon as you have a testable UI with assets intact, you really want to start testing for performance, because if you find a problem earlier, making fundamental changes to the design or to the code and functionality is going to be much easier than if you wait. The other thing I would suggest is that you report on performance regularly, as part of your analytics.

And you want to socialize those metrics and get decision makers to pay attention to them. Certainly, if you need to get buy-in later on for a project or for work to improve performance, having people understand its importance and the role it plays in providing a good user experience is worthwhile.

So that kind of wraps up our first section on what good performance is. And now I'm going to pass it over to Rob to talk a little bit about some of the work that we did for the Truth site.

Rob: Cool. Thank you, Mark. As Mark mentioned, there’s a lot to look at and I consider myself a fairly technically savvy person.

I've been doing digital marketing and analytics for the better part of 15 years, and it, again, came as a shock when Mark started uncovering some of these things for us. I want to give a real-life experience, our own experience with the thetruth.com website, just to show how it can go when you have a partner like Mark who knows what they're talking about. The quick backstory is that several years ago, we knew we were in pretty desperate need of a design refresh of our website. There were a couple of other updates we were looking at as well.

We needed to get updated to Drupal 8; I don't even know what version we were on before that. With a redesign of a website come new website analytics implications: what are we going to track, and how are we going to track it? We have a pretty hefty marketing budget here to drive folks to thetruth.com, and on all of the platforms we operate on, we make use of the pixels they provide so that we're able to do optimizations in those platforms, places like Snapchat and TikTok, where you want to be able to optimize to a conversion or a video view or whatever the case may be. We had to QA all of those and make sure they were functioning.

All of this is to say that with a redesign of this magnitude, and it was a pretty large one, there are a lot of moving pieces, a ton of moving pieces. It took several months to get it off the ground. And while we were doing that, of course, UX was thoroughly thought out. We validated what we believed was a good user experience using fairly modern techniques, like card sorting and others.

We thought mobile-first, of course. We already knew, and I was very clear about the fact, that almost 90% of our traffic comes from a mobile device. Our target demographic is young people in America, ages 13 to 24. Where are they? They're on mobile. But performance, and I hate to even say this now, looking back on it, was never even considered. It wasn't something I thought of; it wasn't something we even brought to the table. Did I know that speed matters? Speed in the broad sense, like you want a fast website. Did I know then? Yes, of course, but the need for a modern, updated website really put blinders on me as far as speed went.

And I mention all of this to bring up a point that I think is important to highlight here, and that is the balance between managing content and design and UX, all of the tangible stuff that people can see and that we as digital marketers need to do our jobs, and then how all of that stuff affects your performance.

Performance, let's just be really blunt about it, is not public facing at all. And I think, looking back on it and knowing what I know now, it's too often left by the wayside by marketing teams. So it was a learning process for us.

And so in the end, what do we have? We have a really flashy-looking website. It's on the latest version of Drupal, and that's a big win. Analytics is functioning, the pixels are working, and it's time to celebrate: pop some champagne, we have a new website.

Because it's a new site, we start paying closer attention to things like page load speed in a system we utilize here, New Relic. Mark will get into that system a little later, but I'm looking at things like: all right, how much traffic are we driving? Are they engaging with the website?

But then also really starting to focus on a very basic metric: page load speed. And I see that we are averaging 10 seconds per page load. I knew enough to know that wasn't good. I panicked a little bit, to be very frank with you guys. I emailed Mark and company right away and said, we have to address this, what can we do? And Mark came back, as you can probably tell, he knows his stuff, with a laundry list of improvements. Some of the things on that laundry list we were able to manage in house, things like minification of CSS and JS, and optimizing image size and resolution. But even for someone like me who is, like I said, on the fairly technical side, the rest of what he was suggesting was just in, I'll call it, the performance design weeds.

It was a lot. And so we honestly said, Mark, all right, let's go, and set him to the task. And here I'll pass it back, as this is exactly what I did in real life. I was like, all right, cool, hear you, appreciate all of that. Now can you please go implement and make us better? So I'll pass it back to him to talk about what those things were that he suggested and what he implemented.

Mark: Yeah. We started out looking at what was affecting performance. And we tend to think about these things as the backend and the front end. When we think about the backend, it's really the services, the application layer, the database, and the CMS.

And there, it’s interesting because we’re all using mature platforms and we’re using cloud services that have been optimized, like Drupal in a managed hosting environment. A lot of the traditional bottlenecks and roadblocks from the past have really been alleviated.

We don't have to spend a whole lot of time on things like database query optimization or the like, and we can move into the CMS to look for opportunities to improve performance. So that's what we did: our focus here was more on the Drupal side of things in the backend, and how we could either work with different modules or set up the system configuration in such a way that we could serve content to the front end better or more efficiently. There were really three areas we worked on: caching within Drupal; how the CSS and JavaScript files were being served, because some of that was inefficient; and the templating going on in the system. We were using a pattern library that had helped speed up development, but in doing so, that resulted in quite a lot of extra overhead in terms of markup and nodes in the markup.

And we had a lot of CSS and JavaScript that was being included on most pages without real easy ways to exclude them from pages where they weren’t necessarily needed. And then on the front end, this is really where we saw a lot of opportunity in terms of creating performance improvements. And there were three areas here.

One was within the markup and the styles and the scripts themselves, trying to reduce some of that to bring down page weight, and then optimizing the asset files in use. Some of the content was problematic: all of the images on the page were initially loading along with the page, because we weren't doing lazy loading yet.

So we worked on that, and we knew that was an opportunity. The images and media were also not always as optimized as they could be, so we spent some time there. And then the CDN was an area where a lot of opportunity presented itself, because of some of the tools that were available.

So we had some specific challenges that we faced in taking a look at performance for Truth. It wasn't a key consideration, as Rob mentioned, during design and development, and we were really looking at testing and auditing post-launch. Plus, in a lot of cases, particularly recently, we had fast turnaround times for campaign work, so we didn't have a lot of time to spend on performance. As for the specific development challenges: as Rob mentioned, a lot of the design is very image-rich and aimed at a younger audience, so it's flashy. It's cool, but it has a lot of imagery to contend with. We also have limitations within Drupal that we have to contend with for the templates.

And there's a lot of styling and scripting going on. There's FOUC, a flash of unstyled content, that happens because external things have to load, and there are also third-party scripts and beacons.

Rob, do you want to talk about the FoUC example in particular?

Rob: Yeah. Yeah, again, a new concept to me, but Mark's been great about educating me on it along the way.

And I think here on the page, you can see some of the stuff I'm going to chat about real quick. It's a little bit small, but one of our main website KPIs is on-page engagement. We are intentionally designing and incorporating elements to pique a young person's curiosity.

We want to foster engagement on the website. We want them to go deeper on the site and learn more, because ultimately, it’s their decision, right, whether or not they’re going to embark down the path that big tobacco wants them to embark down. And so we want to make sure they’re informed, and they have the information.

And we do that, like I said, and as Mark mentioned, with flashy, cool things. They pique their interest and get them to engage. One of our most effective methods of doing that is something we affectionately refer to as the right-side sticky. It's just a simple overlay that expands out from the right side of the page, but it captures your attention. And we load that on the page using Adobe Target. We here at Truth are an Adobe marketing shop, and we have some custom code that loads that overlay up using Adobe Target. I can probably hear some of you saying, wrong, don't do it; in fact, Mark was one of them. I'll touch on that in a second, and I hear you, but using Target on our site at least allows someone like me to control it.

More importantly, it allows me to test different versions in a live environment. So for our recent Breath of Stress Air launch, which is a new campaign we just launched, we decided to leave it off, right? Because when Target needs to load on the page, it results in this brief flicker. So on the page here, you can see the Breath of Stress Air landing page is buried underneath that top image, and then you can see the overlay as it flies out. The overlay is there, again, just to capture the user's attention, to let them know that if they are stressed, they can click here and go get information about what to do under those circumstances.

And so from a UX perspective, I have to be honest, I like it. It looks cool. It looks flashy. It's an effective tool to capture attention. But then Mark comes in and very politely says, hey, it's going to result in the flicker again. And we initially said, okay, you know what, Mark, that's right. We are trying to think performance; I want to say performance first, and we're not there yet, but we are considering performance along the way. So we said, all right, let's just leave it off. We launched the page, and almost immediately we're seeing less than desired on-page engagement, and we realize we want it back. A bit of an LOL moment, if you will. And we began the effort to reinstate it, and again, Mark was quick to point out: fine, but if you do that, you're going to see the flicker. So we launched the overlay in a test environment to get the full effect, and you can see that here.

This is one of Mark's very cool tools that he's introduced me to. I can't see the actual seconds there, but you can see it's black as it's loading, here comes the landing page and here comes some content, and then boom, at whatever it is, the three-and-a-half-second mark, Adobe Target loads and it goes back to that black flicker. And so is that an ideal user experience? No, it's not. But we decided that the potential increase in on-page engagement was worth the brief flicker. I tell this sort of long-winded story to highlight my previous statement about managing the balance between performance design and front-end user experience, and that there will very likely need to be tradeoffs between the two.

You have business goals versus front-end user expectations versus the backend systems management of it. And all of those, as I'm coming to learn very quickly, need to be weighed; there are going to be some tradeoffs between them. Another good example, which I mentioned earlier, is the ad pixels. We rely heavily on third-party ad pixels to make sure we can effectively run, measure, and optimize our ad campaigns.

But the more you have, the more connections to those services and platforms you have to make, and that ties up the browser threads, which can cause a backup and slow things down. This is a classic example of the trade-off. In this case, we have to say these pixels cannot go.

We absolutely need them. We know they slow our performance. So you know, this is where I would lean on my partnership with Mark and say, Mark, where else can we improve to maybe balance that out?

Mark: So how did we try to address these challenges? The first thing we did was implement a testing and auditing process. Here it was important for us to put in place a process we could rely on. We started by identifying the key pages. We wanted to focus on where the traffic really is, because improving performance for a larger part of your audience is all about getting the most for your effort.

So we looked at the analytics and focused on key pages. Then, because we wanted to know where we were starting from, we established baselines using the metrics we wanted to track. We also established which testing sources we were using, in terms of the tools, the simulated network speeds, and the devices.

Then, from that initial run through the testing process, we determined recommendations. These came primarily from the tools, but also from our own observations, looking at what was happening as we examined the waterfalls and took a look at performance. We prioritized those recommendations based on the level of effort we felt it would take to implement them and on their perceived effectiveness: what we were really going to get out of implementing each recommendation, in terms of benefit to the performance of the page and the site. And then we made the process cyclical. We implemented one or two recommendations, deployed them, retested, remeasured against the baselines to see how we might have improved (or, in some cases, not), and then repeated the process, working our way through the recommendations we had.

So there were definitely some quick wins that we found going through this process, certainly image optimization. I'm sure it sounds like we've said that quite a few times during this session, but it turns out to be something to keep coming back to, because it is so important in terms of load. A lot of times, even after we think things have been optimized, we're able to find ways to optimize them further, both in terms of size and resolution, or even by using different file types; maybe we can get away with using a JPEG instead of a PNG somewhere and save some weight on a file.

We were able to get rid of some excess scripts, styles, and fonts in some cases. Once or twice, I think we even found the same script being called twice, so we were able to remove calls that were unnecessary. The styles can add up, so the more you can trim down, the better. And with the fonts, we found that in some cases we were downloading the entire complement of site fonts unnecessarily. As mentioned, those add weight and tie up the threads, so you want to limit the number of fonts you're downloading, if possible. There were some excess requests in there too; some of these were from old services that weren't being used, or unnecessary services, and we were able to get rid of those. Within the meta tags, we implemented preloading of fonts, and that helps.

We also implemented preconnections to third-party domains. That helps you establish the connection to services or other domains you're using: it starts the DNS lookup and other processes at the front of the page load, as soon as the call is made, so you're not having to wait for the DNS handshake, the TLS negotiation, et cetera.
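In markup, those hints look something like this (the host and font path are hypothetical):

    <link rel="preconnect" href="https://cdn.example.com" crossorigin>
    <link rel="dns-prefetch" href="https://cdn.example.com">
    <!-- Font preloads need the crossorigin attribute even for same-origin fonts. -->
    <link rel="preload" href="/fonts/headline.woff2" as="font" type="font/woff2" crossorigin>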

So those are some of the quicker things we were able to do. Then, digging in a little deeper, there was some script refactoring we did. We implemented the image lazy loading, which wasn't always straightforward or simple. And we were able to address some of the flashing of unstyled content that was happening, to help with the perception of load.

We did a lot of tuning within Drupal to make it unnecessary for browsers to go all the way back to Drupal in cases where they didn't have to. Then, on the CDN level, we did a lot of cache tuning, and we really saw improvements from that. We also got into the performance tools the CDN provides, and here we found a lot of opportunity and a lot of great stuff. There were three particular tools we used. One was a load prioritization tool. It helps determine which parts of the JavaScript are needed as a page loads and prioritizes parts of the script accordingly, so it allows the page to keep loading as the JavaScript comes down. That helped quite a bit.

Then we shifted the aggregation and the minification to the CDN and had it handle the JavaScript and CSS files. That helped too, and we were able to cache a lot of that up there. The third tool that really helped was an image optimization tool. The CDN we're using has a great tool that serves up next-generation file types on the fly, where possible.

So, for example, if a browser can handle WebP, it'll serve that up, because the browser is able to download it faster. It also does things like progressively loading images into the page, so it starts with a low-res version and increases the resolution as the image comes down, and it handles some lazy-loading activity as well.
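Done by hand rather than by a CDN, the format-switching part of that looks roughly like this (paths are placeholders):

    <picture>
      <!-- Served only if the browser advertises WebP support. -->
      <source srcset="/img/hero.webp" type="image/webp">
      <!-- JPEG fallback for everything else. -->
      <img src="/img/hero.jpg" alt="Campaign hero image" width="1200" height="600">
    </picture>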

So those are some of the deeper recommendations we were able to implement. And the results were pretty good. On our first go-round, we saw some of the first byte, start render, and Speed Index metrics go down quite a bit. We had a follow-on effort on the main site where we looked more specifically at some of the Core Web Vitals, and we were able to get our Largest Contentful Paint down by up to about 40% on some key pages.

We were able to reduce the Cumulative Layout Shift and the Blocking Time. So really, this is about how much time was spent blocking some of the threads. And we were able to bring that down to improve some of the loading. And then more recently on the campaign side, we were able to do similar activities and go through our process that we now have in place for doing performance testing and optimization.

And we were able to see improvements, particularly around First Contentful Paint, Largest Contentful Paint, Total Blocking Time, the total number of bytes, and the total requests. So we were able to do quite a bit here and see some good results. That being said, there were still places where we fell short, areas where we were not able to improve performance as much as we would have liked. One area in particular is the Drupal templates. Part of this is based on the development path we've taken; part of it is based on the way Drupal works.

We still have a lot of unused styles and scripts coming down into the page, and in order to prevent that, we would have to do a lot of rearchitecting, a lot of rethinking of how the modules are used within Drupal to build out the pages. That kind of rearchitecting would be really expensive, difficult, or even impossible to do.

Some of that prevents us from doing things we know would improve performance, like inlining some of the critical JavaScript and CSS. Part of that, again, is the dev path we've chosen, but part is also the way Drupal works. And we still have the flashes of unstyled content going on that we have to live with.

And then there are still a lot of third-party cookies and things happening, just based on the business needs of the site. So there are things that could still be done that we haven't been able to address. A lot of what's left is what you might call a heavy lift, so those things have been shelved for now; we're going forward and continuing to improve performance with new development efforts or with the campaign efforts as they come, but we haven't circled back around to address some of those harder things yet.

So Rob, after having gone through that process, both with the site and with the campaign work we’ve done, how do you think things would be different in terms of the next go round? Do you have any thoughts in terms of what would be the one main thing we would do differently?

Rob: To me, it's a pretty easy answer, which is: make sure that performance has a seat at the table from the very beginning. The tools that Mark is going to show here in a minute are pretty fascinating, and I honestly don't know that a lot of people like me (I know that for a fact), or the outside agencies we've used for design in the past, are looking at them or even considering them. Again, it's this line between what you know is going to attract a user to your site and keep them engaged, and, on the back end, giving it the performance that is going to allow it to meet or exceed their expectations. So to me, it's just having that conversation up front and showing them examples. And now we have plenty of examples and lots of data to go off of for the next time we do any sort of major launch. When we did it for Breath of Stress Air, we decided, a small thing, that we weren't going to use the overlay, and unfortunately we had to turn that one back on. But I think just having the conversation earlier in the design process is really critical, because you're going to get to a point where the design is loved by everyone, it's been worked and reworked, you get sign-off from the highest levels, and then all of a sudden you have a Mark swoop in and be like, hang on just a second. That's a hard conversation to have once you've already gotten things approved. Having you at the table much earlier in the process is probably my long-winded short answer to that question.

Mark: Okay (laughter), sounds good. So now we want to move into the third section of the session and talk about some of the tools you can use to improve your own site. Before doing so, we have a slide here we want to focus on: really, what can you do now to improve your site? What are the things that, short term, you could start doing today? I would say the number one thing is to start a performance testing program. You want to set yourself up so that you're able to run tests and to monitor and audit performance on a regular basis.

And that you take the time to figure out, really, the pages or the UIs that you need to focus on, probably based on top pages that you’re seeing in your analytics. You want to set that up and then you want to create initial benchmarks. Draw a line in the sand. This is where we stand today.

And, going forward, we want to find ways to improve our performance, and we’ll compare them against those benchmarks as we go forward. You want to make performance testing too, if you can, be part of every new significant development effort. and as Rob was just saying, getting performance into the conversation, getting that voice as part of the design effort to say, but we also need to think about the performance. What are the end users going to experience here?

Who are they, and what are they going to be using to look at our UIs? We want to make sure performance is as optimal as it can be, so that the user experience overall really flies and is something we're proud of. Specifically, if you're running a CMS, the first thing to look at is probably compression: serving up assets compressed versus uncompressed just speeds up the load time. Then minification, as we mentioned, is a similar concept, where you're reducing the amount of space used by the CSS and the JavaScript in particular. And then caching: take a look at the caching that's going on in your CMS.

Not everything needs to be pulled from the CMS each time, and you can cache a lot of unchanging information out toward the end user to improve their performance.
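Outside of any particular CMS, the idea reduces to serving compressed responses with long-lived cache headers. A minimal Node/Express sketch, purely illustrative (this is not how Drupal does it):

    // npm install express compression
    const express = require('express');
    const compression = require('compression');

    const app = express();
    app.use(compression()); // gzip/deflate responses on the fly

    // Static assets get a far-future cache lifetime so browsers and CDNs
    // can reuse them instead of re-fetching on every page view.
    app.use('/assets', express.static('assets', {
      maxAge: '365d',
      immutable: true,
    }));

    app.listen(3000);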

You also want to focus on content, and again, images always seem to offer up opportunity: resize them to make them smaller, use more modern file types, or change the file type if doing so improves performance. So take a hard look at your images, and not only the images that are part of the content, but also the images that are part of the design. I don't know if you remember this, Rob, but one of the things we found was a background image on the initial thetruth.com site, which was just there to provide some interest in the background, a pattern type of thing, but it was unnecessarily large.

We were able to shave off something like 400 or 500 kilobytes on that, and that in and of itself helped things out quite a bit. You also want to socialize these things as you can. You want to make people aware that performance is important, and get that back to the content creators and the decision makers, as we mentioned, so that people are aware of these things and, going forward, you have some buy-in to address them.

Also, if you don't have a CDN, I would recommend getting one. They're fairly inexpensive in this day and age, and as I mentioned with regard to the tools we've been able to use for Truth, they really provide a lot of great stuff for you to work with in order to improve performance.

Rob: If I could just add one thing here: again, Mark being the very technically savvy person, he's walked me back from the ledge a couple of times, as I said. My takeaway here in general is, don't try to boil the ocean and get hung up on all of the very many pieces of data that are available. Mark's about to show some of the stuff you can see with the tools that he uses.

You're probably already using them. The funny thing with the tools, and Mark and I were doing this, is that you can run any website through them, and there are some big brands out there that require just as much work as the regular-Joe website. What Mark and I have agreed to, and he's already implemented a lot of this stuff, is: start small and work your way up. Take that low-hanging fruit, grab it and implement it, and you'll see pretty immediate results. The optimizing of your images, I know we keep coming back to it, can make a really big difference, and that's something a lot of people can do right away. So don't try to boil the ocean; take your wins where you can, and then start working through some of the more complex practices, because it's a lot to look at.

Mark: So what are the tools? We talked about New Relic, and we wanted to start here because this is a pretty interesting one, particularly for what it provides to nonprofit organizations. It's really meant to be a continual performance monitoring and troubleshooting tool. One of the things we get on the performance monitoring side is this cool report.

Rob alluded to it earlier: it gives us, week by week, what the load times are, an average load time across all of the pages it's looking at. We use this as a metric, and because it's weighted by the volume of people going to the most-trafficked pages, we're able to see what the load time in seconds generally is across those pages.

And we've been able to see that go down over time, which is exciting for us. So we get happy when we get this email on a Monday and the number is low.

Rob: Yeah, remember, that number on the second row there used to be a 10 for us, folks. A ten. And that is where I had that moment of panic. Six is remarkably better; three is pretty wicked, I have to say. So kudos to Mark.

Mark: And the tool does a whole lot of stuff other than performance monitoring. It does a lot of deep diagnostic troubleshooting and stack tracing and things like that for people who need to diagnose issues, maybe on the backend, and it has a great alerting system.

As for the cost, there's a free version of New Relic that's pretty wonderful, and they have this great, generous Observability for Good program. As a nonprofit organization, you can have TechSoup validate your credentials as an NPO and get the free account enhanced with additional users and additional storage space for data, which is really a great deal.

So my pitch to any nonprofit is to always get a New Relic account. It’s not difficult to implement on your site or even on your mobile apps. And it’s something you should look into if you’re not using this one.

So next, let's take a look at WebPageTest.org. This is another free tool that's out there; it grew out of an internal tool initially developed at AOL. It's great because it's easy to use and easy to configure for different network connections and devices. It delivers interesting top-line metrics that hit pretty close to the Core Web Vitals, though for some reason they don't include all of them; I'm not quite sure why. It's got a nice interface, and the waterfall review is great, so it's easy to take a look at everything coming down in the load, and it gives you maybe a better view than you might get using the dev tools in your browser.

The image analysis is awesome as well. You can see in the bottom left here a screen cap of a tool called Cloudinary, which is plugged into WebPageTest.org. What it does is take a look at all the images on your page and suggest improvements you could get from doing some optimization. It'll even perform that optimization for you and give you an optimized image to download right from the tool, which is pretty great.

The other thing I like about WebPageTest is the ability to do comparison tests, like we're seeing here in the screen grab: you can see, before and after implementing a set of changes, how they affected your site, which is pretty nice. So we use WebPageTest.org quite a bit.

Rob: Yeah, Walmart's doing that. I will say, those visuals are great any time you need somebody at the very top to get a baseline understanding, because this can be a technical conversation all the way through; it just depends, right, it can be difficult. That visual right there speaks for itself.

The visuals have been really helpful for us to grasp the concepts, and they've been helpful for me in making sure we can educate the folks above me who ultimately make decisions about what's going to go onto the website.

Mark: Yeah, so we use these quite a bit. We look at the top-line metrics and the waterfalls, and then this is the visual comparison view, where you can run multiple tests, rerun tests, and compare them before and after.

And you can even store some of your tests for a while up on the site, or you can export them, which is great.

In addition to WebPageTest, we’ve got the Google Search Console, and I’m sure most of the people who are watching are very familiar with Google Search Console, but it’s worth taking time regularly to take a look at the opportunities you have in the performance section here.

And to just take a look at what they’re reporting in terms of what’s coming back as poor, or needs improvement, because you’ll be able to find some recommendations in here as they evaluate your pages and give you example URLs and point out some of the specific problems that they’re noticing.

And then you can use those to come back with your own recommendations for how to improve your site. So that’s a good one to look at on a regular basis.

Google PageSpeed Insights. I think some people are maybe a little confused between this and Lighthouse and how they relate; they're not exactly the same, and there are some key differences. I like Google PageSpeed Insights primarily because it gives you direct access to the Chrome UX report.

That's the field data we were talking about before: the ability to look at real-world data from users who have experienced your site and see what kind of metrics Google is returning from that information. The other thing they do a good job of, as they do in Lighthouse, is providing a good list of metrics and recommendations for things you can work on. For both Google PageSpeed Insights and Lighthouse, the simulated network connection is really a fast 3G, so it's a lowest common denominator for your users; that's probably not the case globally, but for most users in the US, a fast 3G connection is probably on the lower end of what people are experiencing when accessing your site.

So the numbers tend to be a little scary when you're looking at such a slow connection, but they do give you a good idea of where you stand and a good way to look at a series of recommendations. The other thing that's been great lately, and I'll toggle over to a view there, is that they've been including recommendations specific to platforms that a lot of people use.

So, for instance, a lot of people are using WordPress and Drupal, so what they do is include, inline, recommendations for actual plugins and modules from WordPress or Drupal that address some of the issues they've found. That's what we're seeing here.

Below the lab data here, from the test I've run, I've got a list of a number of different opportunities, as they call them, that I could take a look at in order to improve the performance on this page.

So this is a really useful tool. Lighthouse does something similar and gives you similar types of information, but what it doesn't have is this section at the top here, which is based on the Chrome UX report. These Core Web Vitals stats are based on your actual users' experience, not on simulated lab data.

And then the final tool that we wanted to highlight was really the browser dev tools. Folks are probably used to using this a lot as well. The elements and inspector tab is great for reviewing the assets that you have and looking at the markup quickly. You can find errors within the console; the network waterfall is pretty good.

There's a performance tab now where you can run the profiler, and you can see the specific elements loading into the page and investigate them. There's a filmstrip there, and they overlay the Core Web Vitals as well. So if you have a long task happening as part of the load, that's a good place to really get into the weeds and investigate what's going on.

And of course, Lighthouse is there as one of the tabs in the dev tools now, too. Everybody, I assume, knows how to access their dev tools; there's a shortcut within Chrome, and I think it's the same in Firefox: F12 will open them up so you can start taking a look at what's going on under the hood.
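If you'd rather script it than click through the tab, Lighthouse is also available from the command line; a typical invocation (the URL is a placeholder) looks like:

    npx lighthouse https://www.example.org --only-categories=performance --view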

Rob, do you have anything to add in terms of what you’ve seen out of the tools before we wrap up?

Rob: Nothing other than, I just love the visual aspect of it. I'm a visual person, and I think a lot of people are, so those have been really helpful for me and, like I said, for raising things up through the ranks.

Mark: Okay, great. That’s what we have in terms of our talk today.

Rob: Thank you, Mark, for letting me join you on this and thank you everyone for listening.