How to improve web performance in Episerver and increase your bottom line

Let’s face it: Web performance is the be-all and end-all when it comes to a new site’s success. E-commerce customers are notoriously fickle creatures, prepared to completely abandon a site that keeps them waiting too long. So performance affects not only customer satisfaction, but also your brand value and, ultimately, your bottom line.

The Kissmetrics blog gives us the bare and sobering numbers: “A 1-second delay in page response can result in a 7% reduction in conversions. If an e-commerce site is making $100,000 per day, this could potentially mean a loss of sales of $2.5 million every year.” The longer a site takes to load, the more visitors leave for a more responsive one. According to a survey conducted by the blog, 47% of consumers expect a site to load in two seconds or less. Mobile users are a little more forgiving, with one third of users saying they would wait no longer than ten seconds for a page to load.[1]

Now, this is no reason to abandon all hope. Page Load or Time To First Byte are traditional metrics that really don’t represent the best way to look at web performance. The first byte doesn’t register with the user and a full page load is not always necessary for the user to do what they came to do. Better measurements are things like Time to Interactive, Hero Rendering Times, First Meaningful Paint, and a “Speed Index”. As Speedcurve’s Tammy Everts put it, “The best performance metric for measuring user experience is one that measures how long the user waits before seeing…critical content.”

With average page size growing from 929 KB in 2011 to 3034 KB in 2017 and images and videos responsible for most of this growth[2], plus more traffic moving over to mobile, good performance optimization is now more important than ever.

How to measure performance

Before we go further and talk about how to improve your site’s performance, we should find out what it looks like today and how much room for improvement you have. There are two kinds of monitoring tools: passive tools are used to test performance manually, while active tools monitor it automatically.

Some of the most well-known passive tools are WebPageTest and Google Lighthouse, both of which give you a clean overview of their test results.

Active tools automatically test the performance of your solution over time and can alert you, for instance by email, when you miss your defined targets. Speedcurve, one such tool, runs automated speed tests and tracks performance over time, with several performance metrics to choose from. Through its API, it also enables custom tracking, which can be used to test things like the individual steps of a checkout flow.

New Relic offers similar measurements and functionalities to Speedcurve, showing you everything from average load times to error rates.

What can you do?

There are some approaches that can help you significantly improve performance. One step is to add a Content Delivery Network (CDN) in front of your web application. With a CDN, your content is not served from a single server but distributed to multiple locations across your operating area. Bringing content closer to the user speeds up downloads of static assets like images and eases the load on the origin server, allowing for quicker response times.

In one case, in which we investigated an e-commerce solution with performance issues, implementing a CDN cut a page’s load time by 12.5%, from about eight seconds to seven. It also roughly doubled the number of requests per second the site could handle, so about twice as many users could visit at peak time before server performance started to degrade.

Equally important is switching from HTTP/1.1 to HTTP/2. With HTTP/1.1, a browser downloads at most six to eight resources per domain in parallel, while HTTP/2 can multiplex many files over the same connection. Since there is still some slight overhead per file, bundling client resources still makes sense, but it’s prudent to split them into several bundles. Microsoft’s cloud service Azure is currently working on adding HTTP/2 support for web apps. CDNs also support HTTP/2, meaning that static files like images, video and scripts reach the end user faster from the CDN, even if the web server itself does not yet support HTTP/2.

Image optimization can also considerably reduce loading times, all the more so given the hard shift to mobile browsing. At this point, optimized images should be considered a basic non-functional requirement (NFR). With image resizers and modern image formats like JPEG 2000 and JPEG XR, this has become easier than ever. In addition, CDN vendors now support converting traditional formats like JPEG to more modern ones, served only to browsers that support them.
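
To make the format-negotiation idea concrete, here is a minimal sketch of what such a CDN does conceptually: inspect the browser’s Accept header and serve the most efficient image format it advertises. The preference order and function name here are illustrative assumptions, not any vendor’s actual logic.

```typescript
// Formats we would prefer to serve, best first (an assumed ordering for illustration).
const PREFERRED_FORMATS = ["image/jxr", "image/jp2", "image/webp"];

function pickImageFormat(acceptHeader: string): string {
  // Parse the Accept header into bare MIME types, dropping any q-values.
  const accepted = acceptHeader.split(",").map((part) => part.trim().split(";")[0]);
  for (const format of PREFERRED_FORMATS) {
    if (accepted.includes(format)) return format;
  }
  return "image/jpeg"; // universally supported fallback
}

console.log(pickImageFormat("image/webp,image/apng,image/*;q=0.8")); // "image/webp"
console.log(pickImageFormat("image/*,*/*;q=0.8"));                   // "image/jpeg"
```

A real CDN would combine this negotiation with on-the-fly transcoding and caching of each variant; the sketch only shows the decision step.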

When should you think about web performance?

Conventional wisdom dictates that performance is part of one of the basic steps in project management, namely testing and optimization.

This approach may have been good enough in the past, but it no longer cuts it. What we should be doing instead is this:

Performance must be part of every step in the design and development process. In the conceptualization and design phase, ideas that would hurt performance should be weeded out in favor of designs that allow for quicker response times. Even when a feature itself adds value, you always have to weigh its potential impact on overall performance. A good example is ads on a website: though an ad can bring revenue, it can also slow down the site and thus hurt your main business. During development, programmers should think through the implications of such decisions and how they affect performance, instead of leaving those concerns to the testers.

And of course, during launch and maintenance, performance optimization leading to increased customer satisfaction should be a priority. The quest for performance is a cycle that never ends: as you measure and prioritize your site’s performance, you find new ways to improve it. Once those improvements are in place, measure again and look for the next ones.



Episerver Gotchas

Using Episerver, there are some simple performance improvement steps that you should keep in mind throughout your development process:

  • Minimize database calls. Instead, use Episerver’s caching layer as much as possible.

  • Minimize external requests and always cache the ones you are making, when possible.

  • Personalization can have an impact on performance, so implement it in a way that minimizes that impact; for instance, by asynchronously loading the parts that should be personalized.

  • Avoid frequent querying of the Dynamic Data Store.

  • Try to keep your content tree well balanced.

  • Prefer Lists over ContentArea where possible to avoid big content structures for complex pages.  

  • Avoid dynamic properties when possible.
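
The caching advice in the first two bullets can be sketched generically. The wrapper below is a minimal time-to-live (TTL) cache in TypeScript, illustrating the idea only; on an Episerver server you would use the built-in caching layer rather than rolling your own.

```typescript
// Wrap an expensive call (e.g. an external request) so that repeat calls
// within `ttlMs` reuse the last result instead of hitting the source again.
function cached<T>(fn: () => T, ttlMs: number, now: () => number = Date.now): () => T {
  let value: T | undefined; // undefined doubles as "not cached yet" in this sketch
  let expires = 0;
  return () => {
    const t = now();
    if (value === undefined || t >= expires) {
      value = fn();        // refresh from the expensive source
      expires = t + ttlMs; // and remember it until the TTL passes
    }
    return value;
  };
}

// Usage: `fetchExchangeRates` is a made-up stand-in for any external request.
let calls = 0;
const fetchExchangeRates = () => { calls++; return { eurUsd: 1.1 }; };
const getRates = cached(fetchExchangeRates, 60_000);
getRates();
getRates();
console.log(calls); // 1 — the second call was served from the cache
```

The injected clock (`now`) is just there to keep the sketch testable; the real trade-off to think about is picking a TTL short enough that stale data is acceptable.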

What does this mean for me?

It means one thing: High performance doesn’t just happen by accident. It needs a lot of planning throughout project development and constant monitoring after launch. But what you should always keep in mind is that the effort pays off, both in customer satisfaction and profit. Better performance equals better conversion rates, which means better business for you.

About The Author

Linus Ekström

Chief Technology Officer

Linus is a prominent figure amongst Episerver professionals. He develops strategically sound solutions to business challenges.
