Sitecore performance - How you can make performance a key part of your projects
When developing websites, a lot of emphasis is put on performance. How long users wait to see a page can have a big impact on how well your site achieves your business goals.
A surprisingly small increase in the time for a page to load can cause people to give up and go elsewhere. In turn, that reduces those all-important conversion rate statistics for turning visitors into customers.
Recently, I spoke to the Manchester Sitecore user group about some of the tools and processes developers can use to keep their Sitecore websites running fast. That discussion was fairly technical (you can read a summary of it here), but the key messages are important for content editing and management staff to understand as well.
Fundamentally, people browsing the internet have a fairly short attention span. I’m sure we’ve all been in the situation where we click a link in Google and find ourselves waiting a couple of seconds for the page to appear. The urge to click “back” and open the next link on the results page is pretty strong – and this isn’t just anecdotal.
Measurements taken by companies involved in content caching, stats and optimisation confirm this for us. The precise numbers vary a bit depending on what’s being measured, but one commonly cited stat from Akamai suggests that: ‘…by the time users have waited 3 seconds, 79% of them have clicked away. That’s a lot of potential customers lost.’
At a high level, there are three concepts you can control that affect your users’ perception of your site’s speed: how quickly Sitecore can assemble each page on the server, how quickly your own custom code runs, and how quickly the user’s web browser can turn the result into a visible page. (I’m ignoring network performance here, as that is often out of your direct control.)
Sitecore performance affects everyone involved with a project, not just developers. Do you want the experts at Kagool to help you improve the performance of your next Sitecore project? We’d love to chat. Get in touch with us today for a free demo or audit. For more detail, read on!
As noted above, server-side performance is governed by how fast Sitecore can assemble your page and how fast your custom code runs. But both of those are affected by raw server performance.
At its simplest, Windows tools like Performance Monitor can record detailed information about how a machine is performing. You specify a set of counters you want to examine, such as CPU load, memory usage or network traffic, and it records the data in real time and graphs it for you. This is good for showing trends over time, and it lets you export the raw measurements for further analysis.
You can get a very quick idea of how hard a server is working from tools like this. Trends on these graphs can show how requesting certain web pages affects servers, or it can show how hardware is coping with overall levels of load. As a rule of thumb, the higher the graphs are, or the bigger the spikes you see, the harder your infrastructure is working.
(If you’re working with Platform as a Service infrastructure, the monitoring tools are a little different, but services like Application Insights on Azure can provide similar data.)
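Most of these monitoring tools can export their raw samples for further analysis. As a minimal sketch (the CSV layout and the 80% spike threshold here are assumptions for illustration, not any tool’s real export format), a few lines of Python can surface the averages and spikes discussed above:

```python
import csv
import io

# Hypothetical monitoring export: one CPU sample per row.
sample_export = """timestamp,cpu_percent
10:00:00,12.5
10:00:15,14.1
10:00:30,91.7
10:00:45,13.8
10:01:00,88.2
"""

samples = [float(row["cpu_percent"])
           for row in csv.DictReader(io.StringIO(sample_export))]

average = sum(samples) / len(samples)
spikes = [value for value in samples if value > 80.0]  # arbitrary spike threshold

print(f"average CPU: {average:.1f}%, spikes over 80%: {len(spikes)}")
```

A steady average with occasional tall spikes tells a very different story to a consistently high average, which is exactly the kind of trend the graphs make visible.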
What these graphs don’t tell you much about is the underlying cause of the load you see. To find out why a server is working hard, you need more specific measuring tools.
Sitecore provides a really helpful tool built into Experience Editor. The “Debug” view available to editors will give detailed statistics on how hard Sitecore is working to render a page.
It breaks the data down in two key ways: by the individual components that make up the page, and by the work each component did (the time it took to render, and the number of Sitecore items it read).
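Sitecore gathers these statistics for you, but the underlying idea is simple: time each component as the page is assembled, then report the slowest first. A minimal sketch in Python (this is not Sitecore code, and the component names are invented for illustration):

```python
import time

def render_header():
    time.sleep(0.01)   # stand-in for real rendering work
    return "<header>...</header>"

def render_product_list():
    time.sleep(0.05)   # deliberately the slowest component
    return "<ul>...</ul>"

def assemble_page(components):
    """Render each component in turn, recording how long each one took."""
    timings, html = {}, []
    for name, render in components.items():
        start = time.perf_counter()
        html.append(render())
        timings[name] = time.perf_counter() - start
    return "".join(html), timings

page, timings = assemble_page({
    "header": render_header,
    "product_list": render_product_list,
})

# Slowest components first, just like the debug report.
for name, seconds in sorted(timings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {seconds * 1000:.1f} ms")
```

Sorting by time spent means the component most worth investigating is always at the top of the list.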
Once you’ve identified components that take a long time, your developers will need a bit more data on which aspects of them are slow. Sitecore doesn’t break its statistics down to that level, but code profiling tools can. While many are available, Visual Studio (which most of your developers are probably using anyway) includes some really helpful ones.
As well as being able to monitor trends in memory / processor use for your code, it can also tell you down to the individual-line level which bits of your code are run the most, and which take the longest time. This can give developers really detailed information about where they should focus their efforts for making pages run faster.
One of the nice features about this tool is that while the data it collects is very detailed and technical, it is really well integrated with the overall code editing experience in Visual Studio. Clicking on a statistic that looks high can take you immediately to the piece of code that is responsible for it.
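Visual Studio’s profiler is specific to .NET, but the technique itself is language-agnostic. As an illustration only (this uses Python’s built-in cProfile rather than anything Sitecore-specific, and the slow function is contrived), a profiler run points straight at where the time goes:

```python
import cProfile
import io
import pstats

def slow_total(n):
    # Deliberately inefficient: repeated string concatenation.
    s = ""
    for i in range(n):
        s += str(i)
    return len(s)

profiler = cProfile.Profile()
profiler.enable()
slow_total(10_000)
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)  # show the five most expensive entries
print(stream.getvalue())
```

The report lists each function with its call count and time spent, so `slow_total` stands out immediately as the place to focus optimisation effort.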
But as I mentioned before, how your server deals with running your website code is only part of the battle. A successful project will think about what happens on your user’s web browser when it tries to show the page onscreen.
Unsurprisingly, there’s a whole other set of tools for this. The web browsers your site targets can give you a wealth of information: all modern browsers include a “developer tools” feature that provides detailed statistics on what the browser does to render your page.
The timeline graphs in these tools show the effort expended on turning the HTML, CSS and JavaScript downloaded from your servers into a visible page. This lets your front-end experts spot where there are options to simplify or adjust the page to make things happen faster. But browsers can also track the work done to download the web page across the network.
You can see file sizes, download times and things like how long the browser waited before it had the free time to download each asset. These statistics can help developers make more effective use of front-end optimisation techniques such as bundling and minifying scripts, compressing images, and lazy-loading assets that aren’t needed immediately.
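File size is the simplest of these numbers to reason about. As a back-of-the-envelope sketch (the asset names, sizes and connection speed here are all made up for illustration):

```python
# Hypothetical page assets, sizes in kilobytes.
assets_kb = {
    "page.html": 40,
    "styles.css": 120,
    "app.js": 600,
    "hero.jpg": 900,
}

connection_mbit_per_s = 5                          # assumed mobile connection
total_kb = sum(assets_kb.values())
kb_per_second = connection_mbit_per_s * 1000 / 8   # Mbit/s -> KB/s
seconds = total_kb / kb_per_second

print(f"{total_kb} KB at {connection_mbit_per_s} Mbit/s takes about {seconds:.1f} s")
```

In practice compression, caching and parallel connections all change the picture, which is exactly why the browser’s network panel measures real downloads rather than estimating them.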
Historically, with waterfall-style projects, this sort of testing would usually be scheduled towards the end of the project, as part of the quality and user acceptance phases of work.
This isn’t a great idea. The later in a project that you discover a performance challenge, the more difficult and expensive it is to fix.
Ideally you need to spot these challenges early, and fix them as soon as they crop up. The move to more agile project management approaches in recent years has helped a bit – since in the worst case, these checks should now be scheduled at the end of sprints, to examine the output of that sprint.
But my presentation to the Manchester user group argued that this isn’t really enough. You’re still allowing developers to finish a task and move on to something else before checking for these issues. Since all of the tools above are available during development, I suggested that these checks should be done day-to-day as part of the general process of writing code.
Modern developers talk in terms of “make it work, then make it pretty” when they construct the logic of their code. I’d argue that we should all be thinking in terms of “make it work, then make it fast and pretty” to make performance a core part of our projects.
While some of the data from the tools above is pretty technical, the overall trends shown by Sitecore’s debug statistics, or a web browser’s profiling tools are fairly easy to grasp. That means content editors can also see that a component or page takes much longer than others, and it doesn’t matter if they don’t understand why. After all, you don’t need to understand the code to raise a bug in how a page behaves – so why should you need to understand it to raise a performance issue?
So a key step in increasing customers, and their satisfaction with our websites, is having the whole project team thinking about performance.
Looking for help with your next Sitecore project? We’d love to hear from you. Get in touch with our Sitecore experts today for a free demo or audit.