Ben Cripps
I'm one of the Application Performance technical team.

Want a fast website? Then you need to know what you are doing!

During my developer career, I have come across developers and teams who do not know or appreciate the importance of web performance, and quite often it can be treated as an afterthought. And yes, at some point I was one of those developers!

When I became aware of web performance

The importance of having a fast website dawned on me back in 2001, when I worked for what was then the leading motorsport news portal. During my seven-plus years there, two key areas suffered from poor web performance.

The first big challenge we hit was with the database. The website was a great success: our monthly unique visitor count was growing and pages per session were stable, but over that time the site took on more championships, more news, more functionality and additional languages.

Combined with the growth of digital photos and image galleries, the database soon became quite large, and a performance issue gradually raised its head – starting with the odd blip and eventually becoming a P1 site-down incident.

The second performance challenge we had was the number of third-party tracking and advertising tags. I’ll talk about third-party tags in a future post, but in this blog I’m going to touch on the importance of knowing what you’re doing, be it writing code or a database statement.

An eye opener

The database performance challenge was an eye-opener for me, and I learnt a lot about the pitfalls we were certainly guilty of. Ultimately we got stuck, and we turned to a database specialist to help identify and resolve the issues. I was lucky enough to sit in and absorb as much knowledge as my young mind could take at the time.

One of the biggest lessons learnt was our reliance upon the automatic database tuning tool we were using. The tool looked great – we ran it against a set of database logs, it analysed them and suggested a set of indexes to add. As with any wizard tool we used at the time, we blindly followed the steps and allowed it to generate a number of randomly named indexes.

As both the database and demand grew, we started seeing more slow database calls, ultimately ending in a dead database and a broken website. To resolve this, we would manually restart the database and re-run the tuning wizard, which in turn would suggest adding yet more randomly named indexes.

Before we knew it, we ended up with overlapping indexes, all causing various locks and blocking requests. All bad things, and all because we blindly ran a wizard. In short, we didn't know what we were doing and created a complete mess.

It wasn’t the tool’s fault, far from it!
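To make the pitfall concrete, here's a minimal sketch of the kind of mess wizard-generated indexes create. It uses SQLite via Python purely for illustration – the table, columns and cryptic index names are all invented for this example, not taken from our actual system:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE articles (
    id INTEGER PRIMARY KEY,
    championship_id INTEGER,
    language TEXT,
    published_at TEXT,
    title TEXT
);

-- Wizard-style output: cryptic names, overlapping column sets.
CREATE INDEX idx_dta_0042 ON articles (championship_id);
CREATE INDEX idx_dta_0057 ON articles (championship_id, published_at);
CREATE INDEX idx_dta_0061 ON articles (championship_id, published_at, language);
""")

# Listing the indexes side by side makes the overlap obvious: the first
# two are redundant prefixes of the third, yet each one still has to be
# maintained on every write.
for (name,) in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index' AND tbl_name = 'articles'"
):
    cols = [row[2] for row in conn.execute(f"PRAGMA index_info('{name}')")]
    print(f"{name}: ({', '.join(cols)})")
```

Even a simple listing like this, done occasionally, would have shown us the redundancy piling up – something we never thought to do at the time.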

Lessons learnt

The lesson here is that if you’re going to use a tool to generate something, make sure you know and understand what it is doing. It’s a bit like writing code – sure, you can write it, but if you don’t understand what the code is doing and how it operates, you’re setting yourself up for future failure.

Along with the obvious advice on knowing what you're doing, other small changes and processes could have helped. A key one is peer review. If you are lucky enough to have a DBA, then great – get them to review and approve any database statements or changes. No DBA? No problem – get a fellow developer to give it a once-over. Remember: a second set of eyes never hurts.

Either way, there is always value: it triggers team communication, spreads knowledge and career experience, and results in the right questions being asked. These teams are engaged teams; they are winning teams.

Thankfully, that lesson has stuck with me. After the event, we took the time to learn and understand more about database behaviour, the dos and don'ts of indexing, and so on. Once the sense of panic had faded and the editorial team started talking to us again, I found it to be one of the best lessons of my career.

And don’t take this blog as saying code generation or helpful wizards are evil – they’re great time savers, but only when you know and understand exactly what they are doing.

Know and understand what you’re doing

Today, I still see these, or similar, challenges occurring all the time – the most recent being a team who used a code generation tool without reviewing or truly understanding how it ran. I learnt the hard way: tools such as these can be great, but running them without any understanding of what they’re doing is like playing chicken – you might get away with it today, but one small mistake or misunderstanding and tomorrow you may get run over. If I can, I will help other teams avoid the same pitfalls.

In the short term, did the development team in question save time, resources and money? Yes. But what about the medium- and long-term benefits? I would argue no, and I believe it’s cost the team more than they realise.

As an outside adviser looking in, I see internal confusion and a lack of understanding – what does this randomly named bit of code do, what’s its priority, what are its dependencies, and so on? If I can see that, what does their customer see? Does it raise doubts in their customers’ minds? I suspect so.

While the team are running around firefighting, other requests and issues keep coming in. These types of situations lead to frustrated customers, annoyed managers and an unhappy, demoralised development team. It’s a vicious circle – the work backlog keeps growing, and the pressure keeps building to resolve the issue once and for all.

Had the team understood, managed and documented the code from day one, the issue might have been averted altogether or, failing that, resolved more efficiently. The time saved firefighting could have been spent developing new and exciting functionality; to me, it is a lost opportunity.

How did we resolve the database performance issue?

After calling in the database expert, we formulated a plan. The first point was never to blindly use a performance tuning tool again; the second was to know and understand what we were doing (we worked that part out for ourselves)!

While we went away and learnt about database indexes, the different types and when they should be used, the DBA reviewed the database schema and our top database statements. Their work identified what indexes we needed, any poorly performing SQL statements and what future work we should consider for the database.

From reviewing these recommendations, we tried to minimise the number of table joins and implemented a sensible, clearly named set of indexes. We saw an immediate benefit from our joint efforts – both our customers and our content editors had an improved experience, and the pressure started to lift.
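For contrast, here's a minimal sketch of the approach we moved to – again SQLite via Python purely for illustration, with invented table and index names – where a single, deliberately chosen and clearly named index is verified against the query plan rather than taken on trust:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE articles (
    id INTEGER PRIMARY KEY,
    championship_id INTEGER,
    language TEXT,
    published_at TEXT,
    title TEXT
);

-- One deliberate, clearly named index that matches how the page
-- actually queries: latest articles for a championship and language.
CREATE INDEX ix_articles_champ_lang_published
    ON articles (championship_id, language, published_at DESC);
""")

# Check the query plan rather than assuming: it should report a SEARCH
# using ix_articles_champ_lang_published, with no separate sort step.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT title
    FROM articles
    WHERE championship_id = ? AND language = ?
    ORDER BY published_at DESC
    LIMIT 20
""", (1, "en")).fetchall()
for row in plan:
    print(row)
```

The clear name alone pays its way: the next developer can tell at a glance which query the index serves – exactly the understanding we had been missing.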

We followed up by reviewing the database schema itself. During the first few years of the website, the number of championships and functionality offered had increased, resulting in a few strange database schema decisions. We looked at our past and expected growth rate, and the technical director rightly took the decision that a fresh database design was required.

We were a small team (two developers, one IT director), but bizarrely it was the first time we truly sat down as a team and shared our thoughts. It was great: the knowledge silos disappeared, and together we created a new database schema with performance and the website’s plans in mind.

Did it work? When the time was right, I’m pleased to say the new database schema went live without issue. I don’t recall our customers even knowing something had changed; it really was that good.

If you can recognise any of this in your own organisation and want to do something about it, why not get in touch and we can have a chat – no pressure.
