Our Website Architecture: Under the Hood

Convincible
8 min read · Aug 1, 2021

This article explains the technological approach we take when building and hosting bespoke websites — and the unique benefits this offers to our clients.

Some aspects of this article are simplified, in that a knowledgeable person might say “ah, but you haven’t mentioned…” These caveats are omitted because there is plenty I could say in response, but only so much space. If questions arise, let’s discuss!

Principles

Our architecture is designed to create websites which:

  1. Load almost instantly
  2. Never go offline or throw unexpected errors
  3. Are immune to malicious attacks
  4. Are easy to edit and impossible to break accidentally

These are tall orders. Most websites have loading bottlenecks, intermittently stop working, continually sprout security vulnerabilities, and offer unintuitive editing systems. These translate into higher bounce rates, lost sales, the spread of malware, and site owner frustration.

The importance of website speed cannot be overstated. Each delay of just 0.1s can hurt conversion rates by 7%, while a 2s delay can double the number of visitors who give up and leave (‘bounce’).¹ The sweet spot is when pages load in less than a second. Few websites achieve that; ours do.

When we’re working on a bespoke website for you, we don’t just want to make it look good — we want it to work flawlessly for your visitors. Here’s what we do to achieve that.

Static

Most website architectures out there are ‘dynamic’. Our architecture is ‘static’. The first certainly sounds better, but here’s the difference:

  • In a ‘static’ architecture, all the pages of the website are generated in advance. Now, when the visitor arrives, the page is immediately sent to them.
  • In a ‘dynamic’ architecture, nothing is generated in advance. When the visitor arrives, the entire system fires up to generate the page they requested. Once it’s done, the page is sent to them.

It’s clear which architecture produces a faster website. Dynamic sites require a program to run every time a page is requested, which takes time.
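
To make the contrast concrete, here’s a minimal sketch in JavaScript (Node.js). The ‘database’ and template here are illustrative stand-ins, not our actual implementation:

```javascript
// A minimal sketch of dynamic vs static, using Node's built-in modules.
// The "database" and template below are illustrative stand-ins.
const http = require('http');
const fs = require('fs');

const fakeDb = { about: { title: 'About Us', body: 'We build websites.' } };

// Dynamic: work happens on EVERY request.
function dynamicHandler(req, res) {
  const page = fakeDb.about;                                // 1. query a data store
  const html = `<h1>${page.title}</h1><p>${page.body}</p>`; // 2. render a template
  res.end(html);                                            // 3. only now, respond
}

// Static: the same HTML was generated in advance and saved to disk,
// so the server only has to read a file and send it.
function staticHandler(req, res) {
  fs.createReadStream('dist/about.html').pipe(res);
}

// Serve the static version (swap in dynamicHandler to compare).
http.createServer(staticHandler).listen(3000);
```

Every request to the dynamic handler repeats all three steps; the static handler skips them, because they already ran at build time.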

Static architectures have some limitations. For instance, a website like Facebook has to be dynamic. You can’t generate the News Feed in advance; it’s got to be up-to-the-millisecond every single time you visit. But most organisation websites are nothing like this.

Fundamentally, most organisational sites are just images and text. Nothing essential to the content of the page changes between each visit; the page only changes when you, the client, edit it. So, we can generate the site after each edit, rather than before each visit. It’s a colossal time saving. Page load times can drop from ten seconds to a split second in one fell swoop.

Headless

A key reason that most architectures are ‘dynamic’ is because the chosen content management system (CMS) is doubling as the page delivery system.

By design, the software used to edit pages (the CMS, which must be dynamic) also handles sending pages to visitors, forcing delivery to be dynamic as well.

WordPress and Drupal are examples of this. Both are programs that must run ‘dynamically’. When you create pages with them, they save your content in their own special format in a database. So, the program has to run again, to read that format from the database, when a visitor wants to view a page.

The alternative to this is a ‘headless’ CMS (don’t ask why it’s called that). This is a piece of software with the sole purpose of editing content, not serving pages. Only when you, the editor, click ‘Publish’ does it trigger our code to read your content, generate webpages based on it, and save them to disk. Later, when people visit your site, entirely different software reads those webpages from disk and delivers them. The processes for editing and delivery are thus (to use the lingo) “de-coupled”. Editing is dynamic, but delivery can be static.
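
To give a flavour of what happens when you click ‘Publish’, here’s a minimal sketch of such a generation step, assuming a hypothetical headless CMS that exposes its content as JSON. Real ones work similarly; the URL, response shape and markup here are our inventions:

```javascript
// build.js: a sketch of the generation step. It runs once per publish,
// not once per visit. The CMS endpoint and response shape are assumptions.
const fs = require('fs');

async function build() {
  // 1. Read the published content from the headless CMS's JSON API.
  const res = await fetch('https://cms.example.com/api/pages'); // hypothetical URL
  const pages = await res.json(); // assumed shape: [{ slug, title, body }, ...]

  // 2. Transform each content entry into a finished HTML page.
  fs.mkdirSync('dist', { recursive: true });
  for (const page of pages) {
    const html = `<!doctype html>
<html lang="en">
<head><meta charset="utf-8"><title>${page.title}</title></head>
<body><h1>${page.title}</h1>${page.body}</body>
</html>`;
    // 3. Save it to disk, ready to be served as-is.
    fs.writeFileSync(`dist/${page.slug}.html`, html);
  }
}

build();
```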

Because headless CMSs are concerned only with editing, they can also make editing easier. In large part that’s because they must be custom-configured for each individual site. For instance, if your home page has a title, introduction and featured image section, then the editing screen will offer precisely these three controls. If you also want to be able to edit the call-to-action, then we’ll configure it with a control for this too.
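
As an illustration, the content model behind that editing screen might look something like the following sketch. The syntax varies from CMS to CMS, and these field names are just our example:

```javascript
// A sketch of a per-site content model (each headless CMS has its
// own schema syntax). The editor's screen shows exactly these
// controls and nothing else.
const homePageModel = {
  name: 'homePage',
  fields: [
    { name: 'title',         type: 'string'   },
    { name: 'introduction',  type: 'richText' },
    { name: 'featuredImage', type: 'image'    },
    { name: 'callToAction',  type: 'string'   }, // added only because this client wants to edit it
  ],
};
```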

In short, we set up the system to present exactly and only what you require. There are no confusing settings you’re “not allowed to touch”, and there’s no way you can accidentally change the wrong thing.

CDN

Finally, your pre-generated pages must be uploaded to the Internet, and the ideal place for them is a Content Delivery Network (CDN).

Traditional website hosting sells you space on a server. So your site exists on literally a single computer, somewhere out there, in all likelihood squished up against a hundred other sites among which the server divides its attention. If any part of the server experiences an error, or develops a vulnerability, your site could go offline, or be automatically hacked.

A CDN is a worldwide network of hundreds of computers, each of which is focused on doing one thing: quickly delivering a webpage from disk. It distributes copies of your website throughout its network, so that even if one ‘node’ goes offline, all the others remain. And whenever a visitor requests a webpage, the closest computer in the network delivers it to them, reducing time in transit.

In fact, CDNs offer such unparalleled reliability and speed that they can comfortably serve your site to 10,000 visitors per second. Their limitation is that they can only serve static websites. Which is no problem for us at all.
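
The upload step is correspondingly simple. Here’s a sketch; the `cdn` client is a hypothetical stand-in, since in practice each provider supplies its own SDK or a one-line command that does the same:

```javascript
// deploy.js: a sketch of the upload step, assuming a flat 'dist' folder.
// The `cdn` client is a hypothetical stand-in for a real provider's SDK.
const fs = require('fs');
const path = require('path');
const cdn = require('hypothetical-cdn-sdk'); // not a real package

async function deploy(dir) {
  // Push every generated file to the CDN's storage. The CDN then
  // replicates the copies across its edge nodes worldwide.
  for (const file of fs.readdirSync(dir)) {
    await cdn.upload(path.join(dir, file), { destination: '/' + file }); // hypothetical call
  }
}

deploy('dist');
```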

Summary

So, here’s how it all comes together:

  1. You edit the content on your site in a headless CMS, then click a button to say you’re done and want the changes to be made live.
  2. Automated code reads the content from the CMS, and transforms it into webpage files.
  3. Those pages are uploaded to the CDN. Copies are sent to dozens of locations across the globe.
  4. A visitor types in your website address. The closest computer in the CDN immediately sends the stored copy of (e.g.) your home page.
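
For the technically curious, the glue between steps 1 and 2 is usually a ‘publish webhook’: the CMS calls a URL whenever you click ‘Publish’. A minimal sketch, reusing the hypothetical build and deploy scripts from earlier:

```javascript
// webhook.js: rebuild and redeploy whenever the CMS reports a publish.
// The endpoint path and the two scripts are the hypothetical ones above.
const http = require('http');
const { execSync } = require('child_process');

http.createServer((req, res) => {
  if (req.method === 'POST' && req.url === '/on-publish') { // hypothetical endpoint
    execSync('node build.js');  // step 2: regenerate the pages
    execSync('node deploy.js'); // step 3: upload them to the CDN
    res.end('Site rebuilt and deployed.');
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(8080);
```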

This architecture is the primary way we achieve the aims we stated at the beginning:

  1. It’s fast, because the pages are pre-built and delivered from a nearby node over a high-speed network.
  2. It’s reliable, because several copies are stored all over the world.
  3. It’s secure, because there is no single server to attack and no program to hijack.
  4. It’s easy to edit, because the CMS is customised for your specific site.

While our specific implementations may be unique, this sort of architecture is not. It’s also called a JAMstack architecture (don’t ask), and many website developers take a similar approach. More and more sites are being built this way, because of the unique benefits it offers.

None of this is to say that other architectures are bad. WordPress — the most popular software in the world for running websites — has many of its own unique benefits. It also has many drawbacks that our approach completely sidesteps.

Front-End Code

We’ve discussed our broad system architecture, but the heart of your website — the part that differs completely from one site to the next — is the HTML, CSS and JavaScript code that creates the structure and visual design of the site. This is often called ‘front-end’ code.

We don’t copy and paste any pre-made templates. We write every line of code by hand. So we understand how your site works, because we crafted it. And that also means we can control it completely, both to optimise it and to ensure it matches your requirements exactly.

Our aim when writing this code — besides making your site look fantastic and enhancing your brand — is to create websites that:

  1. Just work for all your visitors, regardless of browser or mobile device, and remain usable for visitors who may have poorer eyesight or other difficulties (i.e. ‘compatibility’ and ‘accessibility’).
  2. Help you turn up higher in search results (‘SEO’) and on social media (‘machine readability’).

Achieving these points comes down to deeply understanding website code and its best practices. Two websites that look identical on the surface could be completely different at the code level. Better code is shorter and more efficient, and it runs faster.

Talking technically, we also believe website code should:

  • Be semantically and syntactically valid, DRY and concise, and easy for other developers to understand.
  • Work even if JavaScript is disabled (“graceful degradation”/“progressive enhancement”) or other advanced features are unavailable.
  • Not rely on cutting-edge hacks or esoteric fads that can only be understood by a small subset of developers.

The above are the main reasons why we don’t create single-page apps using technologies like React or Vue. These can offer a further speed boost, but at the cost of, for instance, not working at all in some situations, such as when JavaScript is disabled or fails to load.
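
To show what this looks like in practice, here’s a sketch of progressive enhancement on a sign-up form. The form ID and the /subscribe endpoint are hypothetical; the point is that the form works as plain HTML when JavaScript is unavailable, and is merely smoothed over when it is:

```javascript
// enhance.js: progressive enhancement for a sign-up form that already
// works without JavaScript (the browser would simply POST and reload).
// The #signup form and /subscribe endpoint are hypothetical.
const form = document.querySelector('#signup'); // <form id="signup" action="/subscribe" method="post">

if (form) {
  form.addEventListener('submit', async (event) => {
    event.preventDefault(); // take over only when JavaScript is actually running
    const response = await fetch(form.action, {
      method: 'POST',
      body: new FormData(form),
    });
    // Replace the form with an inline confirmation instead of a full reload.
    form.outerHTML = response.ok
      ? '<p>Thanks for subscribing!</p>'
      : '<p>Something went wrong. Please try again.</p>';
  });
}
```

If enhance.js never loads, nothing breaks: the browser falls back to the form’s normal submit behaviour.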

These technical points actually contribute to the main two aims as well. For instance:

  • HTML that is semantically valid (e.g. specifying in the code that a heading is a heading, not just large and bold text) is more machine-readable, which improves SEO and accessibility.
  • Code that gracefully degrades and relies on well-supported features will be more compatible with more devices and browsers.

Like a car’s engineering, quality code ‘under the hood’ is invisible once you’re on the road — but it’s what’s responsible for a smooth experience.

Conclusion

Like any design decision, our architecture sacrifices some benefits in order to emphasise others.

We believe speed, reliability, security and ease of editing are the highest technical priorities for effective websites, so we’ve designed everything around maximising these advantages.

If you like the sound of having a static website built like this — or you have any further questions — just get in touch.
