Inside DotMailer: Why Our New Kit Is 28 Times Better

Inside the technology team at dotMailer there’s a group of people who spend most of their time thinking about three things: performance, scalability… and the future.

dotMailer is one of the fastest-growing ESPs in the UK, so it’s essential that we proactively address performance and scalability.

As such, we’ve recently acquired new hardware to host the dotMailer platform. This is a significant investment in the product and comes as the result of a number of detailed analysis projects focusing on our performance, scalability and projected growth over the coming years.

Implementing such a major change is a complex operation, and we’re currently in the testing phase to ensure that everything is configured and tuned for peak performance before we carry out the actual migration.

You’ll hear more about the migration date when we’re happy to make the switch from our old kit to the new.

28 times larger, 8 times more powerful

Our new solution gives us significantly higher performance and throughput, better redundancy and more flexibility to react to new customers and market changes.

Put simply, we’re migrating to servers with eight times the CPU power and memory of our current ones, and attaching them to a storage solution 28 times larger – with the ability to grow well beyond that.

Finally, the use of shared SAN storage (a Storage Area Network – a network used exclusively for storage) means that we’re also moving to a new form of High Availability within our servers, which will provide a performance increase in its own right – and an even more stable and protected platform.

In the future this means that we can simply purchase more servers and add them to our new cluster as demand grows.

We plan to keep any maintenance windows to an absolute minimum so we won’t be changing the entire hardware platform in one go. For this phase we’re changing our database servers and our storage solution.

In summary…

From a technical perspective this is a huge simplification of a large project (but this is an email marketing blog and not a technical infrastructure one!).

I hope, however, that it goes some way to explaining the effort and commitment we’re putting into keeping the availability and performance of the platform at a high level in the coming years.

If you’re a current or prospective customer interested in understanding our commitment and architecture in more detail, then please contact the sales team or your account manager, who can direct your specific questions to the technology team; we’re always happy to talk tech.

For those of a technical persuasion, here’s a summary of what we’re moving from and to:

Database Servers – typical current configuration

  • 2 physical CPUs with 4 cores each (8 cores in total)
  • 64GB RAM
  • 2TB local storage spread across 38 physical drives
  • No SAN storage

Database Servers – typical new configuration

  • 4 physical CPUs with 8 cores each, plus hyper-threading (32 physical cores, or 64 logical cores in total)
  • 512GB RAM
  • Shared SAN storage, consisting of:
      • 56TB total storage spread across 84 physical drives
      • Multiple redundant paths (meaning we can tolerate hardware failures without anyone noticing!)
      • Ability to dynamically expand storage as needed (without taking servers offline)
      • Ability to grow beyond 56TB by simply adding more storage racks
      • Better backup and restore solutions
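
For the curious, the headline multipliers come straight from the figures above. A quick sketch (using only the numbers quoted in this post) confirms the arithmetic:

```python
# Headline figures from the spec lists above:
# old config vs new config, per typical database server.
old = {"cores": 8, "ram_gb": 64, "storage_tb": 2}
new = {"cores": 64, "ram_gb": 512, "storage_tb": 56}

print(new["cores"] / old["cores"])            # 8.0  -> 8x the (logical) cores
print(new["ram_gb"] / old["ram_gb"])          # 8.0  -> 8x the memory
print(new["storage_tb"] / old["storage_tb"])  # 28.0 -> 28x the storage
```

Hence “28 times larger, 8 times more powerful”: the 28× refers to storage capacity, and the 8× to both CPU and memory.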
