Global performance tuning

This is a "network scalability & big picture" performance question, not a machine-level performance question, but I didn't see a specific forum for it :)

I see that there are several international datacenters available for deployment. If I have a single non-replicated, vertically scaled Postgres database machine, does it make any sense to deploy front-end machines to the datacenters around the globe? Or will I get better user performance deploying FEs only to the same DC as the database? Each user request typically makes at least one, and sometimes more, round trips to the DB machine.

If it does make sense to deploy FEs globally, how would the client target the closest one?

4 Replies

Definitely keep your application close to the database. It does not make sense to have the application close to users if every request still has to travel to the database anyway. If you want to speed things up for remote users, consider adding a CDN like Cloudflare or Fastly in front of your application server, so that static content is cached close to the user and requests are routed over a fast backbone.
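To see why, here is a back-of-envelope latency model. The numbers are illustrative assumptions (user in Europe, database in a US data centre, a page needing three sequential queries), not measurements:

```python
# Illustrative round-trip times in milliseconds (assumed, not measured).
USER_TO_EU_FE_MS = 20      # user -> nearby European front-end
USER_TO_US_FE_MS = 100     # user -> front-end in the database's US data centre
EU_FE_TO_US_DB_MS = 100    # cross-Atlantic front-end -> database round trip
LOCAL_FE_TO_DB_MS = 1      # same-data-centre front-end -> database round trip
QUERIES = 3                # sequential DB round trips per page load

# Front-end near the user, database far away: every query crosses the ocean.
fe_near_user = USER_TO_EU_FE_MS + QUERIES * EU_FE_TO_US_DB_MS

# Front-end next to the database: one long hop for the user, cheap queries.
fe_near_db = USER_TO_US_FE_MS + QUERIES * LOCAL_FE_TO_DB_MS

print(fe_near_user)  # 320
print(fe_near_db)    # 103
```

With these assumptions the "nearby" front-end is roughly three times slower, and the gap widens with every additional DB round trip per page.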

If you are up for it, you can also consider some kind of database replication. Some applications can use master-slave replication, where you always write to a single master somewhere in the world but reads go to a local read slave. This only helps if most page loads do no writes at all.
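The routing logic for that setup can be tiny. A minimal sketch, assuming hypothetical hostnames and classifying statements by their leading SQL keyword (a real app would use its driver's connection pooling instead):

```python
# Hypothetical connection strings for a master and a nearby read replica.
MASTER_DSN = "host=master.example.com dbname=app"
REPLICA_DSN = "host=replica-local.example.com dbname=app"

WRITE_KEYWORDS = {"INSERT", "UPDATE", "DELETE", "CREATE", "ALTER", "DROP"}

def pick_dsn(sql: str) -> str:
    """Route writes to the global master and reads to the local replica."""
    first_word = sql.lstrip().split(None, 1)[0].upper()
    return MASTER_DSN if first_word in WRITE_KEYWORDS else REPLICA_DSN

print(pick_dsn("SELECT * FROM users"))            # goes to the replica
print(pick_dsn("UPDATE users SET name = 'x'"))    # goes to the master
```

Note the caveat from above: reads from the replica may lag the master slightly, which is why this pays off mainly for read-heavy page loads.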

Finally, there are possibilities to set up master-master (also called multi-master) replication over WAN. This usually slows down commits, since each one has to wait for acknowledgements from other nodes. I'm not very familiar with the Postgres ecosystem, but I found a couple of projects that might be interesting if you want to go down this road.
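The commit-latency cost is easy to model: with synchronous acknowledgement, a commit cannot finish before the slowest required ack returns. A toy sketch with assumed round-trip times:

```python
# Assumed round-trip times from the committing node to its peers (ms).
node_rtts_ms = {"us-east": 1, "eu-west": 90, "ap-south": 220}

def commit_latency_ms(rtts: dict, quorum: int) -> int:
    """Latency when a commit must be acknowledged by `quorum` nodes:
    the quorum-th fastest round trip bounds the commit."""
    return sorted(rtts.values())[quorum - 1]

print(commit_latency_ms(node_rtts_ms, 3))  # all nodes must ack: 220 ms
print(commit_latency_ms(node_rtts_ms, 2))  # a majority suffices: 90 ms
```

This is why WAN multi-master tends to hurt write-heavy workloads: every commit pays the inter-continental round trip, regardless of where the user is.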

It all depends on your application and data requirements, plus budget.

If you are deploying a truly global (around-the-Earth sort of thing) system, then you definitely need to set up user-facing front-end servers around the globe in different data centres. But you will have to be very smart about data replication, caching and performance tuning, not to mention timing and conflict resolution.

I kept a large (30+) cloud of servers in the Linode Frankfurt data centre, and the whole data centre went down last week for half a day, so all my high-availability systems became irrelevant. Oh well…

How often do entire datacenters go down on Linode? Any public stats or event history?

It's pretty concerning that an entire datacenter could go down; that would essentially require multi-DC replication for any datastore. Of course, I'm just starting out a product, so some downtime will be understandable.

I've discussed this with close friends, and we decided that if the data is in just one location, there's really no point in distributing front-ends. Any global distribution would pretty much require data replication to provide any benefit.

Well, I've been with Linode for a few years and I can remember two cases of entire data centres dying.

I also host things with other providers: one's Canadian data centre died once (for nearly a week) and its US data centre has died twice. Amazon has had similar downtime in recent history as well.

So I think Linode is about average for the industry in terms of death count :)
