*** This blog is based upon a recent webcast that can be watched here. ***
As with part 1, part 2, and part 3 of this data modeling blog series, this blog also stresses that the cloud is not nirvana. Yes, it offers essentially infinitely scalable resources, but you must pay for what you use. When you make poor database design choices for applications deployed to the public cloud, your company pays every month for all those inherent, built-in inefficiencies. Whether you statically over-provision or scale dynamically, bad design runs up monthly cloud costs very quickly, although the bill is at least capped by the instance sizes you selected.
However, if you instead embrace newer serverless database options from vendors such as Snowflake and Amazon Redshift, then your database cloud resource usage becomes 100% dynamic (i.e., it auto-scales to maintain a service-level-agreement threshold). You pay for consistent performance at the expense of uncapped monthly costs. In that scenario, bad database design can easily overwhelm any planned budget. So, for serverless databases, you really cannot afford to build bad choices into the design.
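The capped-versus-uncapped distinction can be made concrete with a toy cost model. Every rate, node count, and workload figure below is hypothetical (not vendor pricing), chosen only to illustrate how an inefficient design inflates an auto-scaling bill:

```python
# Toy cost model contrasting provisioned (capped) vs. serverless (uncapped)
# database spend. All numbers are hypothetical illustrations.

HOURS_PER_MONTH = 730

def provisioned_cost(node_hourly_rate: float, node_count: int) -> float:
    """Fixed-size cluster: you pay for every hour, busy or idle,
    but the bill is capped by the cluster size you chose."""
    return node_hourly_rate * node_count * HOURS_PER_MONTH

def serverless_cost(unit_hourly_rate: float, units_per_hour: list[float]) -> float:
    """Serverless: you pay for whatever capacity auto-scaling consumed
    to hold the performance SLA -- there is no built-in cap."""
    return unit_hourly_rate * sum(units_per_hour)

# A workload whose inefficiency (e.g., a bad data model forcing large
# scans) makes auto-scaling burn 4x the capacity during business hours.
busy = [32.0] * 300   # ~300 busy hours/month at 32 capacity units
quiet = [8.0] * 430   # remaining hours at 8 capacity units

capped = provisioned_cost(node_hourly_rate=1.086, node_count=4)
uncapped = serverless_cost(unit_hourly_rate=0.36, units_per_hour=busy + quiet)

print(f"provisioned (capped):  ${capped:,.2f}/month")
print(f"serverless (uncapped): ${uncapped:,.2f}/month")
```

With these made-up numbers the serverless bill overtakes the fixed cluster, and nothing stops it from climbing further if the bad design forces even more scaling; fixing the data model shrinks the serverless bill directly, whereas on a provisioned cluster it would merely reduce waste inside a fixed cost.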
Serverless database deployment
First, let’s examine what a truly serverless database deployment option looks like. Below is a diagram showing an example for Amazon Redshift.
Figure 1: An example serverless database deployment in Amazon Redshift.
It’s actually a very simple option to deploy. But now the taxi meter runs non-stop, at whatever rate is required to maintain the level of performance you set as acceptable.