Implementing Storage Caps and Limit Enforcement in a Database-as-a-Service

Derrick Sekidde
Published in Crane Cloud
Mar 22, 2024 · 3 min read


image from https://storyset.com/technology

First things first: I'm guessing we all understand what Database as a Service (DBaaS) is. If not, it is a cloud computing service that lets you access and manage databases without having to set up and maintain your own hardware or software.

In short, you get a remote database, and the service provider handles everything from setting it up to keeping it secure and up to date. Crane Cloud does just that: it champions Cloud Native Computing initiatives by providing these services. To check out Crane Cloud, head over to https://cranecloud.io, and if you want to breeze through database management on the platform, the docs are here: https://docs.cranecloud.io/databases/database/

I have had the opportunity to be part of the architecting team for a DBaaS service, and I want to share the design and implementation plan we used. First, why would we need database storage caps or limits? With cloud solutions, you pay for your usage, so a database on Crane Cloud, for example, is limited by storage: it has a cap, which might be around 1 gigabyte.

To ensure that a user does not exceed their allocated database storage, we must be able to enforce the limit. For simplicity, we did not want to rely on third-party tools, so we used the functionality provided by the database management system itself (Postgres/MySQL). Implementation-wise, we decided that a custom script or trigger within our application's codebase would do.
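To make that concrete, here is a minimal sketch of how a storage check could look on Postgres using the built-in pg_database_size() function. This is illustrative, not our actual code: the connection string, the admin role, and the helper name are all assumptions.

```python
import psycopg2

# Hypothetical admin connection string; in practice this would come from config/secrets.
ADMIN_DSN = "postgresql://admin:secret@localhost:5432/postgres"

def database_size_bytes(db_name: str) -> int:
    """Return the current on-disk size of a database in bytes."""
    with psycopg2.connect(ADMIN_DSN) as conn:
        with conn.cursor() as cur:
            # pg_database_size() is built into Postgres; on MySQL you would instead
            # sum data_length + index_length from information_schema.tables.
            cur.execute("SELECT pg_database_size(%s);", (db_name,))
            return cur.fetchone()[0]
```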

First, we devised and implemented it so that when the allocated limit is exceeded, the database automatically locks the user out. That seemed perfect at first, but also a little malevolent 😊, so we resolved to find another approach.
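For the curious, a lock-out like that could be done on Postgres roughly as sketched below, by disallowing connections to the database and terminating any sessions that are already open. Again, this is a sketch under assumptions (an admin connection, a cursor passed in, illustrative function names), not the exact Crane Cloud implementation.

```python
def deactivate_database(cur, db_name: str) -> None:
    """Block all new connections to a database and terminate existing ones."""
    # Identifiers cannot be passed as query parameters, so the name is interpolated
    # here; real code should validate it first.
    cur.execute(f'ALTER DATABASE "{db_name}" WITH ALLOW_CONNECTIONS false;')
    cur.execute(
        "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = %s;",
        (db_name,),
    )

def activate_database(cur, db_name: str) -> None:
    """Allow connections again, e.g. when a user or admin reactivates the database."""
    cur.execute(f'ALTER DATABASE "{db_name}" WITH ALLOW_CONNECTIONS true;')
```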

For the next approach, we determined that:

  1. We run a storage check for all databases, and if a database has reached 80% of its allocated storage, we email the user to notify them. This is intended to help them plan effectively and not be surprised when their limit is reached and we take further action.
  2. Once the limit/cap is reached, the user's permissions to insert and update data in the database are immediately revoked. However, they retain the ability to read from the database and to delete entries from it (a rough sketch of both checks follows this list).
  3. The initial lock-out implementation would remain, but instead serve situations where a user or administrator needs to activate or deactivate a database for various reasons. One such reason is limiting billing, which keeps accumulating when a database is not being used but deleting it is undesirable.
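Here is the rough sketch promised above, covering the 80% warning and the write-permission revoke on Postgres. The cap value, the WARN_THRESHOLD constant, and send_warning_email() are hypothetical placeholders, not our production code.

```python
# Intended to be invoked periodically for every hosted database, e.g. from a cron job.
STORAGE_CAP_BYTES = 1 * 1024 ** 3   # e.g. a 1 GiB cap
WARN_THRESHOLD = 0.8                # notify the user at 80% usage

def enforce_storage_cap(cur, db_name: str, db_user: str, user_email: str) -> None:
    cur.execute("SELECT pg_database_size(%s);", (db_name,))
    used = cur.fetchone()[0]

    if used >= STORAGE_CAP_BYTES:
        # Cap reached: revoke write permissions but leave SELECT and DELETE
        # untouched so the user can still read data and free up space.
        cur.execute(
            f'REVOKE INSERT, UPDATE ON ALL TABLES IN SCHEMA public FROM "{db_user}";'
        )
    elif used >= WARN_THRESHOLD * STORAGE_CAP_BYTES:
        # Approaching the cap: warn the user so the hard limit is not a surprise.
        send_warning_email(user_email, db_name, used, STORAGE_CAP_BYTES)  # hypothetical helper

def restore_write_access(cur, db_user: str) -> None:
    # Once usage drops back under the cap (e.g. after the user deletes data),
    # write permissions can be granted back.
    cur.execute(f'GRANT INSERT, UPDATE ON ALL TABLES IN SCHEMA public TO "{db_user}";')
```

One caveat worth noting: if the user's role owns the tables, they could grant the privileges back to themselves, so a stricter setup keeps table ownership with an admin role.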

We settled on the above because it allows us to monitor and control storage usage on a per-user basis, ensuring adherence to the allocated limits. The storage checks are run by a cron job within our application. If you have any questions or just want to take a look at the code for this, leave a comment. Catch you on the next one! ✌️
