Service Resilience — part 1: Startup Technology

Martina Alilovic · 5 min read

The beginnings

It’s 2011 — ancient history for NoSQL databases. It’s been 5 years since the publication of Google’s famous Bigtable article that started the NoSQL revolution. New cool databases are popping up from every corner. The NoSQL database market, even though full of potential, is still very unstable: Cassandra has just released version 0.6, and MongoDB, at version 1.6, is still finding its voice. Nevertheless, the promise of easy scaling, high availability, and low write latency is capturing everybody’s attention. ReversingLabs is no exception.

At the time, ReversingLabs was a small startup with no money for pricey licenses for enterprise solutions. Our mission was to collect a large number of samples every day and analyze them with different processing systems, dynamically and statically, generating reports that needed to be accumulated and stored. What that meant was loads of data and loads of simultaneous updates. PostgreSQL, with its multi-version concurrency control, consumed too much memory, and its write amplification was simply too large for our use case. We needed something different.


As a small startup with an ambitious mission, we had only one option at the time: build our own datastore. It had to consume very few resources, writes had to be really fast, and it had to scale horizontally. Fancy features didn’t interest us; we needed neither secondary indexes nor counters. We had a simple but specific use case to cover. Even though our core business lies in malware analysis, we invested significant resources in developing technology that could keep up with our growth.

That is how the idea of MegaBase was born. MegaBase is a single-leader key-value store, written in C++ and built specifically around our use case.

We started with one node. The B-tree indexing model we began with, using in-place updates, was optimized for minimal memory usage. Where Postgres’s copy-on-write approach would have required a lot of memory and annoying vacuums, MegaBase enabled us to store, and update daily, millions of entries on cheap hardware. That let us grow fast in the early days, when we had very few investors.
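The memory difference between the two update strategies can be illustrated with a toy sketch (this is not MegaBase code; the names are made up for illustration). Copy-on-write keeps superseded row versions around until a vacuum reclaims them, while an in-place update simply overwrites the record:

```python
# Copy-on-write (Postgres-style MVCC): every update writes a new
# version of the row; old versions linger until a vacuum reclaims them.
cow_store = {}
dead_versions = []  # superseded tuples awaiting vacuum

def cow_update(key, value):
    if key in cow_store:
        dead_versions.append((key, cow_store[key]))  # old version kept around
    cow_store[key] = value

# In-place update: the record is overwritten where it lives, so the
# footprint stays proportional to the live data set, with no vacuum needed.
inplace_store = {}

def inplace_update(key, value):
    inplace_store[key] = value  # no old version retained

for i in range(3):
    cow_update("sample", i)
    inplace_update("sample", i)

print(len(dead_versions))   # garbage accumulated by copy-on-write
print(len(inplace_store))   # one live record, no garbage
```

With millions of daily updates, that accumulated garbage (and the vacuuming it forces) is exactly the overhead the in-place design avoided.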

The next features we needed were redundancy and increased read throughput, so another replica node was added to MegaBase, and we focused development on elasticity through horizontal scaling.

Soon, as load demands increased, we needed more disk space and even faster writes. Customizing known compression algorithms cut disk usage to a third. And because the codebase was simple and layered, switching to an LSM-tree-based model required changing only the layer responsible for writing to disk, yielding a major improvement in write speed.
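The write-speed gain comes from how an LSM tree turns random updates into sequential flushes. A minimal toy sketch of the idea (illustrative only, not MegaBase internals): writes land in an in-memory memtable and are periodically flushed as immutable sorted runs, and reads check the memtable first, then the runs newest-first:

```python
import bisect

MEMTABLE_LIMIT = 2  # tiny on purpose, to force a flush in the example

memtable = {}
sorted_runs = []  # newest run last; each run is a sorted list of (key, value)

def put(key, value):
    memtable[key] = value
    if len(memtable) >= MEMTABLE_LIMIT:
        # Flush: one sequential write of a sorted, immutable run.
        sorted_runs.append(sorted(memtable.items()))
        memtable.clear()

def get(key):
    if key in memtable:
        return memtable[key]
    # Search runs newest-first so the latest flushed value wins.
    for run in reversed(sorted_runs):
        i = bisect.bisect_left(run, (key,))
        if i < len(run) and run[i][0] == key:
            return run[i][1]
    return None

put("a", 1)
put("b", 2)   # memtable is full: triggers a flush to a sorted run
put("a", 3)   # newer value sits only in the memtable
print(get("a"))  # 3
print(get("b"))  # 2
```

A real LSM tree adds write-ahead logging, compaction of runs, and bloom filters, but the write path above is why sequential-flush storage beats in-place B-tree writes under heavy update load.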

By then, MegaBase was already running on 7 nodes: 6 for storing data and 1 leader that synchronized writes. All data was partitioned into a fixed number of parts distributed across the data nodes. Each data node had one additional replica for better read performance, and writes were always synchronous. We developed a simple Python interface that all our services used to communicate with MegaBase.
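A fixed-partition scheme like the one described above can be sketched in a few lines. This is a hedged illustration, not the actual MegaBase routing code; the partition count, node names, and hash choice are assumptions:

```python
import hashlib

NUM_PARTITIONS = 64  # fixed at cluster creation; illustrative value
DATA_NODES = ["node1", "node2", "node3", "node4", "node5", "node6"]

def partition_for(key: bytes) -> int:
    # A stable hash ensures every client routes a given key the same way.
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

def node_for(partition: int) -> str:
    # Round-robin assignment of the fixed partition set to data nodes.
    return DATA_NODES[partition % len(DATA_NODES)]

p = partition_for(b"sample-hash-key")
print(p, node_for(p))
```

The simplicity is the point: with the partition count fixed up front, routing needs no coordination. It is also the scheme’s weakness, since changing the partition count means rehashing everything, which is exactly the repartitioning limitation mentioned later.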

In 2015, an improved network layer gave our services a new set of features for communicating with MegaBase, bringing 9k jumbo frames into the picture. A new key-locking feature enabled us to build services that needed to synchronize their operations across the distributed system.
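From a service’s point of view, key locking lets two writers serialize their updates to the same key. The client API below is hypothetical (the real MegaBase interface is not shown in this post), with in-process locks standing in for server-side lock state:

```python
import threading
from contextlib import contextmanager

_locks = {}  # key -> lock; stands in for the server-side lock table
_registry_guard = threading.Lock()

@contextmanager
def locked_key(key):
    # Acquire the per-key lock, release it when the block exits.
    with _registry_guard:
        lock = _locks.setdefault(key, threading.Lock())
    lock.acquire()
    try:
        yield
    finally:
        lock.release()

store = {}

def synchronized_increment(key):
    # Read-modify-write is safe only while the key is held.
    with locked_key(key):
        store[key] = store.get(key, 0) + 1

threads = [threading.Thread(target=synchronized_increment, args=("hits",))
           for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(store["hits"])  # 10: no lost updates
```

In a real distributed setting the lock would live on the leader and carry a timeout or lease so a crashed client cannot hold a key forever; this sketch only shows the shape of the API.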

In 2018, MegaBase reached its peak at 17 servers. Funnily enough, we never implemented any tool for measuring MegaBase performance statistics; all production cluster performance was measured from the logs of the services that updated and read the data. What we could see in those logs was a throughput of 12 billion read requests daily (single and range), which translates to 90 billion key reads (about 1 million per second), and 18 million updates daily (single and batch), which translates to 2 billion key updates (about 23 K per second).
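The per-second figures follow directly from the daily totals divided by the 86,400 seconds in a day:

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86_400

key_reads_per_day = 90e9
key_updates_per_day = 2e9

print(round(key_reads_per_day / SECONDS_PER_DAY))    # 1041667, i.e. ~1 million reads/s
print(round(key_updates_per_day / SECONDS_PER_DAY))  # 23148, i.e. ~23 K updates/s
```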

MegaBase lacked many features that other databases provided at the time. There was no simple way to repartition the data, and there wasn’t even a failover node in case the leader crashed. But even with this limited flexibility, MegaBase was the fastest and cheapest possible database for our use case, and it perfectly kept pace with our growth in data size while keeping read and write latency to a minimum.


Our database served us well, but we were still very aware of the issues around it and the lack of some basic features. The risk of maintaining our own datastore was growing every day. It’s not like we could find MegaBase DBA experts on the street, so we had to train a specialized team just to build features around the database. At the time, ReversingLabs was gaining more and more customers and investors, and the pressure of keeping a four-nines SLA was beginning to catch up with us. ReversingLabs had finally outgrown MegaBase.

We focused on exploring the database market. One of the databases we found interesting was ScyllaDB. Scylla seemed to have everything we were missing, and we found its leaderless architecture very interesting. But Scylla was very young at the time, and investing the cost of a migration into such an early release of a new technology seemed too risky.

The quest for the technology that fit our needs did not stop. We were constantly looking at new databases on the market, monitoring the progress of the ones we explored earlier.

In a few years, ScyllaDB would finally prove to have the stability we were looking for.

The new beginnings

It took another 4 years of exploring before we decided to stop MegaBase development. Having the leader node as a single point of failure was a risk we could no longer take. We had to decide whether to invest additional time in developing a multi-leader architecture around MegaBase and introducing green threads, or to go with a different database with enterprise support. By that point, we had matured enough as a company to choose the more stable path. We finished all the PoCs needed to choose ScyllaDB as our alternative solution. It was a huge step for us. It was finally time to close that chapter and focus our resources on our core business. Nevertheless, the work on our in-house database and cloud technology will always be a major part of the ReversingLabs success story.

Now it’s time for the biggest migration in the history of our cloud. Seamlessly migrating more than 300 TB of data, more than 400 services, and all their data models, with ZERO downtime, was the next challenge to conquer.

