Crypticle was designed to scale out of the box at the click of a button. Until a few days ago, this was purely theoretical, so it was important to test it out on a real cluster to validate our assumptions.

Test overview

The goal of this test was not to measure resource consumption (e.g. CPU, memory and bandwidth); instead, it was to measure the number of database read and write operations to see how they were affected by sharding.

Based on the total number of reads and writes made against specific servers, we can infer the relative resource consumption of the RethinkDB database service. Note that Crypticle is made up of a number of sub-services; the RethinkDB service was chosen because it was identified as being the bottleneck of the overall system. If we can prove linear scalability of the RethinkDB service, then we can safely infer that the system as a whole can scale linearly as we add more hosts to the cluster.

As part of this test, the Kubernetes cluster was made up of 3 t2.medium instances running on Amazon EC2 and managed using Rancher.

Note that Crypticle enforces a configurable maxTransactionSettlementsPerAccount property, which allows us to rate-limit the number of transactions per settlement cycle on a per-account basis (the default is 10 transactions per cycle). The interval of the settlement cycle is also configurable and defaults to 5 seconds.

For this test, all the defaults were kept; this artificially limited the rate of transactions that could be processed per account.
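
To make these two settings concrete, here is a minimal TypeScript sketch of how such a configuration might be expressed. The maxTransactionSettlementsPerAccount name and the defaults (10 transactions per cycle, 5-second cycle) come from the description above; the settlementInterval name and the surrounding shape are assumptions for illustration only.

    // Minimal sketch of the two rate-limiting knobs described above.
    // maxTransactionSettlementsPerAccount and its default of 10 are taken from
    // the text; settlementInterval is a hypothetical stand-in for whatever
    // Crypticle actually calls its settlement cycle interval option.
    interface SettlementConfig {
      // Maximum transactions settled per account in each settlement cycle.
      maxTransactionSettlementsPerAccount: number;
      // Length of a settlement cycle in milliseconds (hypothetical name).
      settlementInterval: number;
    }

    const defaultSettlementConfig: SettlementConfig = {
      maxTransactionSettlementsPerAccount: 10, // default used for this test
      settlementInterval: 5000                 // 5 seconds, the default
    };

With these defaults, a single account can settle at most 10 transactions every 5 seconds, which is why the per-account transaction rate in this test was effectively capped at around 2 transactions per second.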

Test methodology

With 1 shard

  1. Start with a single shard for all database tables and a single RethinkDB server/replica in the cluster.
  2. Open the RethinkDB admin panel to the screen which shows the chart with the number of reads/writes for the Transaction table.
  3. From a regular account logged into the Crypticle control panel, send 100 transactions to 100 different accounts via Crypticle's WebSocket API using the Chrome developer console (e.g. using a for loop; see the rough sketch after this list).
  4. Check the number of reads and writes over time which are made against the Transaction table on the RethinkDB server and take a screenshot of the chart.
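
The sketch below is a rough illustration of what step 3 might look like when run from the Chrome developer console (written here as TypeScript). The socket object, the 'transfer' procedure name, the recipient account list and the payload fields are all assumptions; the real Crypticle WebSocket API may use different names.

    // Rough sketch of step 3: send 100 transactions to 100 different accounts.
    // The `socket` object, the 'transfer' RPC name and the payload shape are
    // hypothetical placeholders for the actual Crypticle WebSocket API.
    declare const socket: {
      invoke(procedureName: string, data: object): Promise<unknown>;
    };

    // Hypothetical list of 100 recipient account IDs prepared in advance.
    declare const recipientAccountIds: string[];

    async function sendTestTransactions(): Promise<void> {
      for (let i = 0; i < 100; i++) {
        await socket.invoke('transfer', {
          toAccountId: recipientAccountIds[i],
          amount: '1' // token amount; units depend on the configured asset
        });
      }
    }

    sendTestTransactions().catch((err) => console.error(err));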

With 3 shards

  1. From our Kubernetes control panel, scale the RethinkDB service out to 3 replicas/servers.
  2. From the RethinkDB admin control panel, configure both the Transaction and Account tables to each have 3 shards (the equivalent ReQL command is sketched after this list).
  3. Open the RethinkDB admin panel to the screen which shows the chart with the number of reads/writes for the Transaction table for any one of the available servers/shards (there should be 3 to choose from).
  4. From a regular account logged into the Crypticle control panel, send 100 transactions to 100 different accounts via Crypticle's WebSocket API using the Chrome developer console (e.g. using a for loop).
  5. Check the number of reads and writes over time which are made against the Transaction table on one of the RethinkDB servers and take a screenshot of the chart.
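
For step 2, resharding can also be done with ReQL's reconfigure command instead of the web UI. The sketch below uses the official RethinkDB JavaScript driver (via TypeScript) and assumes the default database, default connection settings and one replica per shard, which may not match the actual test cluster.

    // Alternative to the web UI for step 2: reshard both tables with ReQL.
    // Assumes the official 'rethinkdb' driver, the default database and one
    // replica per shard; adjust host, database and replica count as needed.
    import * as r from 'rethinkdb';

    async function reshardTables(): Promise<void> {
      const conn = await r.connect({ host: 'localhost', port: 28015 });
      for (const tableName of ['Transaction', 'Account']) {
        // Split the table across 3 shards, keeping one replica per shard.
        await r.table(tableName).reconfigure({ shards: 3, replicas: 1 }).run(conn);
      }
      await conn.close();
    }

    reshardTables().catch((err) => console.error(err));

The same two reconfigure calls can also be pasted into the RethinkDB Data Explorer, minus the connection boilerplate.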

Test results

With 1 shard

[Chart: reads/writes over time on the Transaction table, no sharding]

  • The number of writes tends to peak at around 40.
  • The number of reads tends to peak at around 100.

With 3 shards

[Chart: reads/writes over time on the Transaction table, three shards]

  • The number of writes tends to peak at around 14.
  • The number of reads tends to peak at around 30.
  • Compared to the 1-shard scenario above, there are fewer spikes for both reads and writes.

Interpretation of the results

The results above are in line with our expectation of linear scalability. With three shards, we expect each shard to handle roughly one third of all writes and reads, i.e. about 40 / 3 ≈ 13.3 writes and 100 / 3 ≈ 33.3 reads; this is consistent with the results from the charts above.

Note that according to RethinkDB documentation, a table may have up to 64 shards. It is not clear whether this is a hard or soft limit, but in any case that should allow Crypticle to handle a significant number of transactions.

It may be worthwhile to repeat this test with an increasing number of shards/servers to confirm that our assumption of linear scalability holds for shard counts greater than 3. Tests carried out by the RethinkDB team seem to support this; see this report.