📈 Performance testing

We run performance tests for several reasons:

Dev: Ensures there's no degradation due to additional code and that the build remains stable. In the case of a performance feature, this type of testing verifies that we achieve the expected performance gain.

Dev tests are a combination of regression tests and larger benchmarks.

Stress test: Running our code under stress for an extended period ensures there's no additional degradation over time.

Competitive analysis: Enables us to compare our performance to that of our competitors and demonstrate our achievements.

For each test we run, it's important to understand the reason for running that specific test, in addition to its configuration and I/O pattern.

Test coverage

When testing, we aim for maximum coverage. This includes variations in configurations and I/O patterns.

Terminology

This page uses the terms below; the db_bench sketch after the list shows roughly how they map to flags.

For the default configuration key size of 16B, the following value sizes are used:

  • Small obj – 64B

  • Large obj – 1000B

  • Small DB – Below the instance RAM, <100GB (for a smaller instance, consider a correspondingly smaller DB)

  • Large DB – Above the instance RAM, >150GB

  • Huge DB – A factor larger than a large DB

  • db_bench benchmark, or benchmark – a set of tests running on a specific configuration

  • Test – a single result within a benchmark
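As a rough illustration, the object and DB sizes above map onto standard db_bench sizing flags. In the sketch below, the flag names are stock db_bench options, while the key counts and DB paths are hypothetical examples chosen only to land below or above the 122GB instance RAM:

```sh
# Hypothetical sizing examples (key counts and paths are illustrative).
# Small obj, small DB: ~40GB of user data, well below 122GB RAM.
./db_bench --key_size=16 --value_size=64 --num=500000000 \
  --benchmarks=fillrandom --db=/mnt/nvme/small_obj_small_db

# Large obj, large DB: ~200GB of user data, above 150GB.
./db_bench --key_size=16 --value_size=1000 --num=200000000 \
  --benchmarks=fillrandom --db=/mnt/nvme/large_obj_large_db
```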

Performance instance

  • Type: i3.4xlarge

  • Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz

  • 16 cores

  • RAM 122GB

Current test configurations

  • Large obj, small DB

  • Small obj, small DB

  • Large obj, large DB

  • Large obj, huge DB – in this configuration, multiple column families (CFs) are also tested

It's possible to run tests in other configurations as well; a sketch of iterating over the configurations above follows.
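A minimal sketch of such an iteration, assuming each configuration is reduced to a (value_size, num) pair; the key counts and paths are hypothetical and would be tuned so the DB lands below or above the 122GB instance RAM:

```sh
#!/bin/bash
# Sketch: run a fillup for each test configuration.
# Key counts are illustrative, not the values used in our runs.
declare -A configs=(
  [large_obj_small_db]="1000 80000000"     # ~80GB
  [small_obj_small_db]="64 500000000"      # ~40GB
  [large_obj_large_db]="1000 200000000"    # ~200GB
  [large_obj_huge_db]="1000 1000000000"    # ~1TB
)
for name in "${!configs[@]}"; do
  read -r vsize num <<< "${configs[$name]}"
  ./db_bench --key_size=16 --value_size="$vsize" --num="$num" \
    --benchmarks=fillrandom --db="/mnt/nvme/$name"
done
```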

Test IO patterns

Currently, each benchmark includes a fillup plus 10 patterns, among them:

  1. Fillup 100% random writes

  2. 100% random reads

  3. 100% random writes (rewrite)

  4. Mixed load of reads and writes

  5. Seeks

The tests are run in order, so the state of the DB is affected by the previous tests; one way to express this ordering is sketched below.
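With stock db_bench, a single invocation with a comma-separated benchmark list runs the tests in sequence against the same DB. The mapping of our pattern names to db_bench benchmark names is our assumption, and the sizing flags are illustrative:

```sh
# fillrandom            -> fillup (100% random writes)
# readrandom            -> 100% random reads
# overwrite             -> 100% random writes over existing keys (rewrite)
# readrandomwriterandom -> mixed reads/writes (ratio set by --readwritepercent)
# seekrandom            -> seeks
./db_bench \
  --benchmarks=fillrandom,readrandom,overwrite,readrandomwriterandom,seekrandom \
  --readwritepercent=90 \
  --key_size=16 --value_size=1000 --num=200000000 \
  --db=/mnt/nvme/perf_db
```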

Performance optimization settings

We added the following settings to our db_bench testing to reduce the CPU bottlenecks on compaction (they are expressed as flags in the sketch after this list):

  • max_background_flushes=4

    • Allows better handling of faster workloads

  • max_write_buffer_number=4

    • Allows better handling of faster workloads

  • write_buffer_size=265MB

    • In some cases, improves memory handling

  • bloom_bits=10
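Expressed as db_bench flags, the settings above look roughly as follows. Note that db_bench takes write_buffer_size in bytes; the benchmark selection and sizing flags here are illustrative:

```sh
./db_bench \
  --max_background_flushes=4 \
  --max_write_buffer_number=4 \
  --write_buffer_size=$((265 * 1024 * 1024)) \
  --bloom_bits=10 \
  --benchmarks=fillrandom,readrandom \
  --key_size=16 --value_size=64 --num=100000000 \
  --db=/mnt/nvme/perf_db
```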

Performance test results

The performance benchmark results are uploaded to a dashboard (AKA web admin) and compared against a baseline, either in graphs or in tables. We aim for simple results in the form of:

  • Better – Improvement of >X% across all benchmarks & tests, with memory & disk space usage the same or less

  • Same – Within ±X% across all benchmarks & tests, including memory & disk space

  • Degraded – >X% worse across all benchmarks & tests, with memory & disk space usage the same or more

  • Inconclusive – Improvement in some tests and degradation in others

In addition to the IOPS results, we monitor and report memory usage, disk space consumption, and CPU stats (although at this stage, we don't evaluate them).
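A minimal sketch of the classification rule above, assuming per-test results are available as lines of "test_name baseline_ops current_ops" and a threshold X in percent; it covers only the throughput dimension and omits the memory and disk space conditions (the input format and threshold are assumptions):

```sh
#!/bin/bash
# classify.sh <results_file>
# Each input line: test_name baseline_ops_per_sec current_ops_per_sec
X=5  # hypothetical threshold in percent
awk -v x="$X" '
{
  delta = ($3 - $2) / $2 * 100   # percent change vs. baseline
  if (delta > x)       better++
  else if (delta < -x) worse++
  else                 same++
}
END {
  if (NR == 0)           print "No data"
  else if (better == NR) print "Better"
  else if (worse == NR)  print "Degraded"
  else if (same == NR)   print "Same"
  else                   print "Inconclusive"
}' "$1"
```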

Additional parameter tests

We run the same tests with different parameters to increase coverage; a sweep sketch follows the list.

Suggested:

  1. Compression - off/LZ4/Snappy

  2. WAL (redo log) – off/on

  3. Number of threads – 50/16/4/1

  4. Optional features – enabled/disabled
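A sketch of such a sweep using stock db_bench options (--compression_type, --disable_wal, --threads); the benchmark selection, sizing, and paths are illustrative, and the project-specific optional feature toggle is omitted since its flag depends on the feature:

```sh
#!/bin/bash
# Sweep compression, WAL, and thread count over the same benchmark set.
for comp in none lz4 snappy; do
  for wal_off in 0 1; do
    for threads in 50 16 4 1; do
      ./db_bench \
        --benchmarks=fillrandom,readrandom \
        --compression_type="$comp" \
        --disable_wal="$wal_off" \
        --threads="$threads" \
        --key_size=16 --value_size=64 --num=100000000 \
        --db="/mnt/nvme/sweep_${comp}_wal${wal_off}_t${threads}"
    done
  done
done
```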
