Testing Procedure

The testing procedure is straightforward. After the initial preparations the test enters a loop that repeats until the SSD unit fails or the static data is corrupted.

  1. Write 16GB of random data to the SSD. Each run alternates between sequential and random writes, as this is a more realistic and demanding usage pattern. Block sizes range from 4kB to 128kB.

    Random writes put random data at randomly selected positions within the allocated file (a sketch of such a pass is shown after this list). This is important because many tests use a random write pattern that fills the whole file without ever overlapping data, which is no challenge for the SSD's controller. Allowing overlapping writes stresses the NAND and the wear-leveling algorithms, since some blocks have to be rewritten, causing higher write amplification.
    The correct total number of bytes is still being written.


  2. Report the number of bytes written and SMART data to our server for further processing.

  3. Every 8th iteration, i.e. after another 128GB written, the consistency of the static data is verified using checksums and the result is reported back to the server. The speed of the checksum verification process is shown as the average read speed.

  4. Start over with step 1.
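
To make the overlapping random-write pass in step 1 more concrete, here is a minimal Python sketch of the idea. It is not our actual test tool; the 16GB target and the 4-128kB block sizes come from the description above, while the file handling details are illustrative assumptions.

    import os
    import random

    TARGET_BYTES = 16 * 1024**3                      # one loop iteration writes 16GB
    BLOCK_SIZES = [kb * 1024 for kb in (4, 8, 16, 32, 64, 128)]

    def random_write_pass(path, file_size, target_bytes=TARGET_BYTES):
        """Write random data at random offsets, deliberately allowing overlaps."""
        written = 0
        with open(path, "r+b") as f:
            while written < target_bytes:
                block = os.urandom(random.choice(BLOCK_SIZES))
                # Overlaps with earlier writes are intentional: they force the
                # controller to rewrite blocks, increasing write amplification.
                offset = random.randrange(0, file_size - len(block))
                f.seek(offset)
                f.write(block)
                written += len(block)
            f.flush()
            os.fsync(f.fileno())                     # push the data out to the SSD
        return written

A sequential pass simply advances the offset linearly instead of picking it at random; the same amount of data is written either way.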

Initial Preparations

The SSD is partitioned to fill up all available space after correct alignment. The partition is formatted as ext2 and filled to 50% with random static data. A checksum of the static data is calculated and stored for verifying data consistency throughout the test. TRIM is not used. The initial SMART data is reported to the server before the main test loop is started.
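
The checksum step could look roughly like the sketch below. Which checksum algorithm is used is not specified above, so SHA-256 is assumed here, and the file path is hypothetical. The same routine is reused for the verification in step 3 of the test loop.

    import hashlib

    def checksum(path, chunk_size=1024 * 1024):
        """Return the hex digest of the file's contents, read in 1MB chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Calculated once after preparation, then compared every 8th iteration.
    REFERENCE = checksum("/mnt/ssd/static.bin")      # hypothetical path

    def static_data_intact(path="/mnt/ssd/static.bin"):
        return checksum(path) == REFERENCE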

When an SSD Fails

An SSD is considered failed when the static data is corrupted, the unit stops working, or its performance is permanently degraded below a usable level.

If SSD performance is seriously reduced, we let the drive recover during a minimum of 6 hours of idle time. If the performance issues remain, we will try to contact the manufacturer.

The reported SMART data is what we base our conclusions on.
If a drive fails prematurely without showing any pre-failure warning signs, we will not simply write the disk off as failed; instead we will try to test another unit of the same model. Evaluating the pre-failure information in the SMART data is the main purpose of this test.
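
For reference, SMART attributes can be collected with smartctl from smartmontools; the sketch below shows roughly how the reported values could be gathered before being sent to the server. The device path is an assumption, and the attribute table format varies between drives and smartctl versions, so the parsing is deliberately simplistic.

    import subprocess

    def read_smart_attributes(device="/dev/sda"):
        """Return {attribute_name: raw_value} parsed from 'smartctl -A <device>'."""
        out = subprocess.run(
            ["smartctl", "-A", device],
            capture_output=True, text=True, check=False,
        ).stdout
        attrs = {}
        for line in out.splitlines():
            fields = line.split()
            # Attribute rows start with a numeric ID; the raw value comes last.
            if len(fields) >= 10 and fields[0].isdigit():
                attrs[fields[1]] = fields[-1]
        return attrs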

Is this really normal?

No. The tests performed here must not be confused with normal use. They are designed to stress SSDs and simulate an extremely busy environment.
Normal workstation use is more like 10-20GB written daily.

Why Test Smaller SSDs?

Smaller SSDs use fewer NAND circuits and will wear out faster. Once we know the write endurance of these units, it is easy to estimate equivalent figures for larger units: all else being equal, a model with four times the capacity and the same NAND should endure roughly four times as many writes.
