Need help: fast HDDs in RAID 0 - low throughput!
  • After working on this problem for two weeks and getting nowhere... maybe someone here has a clue.

    1) Setup:

    • Fast PC (Threadripper, 64GB RAM, system on M.2 SSD, Windows 10 64bit,...)
    • 2x HGST He10 HDDs (10 TB, 7200rpm, 200+ MB/s sustained read/write speed!, SATA 6Gb/s)
    • Test file: 50 GB folder with a CDNG image sequence from the BMPCC 4K (each image is about 8 MB)

    2) Pre-Test:

    I tested both HDDs individually => sustained sequential read/write speeds close to 250 MB/s (empty disk) for the 50 GB folder
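    As a sanity check on these numbers: ideal RAID 0 sequential throughput is roughly the number of drives times the slower drive's speed. A minimal sketch, using the ~250 MB/s pre-test figures above:

    ```python
    # Rough sanity check: ideal RAID 0 sequential throughput is about
    # N x the slowest member drive (striping alternates between disks).
    def ideal_raid0_throughput(drive_speeds_mbs):
        """Expected sequential MB/s for striped drives under an ideal model."""
        return len(drive_speeds_mbs) * min(drive_speeds_mbs)

    # Pre-test numbers: both He10 drives measured ~250 MB/s when empty.
    print(ideal_raid0_throughput([250, 250]))  # 500 (ideal, before overhead)
    ```

    So anything sustained below ~250 MB/s on the striped volume means the array is performing worse than one of its members.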

    3) The goal:

    I want to RAID 0 those two 10 TB drives to get higher sustained transfer speeds, especially when the disks approach 50% capacity (I need around 300 MB/s).

    Video files (stored on another HDD) will be copied to the RAID for the time I work on them in DaVinci Resolve, only to deliver the data to Resolve. Rendering results will be stored on an SSD (so there is no risk of losing data even if the RAID is totally destroyed). I need at least 10 TB of fast storage, as the RAW CDNG image sequences in 4K are quite large - and I don't have the money to do the same with SSDs.
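    For scale on the 50%-capacity requirement: HDD sequential speed falls from the outer tracks inward, roughly in proportion to track radius. A toy model (the 0.5 inner-to-outer radius ratio is an assumption, not a He10 spec) suggests a healthy striped pair should still clear 300 MB/s at half fill:

    ```python
    import math

    def hdd_speed_at_fill(v_outer_mbs, fill, inner_outer_ratio=0.5):
        """Toy model: sequential speed vs. fill fraction for one HDD.
        Per-track capacity scales with radius r, so cumulative capacity
        scales with r^2 and speed scales with r.  The radius reached after
        filling fraction `fill` of the disk (outer tracks first) is
        r(f) = sqrt(1 - f * (1 - ratio^2)) in units of the outer radius."""
        r = math.sqrt(1 - fill * (1 - inner_outer_ratio ** 2))
        return v_outer_mbs * r

    single = hdd_speed_at_fill(250, 0.5)   # one drive at 50% full
    print(round(2 * single))               # striped pair: prints 395
    ```

    So the 300 MB/s target at 50% capacity is plausible for this pair - if the striping itself behaves.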

    4) The Problem:

    When I convert the two drives in Windows 10 to one "striped volume" I get a nice 20 TB software RAID 0, but when I test the transfer speed by simply copying the 50 GB folder I get 200 MB/s MAXIMUM transfer speed (that's even below a single drive!). The throughput starts at 400+ MB/s for the first 2-3 seconds, filling up the disk caches and some system memory, and then plummets to around 100-200 MB/s, where it stays for the rest of the time.
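    Copying a folder in Explorer mixes OS cache effects into the measurement, which is exactly the 400+ MB/s spike described above. A minimal sketch of a sustained-write test that fsyncs every chunk, so the cache cannot absorb the test and inflate the number (it writes a small throwaway file, not the real 50 GB set):

    ```python
    import os, tempfile, time

    def sustained_write_mbs(path, total_mb=256, chunk_mb=8):
        """Write total_mb in chunk_mb pieces, fsync'ing each chunk so
        the OS write cache can't hide the drive's real sustained speed."""
        chunk = b"\0" * (chunk_mb * 1024 * 1024)
        start = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(total_mb // chunk_mb):
                f.write(chunk)
                f.flush()
                os.fsync(f.fileno())
        elapsed = time.perf_counter() - start
        os.remove(path)
        return total_mb / elapsed

    # Point `path` at a file on the striped volume to test the array.
    tmp = os.path.join(tempfile.gettempdir(), "raid_write_test.bin")
    print(f"{sustained_write_mbs(tmp, total_mb=64):.0f} MB/s")
    ```

    Run with a file size well above RAM (or keep the per-chunk fsync) so the steady-state figure, not the cache burst, is what gets reported.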

    5) I tried:

    Different stripe sizes, an (old) hardware RAID controller (even worse results, 150 MB/s max), checking for new system drivers (the Windows HDD disk drivers seem to be from 2006 and are only 32 bit!), checking the HGST website for information, and of course searching the internet for many, many hours (either it's a different problem, or there is never a solution posted).

    I can't try/use the onboard RAID controller of the motherboard (because of Windows bullshit and I can't do a fresh system installation at the moment).
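    One thing the stripe-size experiments can rule out with simple arithmetic: with ~8 MB frames, every realistic stripe size splits each file across both drives many times over, so both disks are always engaged and stripe size alone is unlikely to explain a below-single-drive result. A quick check (the stripe sizes listed are illustrative, not necessarily what Windows chose):

    ```python
    def stripes_per_file(file_bytes, stripe_kb):
        """How many stripe units one file occupies (ceiling division)."""
        stripe = stripe_kb * 1024
        return -(-file_bytes // stripe)

    frame = 8 * 1024 * 1024  # ~8 MB CDNG frame
    for stripe_kb in (64, 128, 256, 1024):
        print(stripe_kb, "KB stripes:", stripes_per_file(frame, stripe_kb))
    # 64 KB -> 128 stripes, 128 KB -> 64, 256 KB -> 32, 1024 KB -> 8
    ```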

    Why isn't it working?! (I thought setting up a RAID 0 would be nothing special.)

    UPDATE: After some fiddling with Windows and a system reboot, read performance is as high as it should be - only write performance is still bad.

  • 7 Replies
  • What is the exact chipset and installed drivers for it?

    Normally you need to check motherboard manufacturer site for chipset drivers, BIOS settings and such.

  • @Vitaliy_Kiselev I'm not using the motherboard RAID controller, it's not a hardware RAID. I'm using Windows software RAID ("striped volume"), which shouldn't require any motherboard drivers.

  • @Psyco

    Normally you need to have proper chipset drivers for this, as well as BIOS support.

    Maybe this "Windows" approach works, but I have not seen it used, and most probably it causes extreme CPU load.

  • I'm not using the chipset. No RAID in BIOS, no NVMe. Just two SATA HDDs.

    I don't see any "extreme" CPU load when writing to the RAID, and the CPU is not doing much when using Resolve anyway.

  • @Psyco

    Same Ryzen drivers and RaidXpert2 are used for SATA.

    Try something common, saves time at least.

  • I can't install those RAID drivers, as I can't switch my system to UEFI boot at the moment (Windows was - for some stupid reason - installed in legacy mode, and I can't migrate to UEFI mode because Windows messed up the boot partition).

    But, again, I'm not using the onboard RAID controller, I want to do a software RAID.

    (CPU load when using the RAID is about 5-10% on 1 core, so it really doesn't matter.)


    After fiddling around with the Windows disk drivers (I'm not sure exactly what I did, but at one point it said "install/update driver" - it didn't download any new drivers, it just used something already there) and a system reboot, I get better read performance:

    write to RAID: around 200 MB/s (nothing new here)

    read from RAID: around 500 MB/s (theoretical limit, perfect!)

    So this setup will work for my use case... but still, what is going wrong when writing to the RAID?

    My guess: there is some problem with Windows working against the modern/high-end cache system of these large HDDs.