Low-end hardware RAID vs Software RAID
Solution 1:
A 10-20$ "hardware" RAID card is nothing more than a opaque, binary driver blob running a crap software-only RAID implementation. Stay well away from it.
A $200 RAID card offers proper hardware support (i.e. a RAID-on-Chip controller running another opaque binary blob, which at least is better and does not run on the main host CPU). I suggest staying away from these cards as well: lacking a writeback cache, they do not provide any tangible benefit over a software RAID implementation.
A $300-400 RAID card offering a power-loss-protected writeback cache is worth buying, but not for a small, Atom-based PC/NAS.
In short: I strongly suggest using Linux software RAID. Another option to consider seriously is a mirrored ZFS setup, but with an Atom CPU and only 4 GB of RAM, do not expect high performance.
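As a minimal sketch of the software RAID route with mdadm (the array name /dev/md0 and the device names /dev/sda1 and /dev/sdb1 are placeholders, adjust them to your own disks):

    # Create a two-disk RAID 1 (mirror) array from two partitions
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

    # Watch the initial resync and check the array state
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # Record the array so it is assembled automatically at boot
    # (the config file path varies by distribution)
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf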
For more information, read here.
Solution 2:
Go ZFS. Seriously. It's so much better than hardware RAID, and the reason is simple: it uses variable-size stripes, so its parity RAID modes (RAID-Z1 and RAID-Z2, the RAID 5 and RAID 6 equivalents) perform at RAID 10 level while remaining extremely cost-efficient. On top of that, you can use flash caching (a ZIL/SLOG device, L2ARC, etc.) running on a dedicated set of PCIe lanes.
https://storagemojo.com/2006/08/15/zfs-performance-versus-hardware-raid/
There's ZFS on Linux, ZoL.
https://zfsonlinux.org/
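As a rough sketch of the layout described above (the pool name "tank" and all device names are placeholders, not anything ZFS requires), a mirrored pool with optional flash cache and log devices is created like so:

    # Create a mirrored pool named "tank"
    zpool create tank mirror /dev/sdb /dev/sdc

    # Optionally add flash: an L2ARC read cache and a separate log
    # device (SLOG) to absorb synchronous writes (the ZIL use case)
    zpool add tank cache /dev/nvme0n1p1
    zpool add tank log /dev/nvme0n1p2

    # Verify pool layout and health
    zpool status tank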
Solution 3:
Here is another argument for software RAID on a cheap system.
Stuff breaks; you know this, and that is why you are using RAID. But RAID controllers also break, as do RAM, processors, power supplies, and everything else, including software. In most failures it is simple enough to replace the damaged component with an equivalent or better one: blow a 100 W power supply, grab a 150 W one and get going. The same goes for most components. However, with hardware RAID there are now three exceptions to this pattern: the RAID controller, the hard drives, and the motherboard (or other upstream component, if the controller is not an expansion card).
Let's look at the RAID card first. Most RAID cards are poorly documented and mutually incompatible: you cannot replace a card from company xyz with one from abc, because they store their on-disk data differently (assuming you can even figure out who made the card to begin with). The only safe solution is to keep a spare RAID card exactly identical to the production one.
Hard drives are not as bad as RAID cards, but since the RAID card has physical connectors to the drives, you must use compatible drives, and significantly larger replacement drives may cause problems. Care is needed when ordering replacements.
Motherboards are typically more troublesome than drives but less so than RAID cards. In most cases just verifying that a compatible slot is available is sufficient, but bootable arrays can be no end of headaches. The way to avoid this problem is an external enclosure, but that is not cheap.
All of these problems can be solved by throwing money at them, but for a cheap system that is not desirable. Software RAID, on the other hand, is immune to most (but not quite all) of these issues because it can use any block device.
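For example, a Linux md array keeps its metadata in a superblock on each member disk, so after moving the disks to a different controller or motherboard the array can be reassembled from the disks alone (device names below are placeholders):

    # Inspect the RAID superblock on any member disk
    mdadm --examine /dev/sdb1

    # Assemble every array found by scanning member superblocks,
    # regardless of which controller the disks now hang off
    mdadm --assemble --scan

    # Confirm the array is back
    cat /proc/mdstat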
The one drawback to software RAID on a cheap system is booting. As far as I know, the only boot loader that supports RAID is GRUB, and it only supports RAID 1, which means your /boot must live on RAID 1; that is not a problem if you are only using RAID 1, and is only a minor inconvenience in most other cases. However, GRUB itself (specifically the first-stage boot block) cannot be stored inside the array. This can be managed by putting a spare copy on each of the other drives, as shown below.
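As a sketch, assuming a BIOS/MBR machine with GRUB 2 and a two-disk RAID 1 (device names are placeholders), that means installing the boot block on every member disk so the machine still boots when either drive dies:

    # Install GRUB's first-stage boot block on each member of the mirror
    grub-install /dev/sda
    grub-install /dev/sdb

    # Regenerate the GRUB configuration (path may vary by distribution)
    grub-mkconfig -o /boot/grub/grub.cfg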