Increase reliability by partitioning disks of different sizes?
What you're describing is a bit of a hack.
ZFS wants full disks of equal size and capability. This is critical for a variety of reasons, but also just makes common sense.
All you'd be doing in the situation you've outlined is adding complexity to the environment and increasing your risk.
Let's put it this way:
If you have 1 TB of data on disk one, you can replicate it to disk two, and you can afford to lose either disk.
If you have 1.5 TB of data on disk two, you can replicate only the first 1 TB of data to disk one. In this scenario, if disk two fails you WILL lose data.
ZFS is very capable, but as the two points above illustrate, mixed-size disk setups buy you little. If you care about reliability and redundancy, treat the second disk as if it were also only 1 TB (a minimal example follows).
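A minimal sketch of that approach, assuming a hypothetical pool name (`tank`) and placeholder device names (substitute your own, ideally stable /dev/disk/by-id paths): a plain two-way mirror, where ZFS sizes the vdev to the smaller disk and the extra 500 GB simply goes unused.

```
# Placeholder pool and device names; ZFS caps the mirror at the
# capacity of the smaller disk, so the extra 500 GB is left unused.
zpool create tank mirror /dev/sdb /dev/sdc

# Confirm the layout and the usable capacity
zpool status tank
zpool list tank
```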
The thought is that if a disk doesn't go totally bust, and only a portion of it fails, I can still access the data?
In theory, this thought is correct. As long as the error occurs on only a single device of your RAIDZ1 vdev, ZFS can and will inform you of the error and correct it, assuming the other devices are error-free.
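That self-healing is exactly what a scrub exercises. A rough sketch, again assuming a pool named `tank`:

```
# Read every block and verify its checksum; on a redundant vdev
# (mirror/RAIDZ), blocks that fail verification are rewritten from
# the intact copies or parity.
zpool scrub tank

# Shows scrub progress plus per-device read/write/checksum error counters
zpool status -v tank
```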
Several things may differ in reality:
- Errors may span partitions, so that two or more devices are affected, which can result in unrecoverable errors or even whole-pool loss (depending on the location and amount of errors). You could use RAIDZ2 or Z3 to mitigate this somewhat, but the underlying problem remains.
- While resilvering a partition, the same physical disk has to read (from two partitions) and write (to one partition) concurrently and randomly. Unless you use Solaris 11.3 with sequential resilvering, this will be very slow. Until the resilver finishes, you are vulnerable to errors on the other partitions, and the longer it takes, the greater your chance of encountering an additional URE. It also places additional load on the drive, increasing the chance of complete drive failure.
- Imagine your 3rd partition (the last one on the 1.5 TB disk) shows enough errors to degrade the pool and call for a replacement. If you cannot add another disk, you cannot do a replacement without a shutdown/export, and even then it is more complicated than usual (a replacement sketch follows this list).
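For comparison, this is roughly what a replacement looks like when a spare device is available; pool and device names below are placeholders, not taken from your setup:

```
# Take the failing member out of service (placeholder names)
zpool offline tank sdc3

# Attach a replacement and start the resilver; in the partition-only
# setup described above there is no spare device to point this at,
# which is why replacement turns into a shutdown/export exercise instead.
zpool replace tank sdc3 sdd1

# Watch the resilver run
zpool status tank
```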
Based on those points, I would advise against this if reliability is your main goal. Assuming a fixed hardware situation, I would do one of the following:
- Use mirrors and lose 500 GB, but gain a simple setup with easy expandability in the future
- Use two separate pools with `copies=2` if you want some resiliency against smaller errors (a whole-disk failure would then only kill 2/5 or 3/5 of your data, compared to losing everything in your proposed setup); see the sketch after this list
- Use other file systems than ZFS if you want to have your cake and eat it, too