Disadvantages of partitioning an SSD?

SSDs do not, I repeat, do NOT work at the filesystem level!

There is no 1:1 correlation between how the filesystem sees things and how the SSD sees things.

Feel free to partition the SSD any way you want (assuming each partition is correctly aligned, and a modern OS will handle all this for you); it will NOT hurt anything, it will NOT adversely affect the access times or anything else, and don't worry about doing a ton of writes to the SSD either. Modern drives are built so that you can write 50 GB of data a day and they will still last 10 years.
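
(As a rough sanity check of that figure, which is a ballpark for a typical modern drive rather than a quote from any particular spec sheet: 50 GB/day × 365 days × 10 years ≈ 182 TB written, which is in the same neighborhood as the total-bytes-written ratings many consumer SSDs are sold with.)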

Responding to Robin Hood's answer,

Wear leveling won't have as much free space to play with, because write operations will be spread across a smaller space, so you "could", but not necessarily will, wear out that part of the drive faster than you would if the whole drive were a single partition, unless you will be performing equivalent wear on the additional partitions (e.g., a dual boot).

That is totally wrong.  You cannot wear out one part of the drive simply because you read/write only to that partition. That is NOT even remotely how SSDs work.

An SSD operates at a much lower level than what the filesystem sees; an SSD works with blocks and pages.

In this case, what actually happens is, even if you are writing a ton of data in a specific partition, the filesystem is constrained by the partition, BUT, the SSD is not. The more writes the SSD gets, the more blocks/pages the SSD will be swapping out in order to do wear leveling. It couldn't care less how the filesystem sees things!  That means, at one time, the data might reside in a specific page on the SSD, but, another time, it can and will be different. The SSD will keep track of where the data gets shuffled off to, and the filesystem will have no clue where on the SSD the data actually are.

To make this even easier: say you write a file on partition 1. The OS tells the filesystem about the storage needs, and the filesystem allocates the "sectors", and then tells the SSD it needs X amount of space. The filesystem sees the file at a Logical Block Address (LBA) of 123 (for example). The SSD makes a note that LBA 123 is using block/page #500 (for example). So, every time the OS needs this specific file, the SSD will have a pointer to the exact page it is using. Now, if we keep writing to the SSD, wear leveling kicks in and decides that what lives in block/page #500 would be better placed in block/page #2300. Now, when the OS requests that same file, and the filesystem asks for LBA 123 again, THIS time the SSD will return block/page #2300, and NOT #500.
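
Here is a minimal sketch of that logical-to-physical indirection (Python; the `SimpleFTL` class, the page numbers, and the page pool size are all made up for illustration, and real firmware involves garbage collection, wear counters, and power-loss protection that this toy ignores):

```python
# Toy model of a page-level Flash Translation Layer (FTL).
# Purely illustrative; not based on any real drive's firmware.

class SimpleFTL:
    def __init__(self, total_pages):
        self.mapping = {}                        # LBA -> physical page
        self.free_pages = list(range(total_pages))
        self.flash = {}                          # physical page -> data

    def write(self, lba, data):
        # The SSD never overwrites a page in place; it grabs a fresh page
        # and simply updates the pointer for this LBA. The old page becomes
        # stale and is reclaimed later.
        page = self.free_pages.pop(0)
        self.flash[page] = data
        old = self.mapping.get(lba)
        self.mapping[lba] = page
        if old is not None:
            self.free_pages.append(old)

    def read(self, lba):
        # The filesystem only ever knows the LBA; the physical page is the FTL's secret.
        return self.flash[self.mapping[lba]]


ftl = SimpleFTL(total_pages=4096)
ftl.write(123, b"file contents v1")
print(ftl.mapping[123])    # some physical page, page 0 in this toy
ftl.write(123, b"file contents v2")
print(ftl.mapping[123])    # a *different* physical page; the OS never notices the move
```

The LBA the filesystem holds on to never changes; only the physical page behind it does, which is exactly the shuffling described above.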

Like hard drives, NAND-flash SSDs are sequential access, so any data you write/read from the additional partitions will be farther away than it "might" have been if it were written in a single partition, because people usually leave free space in their partitions. This will increase access times for the data that is stored on the additional partitions.

No, this is again wrong!  Robin Hood is thinking things out in terms of the filesystem, instead of thinking about how an SSD actually works. Again, there is no way for the filesystem to know how the SSD stores the data. There is no "farther away" here; that exists only in the eyes of the filesystem, NOT in the way an SSD actually stores information. It is possible for the SSD to have the data spread out across different NAND chips, and the user will not notice any increase in access times. Heck, due to the parallel nature of the NAND, it could even end up being faster than before, but we are talking nanoseconds here; blink and you'll miss it.

Less total space increases the likelihood of writing fragmented files, and while the performance impact is small, keep in mind that it's generally considered a bad idea to defragment a NAND-flash SSD, because it will wear down the drive. Of course, depending on what filesystem you are using, some result in extremely low amounts of fragmentation, because they are designed to write files as a whole whenever possible rather than dump them all over the place to create faster write speeds.

Nope, sorry; again this is wrong. The filesystem's view of files and the SSD's view of those same files are not even remotely close. The filesystem might see the file as fragmented in the worst case possible, BUT, the SSD's view of the same data is almost always optimized.

Thus, a defragmentation program would look at those LBAs and say, this file must really be fragmented!  But, since it has no clue as to the internals of the SSD, it is 100% wrong. THAT is the reason a defrag program will not work on SSDs, and yes, a defrag program also causes unnecessary writes, as was mentioned.
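
A contrived illustration of why the defragmenter's picture is meaningless (Python; the specific LBAs and page numbers are invented, and a real FTL mapping would be far larger): a file whose logical blocks look badly scattered can be sitting in perfectly consecutive physical pages, or striped across NAND dies, and no tool on the host can tell.

```python
# Hypothetical mapping for one file whose filesystem extents look "fragmented".
lba_to_physical_page = {
    123:  2300,   # extent 1
    907:  2301,   # extent 2
    15:   2302,   # extent 3
    4001: 2303,   # extent 4
}

logical_view  = sorted(lba_to_physical_page)            # what a defrag tool sees
physical_view = sorted(lba_to_physical_page.values())   # what the NAND actually holds

print("Logical blocks: ", logical_view)    # [15, 123, 907, 4001] -> "fragmented!"
print("Physical pages: ", physical_view)   # [2300, 2301, 2302, 2303] -> contiguous
```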

The article series Coding for SSDs is a good overview of what is going on if you want to be more technical about how SSDs work.

For some more "light" reading on how FTL (Flash Translation Layer) actually works, I also suggest you read Critical Role of Firmware and Flash Translation Layers in Solid State Drive Design (PDF) from the Flash Memory Summit site.

They also have lots of other papers available, such as:

  • Modeling Flash Translation Layers to Enhance System Lifetime (PDF)
  • Leveraging host based Flash Translation Layer for Application Acceleration (PDF)

Another paper on how this works: Flash Memory Overview (PDF).  See the section "Writing Data" (pages 26-27).

If video is more your thing, see An efficient page-level FTL to optimize address translation in flash memory and related slides.


Very long answers here, when the answer is simple enough and follows directly from general knowledge of SSDs. One does not need to do more than read the Wikipedia article on Solid-state drive to understand the answer, which is:

The advice "DO NOT PARTITION SSD" is nonsense.

In the (now distant) past, operating systems did not support SSDs very well, and in particular, when partitioning, they did not take care to align the partitions to the size of the erase block.

This lack of alignment, when an OS logical disk sector was split between physical SSD blocks, could require the SSD to erase and rewrite two physical blocks when the OS only intended to update one, thus slowing disk access and increasing wear.

Today SSDs are much larger, and operating systems know all about erase blocks and alignment, so the problem no longer exists. Maybe this advice was once meant to avoid partition alignment errors, but today such errors are all but impossible.
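
The old alignment problem really was just arithmetic: a partition is safe if its starting byte offset is a multiple of the erase block (or at least the NAND page) size. A throwaway check in Python, using an assumed 1 MiB boundary, which is the default most modern partitioning tools aim for precisely because it is a multiple of common page and erase-block sizes:

```python
SECTOR_SIZE = 512                 # bytes per logical sector (traditional value)
ALIGNMENT   = 1024 * 1024         # 1 MiB boundary (assumption; covers common erase-block sizes)

def is_aligned(start_sector):
    return (start_sector * SECTOR_SIZE) % ALIGNMENT == 0

print(is_aligned(2048))   # True  -- the modern default (partition starts at 1 MiB)
print(is_aligned(63))     # False -- the old CHS-era default that caused the trouble
```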

In fact, the argument for partitioning SSDs is today exactly the same as for classical disks:
To better organize and separate the data.

For example, installing the operating system on a separate and smaller partition is handy for taking a backup image of it as a precaution when making large updates to the OS.


There are no drawbacks to partitioning an SSD, and you can actually extend its life by leaving some unpartitioned space.

Wear leveling is applied across all the blocks of the device (see the HP white paper linked below):

In static wear leveling, all blocks across all available flash in the device participate in the wear-leveling operations. This ensures all blocks receive the same amount of wear. Static wear leveling is most often used in desktop and notebook SSDs.

From that, we can conclude that partitions don't matter for wear leveling. This makes sense because, from the drive and controller point of view, partitions don't really exist. There are just blocks and data. Even the partition table is written to the same blocks (the 1st block of the drive for MBR). It's the OS which then reads the table and decides which blocks to write data to and which not. The OS sees blocks using LBA, which gives a unique number to each block. However, the controller then maps each logical block to an actual physical block, taking the wear-leveling scheme into consideration.
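
To illustrate that partitions are invisible to wear leveling, here is a tiny simulation (Python; the block count, the write count, and the "always pick a least-worn block" policy are crude made-up simplifications, not how any real controller works): even if every write logically targets a single partition, the controller keeps drawing physical blocks from the one pool shared by the whole device, so wear stays even everywhere.

```python
import random

erase_counts = [0] * 1000   # hypothetical device: 1000 physical blocks, shared by every partition

def physical_block_for_write():
    # Crude static wear leveling: always pick one of the least-worn blocks
    # anywhere on the device; partition boundaries never enter into it.
    least_worn = min(erase_counts)
    blk = random.choice([i for i, c in enumerate(erase_counts) if c == least_worn])
    erase_counts[blk] += 1
    return blk

# Simulate heavy writing that is logically confined to "partition 1" only.
for _ in range(5000):
    physical_block_for_write()

print(max(erase_counts) - min(erase_counts))   # prints 0: wear is spread evenly across the whole device
```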

The same white paper gives a good suggestion for extending the life of the device:

Next, overprovision your drive. You can increase the lifetime by only partitioning a portion of the device’s total capacity. For example, if you have a 256 GB drive— only partition it to 240 GB. This will greatly extend the life of the drive. A 20% overprovisioning level (partitioning only 200 GB) would extend the life further. A good rule of thumb is every time you double the drive’s overprovisioning you add 1x to the drive’s endurance.

This also hints that even unpartitioned space is used for wear leveling, which further supports the point above.
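
To put rough numbers on the quoted figures (Python; vendors compute the overprovisioning percentage in more than one way, so spare space divided by raw capacity is used here purely as one plausible reading of the white paper's numbers):

```python
def overprovisioning_pct(raw_gb, partitioned_gb):
    # Unpartitioned (spare) space as a share of the drive's raw capacity.
    # Some vendors divide by the user-visible capacity instead; conventions differ.
    return 100 * (raw_gb - partitioned_gb) / raw_gb

print(round(overprovisioning_pct(256, 240), 1))   # ~6.2  -- the 240 GB example
print(round(overprovisioning_pct(256, 200), 1))   # ~21.9 -- roughly the "20%" example
```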

Source: Technical white paper - SSD Endurance (http://h20195.www2.hp.com/v2/getpdf.aspx/4AA5-7601ENW.pdf)