There has been a lot of discussion on the Internet comparing the older magnetic Hard Disk Drives (HDD) with Solid State Drives (SSD). The major issue discussed is drive speed.
SSD disks do cost more than the older HDD disks, but SSD capacities now reach nearly twice those of the largest HDDs. Recently an SSD was released at 30 TB, while the largest HDD stands at 16 TB. The price gap is still large: a 7.68 TB SSD can cost as much as $6,700, whereas a 16 TB HDD can be found for around $400.
An SSD also stores information much faster than an HDD. Cache and RAM on a system operate in nanoseconds, while HDDs operate in milliseconds. Hard disks are therefore very slow compared to the rest of a newer PC. An HDD is a system bottleneck and severely limits data-intensive work such as accessing a database.
SSD disks store information in flash memory, in a fashion similar to Random Access Memory (RAM), except that the data is not lost when the system is powered down.
An SSD can slow down as it gets full, and so can an HDD. Data is stored in blocks, and the block size will vary depending on the SSD capacity.
To find the block size you can use the following command in a Terminal:
blockdev --getbsz /dev/sdb1
The value returned is in bytes. On an SSD, each block is divided into smaller units called pages, and a file's data is written one page at a time, so pages from several files can share one block. When a file is deleted, its pages are only marked as unused. Before those pages can hold new data, the whole block must be read, erased, and rewritten with the deleted pages left blank; the empty pages are then used to write the new file. This extra read-erase-write cycle is part of why a nearly full SSD slows down.
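As an aside, you can also query the block size of an already-mounted filesystem without root privileges using ‘stat’. The current directory is used here just as an example; point it at any path on the drive you are interested in:

```shell
# 'stat -f' reports information about the filesystem holding the given
# path (not the file itself); %s prints the block size in bytes.
stat -f -c 'Block size: %s bytes' .
```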
The issue in most cases is not what size of drive you may buy, but how to optimize its performance.
With any drive, the filesystem used to format that drive will dictate its abilities.
To optimize the possibilities with Linux, in this case Ubuntu, we need to install support for more filesystems. When Ubuntu is first installed, only a few filesystem drivers are present by default. The filesystems supported out of the box are shown in Figure 1.
As you can see, there are limitations to the number of filesystems supported. Use the following command to increase the support, which should then look more like Figure 2.
sudo apt install btrfs-progs btrfs-tools f2fs-tools hfsutils hfsprogs jfsutils cryptsetup dmsetup lvm2 util-linux nilfs-tools reiser4progs reiserfsprogs udftools xfsprogs xfsdump libguestfs-reiserfs -y
Some filesystems will not allow booting from a device formatted with them. These are hard set during the install, but they can be changed later. As long as support for a filesystem is in the kernel, the format is usable at boot. You can check which filesystems the running kernel currently supports from a Terminal.
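One way to check is shown below. The ‘grep’ pattern is only an illustration; adjust it to the modules you care about:

```shell
# Filesystems the running kernel supports, built in or via loaded modules
# ('nodev' entries are virtual filesystems with no backing device):
cat /proc/filesystems

# Filesystem kernel modules currently loaded (if any match):
lsmod | grep -iE 'btrfs|xfs|jfs|f2fs|nilfs|reiser|udf' || true
```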
To test the performance of various filesystems, I installed all of the filesystem tools mentioned above.
I got a SanDisk 250 GB external SSD and hooked it to my Ubuntu system. I used GParted to format the drive as needed, giving it the label ‘SSD’ after every format. I then closed GParted and removed and reconnected the SSD to my USB port so it would mount.
On my laptop, I used a folder called ‘SSD’. In this folder, I created a file of random characters with a size of 1 GB (or 1,000 MB). To create the file I used the command:
openssl rand -out sample.txt -base64 $(( 2**30 * 3/4 ))
This created the file ‘sample.txt’ which is needed to perform the tests. If you open the file with a text editor, you can see that it is made up of random characters.
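The size works out as follows: ‘openssl rand’ produces 2^30 × 3/4 = 805,306,368 random bytes, and Base64 encoding expands data by a factor of 4/3, so the written file is roughly 2^30 bytes (1 GiB) plus a little newline overhead. You can confirm the size with:

```shell
# Print the exact byte count of the generated file (~1 GiB expected):
stat -c '%s bytes' sample.txt
```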
After each format of the SSD and a re-mount I would run the following:
time cp sample.txt /media/jarret/SSD
NOTE: You cannot format a drive as F2FS using GParted. You must use the command ‘mkfs.f2fs /dev/sdx1 -l SSD -f’. Make sure you specify the proper partition in place of ‘sdx1’. Also, specify the disk label you want instead of ‘SSD’.
The ‘time’ command runs another command, in this case ‘cp’. The file ‘sample.txt’ is copied from the current folder to the drive labeled ‘SSD’, and the elapsed time is printed once the command finishes. The times, in seconds, given by the command are below for each filesystem checked, along with the resulting throughput.
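One caveat worth noting: ‘cp’ can return before all of the data has actually been flushed to the device, so wall-clock figures partly reflect the system's write cache. Including a ‘sync’ inside the timed command gives a more conservative number. The mount point below is just my example path; adjust it for your system:

```shell
# Time the copy alone (may be flattered by the page cache):
time cp sample.txt /media/jarret/SSD

# Time the copy plus a flush of pending writes to the device:
time sh -c 'cp sample.txt /media/jarret/SSD && sync'
```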
Filesystem   Time (s)   Throughput
EXT2         1.662      602 MB/s
EXT3         1.074      931 MB/s
EXT4         0.772      1295 MB/s
F2FS         1.060      943 MB/s
FAT32        3.085      324 MB/s
HFS+         0.946      1057 MB/s
JFS          1.370      730 MB/s
NTFS         7.637      131 MB/s
ReiserFS     1.310      763 MB/s
UDF          2.194      456 MB/s
XFS          0.4935     2026 MB/s
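The throughput column is simply the file size divided by the elapsed time. For instance, for the EXT4 result, 1,000 MB copied in 0.772 seconds works out as:

```shell
# 1000 MB / 0.772 s, rounded to the nearest whole MB/s:
awk 'BEGIN { printf "%.0f MB/s\n", 1000 / 0.772 }'
# prints "1295 MB/s"
```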
As you can see from the results, the XFS filesystem offers the best write speed to the SSD, with a throughput of around 2,026 MB/s. Let’s look at what happens if we increase the amount of data copied to about 5 GB.
Filesystem   Time (s)   Throughput
EXT2         44.207     113.1 MB/s
EXT3         45.412     110.1 MB/s
EXT4         50.184     99.6 MB/s
F2FS         47.803     104.6 MB/s
FAT32        n/a        (a 5 GB file exceeds FAT32’s 4 GB file-size limit)
HFS+         44.197     113.1 MB/s
JFS          43.562     114.8 MB/s
NTFS         55.138     90.7 MB/s
ReiserFS     42.816     116.8 MB/s
UDF          48.329     103.5 MB/s
XFS          58.414     85.6 MB/s
If you are going to store larger files, then it may be best to use the ReiserFS format.
SSD disks can immensely improve the speed of data access over standard magnetic media. As a test, I copied data from the SSD to a magnetic hard disk formatted with EXT4. The speed was 341 MB/s for the 1 GB file; for the 5 GB file, the throughput was 75.2 MB/s.
For your own requirements, you may want to run tests with sample data or applications to find the best-performing filesystem. Such tests could be performed on a temporary server before being put into production.
Overall, be aware of the speed differences. The results I got will differ from those obtained on other systems and with other SSD models.