ATA/100 RAID Controllers Roundup
This article is a logical continuation of our recent ATA/66 and ATA/100 Controllers Roundup. This time we will discuss the popular ATA/66 and ATA/100 IDE RAID controllers available on the market today and their performance.
Redundant Array of Independent (or Inexpensive) Disks, RAID, denotes a category of disk subsystems that combine two or more drives to provide a balance of performance and data protection. Although RAID subsystems are used mostly on servers and aren't generally necessary for personal computers, many users prefer to eliminate even the smallest causes for concern and to squeeze the maximum out of their storage subsystem. This is how the demand for RAID started growing. Bearing in mind the rapid development of data transfer interfaces (see our ATA/100 Investigation for more) and hard disk drive manufacturing technologies, and especially the need to work with huge amounts of data, RAID is becoming quite popular.
What does a RAID subsystem look like? Despite its multi-drive configuration, a RAID subsystem can be viewed as a very large virtual drive, which is created and controlled by the operating system through the RAID management software. The software not only sets up the system to address the RAID unit as if it were a single drive, it also allows the subsystem to be configured in ways that best suit the general needs of the host system. RAID subsystems can be optimized for performance, for the highest capacity, for fault tolerance, or for a combination of two or three of the above. To this end, different RAID levels have been defined and standardized. There are six of them, called RAID 0 through RAID 5. Let's briefly go over all of them.
RAID Level 0
RAID Level 0 is achieved through a method known as striping. It implies that sectors of data are interleaved between multiple drives when being read or written. In a RAID Level 0 array the data is organized in stripes across the multiple drives, i.e. the data is broken down into blocks and each block is written to a separate disk drive. A typical array can contain any number of stripes, usually in multiples of the number of drives in the array.
One of the advantages of RAID 0 is that I/O performance is greatly improved by spreading the I/O load across many channels and drives. However, RAID 0 offers no redundancy: when any member disk fails, the entire array fails with it.
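The round-robin interleaving described above can be sketched in a few lines of Python. This is a toy illustration of the addressing scheme, not how any of the reviewed controllers is actually implemented, and the function name is ours:

```python
def raid0_map(block, num_drives):
    """Map a logical block number to (drive index, block offset on that
    drive) for a RAID 0 array that interleaves blocks round-robin."""
    drive = block % num_drives          # which member disk gets the block
    offset = block // num_drives        # where it lands on that disk
    return drive, offset

# With two drives, consecutive logical blocks alternate between them,
# so a large sequential transfer keeps both disks busy at once:
for b in range(4):
    print(b, raid0_map(b, 2))
```

Note that nothing here stores any block twice: lose one drive and every other logical block is gone, which is exactly the fault-tolerance trade-off of RAID 0.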
RAID Level 1
RAID Level 1 is achieved through disk mirroring, and is done to ensure data reliability, i.e. a high degree of fault tolerance. RAID 1 also enhances read performance, but the improved performance and fault tolerance come at the expense of available capacity in the drives used.
In a RAID Level 1 configuration, the RAID management software instructs the subsystem's controller to store data redundantly across a number of the drives (the mirrored set) in the array. In other words, the same data is copied to and stored on different disks ("mirrored") to ensure that in case of a drive failure the data is still available somewhere else within the array. A read performance gain can be realized if the read load is distributed evenly across all of the drives of a mirrored set within the subsystem. The per-drive read load and the total wait times both drop significantly, roughly in inverse proportion to the number of hard drives in the mirrored set.
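A toy mirror in Python may help illustrate the idea. The class and its round-robin read policy are our own simplification; real controllers use more elaborate load balancing:

```python
class Raid1:
    """Toy RAID 1 mirror: every write goes to all member drives,
    reads are handed out round-robin across the mirrors."""
    def __init__(self, num_drives, num_blocks):
        self.drives = [[None] * num_blocks for _ in range(num_drives)]
        self.next_read = 0
    def write(self, block, data):
        for drive in self.drives:       # duplicate the data on every mirror
            drive[block] = data
    def read(self, block):
        drive = self.drives[self.next_read]   # alternate drives for reads
        self.next_read = (self.next_read + 1) % len(self.drives)
        return drive[block]

r = Raid1(2, 8)
r.write(3, b"payload")
# Two consecutive reads hit different mirrors yet return the same data:
assert r.read(3) == r.read(3) == b"payload"
```

Capacity is the price: two 15GB drives mirrored this way still present only 15GB to the host.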
RAID Level 2
RAID Level 2 is rarely used in commercial applications, but is another means of ensuring data protection. Each bit of a data word is written to a separate data disk drive, and each data word has its Hamming Code ECC word recorded on dedicated ECC disks, which serves as a means of maintaining data integrity. The ECC tabulates the numerical values of data stored on specific blocks in the virtual drive. The checksum is then appended to the end of the data block for verification of data integrity when needed. As data gets read back from the drive, ECC tabulations are computed again, and the stored data block checksums are read and compared against the fresh tabulations. If the numbers match, the data is intact; if there is a discrepancy, the lost data can be recalculated using the earlier checksum as a reference point.
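The tabulate-and-compare idea can be sketched with a trivial checksum. This is a deliberately simplified stand-in for the Hamming ECC actually used by RAID 2 (a real Hamming code can also correct the error, not just detect it), and the function is ours:

```python
def checksum(block):
    """Toy ECC-style tabulation: sum of the block's bytes, mod 256."""
    return sum(block) % 256

# Store the tabulated value alongside the data block...
data = b"\x03\x07\x0b"
stored = (data, checksum(data))

# ...and on read-back, tabulate again and compare:
read_data, read_check = stored
assert checksum(read_data) == read_check      # numbers match: data intact

corrupted = b"\x03\x06\x0b"
assert checksum(corrupted) != read_check      # discrepancy: error detected
```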
RAID Level 3
This RAID level is really an adaptation of RAID Level 0 that sacrifices some capacity, for the same number of drives, but achieves a high level of data integrity or fault tolerance. The data block is subdivided ("striped") and written across all but one of the drives in the array. Stripe parity information, which is used to maintain data integrity across all drives in the subsystem, is generated on Writes and recorded on the parity disk. The parity drive itself is divided into stripes, and each parity drive stripe is used to store parity information for the corresponding data stripes dispersed throughout the array.
The parity info is checked on Reads. This method achieves very high data transfer performance by reading from or writing to all of the drives in parallel or simultaneously but retains the means to reconstruct data if a given drive fails, maintaining data integrity for the system.
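The parity trick that makes this reconstruction possible is a byte-wise XOR across the data stripes, which is easy to demonstrate. The sketch below uses made-up two-byte stripes and is not controller code:

```python
def xor_parity(stripes):
    """The parity stripe is the byte-wise XOR of all data stripes."""
    parity = bytes(len(stripes[0]))
    for s in stripes:
        parity = bytes(a ^ b for a, b in zip(parity, s))
    return parity

# Three data stripes and their parity, as on a 4-drive RAID 3 array:
data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
p = xor_parity(data)

# If a drive dies, its stripe is simply the XOR of the survivors
# plus the parity stripe -- here we "lose" the second stripe:
recovered = xor_parity([data[0], data[2], p])
assert recovered == data[1]
```

XOR is its own inverse, which is why the same function both generates the parity and rebuilds a lost stripe.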
RAID Level 4
RAID Level 4 is similar in concept to RAID Level 3, but RAID Level 4 has a larger stripe depth, usually of two blocks, which allows the RAID management software to operate the disks much more independently than RAID Level 3. This essentially replaces the high data throughput capability of RAID Level 3 with faster data access in read-intensive applications.
RAID Level 5
This is the last of the most common RAID levels in use, and probably the most frequently implemented. RAID Level 5 minimizes the write bottlenecks of RAID Level 4 by distributing parity stripes over a series of hard drives. No parity stripe on an individual drive stores the information of a data stripe on the same drive. In doing so, it provides relief to the concentration of write activity on a single drive, which in turn enhances overall system performance. RAID Level 5's parity encoding scheme is the same as Levels 3 and 4: it maintains the system's ability to recover any lost data if a single drive fails.
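One common way to rotate the parity stripes is so-called left-asymmetric placement; we assume it here purely for illustration (the controllers in this roundup don't implement RAID 5 at all, and real layouts vary by vendor):

```python
def raid5_layout(row, num_drives):
    """Return (parity drive, data drives in order) for one stripe row.
    Parity shifts left by one drive per row, so over num_drives rows
    every disk holds parity exactly once and no disk becomes a
    write bottleneck."""
    parity = (num_drives - 1 - row) % num_drives
    data = [d for d in range(num_drives) if d != parity]
    return parity, data

# Over four consecutive rows on a 4-drive array, parity visits every drive:
seen = {raid5_layout(row, 4)[0] for row in range(4)}
assert seen == {0, 1, 2, 3}
```

Contrast this with RAID 3 and 4, where the single dedicated parity disk must participate in every write.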
Well, we have considered the most widespread RAID levels. However, this in no way means that there are no other levels. They do exist, but they are quite complex and definitely have a different application field. Anyway, we hope you've got a good idea of the RAID levels. But a RAID subsystem would be incomplete without a RAID controller card.
A RAID controller card is the hardware element that serves as the backbone for the array of disks. It not only relays the input and output commands to specific drives in the array, but provides the physical link to each of the independent drives so they may easily be removed or replaced. The controller also serves to monitor the integrity of each drive in the array.
In fact, there aren't too many IDE RAID controllers on the market today. And those which are currently available cannot boast a wide range of supported RAID levels: RAID 0, RAID 1 or RAID 0+1 is usually everything you can get. A few days ago Promise announced the first RAID controller card to support RAID levels 3 and 5, the SuperTrak100. But since we didn't expect it to come out so soon, we didn't manage to get it for our roundup. Anyway, we will definitely offer you a review of this new RAID product a bit later, and now it's high time we got started.
The controller cards we will be considering today are:
Besides, we will also compare the results obtained for our testing participants with those shown by the Promise FastTrak66 IDE RAID controller card, so that we can see if there is any performance gain. We have already discussed everything concerning FastTrak66 in our Promise FastTrak66 IDE RAID Controller Review, so we won't dwell on its peculiarities here.
And now let's take a closer look at today's players.
Promise FastTrak100 IDE RAID
This controller card is a successor to Promise FastTrak66. When the faster ATA/100 interface was developed, Promise introduced a new product supporting it, which retained all the basic features of its predecessor and added ATA/100 support. Namely:
Besides, you can also set the block size when working in stripe mode (RAID 0). The smallest allowed size is 0.5KB and the largest is 1MB (i.e. 0.5, 1, 2, 4, … 1024KB). This controller also allows checking the S.M.A.R.T. status of the drives with the help of the FastCheck RAID utility.
ABIT HotRod100 Pro IDE RAID
Here is a brief list of the key features announced by the manufacturer:
And now that you have got acquainted with all the roundup participants, let's move on to the tests. The testbed was configured in the following way:
We carried out all the tests with 2 HDDs from IBM; however, on the charts you may also see the results obtained with only one hard disk drive, for a better performance comparison. Just for your reference, here is a list of the IBM DTLA 307015 basic features:
All the tests were run three times for each testing participant, and the average value was taken for the charts and diagrams. We didn't let the HDDs cool down to room temperature between the tests. So here come the results:
ABIT HotRod 100 Pro proved the best when working in stripe mode (RAID 0) in office applications: it managed to leave FastTrak100 and FastTrak66 quite far behind. The second position belongs to FastTrak100 and the third remains with FastTrak66.
In mirror mode (RAID 1) FastTrak100 won the lead, while HotRod and FastTrak66 ran neck and neck. These results illustrate very well the advantages of ATA/100 controllers over the previous generation, ATA/66 ones.
When we ran the CPU utilization test in stripe mode, ABIT HotRod 100 Pro required more CPU resources than its competitors from Promise. In other words, this is the price you have to pay for the speed…
And in mirror mode all the testing participants showed close results. As you can clearly see from the chart, the CPU utilization in stripe mode appears nearly 4 times higher than in mirror mode. Maybe it's only mirror mode that is implemented on the hardware level, while stripe mode is handled via software? :-)
Here both FastTrak controllers performed just brilliantly: the linear read speed of the RAID array appeared to be twice the read speed of the IBM DTLA (which is approximately 37MB/sec). And the fact that FastTrak66 managed to slightly surpass its elder brother is mere coincidence. Besides, the gap between the two Promise cards is so insignificant that it can hardly be considered important and meaningful (probably it happened because WinBench results are usually very unstable, which affects even the average value). What is surprising is the impossibly low result shown by ABIT HotRod 100 Pro. Why? We will return to the reasons a bit later.
Relying on common sense, we expected FastTrak100 to perform better than FastTrak66; however, having analyzed the data transfer rates obtained on the inner and outer tracks, we arrived at the conclusion that in stripe mode FastTrak66 appeared the indisputable leader due to its stably high results.
The left graph shows the non-linearity of FastTrak100's data transfer rate. We even heard some snapping of the HDD heads though both hard disks were absolutely OK. There must be something wrong with the BIOS of our controller then.
And the right graph belongs to ABIT HotRod 100 Pro, which proved even worse in terms of linear reading. We dare suppose that the results appeared so low because of the immature drivers and HotRod's personal dislike of the IBM DTLA.
According to this chart, all the roundup participants showed nearly the same performance in both stripe and mirror modes. Probably the controller type doesn't matter that much for this test, and it is the HDD that sets the pace here.
In Adaptec ThreadMark, FastTrak100 slightly surpassed FastTrak66, and ABIT HotRod 100 Pro appeared hopelessly behind. As we have already pointed out several times in our articles, controllers based on HighPoint HPT chips (or their drivers) are optimized for file system operations; that's why they don't feel at home in benchmarks dealing with streaming reads/writes…
Block Size In Stripe Mode
Well, when we carried out all those tests, the default block size for stripe arrays (if you remember, in stripe mode the data is broken down into blocks and each block is written to a separate disk drive) was 64KB. We didn't change the block size when testing the controller cards, in order to ensure equal comparison conditions. However, if we set a different task, such as achieving the best balance between the block size and performance in stripe mode, then it could be quite interesting to try various combinations by manipulating the block size. So, let's take a look at that.
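Why the block size matters can be shown with a simplified model of our own (it ignores caching and command queueing): the block size determines how many member drives a single request is spread across.

```python
def drives_touched(offset_kb, length_kb, block_kb, num_drives):
    """How many distinct drives service one request on a RAID 0
    array, given the stripe block (chunk) size. Simplified model:
    consecutive blocks go to consecutive drives, round-robin."""
    first_block = offset_kb // block_kb
    last_block = (offset_kb + length_kb - 1) // block_kb
    blocks = last_block - first_block + 1
    return min(blocks, num_drives)

# A 64KB request with 64KB blocks may be served by a single drive,
# while 8KB blocks spread the same request over both drives:
assert drives_touched(0, 64, 64, 2) == 1
assert drives_touched(0, 64, 8, 2) == 2
```

Small blocks parallelize even modest requests but cost more per-drive seeks; large blocks keep small requests on one drive and let the two disks serve independent requests. Which trade-off wins depends on the workload, which is exactly what the tests below probe.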
For our investigation we took the Promise FastTrak100 IDE RAID controller card and 2 IBM DTLA 307015 hard disk drives. All the other components in our testbed remained the same as in the controller tests, plus a Fujitsu MPE3064AT HDD. We utilized the same tests: WinBench 99 (Business WinMark and Disk Inspection Tests) and Adaptec ThreadMark 2.0. All of them were run three times, and the average results are presented on the charts.
The controller BIOS allows setting the block size to one of these values: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 and 1024KB. We decided not to make the block smaller than the cluster, i.e. smaller than 4KB, because that seems to make absolutely no sense. Which of the remaining values is the optimal one is question number one to answer.
So, here are the results obtained:
As you can see from this test, the dependence of the performance on the block size is almost perfectly linear. The maximum performance gain is obtained between 32 and 128KB, and then the growth slows down a bit. Of course, these results depend on the file fragmentation level, but we formatted the HDD beforehand; that's why the results obtained are close to ideal, because all the files were saved to it during the test. In a real-life situation the performance can turn out considerably lower than what we managed to get now.
Here the picture is slightly different: no linear dependence at all. The performance maximum appeared with the 128KB block. But in general, the performance keeps growing as the block size increases.
Well, here we also cannot see any regularity, though the following two value groups are more than evident: 4, 16, 64, 256, 1024KB and 8, 32, 128, 512KB. The latter sizes provide higher average performance than the former set. And again the maximum appeared with the 128KB block.
Judging by everything said above, we arrive at the conclusion that 128KB is the optimal block size for the configuration with Promise FastTrak100 and 2 IBM DTLA 307015 HDDs.
By the way, we came across an interesting file called PCACHE on the Promise website. According to the accompanying description, we could get an incredible performance gain very easily, simply by installing just one more file into the system: Pticache.vxd. We decided to take the risk, and here is what we got (we will compare the results obtained for RAID 0 with 64KB data blocks with Pcache and without Pcache):
Well, well. That's something! The results in Business Disk WinMark got nearly 1.5 times higher! In the High-End test the results also increased, though not so significantly. And the performance in Adaptec ThreadMark dropped quite tangibly. You wonder if this affects everyday work in the OS? It is very likely to, actually, but the influence there is considerably smaller. As for us, we didn't notice any significant harmful effect. Doesn't the whole thing remind you of something? Yeah, high performance in file tests and low performance in streaming reads and writes… That's exactly the picture we saw with the ABIT HotRod 100 Pro controller built on the HighPoint HPT370 chip.
And one more thing. Though readme.txt says that Pcache doesn't work with FastTrak66 and Ultra66 controllers, that appears to be absolutely wrong: the program does work, and it works fine.
Well, the tests have clearly shown that Promise FastTrak66 performs quite well and allows squeezing high data transfer rates out of your Ultra ATA/100 devices. With several ATA/100 hard disk drives, FastTrak66 will double the transfer rate compared to a single HDD. We consider this choice to be the best for those who work with graphics, music and video editing, as well as large presentations. As for business applications and games, you will be able to speed up their loading with the FastTrak66 controller. Speaking about today's cost of this controller, we should stress that the price-to-performance ratio is pretty attractive. Are you looking for a storage device of large capacity? Create your own virtual disk of 64GB (Win98) or 128GB (WinNT) and you will forget about lost frames and other problems.
Frankly speaking, we were a bit disappointed with the relatively low performance gain provided by the Promise FastTrak100 controller compared to its predecessor (in the case of 2 HDDs). However, on the other hand, we were very happy to find out that the price difference between these two controller cards is only a few bucks. Moreover, we have to admit that FastTrak100 should undoubtedly be praised for its performance in mirror mode, which will ensure its application in file servers and workstations. As for the RAID array linear read problems, we really hope things will change for the better here.
And finally, ABIT HotRod 100 Pro. Unfortunately, it turned out absolutely unsuitable for systems aimed at multimedia stream processing, having shown considerably low and unstable performance in the corresponding benchmarks. But things are not dramatic at all: HotRod 100 Pro appeared the best in all file benchmarks, having left both rivals far behind. So, it also has its strong point.
Anyway, though today's choice of IDE RAID controllers isn't that rich, anyone will be able to find something for their specific needs.
Copyright © 2002