HighPoint HotRod 100   











« RAID »

Redundant Array of Inexpensive Disks: that's what RAID stands for. RAID is a technology that adds flexibility and power to the humble hard disk. It's been around for years in big, powerful server computers in the form of rather expensive SCSI (Small Computer Systems Interface) setups. However, over the last year RAID has been gaining popularity in desktop (or in my case, deskunder) computers using the cheaper and more common EIDE (Enhanced Integrated Drive Electronics) drives. It first cropped up on add-in cards like the Abit HotRod 100 but can now commonly be found on motherboards.

So what can RAID actually do? Well, there are many flavours of RAID but the three most common (Well two most common to be honest) options are RAID 0, RAID 1 and RAID 10 (0+1, 1+0). I shall now endeavour to explain how they all work, using kettles as examples.

Normal drive (not RAID): For this you will need two litres of water and a kettle. Add water to kettle and turn kettle on. Two minutes later, the kettle has boiled. This is how a normal drive works, only the litres are megabytes. Just as the kettle steadily heats the water, the drive steadily writes/reads the data.

RAID 0: For this you will need 2 litres of water and two kettles (although of course, having two kettles means you can boil twice as much water if you want). Pour one litre into each of the kettles and set both boiling. As each kettle has less work to do, both boil in a minute and you have your full 2 litres of boiling water to use as you wish. RAID 0 works similarly: when writing/reading 2 megabytes, each of the drives (kettles) handles one of the megabytes, so the work is done in half the time, or twice as fast. On top of the speed increase you also get twice as much space to use, but if one drive fails then ALL your data is gone. RAID 0 is also known as 'striping'.
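The kettle analogy above maps neatly onto code. Here's a minimal Python sketch (not a real driver, and with the stripe size shrunk to a few bytes for readability) of how a RAID 0 controller deals data out to the drives and reassembles it:

```python
from itertools import zip_longest

STRIPE = 4  # stripe size in bytes; real cards use 4KB-64KB

def stripe_write(data, n_drives=2):
    """Deal fixed-size stripes out to the drives round-robin (RAID 0)."""
    drives = [bytearray() for _ in range(n_drives)]
    for i in range(0, len(data), STRIPE):
        drives[(i // STRIPE) % n_drives] += data[i:i + STRIPE]
    return drives

def stripe_read(drives):
    """Re-interleave the stripes to recover the original byte stream."""
    out = bytearray()
    per_drive = [[bytes(d[i:i + STRIPE]) for i in range(0, len(d), STRIPE)]
                 for d in drives]
    for group in zip_longest(*per_drive, fillvalue=b""):
        for chunk in group:
            out += chunk
    return bytes(out)

data = b"ABCDEFGHIJKLMNOP"   # sixteen "megabytes", one byte each
halves = stripe_write(data)
print(bytes(halves[0]), bytes(halves[1]))  # each drive holds half the stripes
assert stripe_read(halves) == data
```

Each drive ends up with half the stripes, which is exactly why both "kettles" finish in half the time.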

RAID 1: For this you will need 2 litres of water, another two litres of identical water and two kettles. Pour 2 litres into each kettle and boil both. Both kettles should boil in two minutes, but in case one fails you still have your 2 litres of water. In terms of hard drives, whenever 1 megabyte of data is written it is written identically to each drive. Should one of the drives fail, the RAID system runs on just the one drive until the faulty drive is replaced. It's like a real-time backup system. RAID 1 is also known as 'mirroring'.
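Mirroring is even simpler to sketch. This toy Python model (hypothetical, with plain lists standing in for drives) shows why a RAID 1 set survives a single failure:

```python
# A toy RAID 1 mirror: every write lands on both drives, so either
# drive alone can satisfy reads after the other one fails.

drives = [[], []]

def mirror_write(block):
    for d in drives:
        if d is not None:
            d.append(block)

def mirror_read(index):
    for d in drives:
        if d is not None:
            return d[index]
    raise IOError("all mirrors have failed")

mirror_write(b"block 0")
mirror_write(b"block 1")
drives[0] = None                     # one kettle gives up the ghost
assert mirror_read(1) == b"block 1"  # the data survives on the mirror
```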

RAID 10 (1+0): This is exactly what it sounds like: take two RAID 0 arrays with their nice speed and then mirror them so they are backed up as well. The four disks you need for this make it rather expensive.

Those are the simple RAID configurations in their simplest forms; you can do things like have four drives in striping. In the example above, each drive would take half a megabyte. However, due to the increased overhead of dealing with four drives (the technical reasons for which I won't go into) the speed increase is actually below 4x.


The Hot Rod 100 is, in many ways, a motherboard just for hard drives. It contains a CPU, the HighPoint HPT370(A) (in many ways this is more a review of the HPT370, as that's the guts of the card, so if you're looking at a RAID motherboard with a HighPoint on it, much of this review will apply) and the various connectors needed to do the business. In this case: a PCI prong to connect it to the motherboard, an LED connector should you have a spare LED on the front of the case you want to use to show RAID activity, and two 40-pin connectors for attaching hard drive ribbon cables. The card is dirt simple to install. Locate an empty PCI slot (the white ones) and remove the metal backing plate on the case. Insert the card and screw it in position. Then plug in any relevant cables (you get two 80-conductor cables that allow you 100MB/sec bandwidth on each channel, as opposed to 66MB/sec with standard 40-conductor cables. Bear in mind though that no matter how fast or how many drives you string off the RAID card, the theoretical maximum speed is 133MB/sec, because that's the speed the RAID card talks to the CPU at) and you're off! Off into software configuration. You'll find that after your PC BIOS (where the memory is counted) is done, an extra BIOS pops up for the RAID card. Popping in here you can do several things.

Make RAID 0, 1 and 10 arrays.
Make a JBOD (joins several hard drives together and presents them as one large drive. No speed gain is to be found, it's just a matter of convenience).
Finally you can make the HotRod just act as a normal hard drive controller.
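As an aside, the 133MB/sec ceiling mentioned above is just the bandwidth of the standard PCI slot the card plugs into. A quick back-of-the-envelope check:

```python
# Standard PCI: a 32-bit wide bus clocked at 33.33MHz.
bus_width_bytes = 32 // 8            # 4 bytes move per clock tick
clock_mhz = 33.33                    # PCI clock in MHz
ceiling_mb_s = bus_width_bytes * clock_mhz
print(round(ceiling_mb_s))  # 133
```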

When you create RAID 0 (and 10) arrays you can choose the stripe size; this is how often the card swaps data from drive to drive. You can choose from 4k to 64k in doubling steps (4, 8, 16, 32, 64). So if you have a megabyte file (1024k) and a 32k stripe (as I have chosen on my system) then the file is split into 32 stripes, with 16 landing on each drive. Now, there are various theories floating around about which stripe size is fastest, but the truth is that different uses are faster on different stripes. The differences aren't huge anyway!
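The arithmetic behind that 32k-stripe example works out like this (numbers taken straight from the text):

```python
file_kb, stripe_kb, n_drives = 1024, 32, 2     # a 1MB file, 32k stripes, two drives
stripes_total = file_kb // stripe_kb           # the file splits into 32 stripes
stripes_per_drive = stripes_total // n_drives  # 16 of them land on each drive
print(stripes_total, stripes_per_drive)        # 32 16
```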

You can also choose to make any of the above bootable or if you really are feeling manic, you can delete anything you've created. Of course, if you want to boot off the RAID card you have to tell your PC you want to. This is normally done by telling the PC BIOS to boot from 'SCSI' but some clever motherboards will actually figure out what's going on.

Next you have to install the drivers, which is pain free unless you have a SoundBlaster Live! (or similar style cards). You'll find that whenever you install the drivers, Windows won't boot. What you have to do (and this is the easiest way to do it) is install the RAID card prior to the SoundBlaster. Impossible if you're upgrading rather than building, I know, so instead remove the Live!, install the RAID drivers and then replace the Live!. Bingo!


I'm personally using two IBM 30gig 75GXP drives, which gives me a total space (unformatted) of 60gig. Not bad! Now, while you don't have to have identical drives, it's a good idea to; here's why. If you have two drives from the same range (say 75GXP) but of different sizes (30gig and 75gig, say) then the RAID array will only be 2x the size of the smaller drive. Where would the extra 45gig be striped? As for speed, the RAID array would go at 2x the speed of the slower drive, as the faster drive would sit around with nothing to do while the RAID card waited for the slower drive to do its share of the work so the card could assemble the data and pass it on to the CPU.
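The capacity rule is easy to express: a RAID 0 array holds the number of drives times the size of the smallest drive. A tiny sketch using the sizes mentioned above:

```python
def raid0_capacity_gb(drive_sizes_gb):
    """RAID 0 capacity: every member contributes only as much as the
    smallest drive, so mismatched sizes waste the difference."""
    return len(drive_sizes_gb) * min(drive_sizes_gb)

print(raid0_capacity_gb([30, 30]))  # 60 - twin 30gig drives, nothing wasted
print(raid0_capacity_gb([30, 75]))  # 60 - 45gig of the big drive sits idle
```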


Boy this sucker moves! Using a program called HDTach I first benchmarked a drive without using RAID, but using the RAID card to give the drive the full 100MB/sec bandwidth, should it feel the need. The average speed (i.e. real world situations where the drive head is wandering around a bit), which is all that counts really, was around 35MB/sec, while the burst read (data all in a row on the drive so the read head doesn't have to move, and using the drive cache to speed things up) went off the scale after 80MB/sec. Using both drives, RAID 0 and a 32k stripe, the average speed jumped up to just over 49MB/sec. The highest speed achieved under normal conditions was a jaw dropping 60MB/sec. The burst speed was, obviously, still off the scale. I loaded up SiSoft Sandra, out of curiosity more than factual pursuit as its hard drive benchmarks don't have a great reputation, but all the same I caught my breath when I saw that it reckoned the RAID array's burst speed was 99MB/sec! All this required only 1% of my CPU's attention. Compare this to my other hard drive, a Fujitsu with an average speed of under 10MB/sec, and not only is the RAID impressive but so are the IBM drives on their own! You're probably wondering why the MB/sec didn't double under RAID though. Simple really: RAID doubles the speed of reading and writing inside the drives, but the benchmark measures the amount of data flowing to the CPU. The hard drives still have to move their heads to where they need to read/write, and that takes a constant time no matter how many drives you have.
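A toy model makes that last point concrete. Assuming some hypothetical numbers (8ms of head movement per request, 40MB/sec per drive, 1MB requests), striping only speeds up the transfer part of each request, so the average rate rises but doesn't double:

```python
def avg_rate_mb_s(seek_ms, request_mb, per_drive_mb_s, n_drives):
    """Seek time is paid once per request regardless of drive count;
    only the transfer time shrinks as drives are added."""
    transfer_ms = request_mb / (per_drive_mb_s * n_drives) * 1000
    return request_mb / ((seek_ms + transfer_ms) / 1000)

print(round(avg_rate_mb_s(8, 1, 40, 1), 1))  # 30.3 MB/sec with one drive
print(round(avg_rate_mb_s(8, 1, 40, 2), 1))  # 48.8 MB/sec striped: faster, not double
```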


All this functionality for around £35 just before Christmas 2000? Not bad value at all! While I'm not using the Hot Rod anymore, I'm still using a HighPoint 370A, as there's one on my new motherboard, which made the card redundant. Of course, with RAID 0 there is a bigger risk of data failure, so I use my Fujitsu drive to store all my documents/MP3s, data in general really, putting just the programs that make/use the data and the operating system on the RAID array. The RAID array is also useful as a temporary work area for when I'm using my PC as a recording studio and several MB/sec of audio data are flying around.

ABIT card - http://www.abit.com.tw/eng/product/card/h100.htm
Highpoint chip - http://www.highpoint-tech.com/hpt370a.htm

PS: Any bits where I used too much techy mumbo jumbo that you didn't understand, leave me a comment and I'll try to explain :)



ATA/100 RAID Controllers Roundup

This article is a logical continuation of our recent ATA/66 and ATA/100 Controllers Roundup. This time we will discuss the popular ATA/66 and ATA/100 IDE RAID controllers available on the market today and their performance.

Redundant Array of Independent (or Inexpensive) Disks, RAID, denotes a category of disk subsystems that employ two or more drives combined to provide a balance of performance and data protection. Although RAID subsystems are used frequently on servers and aren't generally necessary for personal computers, many users prefer to eliminate even the smallest causes for concern and to squeeze the maximum out of their storage subsystem. This is how the demand for RAID started growing. Bearing in mind the rapid development of data transfer interfaces (see our ATA/100 Investigation for more) and hard disk drive manufacturing technologies, and especially the necessity to work with huge amounts of data, RAID is becoming quite a popular thing.

What does a RAID subsystem look like? Despite its multi-drive configuration, a RAID subsystem can be viewed as one very large virtual drive, which is created and controlled by the operating system through the RAID management software. The software not only sets up the system to address the RAID unit as if it were a single drive, it also allows the subsystem to be configured in ways that best suit the general needs of the host system. RAID subsystems can be optimized for performance, the highest capacity, fault tolerance or a combination of two or three of the above. In this respect, different RAID levels have been defined and standardized. There are six of them, called RAID 0 through RAID 5. Let's briefly mention all of them.

RAID Level 0

RAID Level 0 is achieved through a method known as striping. It implies that sectors of data are interleaved between multiple drives when being read or written. In a RAID Level 0 array the data is organized in stripes across the multiple drives, i.e. the data is broken down into blocks and each block is written to a separate disk drive. A typical array can contain any number of stripes, usually in multiples of the number of drives in the array.

One of the advantages of RAID 0 is that I/O performance is greatly improved by spreading the I/O load across many channels and drives. However, when any member disk fails, the entire array fails with it.
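The interleaving can be captured in a couple of lines. A minimal sketch (assuming the simplest round-robin layout) of where block i of a RAID 0 set lives:

```python
def locate(block, n_drives):
    """Map a logical block number to (drive, block-on-that-drive)
    under simple round-robin RAID 0 striping."""
    return block % n_drives, block // n_drives

# With 3 drives, consecutive blocks fan out across all spindles:
print([locate(i, 3) for i in range(6)])
# [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
```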

RAID Level 1

RAID Level 1 is achieved through disk mirroring, and is done to ensure data reliability or a high degree of fault tolerance. RAID 1 also enhances read performance, but the improved performance and fault tolerance come at the expense of available capacity in the drives used.

In a RAID Level 1 configuration, the RAID management software instructs the subsystem's controller to store data redundantly across a number of the drives (the mirrored set) in the array. In other words, the same data is copied and stored on different disks ("mirrored") to ensure that in case of a drive failure the data is available somewhere else within the array. A read performance gain can be realized if the redundant data is distributed evenly across all of the drives of a mirrored set within the subsystem. The number of read requests per drive and the total wait times both drop significantly, roughly in inverse proportion to the number of hard drives in the RAID.

RAID Level 2

RAID Level 2 is rarely used in commercial applications, but is another means of ensuring data protection. Each bit of a data word is written to a data disk drive, and each data word has its Hamming Code ECC word recorded on the ECC disks, which is used as a means of maintaining data integrity. The ECC tabulates the numerical values of data stored on specific blocks in the virtual drive, and the checksum is appended to the end of the data block for verification of data integrity when needed. As data is read back from the drive, ECC tabulations are computed again, and the specific data block checksums are read and compared against the most recent tabulations. If the numbers match, the data is intact; if there is a discrepancy, the lost data can be recalculated using the earlier checksum as a reference point.

RAID Level 3

This RAID level is really an adaptation of RAID Level 0 that sacrifices some capacity, for the same number of drives, but achieves a high level of data integrity or fault tolerance. The data block is subdivided ("Striped") and written on all but one of the drives in the array. Stripe parity information, which is used to maintain data integrity across all drives in the subsystem, is generated on Writes and recorded on the parity disk. The parity drive itself is divided up into stripes, and each parity drive stripe is used to store parity information for the corresponding data stripes dispersed throughout the array.

The parity info is checked on Reads. This method achieves very high data transfer performance by reading from or writing to all of the drives in parallel or simultaneously but retains the means to reconstruct data if a given drive fails, maintaining data integrity for the system.
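The parity scheme behind RAID 3 (and levels 4 and 5 below) is plain XOR, which is what makes single-drive reconstruction possible. A short Python illustration:

```python
def parity(stripes):
    """XOR all stripes together byte-by-byte to produce the parity stripe."""
    p = bytearray(len(stripes[0]))
    for s in stripes:
        for i, b in enumerate(s):
            p[i] ^= b
    return bytes(p)

data = [b"ABCD", b"EFGH", b"IJKL"]   # three data drives
p = parity(data)                     # written to the parity drive

# Lose one data stripe; XORing the survivors with parity rebuilds it,
# because x ^ x = 0 cancels every term except the missing one.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```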

RAID Level 4

RAID Level 4 is similar in concept to RAID Level 3, but RAID Level 4 has a larger stripe depth, usually of two blocks, which allows the RAID management software to operate the disks much more independently than RAID Level 3. This essentially replaces the high data throughput capability of RAID Level 3 with faster data access in read-intensive applications.

RAID Level 5

This is the last of the most common RAID levels in use, and probably the most frequently implemented. RAID Level 5 minimizes the write bottlenecks of RAID Level 4 by distributing parity stripes over a series of hard drives. No parity stripe on an individual drive stores the information of a data stripe on the same drive. In doing so, it provides relief to the concentration of write activity on a single drive, which in turn enhances overall system performance. RAID Level 5's parity encoding scheme is the same as Levels 3 and 4: it maintains the system's ability to recover any lost data if a single drive fails.
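The parity rotation can be sketched in one function. This assumes the left-symmetric layout, one common RAID 5 convention (implementations vary in which rotation they use):

```python
def parity_drive(row, n_drives):
    """Left-symmetric RAID 5: the parity block for each stripe row
    moves one drive to the left on every successive row."""
    return (n_drives - 1 - row) % n_drives

# Four drives: parity lands on drive 3, then 2, 1, 0, then wraps around,
# so no single drive absorbs all the parity writes.
print([parity_drive(r, 4) for r in range(5)])  # [3, 2, 1, 0, 3]
```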

Well, we have considered the most widespread RAID levels. However, this in no way means that there are no more levels. They do exist, but they are quite complex and have a different application field. Anyway, we hope you've got a good idea of the RAID levels. But a RAID subsystem would be incomplete without a RAID controller card.

A RAID controller card is the hardware element that serves as the backbone for the array of disks. It not only relays the input and output commands to specific drives in the array, but provides the physical link to each of the independent drives so they may easily be removed or replaced. The controller also serves to monitor the integrity of each drive in the array.

In fact, there aren't too many IDE RAID controllers on the market today. And those which are currently available cannot boast a wide range of supported RAID levels: RAID 0, RAID 1 or RAID 0+1 is usually everything you can get. A few days ago Promise announced the first RAID controller card to support RAID levels 3 and 5, the SuperTrak100. But since we didn't expect it to come out so soon, we didn't manage to get it for our roundup. Anyway, we will definitely offer you a review of this new RAID product a bit later, and now it's high time we got started.

The controller cards we will be considering today are:

  • Promise FastTrak100 IDE RAID
  • ABIT HotRod100 Pro IDE RAID

Besides, we will also compare the results obtained for our testing participants with those shown by the Promise FastTrak66 IDE RAID controller card, so that we can see if there is any performance gain. We have already discussed everything concerning FastTrak66 in our Promise FastTrak66 IDE RAID Controller Review, so we won't dwell on its peculiarities here.

And now let's take a closer look at today's players.

Promise FastTrak100 IDE RAID

This controller card is a successor to the Promise FastTrak66. Once the faster ATA/100 interface had been developed, Promise introduced a new product supporting it, retaining all the basic features of the predecessor and adding ATA/100 support. Namely:

  • Promise PDC20267 chipset
  • Complies with PCI 2.1 standard
  • Supports data striping (RAID 0), mirroring (RAID 1) and striping/mirroring (RAID 0+1)
  • Allows data mirroring "on the fly" onto a single drive or onto a pair of drives
  • Offers HotSwap technology
  • Restores data in the background mode
  • Provides fault tolerant data protection for low-cost servers
  • Creates up to 128GB virtual array (64GB in Win98)
  • Supports UltraDMA 5/4/3/2/1/0, PIO 4/3/2, DMA 2/1/0
  • Supports ATA/100/66/33 devices
  • Supports PCI Plug-n-Play, PCI interrupt sharing, coexists with mainboard IDE controller
  • Supports IDE bus mastering

Besides, you can also set the block size when working in stripe mode (RAID 0). The smallest allowed size is 0.5KB and the largest is 1MB (i.e. 0.5, 1, 2, 4, … 1024KB). This controller also allows checking the S.M.A.R.T. status of the drives with the help of the FastCheck RAID utility.


ABIT HotRod 100 Pro IDE RAID

Here is a brief list of the key features announced by the manufacturer:

  • HighPoint HPT370 chipset
  • UltraDMA 100MB/sec data transfer rate
  • Supports data striping (RAID 0), mirroring (RAID 1) and striping/mirroring (RAID 0+1)
  • Two independent ATA channels
  • 256Byte FIFO per ATA channel
  • Concurrent PIO and bus master access
  • Compliant with Plug & Play
  • Up to 4 IDE-devices support
  • Supported drive modes: Ultra ATA 5/4/3/2/1/0, PIO 4/3/2/1/0, DMA 2/1/0


And now that you have got acquainted with all the roundup participants, we are moving on to the tests. The testbed was configured in the following way:

  • Intel Pentium III 600E CPU;
  • ASUS P3B-F mainboard with 1005 BIOS;
  • 256MB PC133 SDRAM by Hyundai;
  • Creative 3D Blaster Annihilator 2 (GeForce2 GTS) graphics card (32MB DDR);
  • IBM DTLA 307015 HDD;
  • Windows 98.

We carried out all the tests with 2 HDDs from IBM; however, on the charts you may also see the results obtained with only one hard disk drive, for a better performance comparison. Just for your reference, we have composed a list of the IBM DTLA 307015's basic features:

Capacity: 15.3GB
Spindle rotation speed: 7,200rpm
Buffer size: 2048KB (132KB firmware)
Heads: 2
Platters: 1
Rotational latency: 4.17ms
Average seek time (reading): 8.5ms
Average track-to-track seek time: 1.2ms
Average full stroke seek time (reading): 15ms
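As a quick sanity check on that spec sheet, the 4.17ms rotational latency is simply the time for half a revolution at 7,200rpm:

```python
# Average rotational latency = time for half a revolution.
rpm = 7200
latency_ms = 60 / rpm / 2 * 1000  # seconds per revolution, halved, in ms
print(round(latency_ms, 2))  # 4.17
```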

All the tests were run three times each for each testing participant and then the average value was taken for the charts and diagrams. We didn't let the HDDs cool down to room temperature between the tests. So here come the results:

The ABIT HotRod 100 Pro proved the best when working in stripe mode (RAID 0) in office applications, leaving FastTrak100 and FastTrak66 quite far behind. The second position belongs to FastTrak100 and the third remains for FastTrak66.

In mirror mode (RAID 1) FastTrak100 won the lead, while HotRod and FastTrak66 ran neck and neck. These results illustrate very well the advantages of ATA/100 controllers over the previous generation, ATA/66 ones.

When we ran the CPU utilization test in stripe mode, the ABIT HotRod 100 Pro required more CPU resources than its competitors from Promise. In other words, this is what you have to pay for the speed…

And in mirror mode all the testing participants showed close results. As you can clearly see from the chart, the CPU utilization in stripe mode appears nearly 4 times higher than in mirror mode. Maybe only mirror mode is implemented at the hardware level, and stripe mode via software? :-)

Here both FastTrak controllers performed just brilliantly: linear read speed for the RAID array appeared to be twice the read speed of a single IBM DTLA (which is approximately 37MB/sec). The fact that FastTrak66 managed to slightly surpass its elder brother is just coincidence. Besides, the gap between the two Promise cards is so insignificant that it can hardly be considered important or meaningful (probably it happened because WinBench results are usually quite unstable, which affects even the average value). What is surprising is the impossibly low result shown by the ABIT HotRod 100 Pro. Why? We will return to the reasons a bit later.

Relying on common sense, we expected FastTrak100 to perform better than FastTrak66; however, having analyzed the data transfer rates obtained on the inner and outer tracks, we arrived at the conclusion that in stripe mode FastTrak66 appeared the indisputable leader due to its stably high results.


The left graph shows the non-linearity of FastTrak100's data transfer rate. We even heard some snapping of the HDD heads though both hard disks were absolutely OK. There must be something wrong with the BIOS of our controller then.

And the right graph belongs to the ABIT HotRod 100 Pro, which proved even worse in terms of linear reading. We dare suppose that the results appeared so low because of the immature drivers and HotRod's personal dislike toward the IBM DTLA.

According to this chart, all the roundup participants showed nearly the same performance in both stripe and mirror modes. Probably the controller type doesn't matter that much for this test, and it is the HDD that sets the pace here.

In Adaptec ThreadMark, FastTrak100 slightly surpassed FastTrak66, and the ABIT HotRod 100 Pro appeared hopelessly behind. As we have already pointed out several times in our articles, controllers based on HighPoint HPT chips (or their drivers) are optimized for file-system work, which is why they don't feel at home in benchmarks dealing with streaming reads/writes…

Block Size In Stripe Mode

Well, when we carried out all those tests, the default block size for stripe arrays (if you remember, in stripe mode the data is broken down into blocks and each block is written to a separate disk drive) was equal to 64KB. We didn't change the block size when testing the controller cards, in order to get equal comparison conditions. However, if we set a different task, something like achieving the best balance between the block size and the performance in stripe mode, then it could be quite interesting to try various combinations, manipulating the block size. So, let's take a look at that.

For our investigation we took the Promise FastTrak100 IDE RAID controller card and 2 IBM DTLA 307015 hard disk drives. All the other components in our testbed remained the same as for the controller tests, plus a Fujitsu MPE3064AT HDD. We utilized the same tests: WinBench 99 (Business WinMark and Disk Inspection Tests) and Adaptec ThreadMark 2.0. All of them were run three times and the average results were taken for the charts.

The controller BIOS allows setting the block size to one of these values: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 and 1024KB. We decided not to make the block smaller than the cluster, i.e. smaller than 4KB, because that seems to make absolutely no sense. Which of the remaining values is the most optimal one is question No 1 to answer.

So, here are the results obtained:

As you can see from this test, the dependence of the performance on the block size is almost perfectly linear. The maximum performance gain is obtained between 32 and 128KB, and then the growth slows down a bit. Of course, these results depend on the file fragmentation level, but we had formatted the HDD beforehand, which is why the results obtained are close to ideal, because all the files were saved to it during the test. In a real-life situation the performance can turn out considerably lower than what we managed to get now.

Here the picture is slightly different: no linear dependence at all. The performance maximum appeared in case of 128KB block. But in general, the performance keeps growing together with the block size increase.

Well, here we also cannot see any regularity though the following two value groups are more than evident: 4, 16, 64, 256, 1024KB and 8, 32, 128, 512KB. The latter sizes provide higher average performance growth than the first set. And again the maximum appeared in case of 128KB block.

Judging by everything said above, we arrive at the conclusion that 128KB is the most optimal block size for the configuration with Promise FastTrak100 and 2 IBM DTLA 307015 HDDs.

By the way, we came across an interesting file called PCACHE on the Promise website. According to the accompanying description, we could get an incredible performance gain very easily, simply by installing just one more file into the system: Pticache.vxd. We decided to take the risk, and here is what we got (we will compare the results obtained for RAID 0 with 64KB data blocks with Pcache and without Pcache):

                               Business Disk WinMark 99   High-End Disk WinMark 99   Adaptec ThreadMark
Promise FastTrak100                      6787                      25200                   26.47
Promise FastTrak100 + Pcache             9940                      27800                   21.5
ABIT HotRod 100 Pro                      8923                      25200                   19.64

Well, well. That's something! The results in Business Disk WinMark got nearly 1.5 times higher! In the High-End test the results also increased, though not so significantly. And the performance in Adaptec ThreadMark dropped quite tangibly. Does it affect everyday work in the OS? Well, it is very likely to, actually, but the influence there is considerably lower. As for us, we didn't notice any significant harmful effect. Doesn't the whole thing remind you of something? Yeah, high performance in file tests and low performance in streaming reads and writes… That's exactly the picture we saw with the ABIT HotRod 100 Pro controller built on the HighPoint HPT370 chip.

And one more thing. Though readme.txt says that Pcache doesn't work with FastTrak66 and Ultra66 controllers, that appears to be wrong: the program does work, and it works fine.


Well, the tests have clearly shown that Promise FastTrak66 performs quite well and allows squeezing high data transfer rates out of your Ultra ATA/100 devices. With several ATA/100 hard disk drives, FastTrak66 will double the transfer rates compared to a single HDD. We consider this choice to be the best for those who work with graphics, music and video editing, as well as large presentations. As for business applications and games, you will be able to speed up their loading with the FastTrak66 controller. Speaking about the current cost of this controller, we should stress that the price-to-performance ratio is pretty attractive. Are you looking for a storage device of large capacity? Create your own virtual disk of 64GB (Win98) or 128GB (WinNT) and you will forget about lost frames and other problems.

Frankly speaking, we were a bit disappointed with the relatively low performance gain provided by the Promise FastTrak100 controller compared to its predecessor (in the case of 2 HDDs). On the other hand, we were very happy to find out that the price difference between these two controller cards is only a few bucks. Moreover, we have to admit that FastTrak100 should undoubtedly be praised for its performance in mirror mode, which will ensure its application in file servers and workstations. As for the RAID array linear read problems, we really hope for the better here.

And finally, the ABIT HotRod 100 Pro. Unfortunately, it turned out quite unsuitable for systems aimed at processing multimedia streams, having shown considerably low and unstable performance in the appropriate benchmarks. But things are not dramatic at all: the Hot Rod 100 Pro proved the best in all file benchmarks, leaving both rivals far behind. So, it also has its strong point.

Anyway, though today's choice of IDE RAID controllers isn't that rich, anyone will be able to find something for their specific needs.


Copyright © 2002 Øyvind Haugland
Last modified: 13 January 2019
