Tom's Hardware did a moderately detailed benchmark of SDRAM vs. RDRAM a while back.
Which is better? It depends on both the motherboard configuration and on what you're doing.
Intel's high-end RDRAM motherboard beat the hell out of SDRAM systems. It had two interleaved RIMM slots, doubling effective bandwidth.
Intel's more recent SDRAM offerings have generally been pretty bad. Via chipsets put out a good effort, but were still beaten out by the high-end RDRAM systems and the BX board.
The best SDRAM offering was a 440 BX board overclocked to 133 FSB. Tom swears it's stable. YMMV.
As far as load is concerned, RDRAM is optimized for throughput; SDRAM is optimized for latency. Something that hits many memory rows in more or less random order, taking only a little data from each, will work well with SDRAM. Something that processes large amounts of data in more or less linear order will work well with RDRAM. It depends on what you're doing.
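To make the access-pattern point concrete, here's a rough microbenchmark sketch in C; the buffer size, the index shuffle, and the timing method are arbitrary choices for illustration, not anything from Tom's tests. The linear pass is limited mostly by sustained bandwidth, while the dependent pointer chase pays close to the full memory latency on every load.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (16u * 1024u * 1024u)   /* 16M entries, far bigger than any cache (arbitrary) */

int main(void)
{
    int *buf = malloc(N * sizeof *buf);
    size_t *next = malloc(N * sizeof *next);
    if (!buf || !next)
        return 1;

    /* Linear pass: the kind of streaming access where raw bandwidth
     * (RDRAM's strong point) dominates. */
    for (size_t i = 0; i < N; i++)
        buf[i] = (int)i;
    long sum = 0;
    clock_t t0 = clock();
    for (size_t i = 0; i < N; i++)
        sum += buf[i];
    clock_t t1 = clock();

    /* Random pointer chase: build a single random cycle (Sattolo's
     * algorithm), then walk it.  Every load depends on the previous
     * one, so latency (SDRAM's strong point) dominates. */
    for (size_t i = 0; i < N; i++)
        next[i] = i;
    srand(1);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;          /* j in [0, i) */
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }
    size_t p = 0;
    clock_t t2 = clock();
    for (size_t i = 0; i < N; i++)
        p = next[p];
    clock_t t3 = clock();

    printf("linear: %.2fs   pointer chase: %.2fs   (sum=%ld p=%zu)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t3 - t2) / CLOCKS_PER_SEC, sum, p);
    free(buf);
    free(next);
    return 0;
}
```

If the theory above holds, the first number should look relatively better on a Rambus box and the second relatively better on a low-latency SDRAM box.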
My personal opinion? RDRAM is a bad implementation of a good idea. In five years we might see something better. For now, buy DDR SDRAM. YMMV.
Re:This is sensitive to many things.(Score:3, Informative)by Mr Z (moc.tenemirp@c2u41mi) on Monday July 10, @12:22AM EDT (User Info) http://www.primenet.com/~im14u2c/
Adding a second RIMM channel also reduces the likelihood you'll take a "bank hit" in the RDRAM, and it allows the chipset to prefetch on the second channel if it thinks there's going to be a subsequent access over there when it sees an access on the first channel. Of course, CPU and chipset designers have never been all that good at ESP. And, as on-chip caches grow larger, the traffic at the CPU boundary looks increasingly random because all of the redundant and predictable traffic has been absorbed/filtered by the cache, making ESP all the more important. (And yes, I mean Extra Sensory Perception, as in the chipset needs to psychically know where the CPU's going next.) The other comments about making the channel wider rather than deeper to reduce latency also apply.

--Joe--
Wanna program the Intellivision? Get an Intellicart!
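A toy sketch in C of that interleaving idea: the bit positions here (32-byte lines, one channel-select bit, 16 banks per channel) are made-up assumptions for illustration, not any real chipset's mapping. The point is just that when the channel is chosen by a low-order address bit, consecutive lines alternate channels, and two unrelated accesses contend for the same channel-and-bank only about half as often as they would on a single channel.

```c
#include <stdio.h>
#include <stdint.h>

#define LINE_BITS    5   /* 32-byte transfer granule (assumed) */
#define CHANNEL_BITS 1   /* two RIMM channels */
#define BANK_BITS    4   /* 16 banks per channel (assumed) */

/* Which channel a physical address maps to (toy mapping). */
static unsigned channel_of(uint32_t paddr)
{
    return (paddr >> LINE_BITS) & ((1u << CHANNEL_BITS) - 1);
}

/* Which bank within that channel the address maps to (toy mapping). */
static unsigned bank_of(uint32_t paddr)
{
    return (paddr >> (LINE_BITS + CHANNEL_BITS)) & ((1u << BANK_BITS) - 1);
}

int main(void)
{
    /* Two addresses one line apart land on different channels, so the
     * chipset can overlap (or prefetch) the second access. */
    uint32_t a = 0x01234560u;
    uint32_t b = a + 0x20u;   /* next 32-byte line */

    printf("0x%08x -> channel %u, bank %u\n", (unsigned)a, channel_of(a), bank_of(a));
    printf("0x%08x -> channel %u, bank %u\n", (unsigned)b, channel_of(b), bank_of(b));
    return 0;
}
```

Real memory controllers fold more address bits into the channel and bank selection than this, but the halved-collision argument works the same way.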
> I think Intel needs to come clean as to why exactly it's still pushing Rambus memory so hard.
Other than the fact that they own Rambus? How about profits from licensing Rambus technology? How about using patents to put the squeeze on SDRAM manufacturers? How about designing future CPUs and chipsets so that Rambus is the ONLY memory that is supported?
We love to bash M$ because we are visibly affected by their evilness on a daily basis, but I think most people would be surprised by the kind of nasty stuff that Intel gets away with (just ask Intergraph!)
> Maybe I'm just not enough of a hardware junkie, but is a few percentage points' difference that big a deal?
I think the big deal is the fact that RDRAM is supposed to be so much better in terms of performance than SDRAM. The very fact that SDRAM matches it, beats it, or loses by so little makes one wonder why you'd spend the extra $$$ on RDRAM. So no, in terms of performance alone a few percentage points don't matter. But if you look at the overall picture (price, availability, compatibility, APPLICATION), which technology do you really need?
---
"Both players [MPMan and the RIO] were able to withstand a vigorous shaking with no skips whatsoever" --Matt Rosoff, C|net
> Rambus handily outperforms PC133 DIMMs, and is worth the extra expense
I think the benchmarks make you step back and think: do you really need to spend the money on Rambus? Think of it this way: if you were about to invest in a Rambus system just because you thought it was faster than PC133, you might be surprised to find out that for your particular application, SDRAM performs just as well.
So think of it in that respect: it all depends on the application and whether the application warrants the cost. If your specific application won't gain anything from it, why spend the money? On the other hand, you might be able to rest assured that the money is well spent... (though I know most people here won't think of it that way; they'll just look at the numbers, but hey, that's life).
---
"Both players [MPMan and the RIO] were able to withstand a vigorous shaking with no skips whatsoever" --Matt Rosoff, C|net
Intel's philosophy is no different from Microsoft's: Embrace, extend, extinguish. I'm just amazed that your typical Microsoft-bashing /.ers aren't Intel bashers, too, because Intel deserves a big ol' can of whoopass opened right by their corporate asses. Let's examine a little...
First off, Intel has been in the process of developing standards for the PC architecture for some time, as well it should. However, it's been doing so the same way Microsoft has been "contributing" to Internet standards. For example, Intel developed AGP up to 4x, which has proven to be very useful; however, rumours are circulating from reputable sources about an Intel project to create a successor to AGP 4x, and this successor is to be limited to Intel chipsets and chipsets made by select Intel partners--i.e., anyone who annoys Intel will get left behind. Intel developed the PC-100 memory standard--a great service--but then it refused to develop a PC-133 standard or DDR-SDRAM specifications, because of its own interest in RDRAM as a wholesale replacement for all SDRAM.
Many have questioned whether Intel really has that much to gain from Rambus becoming the new standard instead of DDR-SDRAM; after all, contrary to popular belief Intel doesn't own Rambus outright, and its deal with Rambus would only bring compensation in the tens of millions, which isn't much for a company whose revenues are in the billions each year. But what Intel has to gain isn't direct monetary compensation from Rambus; it's *control* over the standards for memory and memory controllers--and the rights to manufacture and license those memory controller technologies. This is exactly what MS did with IE: it didn't directly make a profit by developing a new web browser and bundling it with Windows; it gained market control and the ability to manipulate Internet protocols so that all its products, from IIS to FrontPage to NT Server and the rest, had the advantage of guaranteed interoperability and increased functionality over competing products.
Intel wants to do the same with RDRAM, its new IA64 architecture, and its new forays into the emerging appliance market. Intel will make royalties on all chipsets which support RDRAM. Intel will make direct profits on its IA64 processors, and has probably been hoping to license the ISA to competitors once x86 plateaus. Intel has purchased StrongARM and other embedded/appliance hardware companies, hoping to leverage its market dominance to push its way into every area. And let's not forget that it tried and tried and tried to force its way into the graphics market, but failed there due to too-short product cycles and competitors with much more graphics experience.
It's clear that Intel wants to be the Microsoft of the hardware world. If it leverages enough tech patents on all fronts, it can force use of its products in the same unfair ways Microsoft leveraged itself into every crevice: big OEMs unable to get the best prices on Intel desktop processors unless they agree to use StrongARM in their embedded/appliance products instead of Transmeta or MIPS, or unable to get hold of short-supplied IA64 parts for workstations/servers unless they use the P4 in their desktops; VIA unable to make the most advanced RDRAM chipsets unless it cuts back on DDR or agrees not to pursue QDR; etc. Don't think it won't happen, even with M$ as an example: there are many sneaky, under-the-table ways to hint at such matters without bluntly making demands.
And, since everyone here hates the x86 architecture so much, why the Hell are so many /.ers such big Intel fans? Intel is the company which kept pushing x86 for decades instead of developing something new, improved, and more RISCy, so why so many Intel apologists and AMD naysayers? After all, as good and serviceable as the P6 core was, it didn't deserve to stay in service for 5+ years. AMD may have been a dog back then, but at least it made radical improvements with almost every product cycle; Intel just wasn't trying at all. And look at the disaster which...
Re:Yes, Intel thinks users will remain dumb forever(Score:5, Insightful)by The Man on Monday July 10, @01:34AM EDT (User Info) http://foobazco.org

I fail to see your argument for AMD. I agree with your entire position on Intel, but logically you cannot exempt AMD from your ire. While they are surely less evil than Intel, they are still evil for contributing to the continued existence of x86. And any proportion of evil makes for complete evil.

And to get right down to it, x86 itself is nowhere near as bad as the peecee architecture in general. The product of 20 years of non-design and corner-cutting cheapness, this architecture offers atrocious performance, maintenance nightmares, and outrageous total cost of ownership. So if you really want to make a difference, stop buying peecees altogether. Your case for AMD is weak at best. If Intel is evil, so is AMD.

There are plenty of options out there, and many are surprisingly inexpensive. Quality, high-performance workstations from Sun, SGI, and Decompaq that use neither x86 nor the peecee architecture can be had for less than USD 5000, often less than half of that. You'd better hurry, though, before everyone drops their quality architectures for IA64 and gives Intel the market chokehold it has been lusting after for years.
Fight the power; insist on quality; boycott the peecee!
You've managed to overlook one major problem...(Score:3, Insightful)by knghtbrd (knghtbrd@debian.org.NOSPAM) on Monday July 10, @03:05AM EDT (User Info) http://quakeforge.net/

Granted, the PC architecture sucks. Look at IDE! How many more band-aids are we going to see placed over what is essentially a 16-bit interface designed for the 286? How about "standard SVGA"? The closest there was to a standard was VESA, and everyone reading this should know VESA is useless except when used with DOS real mode - too slow otherwise. Our sound card standard is pretty much the SB Pro, with Windows Sound System for 16-bit audio - and you can't even depend on that! USB is nice, but then again its implementation is also a band-aid. I could go on, but I think I've made your point for you: the PC, as a platform, sucks.

The other part of your argument... AMD is evil because they make wintel-class chips? I think not. AMD would be out of business if they made some little off-brand CPU architecture. With more than 90% of the installed base of desktops and workstations running on the PC architecture, you'd be a fool not to consider making hardware for it! Even SGI has been moving their software to the PC platform, because there's just more of it out there and they know they can't keep up when it comes to price vs. performance.
I don't know what planet you're from if you consider US$5k for a workstation (even a high-end workstation) "surprisingly inexpensive", either. I can build a pretty damned sweet workstation by any standard for US$3.5k, and that's including a monitor better than my current 21" and some very nice (if expensive) input devices. You said it yourself: the PC architecture wasn't planned beyond building something that "works"(?) as cheaply as possible. Until other architectures can deliver as much or more performance at a comparable or lower cost in the mid- to low-end workstation range as well as the high end (and our respective mothers can still play Solitaire and Minesweeper), the PC is what most of us will keep buying. The resurgence of unix and unix-like platforms, especially those developed portably and openly with a focus on ease of use, may as they mature make it easier to throw away the tired PC architecture. That time just ain't here yet. Until then, AMD looks like a mighty promising choice the next time I build a box.
Actually, I never said...(Score:4, Insightful)by Sir_Winston on Monday July 10, @03:37AM EDT (User Info)
> I agree with your entire position on Intel, but logically you cannot exempt ...
Re:Actually, I never said...(Score:4, Informative)by stripes (stripes at eng dot us dot uu dot net) on Monday July 10, @09:58AM EDT (User Info) http://www.eng.us.uu.net/staff/stripes/
> As long as you don't have to code in assembler for it--and few code in assembler these days, anyway--there's nothing wrong with x86 since modern x86 CPUs are really a RISC core with an x86 decoder tacked on, which according to Ars Technica only adds about 1% penalty to the processor's speed.

I don't see how they came up with the 1% number. Here are a few counter-arguments...

The x86 has reached some pretty impressive speeds. 1GHz is shockingly fast. Even 800MHz is quite speedy. But Intel has done this by using extremely long pipelines, some 15-22 pipe stages depending on the operation, and AMD has done the same. A longer pipeline increases the latency of many operations and makes sequential dependencies in code cost more and more, along with branch penalties and load cache misses. IBM has the PowerAS running at 600MHz with a 5-pipe-stage machine (that is fewer pipe stages than AMD uses just to decode instructions!). It smashes the PPro through P-III and the AMD in anything that has lots of poorly predicted branches, like DB code, and it does better on code that does lots of pointer chasing (like linked-list walks). (The PowerAS has a zero-to-one-cycle penalty for mispredicted branches; its prediction method is "always taken" or "never taken", I forget which. The Intel part has a penalty of more like 11 to 20 cycles, with a maximum penalty of 44 or so cycles of work discarded from the ROB. The Intel has a very good branch prediction scheme for predictable branching patterns, but when it gets to code too bad to predict, it sucks big time.)

The P-III and AMD managed to decode 3 instructions per cycle, quite an accomplishment with an irregularly sized instruction set. They have finally gotten to this point; the SuperSPARC in 1992 or 1993ish decoded four instructions per cycle. Three decoders means the best the x86 can do over the long term is to execute three instructions per cycle (because even if they have spare functional units, they will run out of instructions in the reorder buffer if they manage to execute >3 instructions per cycle for long). RISCs have grown a few more decoders in the intervening 8 years. Some of them, at least.

If the x86 is only one percent slower than RISCs, why is the ancient (2-year-old?) Alpha 21264 at a mere 667MHz still turning in better SPEC2000 FP numbers than the "shipping only to select OEMs, and not many units either" 1GHz Intel part? Try to get a STREAM benchmark number in the same ballpark as a real Alpha (not one based on the PC chipsets) with a Xeon. Intel hasn't made a memory system that can compete, and the memory system is half the price of the damn Alphas. RISC may have lost the commercial war to CISC, but there is no need to stomp on its accomplishments. There are really impressive RISC CPUs built for a fraction of the research dollars Intel (and AMD!) spend.

> So, I never said Intel was evil for pushing x86 for so long, I said that it's dumb for people to hate x86 but not fault Intel for not creating a better ISA long ago.

Oh, but they have. The i960 is a different ISA; I never used it, but I'm sure it is quite different from the x86. The i860 was also very different. It had a pretty nice ISA as long as you didn't put it into streaming mode. The VLIW mode was a bit odd to me, but it wasn't a huge deal.
People even used them. Just apparently not enough people used the i860. I dunno what the deal was with the i960. It was extremely popular 5 years ago, but doesn't seem to be now.
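To put rough numbers behind the branch-penalty argument a couple of paragraphs up, here's a back-of-envelope sketch in C. Only the penalty cycles loosely track the figures quoted above (about 1 cycle for the short-pipeline machine, around 15 for the deep one); the branch frequency, misprediction rate, and base IPC are invented purely for illustration.

```c
#include <stdio.h>

/* Effective instructions per cycle once branch-misprediction stalls are
 * folded in: CPI = 1/base_ipc + (branches per instr) * (miss rate) * penalty. */
static double effective_ipc(double base_ipc, double branch_frac,
                            double miss_rate, double penalty_cycles)
{
    double cpi = 1.0 / base_ipc + branch_frac * miss_rate * penalty_cycles;
    return 1.0 / cpi;
}

int main(void)
{
    /* Assume 20% branches and a badly mispredicted 25% miss rate, i.e. the
     * "DB code / pointer chasing" case from the comment (made-up figures). */
    double deep_pipe  = effective_ipc(3.0, 0.20, 0.25, 15.0); /* 3-wide, ~15-cycle penalty */
    double short_pipe = effective_ipc(2.0, 0.20, 0.25, 1.0);  /* narrower, ~1-cycle penalty */

    printf("deep pipeline:  %.2f instructions/cycle\n", deep_pipe);
    printf("short pipeline: %.2f instructions/cycle\n", short_pipe);
    return 0;
}
```

With those toy numbers the short-pipeline machine gets roughly 1.8 instructions per cycle against roughly 0.9 for the deep one, enough to stay ahead even after multiplying by a 600MHz-vs-1GHz clock difference, which is the PowerAS effect described above. On well-predicted code the deep, wide pipeline wins again, which is the flip side the comment also mentions.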