AMD and Intel processor names

posted on Feb, 12 2010 @ 10:17 AM
This is about disinfo... not tech. And here is my quandary.
I want to mention that I realize video card performance is more important with today's generally "fast enough" processors.

Why wouldn't AMD and Intel include the core frequency in the current processor offerings?

Old Intel stuff after the 286 was called out by frequency... thus performance from chip to chip was easy to compare. (For example, a 486 SX 25 was slower than a 486 DX 66.)

Old AMD stuff was called out by perceived performance, but was at least relative to actual performance: a 2500+ was faster than a 2000+.

For example, here are some current Intel processor names and local prices:
Pentium Dual Core E5300 . $69.99
Core i3-530 ............. $99.99
Core 2 Quad Q8200 ....... $129.99
Core i5-750 ............. $189.99
Core i7-850 ............. $229.99
Core i7-920 ............. $229.99

AMD's naming is better, but still cryptic.
I have to be familiar with the performance of current processors, so I keep up with the actual performance characteristics; I'm not addressing those issues here.

Ask some guy with a smallblock Chevy what he's running, and you WON'T get, "I got a badass Lemon Mark 5."

Why have they made the actual chip performance less transparent?
Is this just to slip in the lower performance chip without the average consumer knowing what they are really buying?

I will say this problem costs me time explaining the virtues of one chip over the other to the uneducated customer.

I wish they would stick with the number of cores, processor frequency, FSB frequency, and cache size.

Why do they do this to me?



posted on Feb, 12 2010 @ 10:58 AM
I feel your pain. It's like they intentionally try to confuse people with flashy names and numbers straight out of a cheesy '50s sci-fi movie. I'm still figuring out what the top end is for my motherboard, something AMD dual core (here's hoping a 6200 goes in this sucker). Until I see actual speeds being touted for processor performance, and not all the hot air hype, I may never go multi-core.



posted on Feb, 12 2010 @ 11:16 AM
What's that law? Moore's Law... yep, that one.

Well, maybe processing power has hit the limits of current tech, so now to sell a processor they give it a funky name? It's six of one, half a dozen of the other: give yours a better name than the other guy's and you sell your chip.



posted on Feb, 12 2010 @ 02:48 PM
reply to post by SLaPPiE
 


Well, there is more to a chip than just frequency these days. If it were just frequency, then an old P4 would be a better chip than the new i5s or i7s. With the Intel chips, it still boils down to the larger number generally indicating a better chip.

How would you choose to name the chips? What numbers do you feel are the most important? The entire market is pretty convoluted these days. It all depends upon what you prioritize.

In the end, I agree that it is confusing, but it isn't some sort of direct attempt to mislead you. It is just a by-product of how processors have changed and how much more they can get out of lower frequencies and lower energy envelopes.



posted on Feb, 12 2010 @ 04:59 PM
reply to post by SLaPPiE
 


Unfortunately, to truly understand the disinfo coming from AMD and Intel, you must understand some of the technical problems.

Here's why: chips aren't getting any more clock cycles. Above 4 GHz the chips start to catch on fire. No, that's not a joke. They literally start to catch on fire. So, if they kept advertising the frequency number, all the boxes would have the same frequency, 4 GHz. You can't stand out like that.

Also, clock speed is a very bad predictor of performance. What's a hertz? It's a clock cycle (CPU tick) per second. 4 GHz is 4 billion ticks a second. But a tick is not an actual program instruction.

A program instruction may take 4 cycles to run, or 100, or 400. Also, sometimes you have to waste craploads of cycles waiting for the data to come in from RAM. So, there's no way to predict how many cycles are needed.
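To put toy numbers on that, here's a back-of-the-envelope sketch in C. The two chips and their cycles-per-instruction figures are completely made up, just to show the arithmetic:

```c
#include <stdio.h>

int main(void) {
    /* Made-up chips: A runs a hot, fast clock but averages 4 cycles
       per instruction; B is slower-clocked but retires 2 instructions
       per cycle (a CPI of 0.5). */
    double ghz_a = 4.0, cpi_a = 4.0;
    double ghz_b = 1.0, cpi_b = 0.5;

    /* instructions per second = (cycles per second) / (cycles per instruction) */
    printf("Chip A: %.1f billion instructions/sec\n", ghz_a / cpi_a); /* 1.0 */
    printf("Chip B: %.1f billion instructions/sec\n", ghz_b / cpi_b); /* 2.0 */
    return 0;
}
```

The "slower" 1 GHz chip does twice the work per second, which is the whole point: the GHz number alone tells you almost nothing.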

Another problem is pipelining and branch prediction. With pipelining you can send more than one instruction to the CPU at a time. So, say you have 10 instructions that can be pipelined. Should be 10 times faster, right? Well, turns out it ain't. The reason is that the 2nd instruction might need the result from the 1st instruction before it can finish. So, you may still end up waiting just as long. But the number one reason is the next 9 instructions may not even have been loaded from RAM yet.
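If you're curious, here's a rough C toy (my own example, nothing official) that shows the dependency effect. Compile with something like -O1 so the loops aren't optimized away; exact timings will vary a lot by compiler and CPU:

```c
#include <stdio.h>
#include <time.h>

enum { N = 1 << 28 };   /* same number of additions in both loops */

int main(int argc, char **argv) {
    (void)argv;
    long a = argc, b = argc, c = argc, d = argc, s = argc;

    clock_t t0 = clock();
    for (long i = 0; i < N; i += 4) {  /* four independent chains: the CPU */
        a += i; b += i;                /* can overlap these additions      */
        c += i; d += i;
    }
    clock_t t1 = clock();

    for (long i = 0; i < N; i++)       /* one dependent chain: each add    */
        s += i;                        /* must wait for the previous one   */
    clock_t t2 = clock();

    printf("independent: %ld ticks, dependent: %ld ticks (%ld %ld)\n",
           (long)(t1 - t0), (long)(t2 - t1), a + b + c + d, s);
    return 0;
}
```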

Branch prediction is when the program has two or more paths it can take. The CPU will try to guess which one the program is going to take while it's waiting on something, and it will go ahead and run that code. However, if it guessed wrong, it has to go back and do it again. So there's no way to know ahead of time how many cycles it will take.
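The classic way to see branch prediction for yourself is to run the same filter over random data and then over sorted data. A sketch in C; the actual timings depend entirely on your CPU:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 1000000
#define PASSES 100

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void) {
    static int data[N];
    for (int i = 0; i < N; i++) data[i] = rand() % 256;

    long long sum = 0;

    clock_t t0 = clock();
    for (int p = 0; p < PASSES; p++)
        for (int i = 0; i < N; i++)
            if (data[i] >= 128) sum += data[i];   /* ~50/50 branch, hard to predict */
    clock_t t1 = clock();

    qsort(data, N, sizeof data[0], cmp_int);      /* same data, now in order */

    clock_t t2 = clock();
    for (int p = 0; p < PASSES; p++)
        for (int i = 0; i < N; i++)
            if (data[i] >= 128) sum += data[i];   /* same branch, now predictable */
    clock_t t3 = clock();

    printf("random: %ld ticks, sorted: %ld ticks (sum %lld)\n",
           (long)(t1 - t0), (long)(t3 - t2), sum);
    return 0;
}
```

Typically the sorted pass runs noticeably faster even though it does exactly the same work, because the CPU's guesses stop being wrong.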

So, it all basically comes down to whether the computer has to wait on something or not. Things like this make it very hard to predict how many clock cycles it will take to process a set of instructions.

So the real disinfo was making people believe that the GHz number on the computer actually meant something to begin with. It never did. A 1 GHz computer might be faster than a 5 GHz computer depending on how it's built inside and what the chip was built for. One chip may take fewer clock cycles than another for the same instruction.

For example, RISC chips do simple instructions really fast, in only a few cycles, where an x86 may take many more cycles to do the same work. Same frequency, but the RISC chip is much, much faster. So we should switch to RISC, right? Well, we already did. Most x86 chips are just RISC chips that pretend to be x86 chips, BTW.

So, how do we make computers faster then? Well, let's look at mine, an AMD X2 5000+. Instead of pumping the chip up to 5 GHz and melting it, it's basically two 2.6 GHz processors sharing a socket. You lose about 200 MHz doing that, so AMD rated it as a 5000+.

The problem is that doesn't tell us anything either. Old programs aren't designed to run on more than one CPU, so they don't get any speedup; the other cores just sit there doing nothing.

Also, only certain special parts of a program can be run on more than one core. Other parts can't finish until they have the results from some other part of the program. When that happens they have to stop and wait.

So here's the trick. Adding more and more cores will only make certain parts faster, while the parts that can't be run in parallel will run at the same speed from here on out, until we can make chips go faster than 4 or 5 GHz without catching on fire. So, every time you double the cores you get less of a speed boost than you did the time before.
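This diminishing return actually has a name, Amdahl's law. Here's a tiny C sketch of it; the 75% parallel fraction is just an assumption picked for illustration:

```c
#include <stdio.h>

/* Amdahl's law: if a fraction p of the work can run in parallel,
   the best speedup on n cores is 1 / ((1 - p) + p / n). */
static double speedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    double p = 0.75; /* assumed: 75% of the program is parallelizable */
    for (int n = 1; n <= 32; n *= 2)
        printf("%2d cores -> %.2fx faster\n", n, speedup(p, n));
    /* Prints 1.00x, 1.60x, 2.29x, 2.91x, 3.37x, 3.66x: each doubling
       of cores buys less than the one before, and it can never pass
       1 / (1 - p) = 4x no matter how many cores you add. */
    return 0;
}
```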

So, sometimes my chip can work like a 5 GHz chip, but sometimes it can't, because one core has to wait on something. When that happens it's only working at about 2.6 GHz or less. The trade-off, though, is that my computer is not on fire.

However, the next disinfo you'll see from the chip companies is the core war. You'll see computers with 8 cores, then 16 cores, then 32 cores, and they'll just keep going, but the computers won't be getting twice as fast. They'll only be getting 5% or 10% faster, because only small portions of the programs we run can be run at the same time.

But here's the biggest problem of all. RAM is already about 400 to 500 times slower than the CPUs we have. That means even if you could double the speed of the CPU it wouldn't help; that would just mean the RAM is now 800 times slower than the CPU. In other words, the CPU spends most of its time waiting on data already! If we double the speed of the CPU, it'll just be waiting faster.

This is why cache is so important. If the instructions the CPU is going to run are already in the cache, it can just run them. An instruction that may only take 1 clock cycle to run from cache may have to wait 400 clock cycles while the data is loaded from RAM. This is called a cache miss, and it's why writing programs so the most-used parts stay in the cache is so important.
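You can watch cache misses happen with a toy like this one (a common demo, not a proper benchmark; results vary by machine):

```c
#include <stdio.h>
#include <time.h>

#define R 1024
#define C 1024

static int m[R][C];   /* 4 MB, bigger than most L1/L2 caches */

int main(void) {
    long long sum = 0;

    clock_t t0 = clock();
    for (int i = 0; i < R; i++)        /* walk row by row: sequential memory, */
        for (int j = 0; j < C; j++)    /* so most reads hit the cache         */
            sum += m[i][j];
    clock_t t1 = clock();

    for (int j = 0; j < C; j++)        /* walk column by column: each read is */
        for (int i = 0; i < R; i++)    /* 4 KB past the last, so many misses  */
            sum += m[i][j];
    clock_t t2 = clock();

    printf("rows: %ld ticks, columns: %ld ticks (sum %lld)\n",
           (long)(t1 - t0), (long)(t2 - t1), sum);
    return 0;
}
```

Both loops read the exact same 4 MB, but the row walk usually wins by a wide margin because it keeps hitting data already in cache.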

So this is what they mean when they say CPUs are already fast enough. Other bottlenecks such as the RAM, the network, and the hard drive are much more important. This is why solid state drives (SSDs) are getting popular: the faster we can get the data off the drive, the faster we can get it to the CPU so it doesn't have to wait. Modern CPUs can already process the data from RAM, the network, and the HD combined faster than new data can be read in anyway.

So, the disinfo is that AMD and Intel are still trying to sell faster and faster chips when we don't need them. The CPUs we have are already so much faster than everything else in the system that speeding up the CPU only gives single-digit increases now, like 5%.

If you want a faster computer you need more cache and more data bandwidth: a faster hard drive, faster RAM (like triple channel), things like that, to keep the CPU pumped full so it doesn't wait around. But AMD and Intel probably don't want the average consumer to know that it's not really faster CPUs they need.

But they don't advertise GHz as much anymore, because it's stuck. So they advertise cores instead.

But this is why GPUs try to get the fastest video RAM available. First, they have thousands of cores, and then they use the fast RAM to keep those cores as busy as possible all the time. Which is cool, but faster RAM doesn't really help Intel sell more chips.

[edit on 12-2-2010 by tinfoilman]



posted on Feb, 12 2010 @ 05:36 PM
reply to post by tinfoilman
 


Thanks for that post, bud. This stuff usually gets explained so badly I can't make head nor tail of it! But I understood your post.



posted on Feb, 12 2010 @ 08:57 PM

Originally posted by SLaPPiE

Ask some guy with a smallblock Chevy what he's running, and you WON'T get, "I got a badass Lemon Mark 5."


No, that's another rant entirely.
Real men had eight-cylinder engines, rated in cubic inches of displacement, NOT this poofy liters nonsense.

Maybe I'm just feeling inadequate because this is my first small block eight (305).


And now that you mention it, we also have the number of cores to throw into the mix.
My main box at home is a dual-core AMD 5000 Black, overclocked to 3 GHz. Great value at the time; the Black series is unlocked for overclocking. 4 GB RAM, Xubuntu 64-bit.

Good luck in your search.



posted on Feb, 14 2010 @ 12:53 AM
tinfoilman,

I like that post. Some good information in there. So, it seems to be more about the RAM and the hard drive speed. That's good to know. Bottlenecks are the problem.

Troy



posted on Feb, 14 2010 @ 08:22 AM
reply to post by tinfoilman
 


Thanks man!
Nice work!
I'll just print that out and pass it on when folks ask about it (if that's OK with you).

Still, you have to admit, it is very difficult for the newb to select a processor based on much more than cost.

Prime number finding crunches CPUs pretty hard too!

As for actual performance, I was referring to the ability to hold high frame rates in 3D games, rather than how fast Inventor or Blender renders.
This is where I see the video card as a bottleneck, but it's not the only problem, just one we can easily swap out.



posted on Feb, 14 2010 @ 03:20 PM
reply to post by SLaPPiE
 


Well, to everyone in the thread who thanked me: you're all welcome. Just remember, the numbers keep changing as new hardware comes out, but the concept is the same. Since CPU speed is stuck right now, RAM speed will probably catch up or come closer to it. Then we might need faster CPUs.

To SLaPPiE: yeah, crunching numbers where you do a lot of processing on small amounts of data can still bring a computer to its knees. So, a faster processor can give you a big speedup when doing stuff like that.

Calculating pi is a good example, because when you're generating pi you don't really read anything in from RAM; instead you're writing out the results. The CPU doesn't usually have to wait to write things out. It can just write and then keep going while the memory controller takes care of the write.

Most PC users don't crunch a lot of numbers like that, though. When they play games the CPU spends most of its time calculating the physics and the logic of the game. That doesn't take much data from RAM, but it takes a lot of numerical processing. So, this is where the CPU really shines right now, gaming-wise. Multi-core can really help here too, because you have to calculate the physics of many different objects, and each core can work on a different one.

But most people aren't really concerned with frame rate anymore with LCDs. Most video cards can already draw games at 150 to 200 FPS. But there's a problem.

Old CRTs would visibly flicker below about 85 Hz. They drew from top to bottom, and at 60 Hz the pixels on the screen would start to fade out before the scan line could come back and redraw them, which caused a flicker. You had to get them up to about 85 Hz before the scan line was fast enough to redraw the pixels before they noticeably faded.

However, without the flicker, the human eye can't really tell the difference in animation above 60-75 FPS. What people were noticing wasn't slow animation, it was just the flicker. But you want your FPS to match your refresh rate. So, when they pumped the refresh rate on CRTs up to 150 Hz to get rid of the flicker, they wanted the game to match, and you needed really high FPS to do that.

But the animation was already smooth enough. The problem was what we were drawing it on.

But the way LCDs work, they don't have that flicker problem. All the pixels stay on at all times and they don't fade, so there is no flicker. And since there is no flicker, 60 FPS is usually good enough (not always, so professional gamers will still use old CRTs).

So guess what? Most LCD monitors are locked at 60 Hz. That means even if your comp could draw the game at 150 FPS, the LCD can't display frames that fast anyway. So, if the game is displaying the true FPS correctly, you'll see all your games locked at 60 FPS when you play on an LCD.

Unless you have a really good LCD, of course. But most people now, even me, just have crappy cheap LCDs that are locked at 60 Hz anyway. Almost any video card can play any old game at that speed.

So the big thing now is resolution instead. How well can we draw it at 800x600 compared to 1440x900 compared to 1680x1050?

At 800x600 you're only drawing 480,000 pixels, whereas at 1680x1050 you're drawing almost 1.8 million pixels 60 times a second, and if you've got AA on, you have to apply it to each pixel 60 times a second.
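If you want to check that arithmetic, here's a trivial C snippet that does the pixels-per-second math for those resolutions:

```c
#include <stdio.h>

int main(void) {
    int res[][2] = { {800, 600}, {1440, 900}, {1680, 1050} };
    for (int i = 0; i < 3; i++) {
        long px = (long)res[i][0] * res[i][1];   /* pixels per frame */
        printf("%4dx%-4d: %9ld pixels/frame, ~%3ld million pixels/sec at 60 FPS\n",
               res[i][0], res[i][1], px, px * 60 / 1000000);
    }
    return 0;
}
```

That works out to roughly 29, 78, and 106 million pixels a second, before you even count AA or texture reads.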

Also, at higher res you need higher-res textures, because when you stretch a low-res texture out it looks like crap, just like when you zoom in on a picture in Paint. Higher-res textures take up more video RAM, and that's why they're pushing so much RAM on the new video cards now. Like 2 gigs of RAM, lol. But it looks way better that way.

But then the problem is all about data bandwidth again. How are we gonna push over 100 million pixels a second through the RAM chips? Well, it can be done, but that's what video cards are focusing on now: pushing all those pixels around and making room for those highly detailed textures that look oh so pretty.



posted on Feb, 27 2010 @ 11:41 PM
Because GHz only ever matters when comparing processors that have an identical architecture. Otherwise, GHz can be entirely misleading. For example, you don't want people thinking a 3.8 GHz Celeron D is faster than a 2.66 GHz Core i7; the i7 is in reality about ten times as fast, and ten times as expensive. Furthermore, comparing processors by GHz alone hurts business. As an example, a Core i7 920 at 2.66 GHz will, in actual performance tests, murder an AMD Phenom II at 3.4 GHz. Why would Intel want to advertise (irrelevant) GHz numbers that make the general customer think the Phenom II is faster? The whole "megahertz myth" was perpetuated by many factors, including Intel, whose Pentium 4, despite being slow, hot, and expensive, ran at a high clock speed that at face value told the end user it would be fast. It wasn't.

It's much better today. The question is no longer "how many GHz is it?", because GHz is irrelevant. Instead, the question is, "Is it a Core i3, Core i5, Core i7, or Core 2?", the i3 being the entry-level product, the i5 the mid-range product, the i7 the high-end product, and the Core 2 the legacy product.

If you want to compare processors, just look up benchmarks...:
www.tomshardware.com...


I feel your pain. It's like they intentionally try to confuse people with flashy names and numbers straight out of a cheesy '50s sci-fi movie. I'm still figuring out what the top end is for my motherboard, something AMD dual core (here's hoping a 6200 goes in this sucker). Until I see actual speeds being touted for processor performance, and not all the hot air hype, I may never go multi-core.

That's what I call a "self-fulfilling prophecy". There are plenty of sources of information that show massive speed increases from multi-core processors; you just haven't looked. I run a Core 2 Quad QX6850, and some programs bring each of its four cores to over 85% of maximum capacity. If I had an E6850, identical to the QX6850 but with half the cores, I would be getting just above HALF my current performance. Most programs don't have a gain THAT big, but there definitely can be massive gains.

[edit on 28/2/2010 by C0bzz]


