originally posted by: Echo007
High-end LGA2066 processors are going to be very expensive. LGA2066 is pointless if all you do is surf the web and play video games. If only 1% of the user base has 12 cores, what game developer is going to waste time programming the game to take advantage of all the extra cores? If you do video encoding or photo editing professionally, I could see going with LGA2066.
You could build a whole new system, including MOBO, RAM, CPU, GPU and PSU, for the cost of a high-end Intel LGA2066 CPU.
originally posted by: Aazadan
Here's the problem with adding more cores. When software runs, it runs either in serial or in parallel. In serial, each task is executed one after the next; in parallel, multiple threads each compute their own tasks simultaneously. However, not every task can be computed simultaneously: some core logic in just about anything (there are exceptions, such as decryption) must be done in serial. Your resulting runtime with more cores is therefore serial + (parallel / cores).
So, for example, if you have 2 cores and 90% of your code can be made to run in parallel, your software on a dual-core system will complete a task in 10 + (90 / 2) = 55% of the time a single-core machine takes. That's a pretty big boost, but as the number of cores increases, the gain becomes less and less. A 4-core machine in similar circumstances will complete the task in 32.5% of the time: 1 core = 100%, 2 cores = 55%, 4 cores = 32.5%. While that first extra core nearly halved your runtime, it took 2 more cores to get half as much benefit again.
You can scale this concept out, too. A 12-core system with the same 10% serial portion will complete the task in 10 + (90 / 12) = 17.5% of the time. You have to jump from 4 cores to 12 just to halve the runtime again, and from that point it's not even possible to halve it once more: jumping up to 60 cores only gets you to 10 + (90 / 60) = 11.5%. Going from 12 to 60 cores is only about a 1/3 decrease in runtime.
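The scaling described above is Amdahl's law. A minimal sketch in Python (the function name and the 10%-serial example are mine, not from the thread) reproduces the numbers quoted for 2, 4, 12, and 60 cores:

```python
def amdahl_runtime(serial: float, cores: int) -> float:
    """Predicted runtime as a fraction of single-core runtime:
    serial portion runs unchanged, parallel portion divides across cores."""
    return serial + (1.0 - serial) / cores

if __name__ == "__main__":
    # The thread's example: 10% serial, 90% parallelizable.
    for cores in (1, 2, 4, 12, 60):
        pct = amdahl_runtime(0.10, cores) * 100
        print(f"{cores:>2} cores: {pct:.1f}% of single-core runtime")
```

Note the floor: no matter how many cores you add, runtime never drops below the serial fraction (here, 10%), which is why 12 cores to 60 cores buys so little.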
In reality, given how much of a task can typically be parallelized in home/office use, there's little to no benefit in going beyond 4 cores. Xeons are built to be server processors, an area where having many threads lets you scale to more concurrent users. That has business value for certain applications, but it's not the sort of thing you would want to give someone as their desktop PC, even if they were running very demanding software.
But, to answer your question, since I am a game developer: CPUs have basically reached a point where more isn't better. Most of the demanding work has been moved off to GPUs. The only real game application for faster CPUs at this point is that they let you use trig operations like sin and cos a lot more freely (these are expensive operations and under normal circumstances must be used sparingly), which in turn lowers the math constraints on game developers.
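One classic way game developers have economized on trig calls, mentioned here only as an illustration of the "use sparingly" point (the names and table size below are my own): precompute sin into a table and do a cheap nearest-entry lookup instead of calling the real function every frame.

```python
import math

# Hypothetical sketch: trade memory for speed with a precomputed sin table.
# Accuracy is bounded by the table resolution (2*pi / TABLE_SIZE radians).
TABLE_SIZE = 4096
_SIN_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def fast_sin(radians: float) -> float:
    """Approximate sin(radians) by rounding to the nearest table entry."""
    # Map the angle onto [0, TABLE_SIZE) and wrap with modulo.
    index = int(round(radians / (2 * math.pi) * TABLE_SIZE)) % TABLE_SIZE
    return _SIN_TABLE[index]
```

On modern hardware the built-in instruction is often fast enough that the table loses, but the technique shows why "expensive trig" shaped older game code.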
Intel’s answer to AMD’s [chip...] is an 18-core, 36-thread monster microprocessor of its own, tailor-made for elite PC enthusiasts.
The Core i9 Extreme Edition i9-7980XE, what Intel calls the first teraflop desktop PC processor ever, will be priced at (gulp!) $1,999 when it ships later this year. In a slightly lower tier will be the meat of the Core i9 family: Core i9 X-series chips in 16-core, 14-core, 12-core, and 10-core versions, with prices climbing from $999 to $1,699. All of these new Skylake-based parts will offer improvements over their older Broadwell-E counterparts: 15 percent faster in single-threaded apps and 10 percent faster in multithreaded tasks, Intel says.
originally posted by: glend
I don't run a lot of parallel applications these days, and my i7 easily handles virtual machines when I want a bit of Windows compatibility. So it's a bit pointless for me to spend $1200 on a processor that consumes 2-3x the electricity of my i7. Perhaps if I were interested in AI it could come in handy, but I simply don't like Intel's architecture. Intel seems to be playing for specs rather than true delivered performance. I'd be more interested if Intel created smaller CPUs like the ARM and banged together thousands of them on a single layered wafer (it was done years ago but swallowed by the military). Then we could create a HAL 9000 to destroy mankind!
originally posted by: eManym
The problem with gaming and multi-threaded processors is that the threads can't communicate fast enough to be useful in an environment that thrives on the speed of instruction execution.