posted on Mar, 26 2018 @ 09:48 PM
originally posted by: Maxatoria
After a good think, I would say that we could be hitting a bottleneck in chip performance, and that could be in the actual chip(s), their memory, or even the
circuitry between the chips. Normally this would be a set of chips overclocked to the point where errors start to creep in due to poor heat
dissipation, especially as it's a premium product already running at the max; running it somewhere in Texas may make it fail a lot more easily than somewhere in
Canada.
We've got another couple of decades before we hit it, but it's going to happen. There's a minimum size for transistors: it's physically impossible for
them to become smaller than an atom. We're also approaching a point where speed-of-light delays affect the chips.
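To put a rough number on the speed-of-light point (my own illustration, not from the post above): in one clock cycle at a few GHz, a signal can only cover a few centimetres even at the full speed of light, and real on-chip signals are slower than that.

```python
# Distance light travels in one clock cycle at common CPU frequencies.
# Illustrative sketch; real on-chip propagation is well below c.
C = 299_792_458  # speed of light in a vacuum, m/s

def distance_per_cycle_mm(freq_ghz):
    """Distance a signal moving at c could cover in one cycle, in mm."""
    cycle_seconds = 1.0 / (freq_ghz * 1e9)
    return C * cycle_seconds * 1000  # metres -> millimetres

for f in (1.0, 3.0, 5.0):
    print(f"{f} GHz: ~{distance_per_cycle_mm(f):.0f} mm per cycle")
```

At 5 GHz that's only about 60 mm per cycle, which is why signal-path length across a chip and between chips starts to matter.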
Taking a multicore approach can still get us more performance, and that's where the industry has been heading, because the chips themselves can't
become much faster. But threading has its own performance limitations: not all calculations can be run in parallel; some have to be run in
serial.
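The serial-work limit on threading is usually stated as Amdahl's law, which is a quick sketch to compute (the 95% figure below is just an example I picked, not a number from the post):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of a workload
    can be split across cores; the serial remainder sets the ceiling."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

# Even with 95% of the work parallelizable, the serial 5% caps the
# speedup at 20x no matter how many cores you throw at it.
for n in (2, 8, 64, 1_000_000):
    print(f"{n} cores: {amdahl_speedup(0.95, n):.2f}x")
```

That flattening curve is why adding cores stops paying off long before the core count looks impressive on paper.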
At this point, the speed of CPU calculations for everyday use doesn't have much to do with the hardware; it has more to do with how the software is
written.
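A small illustration of that point (my own example, assuming nothing beyond the Python standard library): the same question answered two ways, where the choice of data structure matters far more than the CPU it runs on.

```python
def contains_slow(items, targets):
    """O(n*m): rescans the whole list for every lookup."""
    return [t in items for t in targets]

def contains_fast(items, targets):
    """O(n+m): one pass to build a hash set, then O(1) lookups."""
    lookup = set(items)
    return [t in lookup for t in targets]

# Same answers, wildly different cost as the inputs grow.
items = list(range(10_000))
targets = [5, 9_999, -1]
print(contains_slow(items, targets))  # [True, True, False]
print(contains_fast(items, targets))  # [True, True, False]
```

On a large enough input, no hardware upgrade closes the gap between those two; the rewrite does.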
On that note, there is a massive, and I do mean massive, lack of global talent capable of optimizing runtimes. Unfortunately, it's not an easy
thing to teach, because doing it properly requires an extremely deep knowledge of how the chipset operates, as well as selling your soul for superhuman
coding prowess.