

In other words, you could attribute the failure of CPU prices to drop to market stagnation or a lack of competition, but the failure of CPUs to improve in performance at the same price point is the result of device priorities shifting towards GPU computing. Inevitably, Intel had to throw in an iGPU just to justify the cost of the die. So although CPU resources, in terms of transistor count, have grown only modestly, and die area has therefore shrunk with each new node, which should have meant large price reductions (the sketch below makes the arithmetic concrete), Intel instead managed to cut its production costs on newer nodes while convincing the market to keep paying the same old price. The problem is that although Intel et al. recognized that the demand for many-core CPUs was not there, they were unwilling to give up the profits of that market.

Moving non-rendering tasks such as physics and AI towards the GPU will only continue this trend in the coming years, as it has in the past. CPU resources have remained modest, with improvements primarily targeting features and efficiency over raw performance, while GPU compute power has been pushing the limits of modern nodes, mainly because the GPU is what games are written to utilize in full. The proportion of gaming's compute demand has been shifting towards the GPU for many years now. The reason no one stepped up to the plate and offered 8 or 16 consumer cores was that developers were not inclined to use them.
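To put the die-area point in concrete terms, here is a rough back-of-envelope sketch using the standard dies-per-wafer approximation. All of the numbers (die areas, wafer cost) are hypothetical placeholders, not Intel's actual figures; the point is only the direction of the effect: fewer square millimetres per chip means more chips per wafer and a lower cost per die.

import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    # First-order approximation: gross dies from usable wafer area,
    # minus the partial dies lost around the wafer's edge.
    radius = wafer_diameter_mm / 2
    gross = math.pi * radius ** 2 / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

# Hypothetical placeholder numbers, for illustration only.
WAFER_MM = 300           # standard wafer diameter
OLD_DIE_MM2 = 160.0      # assumed die area on the older node
NEW_DIE_MM2 = 110.0      # assumed area after a shrink, same transistor budget
WAFER_COST_USD = 5000.0  # assumed cost of one processed wafer

for label, area in (("old node", OLD_DIE_MM2), ("new node", NEW_DIE_MM2)):
    n = dies_per_wafer(WAFER_MM, area)
    print(f"{label}: {n} dies/wafer, ~${WAFER_COST_USD / n:.2f} per die")

With these made-up inputs the shrink takes you from roughly 389 dies per wafer to roughly 579, cutting the per-die cost by about a third. In practice newer nodes carry higher wafer costs and start at lower yields, which eats into the saving, but the direction of the effect is what the argument above rests on.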

People would have been buying up old Xeons if CPU processing power were really in demand. This 'stagnation' has gone on for so long that just about any mediocre architecture with more cores would have beaten Intel's offerings, had the demand truly been there.
