Gartner's predictions also suggest that AMD won't keep adding cores and performance at this pace.
We’ve now reached 2020 and so the certainty that we will always have sufficiently powerful computing hardware for our expanding needs is beginning to look complacent. Since this has been obvious for decades to those in the business, there’s been lots of research into ingenious ways of packing more computing power into machines, for example using multi-core architectures in which a CPU has two or more separate processing units called “cores” – in the hope of postponing the awful day when the silicon chip finally runs out of road.
Intel is expected to initiate price cuts on its PC processors in the second half of 2020 to defend its market dominance, according to sources at PC makers.
This is extremely dangerous for Intel, as any funding shortage can lead to a lot of problems, and, like GlobalFoundries, Intel could be stuck in them indefinitely.
We will announce more details about our N3 technology at our TSMC North America Technology Symposium on April 29.
3nm is still in development, and it could be the first process where TSMC fails miserably.
Intel dropped the price of some of their most expensive Xeons by 28%, more as you go down the stack. Before you get excited about your server prices in Q1 doing the same after a phone call to your OEM and a re-bid, let's just say this won't happen unless you are one of a select few customers. Which customers? Those who bought the -M and -L large memory SKUs, roughly 2% of sales according to informal numbers.
Intel's price cuts will soon spread wider, and note that Xeon margins are exorbitant (the same is true of AMD server CPUs).
Some of Intel's problems
Increasing die size
Intel, run by greedy investors, did not want to invest proper money in 14nm equipment, hoping for a simple 10nm transition.
So, from a peak of 136 million CPUs per quarter, output went down.
TSMC 7nm process lead time remains at about six months, with tight supply expected to last through 2020, or even worsen in the second half of the year if speculation about Intel possibly seeking to place orders for the Taiwanese foundry's advanced processes materializes, according to industry sources.
Someone is clearly lying to the public, as all these AMD fanfares are not backed by proper, steady manufacturing.
Pure-play foundry Taiwan Semiconductor Manufacturing Company (TSMC) saw its revenues grow by a slight 3.7% in 2019. TSMC reported consolidated revenues hit a record high of NT$1.07 trillion (US$35.7 billion) in 2019. Revenues for December 2019 came to NT$103.31 billion, down 4.2% on month but up 15% from a year earlier. TSMC saw its fourth-quarter revenues amount to NT$317.24 billion, up about 8% sequentially and hitting a record high for the second consecutive quarter.
Things are not bad, but it seems the crypto-mining fade hit TSMC badly (they had been the biggest manufacturer of crypto shite that produced heat in exchange for dollars). A rise of only 3.7% while AMD is making huge profits is not normal.
We will see the consequences of all this in 1-2 years, as a fading smartphone market could be a killer blow for both TSMC and Samsung.
TSMC's 7nm production capacity is fully booked. Relief may only come when Apple migrates to 5nm in 2H'2020. TSMC's 7nm capacity will increase to 140,000 wpm in 2H'2020. By order proportion, the ranking of customers using 7nm will be re-shuffled. AMD's orders are set to double, replacing Apple as the largest customer [for 7nm]. Huawei's HiSilicon and Qualcomm are similar by order proportion.
TSMC's 7nm production capacity continues to rise. The industry expects monthly capacity to reach 110,000 wafers in 1H'2020. The top 5 customers by order proportion are: Apple, HiSilicon, Qualcomm, AMD, and Mediatek. Except for Mediatek, order share is split at roughly 20% each, depending on seasonality. Mediatek's share is around 13%.
However, with 7nm capacity rising to 140,000 wpm in 2H'2020, and the largest customer Apple migrating to 5nm with the A14 processor, customer ranking by 7nm orders will be re-shuffled. In one fell swoop, AMD booked capacity for 30,000 wafers, accounting for 21% of total capacity. HiSilicon and Qualcomm's orders are similar, at 17-18%. Mediatek's share also rose to 14%.
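A quick back-of-the-envelope check of the quoted 2H'2020 allocation, using only the figures above (a sketch; the 17.5% midpoint for HiSilicon and Qualcomm is my own assumption from the quoted 17-18% range):

```python
# Sanity-check the quoted 2H'2020 TSMC 7nm allocation figures.
total_wpm = 140_000          # TSMC 7nm capacity, wafers per month, 2H'2020
amd_wafers = 30_000          # wafers AMD reportedly booked

amd_share = amd_wafers / total_wpm
print(f"AMD share: {amd_share:.1%}")   # ~21.4%, matching the quoted 21%

# Implied wafer counts for the other customers' quoted shares
# (17.5% is my assumed midpoint of the quoted 17-18% range)
for name, share in [("HiSilicon", 0.175), ("Qualcomm", 0.175), ("Mediatek", 0.14)]:
    print(f"{name}: ~{share * total_wpm:,.0f} wpm")
```

The numbers are internally consistent: 30,000 of 140,000 wafers is indeed about 21%.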
Samsung's 7nm production capacity is now roughly 150,000 wpm. It is also actively increasing 7nm capacity. According to industry rumors, Samsung plans to quadruple capacity in 2020. Nvidia and Qualcomm's next-generation products may be produced using Samsung's 7nm EUV process.
I have big doubts about Samsung's capacity where anything besides their own LSI products is concerned.
Nvidia already has huge issues because its largest shareholders ordered them to use Samsung's inferior and not-yet-ready process. It did not work out as expected.
In 2020 we will see attempts to turn TSMC and Samsung into an Intel-AMD-style duopoly. Intel will lag more and more, and we can even expect an announcement in 2021 that its 7nm is being shut down.
TSMC will be sole foundry partner of Apple iPhone chips for 2020. Volume production using 5nm EUV process will kick off by the end of second-quarter 2020. As much as two thirds of TSMC's available 5nm process capacity will be utilized to make the next-generation iPhone chips.
From an interview with AMD
AMD’s current highest TDP processor on the market today is at 280W, such as the EPYC 7H12 built for HPC. Is there an upper limit to TDP expansion? We see Intel moving into the market with higher TDP chips, at 350-400W TDPs.
Collaboration with our OEM partners isn’t just about maximizing the power available for the CPU. You also work across both CPU and GPU, and that’s what we’re doing with Cray/HPE for the Frontier supercomputer. That’s really indicative of the kind of system optimization across hardware and the system and software stacks that we can do with OEM partners to really push up the roof in the HPC market.
We announced that 7H12 part in our continued roll out of Rome, and you saw ATOS use it and we were really happy for them to see their placement catch the Top500 listing, as it was a race against time, but it just shows what you can do with great execution. But you know, when you think about that type of integrated water-cooled solution, it tells you that you have to grow these close partnerships, as we're doing at AMD with our OEM customers, and there's a lot more performance that can be had going forward. There's a lot more room at the top!
AMD will hit the TDP wall already in 2020. And the issue is not just total TDP but the simple fact that smaller dies cannot transfer enough heat through their small surface area.
Samsung is unable to do complex 7nm EUV in volume
During GTC 2019 in Suzhou, China, NVIDIA's CEO responded to the press that the majority of orders for their next-generation 7nm GPU will be handled by TSMC, with Samsung playing a smaller role than previously reported.
It is interesting, as the chips had been designed with Samsung's process in mind. As I understand it, the problems originated in early 2019 and have already prevented Nvidia from launching new cards as planned.
The Korean giant reportedly has lowered foundry quotes in order to win clients from the Taiwan-based competitor, who sees tight capacity at its advanced nodes because of strong demand.
It is no accident that the US put Samsung's leader in jail; a big war over the last silicon processes is coming.
TSMC is now king of the mountain, but is risking a lot: any failure at 5nm-3nm and all these fanfares can turn into extreme problems. Just look at Intel.
Intel 10nm issues, again
It looks like Intel is not just delaying a single server project, their entire roadmap has just slid significantly.
https://www.semiaccurate.com/2019/12/12/intel-significantly-delays-its-entire-server-roadmap/
One issue is: why don't we have 10 GHz+ silicon chips, and what are the constraints these days? They've been saying for a long time that silicon is going to tap out in efficiency due to leakage, but then 7nm came along, then 5nm, and the group above is getting gains at 2.5nm. I know that in science they deal with absolutes, but in engineering they work around those absolutes, which is why it's dangerous to take average scientists and engineers as absolutely right. I thought of ways to deal with the superposition effects producing leakage way back, but I'm not in that field. So, what tricks are they using now?
Now, the highest silicon speed in research is hundreds of GHz, maybe; I can't remember. Surely they should be able to do at least 10 GHz with that? One thing I know from past associates is that silicon has a natural speed, and overclocking it produces a lot more leakage. At 180nm I knew people doing close to 0.06-0.07 mW for 600-700 MHz with program memory. What the big CPU companies have done is keep the clocking low in order to deliver more within a certain power envelope. Because they are horribly complex circuits, they use a lot more energy per unit of speed. I'm hoping the technology settles down to a natural speed of 5 GHz+ for a simple complex circuit. 5 GHz would be a useful speed for many things; it would be enough for many realtime graphics and video manipulation applications to produce good realtime realism in a mass processor array. While parallelism gets you so far, a certain minimum speed is needed to do it properly. I am not confident it is enough, and I previously did related calculations for my own work. However, there is a certain technology where much slower clocks work.
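The "power envelope" trade-off above follows from the standard dynamic-power relation P = a * C * V^2 * f: pushing the clock up usually also requires raising the voltage, so power grows much faster than linearly with frequency. A minimal sketch with purely illustrative numbers (the capacitance, voltages, and activity factor are my assumptions, not measured values):

```python
# Rough dynamic-power model: P_dyn = alpha * C * V^2 * f
def dynamic_power(c_farads: float, v_volts: float, f_hz: float,
                  alpha: float = 0.2) -> float:
    """Switching power of a CMOS circuit (alpha = activity factor)."""
    return alpha * c_farads * v_volts**2 * f_hz

base = dynamic_power(1e-9, 0.9, 3e9)    # ~3 GHz at an assumed 0.9 V
turbo = dynamic_power(1e-9, 1.2, 5e9)   # ~5 GHz at an assumed 1.2 V
print(f"{turbo / base:.2f}x power for {5/3:.2f}x clock")
```

With these assumed figures, a 1.67x clock increase costs roughly 3x the switching power, which is why vendors keep clocks modest and spend the transistor budget on cores instead.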
So, I'm interested in how they are dealing with silicon's efficiency constraints, as they are interested in doing optical, quantum automata, and probably even magnetic (if such a thing exists for silicon) on silicon processes. What they can do on silicon sets the size limits for these next-gen technologies on silicon. I suppose the speeds they clock low-power memory at might be a good indication of where things are.
Exactly my point, Vitaliy. They pick up niche things that will give them an exclusive lead. Cost effectiveness plays a major role in which processes get supported in the commercial realm, but certain players pay high amounts for performance in commercially unviable technology. So, if I had a 1nm technology that was ten times more costly than normal tech to make, they would line up to pay 100x the end price.
Anyway, that FPGA had three times the potential over 5 GHz, and it got signed exclusively to the military. At the time, I think Intel had sold a 5 GHz CPU, and someone had overclocked something way past that using liquid nitrogen or something; it wasn't conventional chip technology. On GHz claims: individual circuit elements may be doing a lot more than the entire synchronised clocked circuit. The guy I mentioned had some part of his circuit running at 5 GHz way back, but the chip ran at less than 500 MHz, or something like that. The FPGA people were looking at a 5 GHz FPGA in the era of 500 MHz FPGAs, but that is likely to yield a complex circuit many times lower in speed rather than a Pentium 4+. Still sad, though.
There are lots of advanced technologies out there that didn't gain much traction or further funding because of cost effectiveness, or because they were niche, like sapphire chips: litter left over from the struggle forwards. One example is a form of 100QE OLED-like technology that the military funded, likely for lighting purposes to extend energy reserves; that has now been used commercially in consumer products for years. A number of things get initial development and funding from the US government before being set free. As for "scams": OLED eventually panned out over decades, and it is still a bit underwhelming, with certain deficiencies. I suspect that in chips, certain things will turn out in ways neither we nor the industry expected. I mean, the way to beat silicon is to go non-silicon, and they know how to do that, but people don't want to abandon their expensive equipment and momentum yet; they keep pushing silicon designs uphill instead of investing that money into the alternatives. Every memory stick in your computer should also have processing functions, but that is just another option not receiving the attention it should. Present and past experience is no absolute guide to what can happen in a progressive, dynamic situation: with the right funding emphasis, something better pops out.

Above I mentioned an accurate atomic-level way of making a 2.5nm circuit using existing lines. Being able to make something precisely is key. So, are such techniques going to lead to sub-nanometer circuits one day? Sure, it might be horribly expensive, but if it is the most viable game in town, it can thrive. The question is whether the same techniques can be applied to making optical components too; with atomic precision they could be. Interesting times ahead if they figure that part out. This becomes metamaterial optics engineering at that scale. Very exciting potential.
Optical tales? I'm telling optical fact, kept from the commercial market. In my own design proposals, I can see big deficiencies in using optics for conventional designs too, so I put that aside, but there are ways and means to get desirable results in some areas. People able to make Star Trek-like free-floating holograms might have the technology to solve some of the issues and do a good job of it. I've got certain proposals for things to solve certain issues, but it is a complex mess to miniaturize them down to chip level; a solution just occurred to me, though. Anyway, even just a larger-scale optical processor has advantages, due to the speed of light and the lack of leakage, if you can figure out how to do it properly. But you don't need a conventional processor to do the animations they were doing.
Well, you don't need to know any secrets to understand that the military plays only a very minor role in major areas. They can be good at certain small niche things.
Btw, tales of optical processors appear even in old literature. The issue with them is that you can't make them dense and you can't make proper reflections along complex long paths, and that's before we get to the lack of optical equivalents for many of the elements present in any modern CPU.
This is similar to the present advanced scam of quantum CPUs, where huge resources are used to repeat the same thing without any real progress. But it generates a lot of news and "products".
The issue is that all modern tech is a very social thing, requiring international cooperation and a huge number of people, usually hundreds of thousands. The same goes for resource usage.
The 5 GHz+ FPGA tech went the same way Intel's 7 GHz claims went. People at the time had a very poor understanding of how smaller transistors work, and of the many small effects that created the frequency wall.
I had an FPGA technology that looked promising for 5 GHz+, around 14 years ago, disappear into military-only use. This has happened a few more times to me too: I see a technology and think "thank you very much, I can use that", only to find out that it gets taken by the military. In patenting, there are procedures where, if something has certain military uses, it is automatically diverted to the government and defence companies. Metal Storm was famous for slipping past this patent procedure, which makes me wonder how they did that, as I'm pretty sure they were doing cutting-edge weapons systems before that, and one of my teachers, an ex-colonel I think, used to work there previously.
Existing stabilised, ruggedised, radiation-hardened parts are all reasons to keep using standard tech, but a Z80 doesn't have much place being the brains of an aircraft-sized autonomous drone.
And here we have Samsung with their little fantasies
Military orders play almost zero role in financing the latest processes, as the military uses much older ones; for most products, very old ones.
Around 80% of the financing comes from buyers of mid-range to premium smartphones, and all the rest from datacenters and supercomputers.
This looks like the one, sorry:
http://news.mit.edu/2018/smallest-3-d-transistor-1207
It just reuses a vapour-deposition process. The 2.5nm FinFETs have 60% more performance, with a higher on/off contrast ratio. They talk about atomic-level precision; it builds the transistors layer by layer. I wonder how it would go with new transistor structures.
The thing with low production rates at 7nm and below can be compensated for by charging an extreme premium for the small quantities to whoever wants to pay. So cutting-edge applications, and the military, might pay big dollars to soak those up. But as a general mass product, it is difficult to justify extreme prices for an extra 20% or so benefit. However, if you had a 1-nanometer circuit working at full speed today, in even a successful 1,000-unit production run, you could virtually auction them off to the highest bidder; $10k+ apiece from the military, I would imagine.
I couldn't find a direct link to an article on it in my bookmarks, but here are some interesting articles that I found:
https://news.ycombinator.com/item?id=14486437
https://en.wikichip.org/wiki/3_nm_lithography_process
https://newscenter.lbl.gov/2016/10/06/smallest-transistor-1-nm-gate/
Yes, I was just about to post about a sub-3nm technology that used modified existing lines, so it could be done more cheaply. I can't remember the name, but it might be the one you just posted.
I vaguely remember some technology smaller still.
But what about leakage with these things? Leakage was supposed to be the ultimate restriction before size makes it unviable?
3D vertical transistors and circuit stacking have been used for many years in the industry. The problem with 3D stacking, particularly of parallel circuits, is thermal build-up. Magnetic-based processing can produce up to a million times less heat, so it is better for stacking. You should look at doing a thread on magnetic FPGAs? Magnetic quantum cellular automata were an early leading research technology.
New Intel dreams
This one was definitely made under very hard drugs.