The staff at VideoCardz.com have been very busy of late, posting various articles on rumored NVIDIA graphics cards expected to be revealed this month. Today in particular we are seeing more (and more) information and imagery concerning what now seems assured to be RTX 2080 branding, and, somewhat surprisingly, the rumor is that the RTX 2080 Ti will launch simultaneously (with a reported 4352 CUDA cores, no less).
Reported images of MSI GAMING X TRIO variants of RTX 2080/2080 Ti (via VideoCardz)
From the reported product images one thing in particular stands out, as the memory for each card appears unchanged from the current GTX 1080 and 1080 Ti cards, at 8GB and 11GB, respectively (though a move to GDDR6 from GDDR5X has also been rumored/reported).
Even (reported) PCB images are online, with this TU104-400-A1 quality sample pictured on Chiphell via VideoCardz.com:
The TU104-400-A1 pictured is presumed to be the RTX 2080 GPU (Chiphell via VideoCardz)
Other product images from AIB partners (PALIT and Gigabyte) were recently posted over at VideoCardz.com if you care to take a look, and as we near a likely announcement it looks like the (reported) leaks will keep on coming.
“…GTX 1080 and 1080 Ti cards, at 8GB and 11TB, respectively (though a move to GDDR6 from GDDR5X has also been rumored/reported).”
11TB! impressive 😉
Yea I want me one of those 11TB cards…lol
Ok, obvious error: should have been TiB.
wow, that is orders of magnitude more than a GB if i understand it correctly, but it is confusing, and i think even more than a TB
hope it can use all that
anyway, thanks for the correction
one mebibyte 1 MiB = 2^20 Byte = 1,048,576 Byte
one megabyte 1 MB = 10^6 Byte = 1,000,000 Byte
one gibibyte 1 GiB = 2^30 Byte = 1,073,741,824 Byte
one gigabyte 1 GB = 10^9 Byte = 1,000,000,000 Byte
one tebibyte 1 TiB = 2^40 Byte = 1,099,511,627,776 Byte
one terabyte 1 TB = 10^12 Byte = 1,000,000,000,000 Byte
Oh my God! Do you really think a graphics card has TBs of (V)RAM?
Only supercomputers have that much, because it takes a lot of chips. Even the latest SSDs have 0.5–1 TB per chip, and DDR is only going to 8 Gb. So 8 GB is 8 chips, and 8 TB would be 8,000!!!
On units: all electronics (memories, CPUs) are binary, and sizes (values) are in powers of 2. So 1k is 2^10, or 1024; 1M is 2^20, or 1,048,576; etc. Capital B means BYTE while b means BIT -> 1 B = 8 b.
In the decimal (human) world, powers of 10 are used. So 1k is 10^3, or 1000. When someone says 1 km he means 1000 m.
Sometimes a distinction is made between binary and decimal, and for the binary case GiB is used. As written above, in electronics we always (mostly) deal with powers of 2, so GB = GiB.
The JEDEC standard uses GB.
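To make the arithmetic above concrete, here is a minimal Python sketch of the two prefix systems and the chip-count math (the 8 Gb-per-DRAM-chip density is the figure assumed in the comment above, not a spec):

```python
# Binary (power-of-2) vs decimal (power-of-10) prefixes, as tabulated above.
KIB, MIB, GIB, TIB = 2**10, 2**20, 2**30, 2**40
KB,  MB,  GB,  TB  = 10**3, 10**6, 10**9, 10**12

print(f"1 MiB = {MIB:,} bytes vs 1 MB = {MB:,} bytes")
print(f"1 GiB = {GIB:,} bytes vs 1 GB = {GB:,} bytes")
print(f"1 TiB = {TIB:,} bytes vs 1 TB = {TB:,} bytes")

# Capital B = byte, lowercase b = bit, so 1 B = 8 b.
chip_bytes = (8 * GB) // 8       # an 8 Gb DRAM chip stores 1 GB
print(f"8 GB card   -> {8 * GB // chip_bytes} chips")     # 8
print(f"8 TB 'card' -> {8 * TB // chip_bytes:,} chips")   # 8,000
```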
I meant to type "GB" when I wrote the post, but for some reason I typed "TB". The "TiB" comment was a joke. A very clever and extremely funny joke, damn it!
War is Peace
Freedom is Slavery
Ignorance is Strength
GB = GiB
Fake news. JEDEC recognizes SI units; it only decided not to deprecate the common usage in order to accommodate old documentation and reduce confusion with older documents.
Curious if silicon transistors will even shrink enough in the future to allow for 11TB of RAM on a $1000 video card.
I’m not upgrading until I can get at least 16GB, so 11TB will definitely do it.
This looks very fake. Nvidia dropped the DVI ports on their GPUs with the 1080Ti release. I’m hoping for more VRAM on the high-end GPU. 12GB would be a good start.
But what if it’s not a picture of a reference board?
Where do you see a DVI port in these pics? Display outputs are:
3x DisplayPort 1.4 (one elevated)
1x HDMI 2.0b
1x VirtualLink USB-C
Nvidia’s Turing is just competing with Pascal for the high-end ##80 Ti flagship crown, while AMD’s Navi, in the 2019 time frame, will target the 2080/2070 SKUs’ high/mid-range competition from Nvidia.
AMD is mostly focused on professional-market CPUs and GPUs for the high-end compute/AI market, plus the console/semi-custom and desktop/laptop APU markets. No great revenue streams can be had in flagship gaming currently without excessive investment from AMD, and mainstream gaming is again what Navi will be targeting, because that’s where the most unit-sales opportunities for discrete GPUs are. Mainstream represents the largest segment of discrete GPU sales, along with the discrete mobile market that AMD already targets, and integrated graphics at the low end with its integrated Vega PC/laptop graphics.
Nvidia, being mostly a GPU company, will have to protect its current discrete gaming GPU cash cow until it can get its professional GPU revenues producing more than its current gaming GPU revenues. Nvidia’s non-gaming Tegra/automotive/other sales, along with its professional GPU sales, are showing the most revenue growth quarter to quarter and year on year.
AMD’s Lisa Su has already stated many times over that its Epyc/Radeon Pro compute/AI sales get the most focus, and from a business standpoint that’s the best option for AMD to increase its quarterly gross-margin figures. AMD’s Vega will still have more chances to reach a higher unit count of devices via Raven Ridge Zen-with-Vega-graphics and semi-custom console Zen/Vega APU design wins. Navi will be mainstream, and Vega 20 will be professional-market focused, with the always-available option of taking any non-performant Vega 20 dies that do not make the binning scores for professional GPU SKUs and working up a Vega 20 gaming variant.
AMD has in the past worked up dual-GPU-die, single-PCIe-card variants to make a stab at some form of flagship competition, if for no reason other than to make the best of non-performant Vega 20 dies that cannot be used for professional GPU sales. So: mainstream for Navi, and maybe some Vega 20 based products once the reject bins are full enough for AMD to market Vega 20 for some gaming usage. Same plan, different year, except that this time around AMD’s Epyc sales will grow larger in revenues than any maker’s GPU-only sales. AMD appears to be letting its semi-custom clients take the R&D and market risks for some Zen/Vega console market revenues, and that’s a good idea for AMD. AMD has even partnered with Intel in the short term to let Intel pay AMD’s way into some NUC/laptop revenue-generating business.
AMD is not going to be hurting at all for lack of flagship GPU sales, and even if the mining market has cooled off, any remaining miners will go with Vega’s higher shader-core counts before they will even consider Nvidia’s Pascal or newer SKUs. The only reason the miners even chose Pascal GPUs for mining was that Vega 56/64, and even Polaris, SKUs were already sold out, with no option for the miners but to go with Nvidia. And AMD did not get burned like Nvidia just did by overproducing Pascal; AMD learned its lesson last time around to never trust mining ever again.
AMD has its CPU business now for Epyc and its Radeon professional GPU compute/AI offerings, and even its consumer CPU sales are helping, Zen/Vega Raven Ridge APUs included.
really appreciate your insight and i have little doubt that it is you, lisa, or one of your coworkers
your insights have been right on for a few years now
don’t feed that troll-o-text
No, not an AMD employee, just no; more of an “AMD can prosper without a flagship GPU for a good long while on those Epyc sales and Radeon Pro compute/AI professional-market products” person. Really, just look at the pro GPU card markups: that certainly pays for the HBM2, with plenty remaining for great revenues and the quarterly gross margins that the Wall Street quants so love.
Look at gamers: they do so whine about paying for things, while the professional markets pony up the dosh and write it off on their taxes. AMD is a CPU company and a GPU company, so Nvidia can and does depend on GPUs more than AMD has to. AMD has its semi-custom clients funding those console Zen/Vega APU R&D costs, so that’s a no-risk way for AMD to get Vega graphics on more and more devices, in addition to its desktop/laptop Zen/Vega consumer APU offerings. Even that AMD/Intel bastard child in the skull box, and a few laptops, is earning AMD more GPU revenue dosh, even if Intel will eventually get some graphics of its own later on.
Gamers do have Nvidia, and they will mostly choose to suck it up and pay the green for the green. AMD is not going to compete in the flagship GPU market; Navi is a mainstream SKU on the consumer side.
But do not worry, gamers with that flagship ePEEN issue: those Vega 20 reject-die bins will more than likely build up enough for a quick whip-up of some dual-GPU-die, single-PCIe-card ego boost, letting AMD at least recoup some revenue on the lower-binned Vega 20 dies that fail their tests and are held back in the sweathog classroom.
And Intel (got Raja) wants to be in the discrete GPU market too, starting in 2020(?), so maybe some Blue in with the Green and Red. AMD is not going to out-invest Nvidia currently, so money gets the flagship crown.
There are millions of folks gaming on consoles anyway, so there is fun to be had. Only the most sociopathic need the fastest GPU at all costs, while others game just fine on what they can afford. Hell, AMD could do just fine with mainstream and APU-oriented graphics and save the best dies for the market that will gladly pay a proper markup for those professional GPU compute/AI/pro-graphics SKUs. Let Nvidia worry itself sick about GPUs alone, because AMD has an x86 license and GPUs, and in 2020 so will Intel for some discrete offerings, along with the x86 64-bit license that Intel licenses from AMD.
Green has no x86 for any sort of CPU revenues at all.
PC gamers will have to pony up or join the console peasant masses, or they can be happy with Vega at a lower cost, because Nvidia is going to charge big time for those Turings while it can. Vega 64/56 does pretty well against its intended competition, the GTX 1080/1070, and Vega is all gamers will get from AMD until Navi is ready next year.
I’m hoping there will be discrete mobile Navi offerings, because discrete mobile Vega is nowhere to be found.
Great work, Lisa Su, and let the flagship freaks eat Nvidia cake; that market does not justify any of AMD’s attention until there are sufficient Vega 20 reject dies in stock (full reject-die bins) for AMD to care about a small-unit-sales market like flagship GPUs.
I too have thought long and hard about these things, and I agree 100% about 100% of your points, except maybe … 🙂
You may be pessimistic about the dual-GPU card. As you say, they are experienced with it by now (and CrossFire), and they do talk rather a lot about it re Navi: the dual GPU will just appear as a single GPU to the system. They have clearly learned much about teaming processors from Zen’s introduction.
Behind the tech lies the equally important biz plan you touch on.
AMD competes in two duopolies; Intel & Nvidia only compete in one each.
Historically, AMD seems to have traded profitably in value graphics, but struggled in CPUs (yep, very cautious corporate DNA; I doubt mining hurt much at all).
Their resurgence in CPUs is a huge leg up for their GPUs imo.
Once, an Nvidia GPU was like a reflex purchase for an Intel buyer.
Now, the attractions of a single ecosystem will make them look very hard at an all-AMD option.
I very much agree the APU is underrated as a driver for AMD’s ecosystem.
There are massive-volume markets perfect for the APU; nothing comes close. Embedded, including consoles, AV, …
Sub-dGPU 4C/8T laptops & entry desktops.
Yet these countless millions are little different to a TR Vega workstation or server. Both are cutting edge: Zen, Vega, Fabric.
I predict another force will come into play. Many console gamers will graduate to PCs via the APU, in effect creating extra TAM for desktops.
A sleeper also: the APU is unique for now as the only Zen product based on a single CCX, so it doesn’t have the inherent latency of the usual dual-CCX Zen products.
At this level of 4 cores, they are at no latency disadvantage to monolithic chips from Intel. They can draw level with, or even improve on, post-Meltdown Intel 4-core gaming favorites in much-prized latency.
Really, a lot of the latency between the CCX units over the Infinity Fabric comes from the fact that in Zen/Zen+ the IF is tied to the memory clock; that could be solved by placing the IF in its own clock domain on the CPU and running the IF at a higher clock speed than the memory clock. There is still going to be some latency added because of all the cross-clock-domain buffering that has to occur to cross any clock domain, and that also requires extra transistors to implement. So any IF clock-domain data transfers are going to have some latency added, and the IF clock speeds would have to be high enough to offset any latency added by the cross-clock-domain buffering circuits.
AMD could go with an on-Zeppelin-die ring bus arrangement and reduce its intra-die CCX-to-CCX latency for gaming workloads. The Windows OS and gaming software need to be better tuned for NUMA CPU arrangements like the ones that occur with multiple Zen/Zeppelin dies on one MCM, and across sockets on 2P motherboards. Larger L3 caches will help even if AMD sticks with Zen’s/Zen+’s basic IF design that shares the memory clock domain with the IF clock domain.
AMD could go with a faster IF for intra-die CCX cache-coherency traffic on the same Zeppelin die, and then an off-die/cross-die fabric that is tied to the memory clock. That way, any on-die cache-coherency traffic would be faster, and that might help with gaming latency across 8 or more cores/16 threads on a single Zen/Zeppelin die.
Really, the low-hanging fruit for AMD at 7nm, with minimal changes to the Zen 2 IF design over the Zen 1/Zen+ designs, would be larger L3 caches, up to the point of diminishing returns. Larger L3 caches will reduce DRAM accesses for any workloads with large loops and other general software/gaming code that is so very latency sensitive. The next step would be a larger CCX core count of 6 cores per CCX, at the cost of more complexity in the intra-CCX core fabric. So maybe a ring bus may be better done there, on a single Zeppelin die, to simplify the intra-Zeppelin-die fabric complexity.
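As a back-of-the-envelope sketch of why that memory-clock tie-in matters, here is some illustrative Python; the 32-bytes-per-IF-clock-per-link figure is an assumption for illustration, not an official AMD spec:

```python
# Rough Zen/Zen+ Infinity Fabric arithmetic, where IF clock = MEMCLK.
# bytes_per_cycle = 32 is an assumed per-link width, for illustration only.
def if_bandwidth_gb_s(ddr_rate_mt_s, bytes_per_cycle=32):
    memclk_hz = (ddr_rate_mt_s / 2) * 1e6   # DDR: two transfers per clock
    return memclk_hz * bytes_per_cycle / 1e9

for ddr in (2133, 2933, 3200):
    print(f"DDR4-{ddr}: MEMCLK {ddr // 2} MHz -> "
          f"~{if_bandwidth_gb_s(ddr):.1f} GB/s per link")
# Faster DRAM raises the IF clock, and so cuts CCX-to-CCX latency, as a side effect.
```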
Nvidia is always going to be able to fund more gaming-focused GPU die tape-outs with its revenue streams, allowing Nvidia to afford 5 or more specifically designed custom GPU die tape-outs, compared to AMD’s single Vega 10 base-die tape-out (at the time of Vega’s introduction), which was only engineered with enough ROPs to compete with Nvidia’s GP104-based GTX 1080 FE.
Gamers are obviously mistaking that one Vega 10 base-die tape-out for Vega’s GPU micro-architecture, and are not understanding the difference between what is part of the shader/compute-core micro-architecture and the actual GPU tape-out, which can be made up of many shader cores and varying amounts of ROPs/TMUs, etc. Just look at Nvidia’s Pascal, with 5 different base-die variants that all have different amounts/ratios of shaders to TMUs to ROPs, etc.
Nvidia now has its “Ray Tracing Cores”, but ray tracing is a compute-oriented task anyway, so maybe AMD could call its ACE units/shader compute cores “Ray Tracing and more cores” and be pretty much correct in doing so, until or unless Nvidia’s whitepapers are received by the independent hardware reviewers (professional/academic computer journals) and they say otherwise. Nvidia is going to have to show, in some very deep-dive whitepapers, the specific ray-tracing core design and how it provides any advantage over AMD’s compute ACE/shader cores, which have been in use over several iterations of GCN generational designs.
Nvidia’s only advantage in Turing may be the speed of its trained Tensor cores (matrix math units) running some quick-and-dirty 8-bit AI convolutional (or other) image processing that can produce degrained/denoised results within the millisecond time frame needed for high-FPS, low-frame-variance image processing in gaming workloads. Anything lower than that 10 giga-rays calculation speed (consumer gaming cards are not getting 10 giga-rays), at milliseconds per frame, is going to result in some very grainy ray-traced output, so that AI speed is going to be needed to maintain ray-traced quality, which is still going to be mixed with the traditional raster-operation results in what is known as hybrid ray tracing. [Also note that Nvidia’s consumer GPUs will get fewer Tensor cores than those SIGGRAPH demos, so that is going to have to be tested independently as well.]
Nvidia, the professional/academic folks await the arrival of your PhD-level folks’ detailed whitepapers, with plenty of the stochastic analysis that is intrinsic to the ray-tracing process on PC/GPU processors via that simulated/hybrid ray-tracing process, which also occurs in the natural world with light interactions. Even the chip-fab folks have to worry loads about electromagnetic-ray stochastic effects, much to their chagrin!
Any non-gaming ray tracing that is not FPS/millisecond dependent can already be done fine on any GPU, or CPU, over a longer time period that is not millisecond-constrained, like animation or professional graphics workloads that are not FPS-dependent like gaming. So Nvidia is going to have to train that convolutional (or other) image processing on its server farms of hundreds of Teslas in order to winnow down the best quick-and-dirty AI algorithm to run on a gaming GPU’s limited number of Tensor cores.
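To the point that ray tracing is “just” a compute task: the core operation is ordinary arithmetic that any shader/compute core can execute. A minimal, purely illustrative ray-sphere intersection in plain Python (not anyone’s actual GPU kernel) looks like this:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance t to the nearest hit, or None if the ray misses.
    Solves |o + t*d - c|^2 = r^2, a plain quadratic -- the kind of
    math any compute/shader core runs; no dedicated units required."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None                         # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # nearest intersection
    return t if t > 0 else None

# One ray fired down the z-axis at a unit sphere 5 units away:
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # -> 4.0
```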
Interesting points.
I can’t agree about the ring bus and 4+ core CCXs.
The ring bus begins to flag at 6 cores/12T anyway.
AMD’s 4-core CCX direct HB interconnect using 3 links is perfection, so why change? It’s CF that can improve dramatically.
(I recall reading IF also uses 3 links in and out of its central cache, and this is a/the key bottleneck for the APU; graphics generates a lot of traffic, as you can imagine. Point being, perhaps the 3-link thing is built into the broader logic of the Zen/IF architecture.)
To be fair, we have only seen versions 1.0 and 1.1 of a ~beta concept, and the underrated improvements of Zen+ have already been great, imo.
To this inexpert eye, putting IF on the CPU defeats the whole purpose of coherency between processors. It has to be external, on the die or socket module.
re: “Really, the low-hanging fruit for AMD at 7nm, with minimal changes to the Zen 2 IF design over the Zen 1/Zen+ designs, would be larger L3 caches, up to the point of diminishing returns. Larger L3 caches will reduce DRAM accesses for any workloads with large loops and other general software/gaming code that is so very latency sensitive.”
I would like that to be the case, but the huge L3 cache on the 1500X didn’t help Ryzen much afaik, and they could have kept both blocks of Ryzen L3 on the APU die(?), but instead halved it for the APU, which you would think could make good use of the extra L3.
Perceived warts and all, Zen is clearly highly competitive as-is in a still-skeptical market (even crushingly so in some segments).
The APU can be crushing also (it arguably is now; folks are just waiting on decent APU laptops) in TWO ways: 7nm and/or a little dedicated GPU cache. Both are ~certainties.
This can happen with or without better IF, but that’s sure to happen too imo. 7nm will surely improve IF for starters.
re:
“Any non-gaming ray tracing that is not FPS/millisecond dependent can already be done fine on any GPU, or CPU, over a longer time period that is not millisecond-constrained, like animation or professional graphics workloads that are not FPS-dependent like gaming.”
Interesting, and yes, it gels with my inexpert overview of their respective prospects.
Chasing gaming and chasing GPU compute are in many cases opposites, as you say. Rendering & learning, e.g., are not as time sensitive, and really come down to perf/$, an AMD strength.
I am very skeptical of Nvidia’s solution to multi-GPU (gcn?), which relies on a proprietary RISC server add-in card & display ports as an interconnect afaik. It smells bad IMO.
The advantage of AMD making both processors concerned is much underrated imo. A champion team beats a team of champions.
They don’t have to have the best of each, just the best net result from better coordination of their sibling processors, and a no-hassle single ecosystem.
Sounds like a whole lot of excuses to me. The Turing architecture will filter down to mainstream (GTX 2060), which will utterly destroy AMD sales. So what is the usual excuse we’ll hear from AMD apologists? “Wait… just wait.” Yeah, we waited for Polaris, Vega, etc. and they were all shit.
Not really “utterly destroy” AMD’s sales, even for GPUs, because the professional compute/AI market is Vega 10’s, and soon Vega 20’s, target market, with consumer/gaming Vega sales just a lesser-markup market for AMD anyway. Vega 56/64 sales are still going to occur, as the prices of consumer Vega will drop with less demand. Many Vega gamers on a budget will maybe move to acquiring a second Vega 56, or 64, if the price falls enough, and that’s plenty of extra Vega ACE units and their associated nCU-based shader cores to do some ray-tracing and AI sorts of denoising workloads as well.
Nvidia’s Turing will have some advantage with its quick-and-dirty Tensor/AI-core accelerated denoising speeds, but the price will be too high for gamers with limited budgets. Many of those who have already gone with Vega and FreeSync at a savings will, now that Vega GPUs are no longer price-inflated by mining demand, more than likely go with adding a second Vega 56 or 64 and game at 4K that way, at least until Turing product prices come down.
Even Nvidia’s Pascal gamers may go with a second Pascal GPU, for owners of any GTX GPU above the 1060 that supports dual-GPU gaming, and get 4K/VR gaming performance that way. Maybe the VR folks can begin pushing dual-GPU for VR more, now that Pascal is being replaced by Turing and Pascal will be priced to move, what with all the excess Pascal inventory that Nvidia was left holding after the mining craze went south for a second time.
AMD got burned, but learned, by mining the first time around, and gamers made out with some damn low AMD GPU prices for dual-GPU gaming! So it’s Nvidia’s turn this time around to suffer, giving Pascal gamers more reason to go dual Pascal than to jump on the Turing bandwagon just yet, until the price comes down.
JHH over at Nvidia will gladly continue selling Pascal SKUs above the GTX 1060 for dual-GPU gaming ASAP, before the yearly reports come in and that excess Pascal inventory has to be written down on the balance sheets.
Nvidia’s GPUs cannot game without a little help from either AMD’s or Intel’s CPUs, and AMD’s Zen will make AMD more revenue than Nvidia’s GPUs make for Nvidia. AMD is going up in the revenue and server market-share departments, with no GPU sales really necessary for that, even though AMD will still be selling Vega for professional compute/AI and consumer gaming! It’s about time for many to get that second Vega GPU at a bargain price and wait for either Turing’s price to fall or Navi to arrive. Dual-Vega GPU gaming will get even more affordable, ditto for dual Pascal.
That Vega “shit” had the miners bidding the Vega 64 price up above the GTX 1080 Ti’s and into the Titan range for a good long while. AMD has amortized its Vega R&D costs and can now better afford a price decline. Nvidia going with GDDR6, even on its pro SKUs, is going to take some of the demand pricing pressure off of HBM2 as well.
Oh, the retailers are going to be MSRP+ pricing Turing for a good long while anyway!
Yeah, that TU104A text looks like it was photoshopped on. I would imagine the GPU being called something like GA104, not TU104, anyway (A for Alan, since T was already used for Tesla).
RA, probably, if they are going with RTX branding.
https://www.tomshardware.com/news/geforce-rtx-2080-price-specs,37635.html
Fake? Or not?
As predicted, they are going to release a GPU with less memory than the next next-gen consoles.
NVIDIA is so far ahead of AMD now that all we hear from AMD shills is “The great Lisa Su will save us with 7 nm Navi”. LOL not likely! AMD has all but given up competing in the GPU market and if they’re smart, they’ll keep their money focused on Ryzen and sell off RTG IP to Chinese companies.
No, the RTG IP is great for the datacenter and the AI market, and ray tracing was originally done on CPUs; any GPU with OpenCL or CUDA libraries can do ray-tracing and AI workloads. Tensor cores are just matrix math units, and Google uses its own tensor cores. Ray tracing, being a compute workload, can be done on AMD’s nCU/ACE compute shaders, and that is how it has been done for a good long time.
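For what “matrix math unit” means in practice: a Volta-style Tensor core’s primitive is a small fused matrix multiply-accumulate, D = A×B + C (4×4 tiles, FP16 inputs, FP32 accumulate). A rough NumPy sketch of that single operation, purely illustrative:

```python
import numpy as np

# A Tensor core's primitive op: D = A @ B + C on small tiles
# (Volta: 4x4 FP16 inputs, accumulated in FP32).
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float32)

D = A.astype(np.float32) @ B.astype(np.float32) + C  # FP32 accumulate
print(D)
# AI denoiser convolutions reduce to many such tile products, which is
# why these units speed up denoising-style workloads in particular.
```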
Really, AMD does not have to give a rat’s a$$ about flagship gaming and could still make money off of professional GPU compute/AI sales alone.
7nm Navi is only going to cover mainstream GPUs, as that can justify the investment from AMD; not flagship gaming GPU sales, which do not produce much, if any, revenue for AMD.
AMD really is doing the proper business thing in avoiding investment in a GPU market segment such as flagship gaming, where the returns and unit-sales volumes are just not there to justify a gaming-only focus on flagship GPUs.
AMD has given up on any consumer flagship GPU market, and AMD’s Vega graphics will have a larger installed base in integrated graphics on Raven Ridge desktop/mobile Zen/Vega APUs. AMD’s real focus for the high-end GPU market is the compute/AI GPU accelerator/professional graphics/visualization market, not any consumer-only gaming market. Look at Vega 20 on 7nm: that’s a professional-market targeted product where the markups are very high.
7nm Navi is for consumer mainstream gaming, and hopefully discrete mobile gaming also, but flagship GPUs are only worth the investment if one has a GP102-like professional die tape-out that can be binned down for flagship gaming after the development costs of the GPU were already paid for by professional-market GPU sales! Nvidia’s GP102-based GTX 1080 Ti was made from that binned, Quadro-targeted GP102 base-die tape-out, and even Nvidia is not stupid enough to attempt to finance a flagship GPU via consumer-only market sales, as Nvidia would lose money doing so; that GP102-based GTX 1080 Ti is proof positive of that fact.
GP104 was and is currently among Nvidia’s gaming-focused base-die tape-outs, along with GP106/GP108, for the consumer gaming markets. GP102 is a professional-targeted product, along with GP100, and those SKUs run up to the $10,000 range; those professional-market markups are what paid for the R&D costs. The GTX 1080 Ti is based on a binned GP102 die.
AMD could do the same for a flagship Vega 20 once the reject bins begin to fill up with enough Vega 20 dies that cannot be sold for professional usage. Really, for flagship gaming, that whole market is just there to make use of lower-binned pro GPU dies instead of throwing them in the garbage. Feed those flagship gaming pigs the reject-die slop and earn at least some money back, to cut the losses on defective pro GPU dies.
Nvidia is not far ahead with those ray-tracing cores, as AMD can do ray tracing in hardware on its ACE units/Vega nCU shader cores. Nvidia still cannot do real-time ray tracing even on its most powerful Volta/Tesla SKUs; it’s a limited number of rays that get sampled and mixed in with the normal raster-operation results on those upcoming Turing SKUs. Real-time ray tracing cannot be done on any maker’s single GPU card.
Nvidia is not ahead, as Imagination Technologies was, and is, ahead of everybody with on-GPU ray-tracing IP. AMD has had compute cores on its GCN GPUs for years, and ray tracing is a CPU-oriented compute workload that has been done for years on GPUs via CUDA/OpenCL.
Gamers and their little chump change are meaningless to AMD, and even to Nvidia to a lesser degree. AMD really cannot lose sales from a flagship gaming market that it has not been a part of for some years now. It’s not worth the investment for AMD until there are enough Vega 20 reject dies for AMD to maybe make use of, to get some financial return off of any reject Vega 20 dies.
Gamers need to get over themselves for ever thinking that any flagship GPU is intentionally made for gamers; just look at Nvidia’s GP102 Quadro reject dies, and what became the GTX 1080 Ti.
AMD is going to be making more revenue from Epyc sales than Nvidia will be making from gaming GPU sales, and AMD is still making mainstream GPU revenue also, along with a rather large integrated graphics market that Nvidia cannot even touch without an x86 license. Who the hell needs that flagship gaming GPU albatross tied to their business neck?
Flagship gaming is reject-pro-die cost recovery, and that’s all that market segment is worth.
Gamers will get flagship GPUs only when the numbers of rejected professional dies build up, because that’s just smart loss recovery on Nvidia’s and AMD’s part. You’ll get the binned professional dies, or you will get no flagship GPU at all, unless you are willing to pony up the proper amount of dosh to fully pay for the massive R&D costs. Nvidia should really charge above $1000 for any flagship GPU SKU, and AMD, well, those Epyc sales will soon dwarf any sort of GPU-only revenue in short order.
Sell off RTG to some Chinese company? Are you insane? RTG makes the Radeon Pro WX 9100/Instinct MI25 compute/AI SKUs that sell for thousands. AMD should tell the whining flagship GPU fanboys to take a hike! Vega 10 was designed mostly for compute, but the miners sure helped AMD sell any Vega 10 based Vega 56/64 dies that did not make the grade to become WX 9100s or Radeon Instinct MI25s.
Nobody reads this.