Comments
Once my gear arrives, I'll post results for CPU rendering if anyone's interested (and possibly GPU+CPU). Not sure what's a good test case, though. If anyone wants me to render a scene to compare, I'd be willing to do that. I bought an 1800X. It's sitting here in front of me. But my motherboard won't be available until the 8th.
As to the gaming issue, there are motherboard BIOS issues at the moment that are being corrected (or have already been corrected). And there will need to be a Windows update to fix some thread-scheduling issues that are affecting games. Some people have already reported gaming at 6900K levels or better (the Gigabyte motherboard seems to be the best right now). With a good video card you're unlikely to fall below 60 fps in any given game anyhow. Some games will also get updates to increase performance further.
@Outrider42 ...true, I am not like most, as I am looking primarily at CPU rendering mode in Carrara and RenderMan as well as Iray.
I also am getting sort of burned out on the "GPU race," as I call it. We have just seen the 1080, which only became usable for Iray rendering in October of last year, superseded this month by the 1080Ti, a shorter span than what occurred between the 980 and 980Ti. The 1080Ti also pretty much makes spending an extra $500 on a Titan-X a bit moot just to get an extra GB of VRAM and 8 additional ROPs, as in many categories, like floating-point precision, texture rate, texture mapping units, and shader units, the 1080Ti is superior to the more expensive Titan-X.
It all makes me wonder what's next and how soon will it happen?
There's a test scene somewhere in one of the threads here.
From what I'm reading online from various reviewers who already have Ryzen CPUs to test...
- there are memory speed issues, but this is apparently a bug known to AMD and will likely be fixed with a firmware update. It probably should have been fixed before release, but that's another story.
- it isn't performing well in gaming compared to later-gen i7s because the game code was compiled for Intel CPUs, not Ryzen, since the architecture wasn't known to the developers. This is apparently coming from various game developers. As soon as games are compiled specifically to take advantage of the Ryzen architecture, I suspect we'll see its gaming performance improve.
Anyway, that's what I'm reading and hearing in my online travels. Don't shoot the messenger.
Some of the benchmarks I've seen, and the comments, suggest it's not just AMD vs. Intel but the number of cores - games are written for quad-core/8-thread CPUs or thereabouts, and neither Intel nor AMD chips with higher core counts are shown to their best advantage. At least one set of benchmarks showed an i5 beating both Ryzen and high-core-count Intel i7 chips in some games.
Still, and not surprisingly, how good Ryzen is does seem to depend on what you want to do.
If anybody can find some definitive data on Windows 7 with Ryzen, I'd be curious. I see the threads basically saying "Windows 7 support... just kidding, no Windows 7 support!" and the ones suggesting it would still work fine but wouldn't take advantage of some unknown features, and the ones suggesting it might work but could be unstable.
There was an in-depth analysis of Ryzen's performance, and it looks like a lot of the performance discrepancies are due to games not handling AMD's multithreading properly -- much like when Intel first released their hyper-threading implementation. There's a big performance discrepancy between 4C8T and 8C8T.
Not only that, but the shared L3 cache seems to be impacting *something*, especially since it's shared amongst core modules.
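As a small illustration of why SMT-aware scheduling matters (my own sketch, not from any of the analyses mentioned), here's how a CPU-bound program can size its worker pool by physical cores rather than logical threads, using the third-party psutil package:

```python
# Sketch: size a CPU-bound worker pool by physical cores, not logical threads.
# Uses the third-party psutil package (pip install psutil); this is my own
# illustration of the scheduling point above, nothing Ryzen-specific.
import os
import psutil

logical = os.cpu_count()                    # e.g. 16 on an 8C/16T Ryzen 1800X
physical = psutil.cpu_count(logical=False)  # e.g. 8

# Two SMT siblings share one core's execution resources, so for CPU-bound
# work (like a software renderer) a pool of `physical` workers often wins;
# a scheduler treating all 16 threads as full cores can oversubscribe.
print(f"{logical} logical threads on {physical} physical cores")
workers = physical if physical else logical
print(f"Using {workers} workers for CPU-bound rendering tasks")
```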
The gaming performance is odd. It seems that at higher resolutions, Ryzen is actually doing quite well. It keeps pace easily with Intel's best (and most expensive) chips at 1440p and 4K. The trouble seems to be mostly with 1080p gaming. This, to me, shows that it really is just a matter of proper optimization.
As for Windows 7, I wouldn't go for Ryzen if you insist on staying with Windows 7. From what I have read, Ryzen has been "validated" for Windows 7. So it will probably run... but that is not the same as having actual support. And AMD has basically stated they will only support Windows 10. So while Ryzen might run on Windows 7, it likely will not run very well, as 7 will not have the proper instructions to access what this chip can do. This chip is totally new, and as evidenced by how even the most current Windows is not reading it correctly, there is no chance that it will work on 7 correctly. Here's a link to AMD confirming no Win 7 support: http://www.pcgamer.com/amd-confirms-there-will-be-no-ryzen-drivers-for-windows-7/
Looks like both Intel and AMD have given up on supporting 7 with their new chips.
Well, yes and no. It is early days yet, and I'm certain that as the drama unfolds there will be plenty of users attempting to run Windows 7 on the platform. My advice for those in doubt is to simply wait, watch for developments, and see. As for features that may or may not be supported, I do find it interesting that no one on these "expert" sites is actually identifying what those questionably supported features might be, or why. One would think that those reporting for these so-called tech sites would have the knowledge to comment more specifically on such issues, real or imagined. But they don't. Again, we'll need to wait and see.
There is some encouraging evidence already, however. Despite reports that AMD will not be providing support for Windows 7, I took the liberty of looking up a couple of AM4 motherboards from two of the major manufacturers, and both provided chipset drivers for Win7 64-bit. Odd, wouldn't you say? Moreover, after seeing this, I popped over to the AMD site, looked up one of the AM4 chipsets, and checked for driver availability, and guess what? AMD has drivers for the chipset for Windows 7 as well. So, will Windows 7 be supported under the new architecture? The evidence would seem to suggest so, but the question of what features may or may not be supported remains, as well as the question of how important they might be. Once again, it's early days and we need to wait and see.
...of course this was not the situation before Iray, as 3DL and, at the time, LuxRender were only CPU based. Unless one also played games, there was little need for a more powerful, more expensive GPU card. 2 GB was pretty sufficient to drive dual-display setups and the Daz viewport. CPU cores, CPU clock speed, total memory, and memory channels were the major factors that affected rendering speed. When I built my system several years ago (12 GB tri-channel), it was surprisingly quick as long as I didn't use UE (and even then not too terribly bad, as render times were pretty much on par with, if not a bit less than, what I see from Iray CPU mode). In Carrara, I can get near-photoreal quality in less time on it (though scene setup often takes longer, as I have to rework shaders of non-native Carrara content).
While yes, CPU-based rendering is slower than GPU mode, with enough processing threads and ample quad-channel memory it would be a marked improvement over what I currently have, without having to dump a lot into a component that may or may not be able to contain a scene and its textures depending on the complexity and effects involved. For about $150 less than the cost of the new 1080Ti, I can get 64 GB of ECC DDR3-1600 (4 x 16 GB). If I go to 8 x 8 GB, the price drops to the low/mid 200s, so using a 16 x 8 GB setup, I could get 128 GB for a little more than the price of a standard 1070. The dual 2690s I am looking at retail for $415, so with a 16 x 8 GB memory configuration and the dual-socket, 16-DIMM-slot MB, I am looking at around what my current system (sans displays) cost me to build several years ago. True, I still need the drives, PSU, a moderate-grade GPU (maybe 4 GB) to run the displays, and a case, which would bring the total to around the $2,500 mark. Not bad for a beast of a rendering system that will have little trouble handling the scenes I create.
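For anyone who wants to sanity-check that tally, here's a rough back-of-the-envelope sketch in Python. The Xeon and memory figures are the ones quoted above (reading the $415 as the pair); the motherboard, drives/PSU/case, and display GPU lines are my own hypothetical guesses, not quotes:

```python
# Back-of-the-envelope tally for the dual-Xeon build described above.
# Quoted figures come from the post; everything marked "guess" is my own
# assumption for illustration, not a real 2017 price.
parts = {
    "2x Xeon E5-2690 (quoted, as a pair)":       415,
    "16x 8 GB ECC DDR3-1600 (~$230 per 64 GB)":  2 * 230,
    "dual-socket 16-DIMM motherboard (guess)":   550,
    "drives, PSU, case (guess)":                 650,
    "moderate 4 GB display GPU (guess)":         250,
}

for item, price in parts.items():
    print(f"{item:<44} ${price}")
print(f"{'Total':<44} ${sum(parts.values())}")  # ~$2,325, near the ~$2,500 estimate
```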
Looking at AMD's Vega GPUs: the Vega 10 series will use HBM2, with the first offering having 8 GB (though with the potential for up to 32). While its processor thread count (4,096) is higher than even Nvidia's Quadro flagship, the P6000 (3,840), and its FP32 performance is 12.5 TFLOPS (for all the scientists and engineers out there), the one point that makes all the talk about Vega moot with regard to Iray is that it does not support the CUDA platform.
What AMD does absolutely impacts Nvidia Iray users, because their competition is what pushes Nvidia to release better products at cheaper prices. There is no doubt in my mind the 1080ti would not exist in its current form without AMD's Vega looming large.
And before GPU rendering came along, you had a very brutal war between AMD and Intel over CPU supremacy. So many innovations came out of it: the first multi-core chips, then hyper-threading, and these were refined greatly. Beyond that, the rest of the PC was changing wildly, too. It actually stagnated for a few years, and that was thanks to AMD falling off so hard. But even so, a modern CPU will destroy a CPU built 6 years ago. 2016 and 2017 have been pretty fantastic for PC users.
The first Vega cards will be 8 GB. So as long as AMD stays at 8, Nvidia has no reason to go above that in their regular cards. The Ti might get a bit more, and we see that here with the 1080Ti getting 11 GB. And the Titan might just stretch to 16 GB. It was pretty disappointing to people when Titan Pascal kept the same 12 GB as its predecessor. But we should know why... with no competition, why would they? It's worth noting that AMD released the first 8 GB cards for gaming quite some time before Nvidia did (again, the Titan doesn't count).
The GPU rendering solution was always about being the cheaper option versus building a massive and expensive workstation. You still get what you pay for. But a halfway decent GPU will beat those dual Xeons every single time, up until it runs out of memory. Now the question is this: how often do you break that cap? If every single scene you make does, then sure, I guess you need that. I might break my limits, too, but by balancing that against the renders that do stay under the limit, my overall time is still good. For example, let's just say half your renders exceed the VRAM. For the sake of simplicity, say the GPU can render 10 times faster than the CPU alone (just a ballpark figure; it's often faster than that). A scene that takes 10 hours on CPU would take 1 hour on GPU. So if you make 20 scenes, and they all run that same time, it would take you 200 hours with a CPU alone. But if even half the scenes fit on the GPU, that time drops to 110 hours total, a 90-hour savings. Let's go further. Even if only 25% of those scenes work on GPU, you still save a lot of time: it would be 155 hours total, a 45-hour savings. For perspective, that's a full work week of 9-hour days! To me that is pretty significant. And the GPU only ran 5 times out of 20. Obviously this is highly generalized, but you begin to see why GPU rendering is so desirable. Even IF the GPU isn't getting used that much, it still provides a massive time savings.
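To make that arithmetic concrete, here is a tiny Python sketch of the same model, using the same made-up numbers from the paragraph above (10 hours per CPU render, a 10x GPU speedup):

```python
# Back-of-the-envelope render-time model from the paragraph above.
# Assumptions (made up for illustration, as in the post):
#   - a CPU-only render takes 10 hours per scene
#   - the GPU is 10x faster whenever the scene fits in VRAM
CPU_HOURS_PER_SCENE = 10
GPU_SPEEDUP = 10

def total_hours(num_scenes: int, frac_fits_on_gpu: float) -> float:
    """Total render time when a fraction of the scenes fit on the GPU."""
    gpu_scenes = num_scenes * frac_fits_on_gpu
    cpu_scenes = num_scenes - gpu_scenes
    return (cpu_scenes * CPU_HOURS_PER_SCENE
            + gpu_scenes * CPU_HOURS_PER_SCENE / GPU_SPEEDUP)

for frac in (0.0, 0.25, 0.5, 1.0):
    print(f"{frac:>4.0%} of 20 scenes on GPU -> {total_hours(20, frac):.0f} hours")
# 0% -> 200 hours, 25% -> 155 hours, 50% -> 110 hours, 100% -> 20 hours
```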
The reason why the experts aren't being more specific should be obvious: Ryzen is still very much a mystery. And putting out a driver is not a sign of support for that driver in the future. Support is a two-way street, and Microsoft would need to update 7 for proper Ryzen support as well... and somehow I doubt that is going to happen. As the experts have noted (some have gone into great detail and explanation of how it involves the L2 cache), there ARE issues right now on Windows 10. And these issues will likely need support from Microsoft more than AMD. Of course Windows 10 will get that support, but 7? Not happening. That's why I wouldn't bother with Ryzen for 7. Even if AMD supports it, when they have already stated they would not, you still need MS to do its part. Sure, Ryzen will probably work on 7, but if this issue with the L2 cache exists on 7, it will handicap the chip, and it will never be fixed.
I have a 6-year-old laptop with a few built-in devices that Windows 10 does not support, so I don't think any time is going to be spent on Windows 7 supporting Ryzen, especially when Microsoft has made it clear that DirectX 12 and all the multi-threading enhancements associated with it are Windows 10 only. Microsoft has decided it is better to use their manpower to advance Windows 10 and other MS software faster than their competitors can, and to let the older software be relegated to running on legacy hardware.
I remember decommissioning servers that had run for over 10 years in closed professional environments. LOL, the heatsinks fell off the CPUs and elsewhere; these machines ran so hot for so many years.
[not quoting to avoid taking up a lot of page real estate]
...true that GPU rendering will always be faster than CPU mode, but with two dozen more CPU threads than I currently have and enough physical memory to easily cover the scene load several times over, it would still be a rather noticeable increase in performance from what I am currently experiencing (particularly in Carrara). Oh, I would still include a GPU (most likely a 6 GB 1060) to drive the displays and viewport (in Carrara that would allow me to work in full textured mode), but at most it would be used for render "proofs" of characters and scene elements I create, not full scenes. Also, again, Carrara does not support Iray or GPU rendering (for that matter, neither does Vue Infinite, which I also have), so I find having a boatload of CPU/memory horsepower to be far more useful.
I also have to look at a point of diminishing returns for the cost. If I lay out an additional amount for one or two GPU cards and find that more than half the jobs end up dumping to the CPUs, the return on the investment in all those cores for rendering speed is pretty low, and when not rendering in Iray (e.g. Carrara or Vue) they pretty much do nothing to improve performance.
As I mentioned earlier, this is a "shoestring" workstation project (which is why I am using older generation components) which I could build for slightly less than the price of a Quadro P5000.
As to competition, I agree, it does spur development. Just noting for those interested in the Vega series GPUs, Reality/Lux would be the way to go instead of Iray.
Also, I was likewise disappointed that the Titan-X remained at 12 GB in spite of the increase in its price tag, while the price for the Quadro P5000 remained the same even though the VRAM was doubled and the core count increased from the Maxwell edition.
It's weird. I can't get excited about CPUs anymore at all. Everything I need performance for seems to happen on the GPU these days.
Vue 2016 has a hybrid CPU/GPU renderer, so now a good GPU can also be a big benefit for Vue. The currently free LuxusCore plugin for Carrara can also use the GPU for rendering, but of course a GPU is not needed for the internal renderer in Carrara, and it seems to scale quite well with multi-core CPU systems.
Linux desktop or server? I'd be ok with desktop. I really want to build a better Linux Mint rig.
The next big step in computing is multi-core GPU engines. I don't know what pixel-per-millimeter density is needed in wall-size displays when one is standing close enough to have their nose nearly touching the screen, but once you can PBR-render frames of scenes as complex as a crowded Manhattan, you can consider consumer tech maxed out. For businesses, they could design, and already have designed, screens and CPUs that fit together like a puzzle.
That'll probably come very soon, if the Vega/Navi rumors -- GPUs on interposers -- turn out to be true.
...however, LuxRender is OpenCL based, so that would require an AMD GPU card.
As to the Vue hybrid, what API does it use, OpenCL or CUDA?
For the foreseeable future (no more than five years), I will agree that the GPU will be faster (presuming the scene fits on the card) than a CPU setup. Certainly in our situations. Commercial renders are often CPU based (maybe always?)
I'm not prepared to predict how things will develop. No one here thought the 1080ti would be as good as it is going to be, and that was with only a month or so before we found out about it.
I think AMD is on to something in thinking that GPUs and CPUs will merge together more; core counts on CPUs will likely become more important. It's the only way to build in additional performance; software developers just tend to avoid programming for multiple cores due to the extra work it requires.
Of course, this presumes that materials other than silicon don't become available for mainstream use, or at least for workstations that those with a reasonable budget can afford.
To me, GPUs seem to have become more CPU-like, although they still only excel at massively parallel processing, just as CPUs have gained GPU components.
My final thought about PCs: sales have dropped in part because folks are using something else, but I also suspect they have had less need to upgrade; competition has largely made an older PC pretty much as capable as a new one for the usual tasks, including light gaming. Then Microsoft stopped introducing new OS releases (from W7 on) that increased the memory requirements by a significant amount. I mean, I have a nine-year-old PC that runs W10 just fine (at least it would if I had left it on); it's an early i7 920. Hell, that PC has more RAM than my current one, which has 16 GB.
...true, PC sales have tapered off due to tablets, smartphones, and netbooks, as the majority of users do not require a lot of horsepower for routine functions like reading email, commenting on FB, tweeting, watching videos on YouTube, managing photo galleries, creating memes on Cheezburger, etc.
The next largest segment is business, where one often finds low- to medium-power desktop PCs networked to a central server. After that is the gaming crowd, who have heavier requirements for system resources to get the best detail, sound, refresh, and frame rates (media enthusiasts also fall into this area). Finally, there's us, the 3D CG enthusiasts, who yes, make up a very small segment of PC users compared to the general and even the gaming population.
I bought a Ryzen R7 1800X, but haven't put a build together yet. And I'm sorry to say that I'm not quite sure I'm going to keep it.
All of the current reviews and benchmarks show it to be an awesome chip, as are the other two models. The weaker 1080p gaming performance couldn't matter less to me, and frankly I think it's silly to be disappointed, the way most reviewers have been, in that regard. The 1800X competes with the 6900K, but what I feel most people are overlooking is that the 1700X and 1700 also compete with the 6900K, performance-wise. Reviews are revealing that all three chips are basically the same, with stricter binning for the more expensive chips. The only Ryzen benchmark that gives me pause for thought is the SATA measurements from TweakTown. But I'd still be willing to overlook that. Anyone rendering in 3Delight should seriously consider a Ryzen chip... it should scale per core as well as Cinebench does, and Ryzen killed that benchmark.
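For anyone who wants to eyeball that per-core scaling on their own machine, here's a crude Python sketch of the idea (my own toy benchmark, nothing to do with Cinebench or 3Delight themselves): it times a fixed amount of CPU-bound work at increasing worker counts.

```python
# Crude per-core scaling check: split a fixed batch of CPU-bound work
# across 1..N processes and watch how the wall-clock time shrinks.
import time
from multiprocessing import Pool
from os import cpu_count

def burn(n: int) -> int:
    """A deliberately CPU-bound busy loop standing in for render work."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    work = [2_000_000] * 16  # 16 equal chunks of work
    for workers in (1, 2, 4, cpu_count()):
        start = time.perf_counter()
        with Pool(workers) as pool:
            pool.map(burn, work)
        print(f"{workers:>2} workers: {time.perf_counter() - start:.2f}s")
```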
For me, the real problem (and the reason I'm contemplating abandoning an AM4 build) is the lack of quad-channel memory and only 20 PCIe lanes. The CPU performance is outstanding, but the chipset is really lacking in the other areas that make a true workstation board. Of course, I bought the chip knowing that already. But it's almost as if Ryzen doesn't know what it's trying to be.
...the other reason it doesn't excite me is that it only supports W10 because of DirectX 12
...Linux would be perfectly fine, if Daz and other commercial 2D/3D developers only supported it. Wine is just too much hit and miss for me.
The trouble is that new processors from either vendor are all that is going to get supported going forward. I am somewhat concerned over the reduced lanes and only dual-channel support; benchmarks are showing a difference. It makes me wonder whether the dual-CPU setups that are coming will have an increase in lanes, which makes me hesitate over buying.
...well, there is that one for the 32-core Naples CPU; just make sure you have a Lotto ticket with the winning numbers in your pocket.
Here's some Naples info
http://www.digitaltrends.com/computing/amd-naples-q2-2017-outperforms-xeon/
@kyoto kid: OpenCL runs on anything if you have the drivers... CPUs and video cards from any vendor. The only thing is that Nvidia only has drivers for an older version of OpenCL, but it could work.
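To check what the installed OpenCL drivers actually expose (CPU runtimes included), here's a quick sketch using the third-party pyopencl package (my choice of tooling, just for illustration):

```python
# List every OpenCL platform and device the installed drivers expose.
# Requires the third-party pyopencl package (pip install pyopencl).
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.version})")
    for device in platform.get_devices():
        print(f"  Device: {device.name} "
              f"[{cl.device_type.to_string(device.type)}]")
```

On a machine with both an Nvidia card and a CPU runtime installed, this would list each as a separate platform, which is the point being made above: OpenCL targets whatever has a driver, not just AMD GPUs.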