Holy HEDT Batman! Threadripper 2nd Gen is 32 Cores!!! 7nm Vega also launching 2H 2018.

Comments

  • nonesuch00 Posts: 17,929

    The thing I find fascinating is that the TDP of the 32 core EPYC is only 10W more than the 16 core one.  Of course, there are several ways to get to 16 cores (4x4, 2x8, etc.); I'm guessing it's 4x4 for the 16 core EPYCs. Anyways, for the 32 core EPYC, the 'extra' 16 cores apparently don't come with much of a power premium (10W of additional TDP).  I tried leisurely googling for a power comparison of a 7351P vs a 7551P, i.e. wattage draw under load, but didn't find one.  The fact that Threadripper and the 32 core EPYC have the same TDP is telling, though...

    OK, as for the CCX cross core thing, Serve The Home did some in-depth analysis of this:

    https://www.servethehome.com/amd-epyc-infinity-fabric-latency-ddr4-2400-v-2666-a-snapshot/

    The two takeaways for EPYC are (1) there is a significant latency hit depending on how 'far away' the cores are from each other, and (2) faster memory (with tight timings) helps.  The second point seems to apply mostly below DDR4 3200 speeds.  The timings on DDR4 3600 loosen a bit, and in most benchmarks I've looked at, Ryzen hasn't benefitted much at all going past 3200 MHz, but between 2400 MHz and 3200 MHz there are significant gains to be had.

    BUT, even with that 'latency hit' in play, EPYC still does quite well against its comparably priced peers in benchmarks.

    Also note that the STH article tests latency across two different CPUs/sockets entirely, as well as on the same chip/socket, which is where the really high latency numbers creep in.  Threadripper is a single socket solution, so you won't see the very largest latency hits shown in the article's charts, just the 'on chip' ones.

    Threadripper 2 will change the latency situation a bit (i.e. two of the dies won't have memory 'nearby'), but then EPYC contends with this on a daily basis (i.e. a specific bit of data might not be 'close by') and it still does quite well.  So again, the benefit of having 16 additional cores should be significant, but of course we won't know for sure what the actual performance hit might be until the benchmarking experts take Threadripper 2 for a spin...
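    For the curious, you can get a rough feel for cross-core latency yourself.  Below is a minimal sketch (mine, not STH's methodology), assuming a Linux box: it pins two processes to chosen cores with os.sched_setaffinity and times a shared-memory ping-pong.  The core ID pairs are assumptions; pick one pair on the same die and one pair on different dies for your particular CPU.  Python's interpreter overhead inflates the absolute numbers badly, so only the relative difference between pairs is meaningful.

    import os
    import time
    import multiprocessing as mp

    ROUND_TRIPS = 50_000

    def pong(flag, core):
        os.sched_setaffinity(0, {core})       # pin the replier to one core
        while True:
            v = flag.value
            if v == 1:
                flag.value = 2                # answer the ping
            elif v == 3:                      # sentinel: measurement done
                break

    def ping(flag, core, rounds):
        os.sched_setaffinity(0, {core})       # pin the sender to one core
        start = time.perf_counter()
        for _ in range(rounds):
            flag.value = 1
            while flag.value != 2:            # busy-wait for the reply
                pass
        elapsed = time.perf_counter() - start
        flag.value = 3                        # tell the replier to exit
        return elapsed / rounds / 2           # crude one-way latency estimate

    if __name__ == "__main__":
        for a, b in [(0, 1), (0, 8)]:         # assumed same-die / cross-die pairs
            flag = mp.Value("i", 0, lock=False)
            p = mp.Process(target=pong, args=(flag, b))
            p.start()
            lat = ping(flag, a, ROUND_TRIPS)
            p.join()
            print(f"cores {a}<->{b}: ~{lat * 1e9:.0f} ns one-way (inflated by Python)")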

    I have a silly theory that since Ryzen yields have been so phenomenal, AMD was having problems finding truly 'unusable' dies to stuff inside of Threadripper CPUs as spacers, so they figured they might as well 'activate' those spacer dies in the high end Threadripper 2s.  The 'lesser' Threadrippers (16 cores or less) can still use the occasional 'dead' dies as spacers, of course.

    I guess going out over the memory bus uses much more power than working on data already in the dies' caches, which might account for why only 10W more TDP is needed?

  • kyoto kid Posts: 40,589

    ...hmm, two of those and you are approaching the core counts of the midrange Nvidia 400 series GPUs.

  • nonesuch00 Posts: 17,929
    kyoto kid said:

    ...hmm, two of those and you are approaching the core counts of the midrange Nvidia 400 series GPUs.

    I will guess that one day GPUs will be a specialized HW thing of the past. Won't that be a surprise to nVidia?

  • hacsart Posts: 2,025

    Those CPUs are approaching the size of the old IBM mainframe TCM modules (and those used chilled water, typically around 15°C, for cooling). Time for phase change cooling? Anyone remember Kryotech?

  • nicstt Posts: 11,714
    edited June 2018
    kyoto kid said:

    ...hmm, two of those and you are approaching the core counts of the midrange Nvidia 400 series GPUs.

    yeh.

    The thing is: CPU cores are individually much more capable than graphics card cores, but CPU cores are generalists, and don't come bundled by the thousand like GPU cores do.

    I wonder how much that will change?

    My Threadripper (16 cores) takes 30 minutes to render a scene my 980 Ti (2816 cores) does in 10 minutes. I wonder if another 16 cores will halve the time. That would make it much cheaper than a P6000, with effectively no memory cap.

    If Nvidia continue to over-charge (yes, my opinion, but not only mine) for their cards, then other options become more viable; we'll have to see what their new gen cards do, and if they do something about the RAM bottleneck. £10k for a card with 32GB isn't 'doing something about it'.

    Post edited by nicstt on
  • kyoto kid Posts: 40,589

    ...the best I will be able to do is throw two older high core count Xeons at the situation, as everything since Kaby Lake/Ryzen requires W10.

  • nicstt Posts: 11,714
    kyoto kid said:

    ...the best I will be able to do is throw two older high core count Xeons at the situation, as everything since Kaby Lake/Ryzen requires W10.

    There is that, but it is possible to get AMD's new offerings working on 7.

  • tj_1ca9500b Posts: 2,048
    edited June 2018

    Not quite related, but the almost 20% 'Windows 10 GPU Memory tax' is still pissing me off.

    I'm thinking at this point that there should be a class action suit brought against Nvidia over false advertising.  People that are buying cards with lots of GPU memory expect to be able to use most of that memory, not lose over a gig of it to the netherworld.  The OS DOESN'T need this much memory; otherwise 1-2 GB cards would be a lot more painful to use for everyday tasks.

    Nvidia's marketing ploy of 'well, if you buy a Pro card then you get the rest of your memory' is all fine and good, albeit a 'cop out'.  Using this strategy to get people to buy pro cards, just so they can use all of their memory, would be fine if they advertised their 8 GB consumer cards as 6.5 GB cards (which is in practice what they are under Windows 10, with corresponding numbers for larger and smaller cards), but they don't.  So they are misleading the consumer.

    So, Kyoto Kid, don't feel bad staying back with Windows 7.  I wish I could downgrade myself, so that I could use the other 2 GB+ of VRAM that's sitting idle on my system right now.  But my laptop BIOS is specifically configured for Windows 10 (at least according to MSI), so that's not an option for me.

    I suppose there's always Linux, but Daz on Linux isn't really a thing yet.  We have some people trying and doing OK (using Wine, etc.), but it's not 'plug and play' by any means.

    Post edited by tj_1ca9500b on
  • nicstt Posts: 11,714

    +1

  • ebergerly Posts: 3,255

    Not quite related, but the almost 20% 'Windows 10 GPU Memory tax' is still pissing me off.

    I'm thinking at this point that there should be a class action suit brought against Nvidia

     

    On my Windows 10 machine with two GPUs (GTX 1070/8GB and GTX 1080 Ti/11GB), when I load a monster D|S/Iray scene (requiring 47GB of system RAM) to the point where the GPUs run out of VRAM and it drops to CPU rendering, the new 64 bit W10 Task/Performance Manager shows that all but 1GB (not 2-3 GB like some claim) of each card is being used exclusively for DAZ Studio/Iray (see below). That's only 12.5% of the 1070's installed VRAM being "hogged" by W10, and less than 10% of the 1080 Ti's installed VRAM.

    To give a little perspective, on a Windows 7 laptop with 4GB of installed system RAM, the OS grabs 0.6 GB of system RAM "for the BIOS and drivers for some other peripherals". This is also known as "hardware reserved" memory (see screenshot below). That's a full 15% of the memory grabbed by the OS and unavailable to any of your software. And I don't recall hearing any complaints about W7 hogging system RAM.

    Come to think of it, W7 and W10 each "hog" around 20GB of your hard drive. That's a lot, especially if you have, say, a 120GB drive. That's almost 17%. Does it really need all 20GB, or is it "hogging" some and owes us a refund?

    So if you believe the Windows 10 folks, who actually wrote the software that manages and schedules the GPUs' VRAM (called "VidMm"), and also wrote the VRAM usage reporting software (Task/Performance Manager), then you're led to believe that Windows 10 may actually be relatively efficient in managing GPU VRAM compared to, say, Windows 7 and system RAM. 

    Alternatively, you can believe the Iray log file reports, which claim much more VRAM is being hogged by Windows 10 (up to 2-3 GB). But that's only reported when opening a blank scene, not when actually filling the VRAM and rendering. Task/Performance Manager, on the other hand, provides continuous, real time reporting, even after the VRAM is totally full and it drops to CPU rendering. Maybe Iray requested more VRAM when it was actually loading the scene and got it, making the log numbers outdated? Maybe "available" VRAM in the Iray log means "available to applications right now upon request, but if they really, really need more later on, and if they ask nicely and nobody else is using it, we'll give them a bit more".

    Anyway, to me, the Windows 10 reporting seems far more reasonable and believable, since it's from the software that is actually managing the VRAM. And if it's true that only 1GB is being grabbed from both my 1070 and 1080ti, that seems very reasonable compared to all the other memory grabbing going on in an operating system.
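    If anyone wants a third opinion besides Task Manager and the Iray log, the same NVML library that nvidia-smi uses can be queried directly. A minimal sketch, assuming the pynvml Python bindings are installed (pip install nvidia-ml-py); it just reports what the driver thinks is total/used/free on each card:

    import pynvml

    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            name = pynvml.nvmlDeviceGetName(handle)
            if isinstance(name, bytes):           # older pynvml returns bytes
                name = name.decode()
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            gib = 1024 ** 3
            print(f"{name}: total {mem.total / gib:.1f} GiB, "
                  f"used {mem.used / gib:.1f} GiB, free {mem.free / gib:.1f} GiB")
    finally:
        pynvml.nvmlShutdown()

    Run it with the scene loaded and rendering, and you can watch whether the "missing" VRAM shows up as used or as free.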

    [Attachments: Capture2.JPG (1057 x 876), Memory1.JPG (970 x 166)]
  • kyoto kid Posts: 40,589
    nicstt said:
    kyoto kid said:

    ...the best I will be able to do is throw two older high core count Xeons at the situation, as everything since Kaby Lake/Ryzen requires W10.

    There is that, but it is possible to get AMD's new offerings working on 7.

    ...not easily though.

  • kyoto kid Posts: 40,589
    edited June 2018

    Not quite related, but the almost 20% 'Windows 10 GPU Memory tax' is still pissing me off.

    I'm thinking at this point that there should be a class action suit brought against Nvidia over false advertising.  People that are buying cards with lots of GPU memory expect to be able to use most of that memory, not lose over a gig of it to the netherworld.  The OS DOESN'T need this much memory; otherwise 1-2 GB cards would be a lot more painful to use for everyday tasks.

    Nvidia's marketing ploy of 'well, if you buy a Pro card then you get the rest of your memory' is all fine and good, albeit a 'cop out'.  Using this strategy to get people to buy pro cards, just so they can use all of their memory, would be fine if they advertised their 8 GB consumer cards as 6.5 GB cards (which is in practice what they are under Windows 10, with corresponding numbers for larger and smaller cards), but they don't.  So they are misleading the consumer.

    So, Kyoto Kid, don't feel bad staying back with Windows 7.  I wish I could downgrade myself, so that I could use the other 2 GB+ of VRAM that's sitting idle on my system right now.  But my laptop BIOS is specifically configured for Windows 10 (at least according to MSI), so that's not an option for me.

    I suppose there's always Linux, but Daz on Linux isn't really a thing yet.  We have some people trying and doing OK (using Wine, etc.), but it's not 'plug and play' by any means.

    ...yeah, it just means I have to stay with slightly older tech (save for GPU cards).  Currently running a Maxwell Titan X on an old P6T.

    Been watching the Daz-Linux thread and it seems like more bother than it is worth.

    Post edited by kyoto kid on
  • tj_1ca9500b Posts: 2,048
    edited June 2018

    Except that I've seen a few people mention rendering scenes they built in Windows 7 or 8 that used to fit in a card's VRAM but won't fit under Windows 10.  There are multiple threads around the net on this issue, even in the Microsoft forums, where Microsoft points the finger at Nvidia, and I've also seen Nvidia reps pointing the finger right back at Microsoft in a few discussions.

    The VRAM may be shown as available, but based on what numerous people have reported now, Windows 10 isn't actually making all of that RAM available to applications.  It's just showing what it's actually using, not what it's reserving.  Unless you have a Pro Nvidia card, apparently.

    Also, I haven't seen ANY mention of AMD cards having this issue.  Not saying that isn't the case, just that it hasn't come up in my numerous Google searches on the issue.  Nvidia cards are the only cards I see mentioned in these threads.  The fact that Nvidia Pro cards apparently don't have this issue is also telling...

    Of course, if you'd like to share your own findings with the people here that have been documenting the issue, there's a thread for that:

    https://www.daz3d.com/forums/discussion/120541/vram-available-for-rendering-on-windows-10-systems-update-gtx-1080-ti/p1

     

    Post edited by tj_1ca9500b on
  • tj_1ca9500b Posts: 2,048
    edited June 2018

    Anyways, back on topic.  Apparently TSMC's 7nm volume production ramp up is proceeding well, and they are reportedly accelerating it.

    TSMC accelerating 7nm process production plan to meet demand

    This might explain why AMD is being a bit more optimistic about possible (albeit limited) 7nm Vega availability at the end of the year...

    Post edited by tj_1ca9500b on
  • nonesuch00 Posts: 17,929

    Well, the less than 1GB of video RAM that MS Windows 10 uses on nVidia video cards is ancient history now. On my CPU render machine, Windows 10 uses less than 3GB for every program, every service, and the Intel HD Graphics shared RAM (which easily totals less than 1GB). When I use DAZ, I can get DAZ to use over 13GB of the 15.9GB of system RAM available, so that's less than 2.9GB for everything else, including shared video RAM (0.1GB is used by the BIOS or by Windows 10, I don't know which; probably a security process watching the Windows kernel runs in that 0.1GB). So I know those claiming Windows 10 is using 3GB of RAM on the nVidia cards are wrong, and nVidia is misleading people to suggest that is the case, if they are the ones doing it. At any rate, the time to file a class action suit is quickly passing, if not already passed.

  • tj_1ca9500b Posts: 2,048

    Well, the less than 1GB of video RAM that MS Windows 10 uses on nVidia video cards is ancient history now. On my CPU render machine, Windows 10 uses less than 3GB for every program, every service, and the Intel HD Graphics shared RAM (which easily totals less than 1GB). When I use DAZ, I can get DAZ to use over 13GB of the 15.9GB of system RAM available, so that's less than 2.9GB for everything else, including shared video RAM (0.1GB is used by the BIOS or by Windows 10, I don't know which; probably a security process watching the Windows kernel runs in that 0.1GB). So I know those claiming Windows 10 is using 3GB of RAM on the nVidia cards are wrong, and nVidia is misleading people to suggest that is the case, if they are the ones doing it. At any rate, the time to file a class action suit is quickly passing, if not already passed.

    As long as the cards are currently being offered on the market as new products (not resales), the 'legal clock' starts with the sale of said card.  The rule of thumb is generally up to 2 years after the fact, but it can be longer depending on circumstances.  Also, people have been actively trying to get Microsoft and/or Nvidia to 'fix' the issue, which would of course be used in evaluating the situation.  So users of older cards might have an issue, but the 10xx series cards are still being manufactured and sold as new products.  Now as to the 'merits' of the case, that's subject to interpretation.

  • nonesuch00 Posts: 17,929

    Well, the less than 1GB of video RAM that MS Windows 10 uses on nVidia video cards is ancient history now. On my CPU render machine, Windows 10 uses less than 3GB for every program, every service, and the Intel HD Graphics shared RAM (which easily totals less than 1GB). When I use DAZ, I can get DAZ to use over 13GB of the 15.9GB of system RAM available, so that's less than 2.9GB for everything else, including shared video RAM (0.1GB is used by the BIOS or by Windows 10, I don't know which; probably a security process watching the Windows kernel runs in that 0.1GB). So I know those claiming Windows 10 is using 3GB of RAM on the nVidia cards are wrong, and nVidia is misleading people to suggest that is the case, if they are the ones doing it. At any rate, the time to file a class action suit is quickly passing, if not already passed.

    As long as the cards are currently being offered on the market as new products (not resales), the 'legal clock' starts with the sale of said card.  The rule of thumb is generally up to 2 years after the fact, but it can be longer depending on circumstances.  Also, people have been actively trying to get Microsoft and/or Nvidia to 'fix' the issue, which would of course be used in evaluating the situation.  So users of older cards might have an issue, but the 10xx series cards are still being manufactured and sold as new products.  Now as to the 'merits' of the case, that's subject to interpretation.

    Yeah, but that's like those old complaints about TV screen size, monitor size, hard drive space, RAM capacity, and so on. 

  • nicstt Posts: 11,714
    kyoto kid said:
    nicstt said:
    kyoto kid said:

    ...the best I will be able to do is throw two older high core count Xeons at the situation, as everything since Kaby Lake/Ryzen requires W10.

    There is that, but it is possible to get AMD's new offerings working on 7.

    ...not easily though.

    Actually, I'm going to try installing W7 soon; I have a DVD player (as USB3 is out unless I slipstream the drivers myself), and keyboard-only isn't a hassle; it won't be the first time I've installed W7 with only a keyboard. I keep putting it off. :) But I'll let you know.

    I seem to remember, though, that I should be careful about which of the more recent updates I let install, as they change how 7 works?

  • kyoto kid Posts: 40,589
    edited June 2018

    ...I disabled updates back in October of 2016, when MS went to the bundled rollup format that no longer allows you to pick and choose individual updates.  You either have to accept everything in the bundle or nothing.

    Indeed, let me know how it goes (not saying I'm in the market to build a new system unless my retro disability kicks in or I hit tonight's megabucks lotto).

    Post edited by kyoto kid on
  • tj_1ca9500b Posts: 2,048
    edited June 2018

    So, on a whim I decided to look up the 16 core Threadripper Iray benchmarks again, in order to extrapolate potential 32 core performance...

    OZ-84 said:

    Ok here we go :-) 

     

    Everything on stock clock. Scene is preloaded (second render window is paused)

    OptiX always enabled

    Benching on:

     

    -Threadripper 1950X (16 cores / 32 threads) all cores active 

    -4 x 1080ti Founders (Blower cards)

    -32GB DDR4 2666

    CPU ONLY @ 3.4GHz / 99%-100% Load:

    2017-10-15 14:14:16.088 Iray VERBOSE - module:category(IRAY:RENDER):   1.0   IRAY   rend progr: 94.86% of image converged
    2017-10-15 14:14:16.093 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 04748 iterations after 541.823s.
    2017-10-15 14:14:21.659 Iray VERBOSE - module:category(IRAY:RENDER):   1.0   IRAY   rend progr: 95.06% of image converged
    2017-10-15 14:14:21.664 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 04797 iterations after 547.393s.
    2017-10-15 14:14:21.677 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Convergence threshold reached.
    2017-10-15 14:14:27.185 Saved image: XXX
    2017-10-15 14:14:27.207 Finished Rendering
    2017-10-15 14:14:27.282 Total Rendering Time: 9 minutes 13.99 seconds

    4 x1080ti:

    2017-10-15 14:17:42.861 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 04725 iterations after 29.165s.
    2017-10-15 14:17:43.613 Iray VERBOSE - module:category(IRAY:RENDER):   1.0   IRAY   rend progr: 94.91% of image converged
    2017-10-15 14:17:43.613 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 04767 iterations after 29.918s.
    2017-10-15 14:17:43.697 Iray VERBOSE - module:category(IRAY:RENDER):   1.0   IRAY   rend progr: 95.02% of image converged
    2017-10-15 14:17:43.701 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 04811 iterations after 30.006s.
    2017-10-15 14:17:43.703 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Convergence threshold reached.
    2017-10-15 14:17:44.796 Saved image: XXX
    2017-10-15 14:17:44.818 Finished Rendering
    2017-10-15 14:17:44.872 Total Rendering Time: 32.15 seconds

    CPU + 4x 1080Ti

    2017-10-15 14:20:18.160 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 04715 iterations after 29.823s.
    2017-10-15 14:20:18.228 Iray VERBOSE - module:category(IRAY:RENDER):   1.0   IRAY   rend progr: 94.94% of image converged
    2017-10-15 14:20:18.233 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 04757 iterations after 29.897s.
    2017-10-15 14:20:18.389 Iray VERBOSE - module:category(IRAY:RENDER):   1.0   IRAY   rend progr: 95.07% of image converged
    2017-10-15 14:20:18.389 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Received update to 04798 iterations after 30.053s.
    2017-10-15 14:20:18.398 Iray INFO - module:category(IRAY:RENDER):   1.0   IRAY   rend info : Convergence threshold reached.
    2017-10-15 14:20:19.442 Saved image: XXX
    2017-10-15 14:20:19.469 Finished Rendering
    2017-10-15 14:20:19.518 Total Rendering Time: 32.15 seconds

     

    Somehow... the rendering process with CPU and GPU didn't benefit at all... strange :-|

    Hopefully this info serves someone :-D

     

    And some more benchmarks:

    OZ-84 said:

    Lets see...

    Cards have been rendering over the last hours, so results will be a little lower since they are hot.

    1 x  GTX1080TI: 

    2017-10-15 21:02:33.624 Finished Rendering
    2017-10-15 21:02:33.674 Total Rendering Time: 1 minutes 57.85 seconds

    1 x GTX 1080TI + 16 Core Threadripper

    2017-10-15 21:04:56.331 Finished Rendering
    2017-10-15 21:04:56.410 Total Rendering Time: 1 minutes 39.59 seconds

    2 x  GTX1080TI: 

    2017-10-15 21:06:25.664 Finished Rendering
    2017-10-15 21:06:25.741 Total Rendering Time: 59.96 seconds

    2 x GTX 1080TI + 16 Core Threadripper

    2017-10-15 21:07:58.279 Finished Rendering
    2017-10-15 21:07:58.333 Total Rendering Time: 56.82 seconds

    3 x  GTX1080TI: 

    2017-10-15 21:09:03.950 Finished Rendering
    2017-10-15 21:09:04.098 Total Rendering Time: 42.14 seconds

    3 x GTX 1080TI + 16 Core Threadripper

    2017-10-15 21:10:08.587 Finished Rendering
    2017-10-15 21:10:08.667 Total Rendering Time: 41.17 

    4 x  GTX1080TI: 

    2017-10-15 21:11:20.368 Finished Rendering
    2017-10-15 21:11:20.465 Total Rendering Time: 33.1 seconds

    4 x GTX 1080TI + 16 Core Threadripper

    2017-10-15 21:12:11.197 Finished Rendering
    2017-10-15 21:12:11.269 Total Rendering Time: 33.8 seconds

    4 x GTX 1080TI + 16 Core Threadripper WITHOUT OptiX

    2017-10-15 21:13:31.895 Finished Rendering
    2017-10-15 21:13:32.004 Total Rendering Time: 52.50 seconds

     

    WOW... the result without OptiX is really bad. 

    What can I say... it seems drzap is totally right. The more GPUs you use, the less important the CPU is. However, I could imagine a scenario in which somebody is building up a new render rig. At the beginning, with only one GPU installed, the Threadripper is a good help. Afterwards it gets quite useless.

    It looks like the 16 core Threadripper is a little over 4.5x slower than a single 1080 Ti (roughly 554 seconds vs. 118 seconds in the logs above).

    So, assuming the 32 core TR2 has similar clock speeds (maybe a hair faster, although cross-CCX latency will offset some of that gain), halving the CPU-only times drops things to around 4.5 to 5 minutes per render, compared with around 2 minutes for a single 1080 Ti; combining the CPU with the GPU may shave another 20-30 seconds off the render time.  With two 1080 Ti cards the CPU will be of little to no help.
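    For the record, here's the back-of-envelope math on those numbers, pulled straight from the logs above.  The halving is an assumption (linear scaling with core count at similar clocks), and it's optimistic, since two of TR2's four dies reportedly won't have local memory channels:

    # Extrapolation from OZ-84's benchmark logs above.
    # Assumes CPU render time scales linearly with core count at similar
    # clocks -- optimistic, given the extra cross-die memory latency.
    tr16_cpu_only = 9 * 60 + 13.99      # 16 core Threadripper, seconds
    one_1080ti    = 1 * 60 + 57.85      # single 1080 Ti, seconds

    tr32_est = tr16_cpu_only / 2        # naive 32 core estimate
    print(f"est. 32 core CPU-only render: {tr32_est / 60:.1f} min")    # ~4.6 min
    print(f"one 1080 Ti:                  {one_1080ti / 60:.1f} min")  # ~2.0 min
    print(f"CPU-only penalty:             {tr32_est / one_1080ti:.1f}x slower")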

    As a comparison, here's an 8 core Ryzen 1800X benchmark:

    stryfe said:

    Ryzen 1800X @ Stock
    GTX 970 x2
    32GB RAM
    OptiX On


    CPU Only:            Total Rendering Time: 18 minutes 13.51 seconds
    GPU1 Only:          Total Rendering Time: 4 minutes 12.5 seconds
    CPU + GPU1:       Total Rendering Time: 3 minutes 39.45 seconds
    GPU1+2:               Total Rendering Time: 2 minutes 23.47 seconds
    CPU + GPU1+2:    Total Rendering Time: 2 minutes 8.30 seconds
      
    Can't wait for the ASUS Strix 1080 Ti's to come out to finish my build!

    Note that it takes roughly twice as long as the 16 core Threadripper, so assuming a slightly-less-than-4x speedup for 32 cores (over 8 cores) doesn't seem that far fetched.

    Anyways, my point is that being 2.5x slower if a render goes CPU-only seems much less painful than, say, 10x slower (for an 8 core CPU).  Of course I'm guesstimating here.

    For those that work with large scenes regularly (those that routinely kick a 1080 Ti to the curb), there could be some benefit to 32 cores. 

    Airport Island has notoriously large scene requirements.  Has anyone that has both Airport Island and one or more 1080 Tis worked with that one lately, and do you run into situations where your 1080 Ti gets parked?

     

    And of course, for users of Carrara, etc. (i.e. render engines that are CPU only to begin with), yeah, I can see a benefit to 32 core Threadrippers...

    I guess it'll come down to the asking price.  Some people are hoping for around $1500, but somewhere in the $2000 ballpark would be more consistent with EPYC 32 core (single socket/P) pricing.  Of course, EPYC supports twice the number of PCIe lanes and twice as much memory.  If you need (or think you can actually use) 128 PCIe lanes, well, the 48 core Starship CPUs are supposed to be sampling later this year... we won't see a 48 core Threadripper for probably another year or more...

    Post edited by tj_1ca9500b on
  • I just got my 2nd GTX 1080 Ti, so I'm going to do some benchmarks. I've also got the i9 with 20 cores, so it will be interesting to see whether it doubles the speed or not!

    I want to get some Iray animations done for promos.

  • tj_1ca9500b Posts: 2,048
    edited June 2018

    And on the EPYC front, it looks like we FINALLY have a decent workstation motherboard with 4 PCIe x16 slots (which would fully utilize 64 PCIe lanes, leaving another 64 lanes for other stuff).

    https://www.anandtech.com/show/12985/asrock-rack-goes-amd-epycd8-workstation-motherboard

     

    There are some server mobo configs that can handle 4 or more cards, but for those looking for an ATX EPYC mobo with four properly positioned PCIe x16 slots (i.e. room for 4 double width cards), this looks interesting.  Only 1 processor, though.  Of course, the 3 PCIe x8 slots become useless with 4 double-width 1080 Tis, but I suppose there are PCIe extender cables that might be able to help with that.
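    The lane math, for anyone counting (slot counts as described above; treat the exact layout as an assumption until the board ships):

    # PCIe lane budget for the board as described: four x16 plus three x8
    # slots, against the 128 lanes a single EPYC provides.
    EPYC_LANES = 128
    slots = {"x16": (4, 16), "x8": (3, 8)}   # (slot count, lanes per slot)

    used = sum(count * width for count, width in slots.values())
    print(f"slot lanes: {used}; left for NVMe/NICs/etc.: {EPYC_LANES - used}")
    # -> slot lanes: 88; left for NVMe/NICs/etc.: 40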

    With 32 core Threadripper just around the corner, this looks a bit less appealing now... but of course 7nm/48 core EPYC/Starship is expected early next year.  This could make a nice basis for a workstation, for those that like workstation configs but may be getting tired of paying the 'Intel Premium'.

     

    Post edited by tj_1ca9500b on
  • kyoto kid Posts: 40,589
    edited June 2018

    ...hmm, 96 threads, that's only 4 shy of the thread limit for Carrara.

    Of course it would make for a very expensive networked render box, but then I could have that running in Linux.

    Post edited by kyoto kid on
  • nonesuch00 Posts: 17,929

    So it sounds like with 3nm we will basically have a 200 core CPU that is its own render farm. All I can say is wow to that, and patience!

  • kyoto kid Posts: 40,589

    ...Daz better get on the ball to up Carrara's thread limit. ;-)

  • nonesuch00 Posts: 17,929
    edited June 2018

    It'll be pretty sweet to just say no to nVidia. I expect, though, that AMD will integrate better and better Vega designs into these multicore designs, maybe, because I'm not sure GPU cores bring much to the table alongside a local 200 core CPU die. I guess the experts at AMD know their stuff, though, to decide that.

    AMD is also working on designs that stream SSD storage into main CPU RAM as if it were more RAM, so it all looks very good.

    I am excited about the mostly steady, and even decreasing, power requirements as well.
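    You can't program that SSD-as-RAM hardware (the approach AMD demoed on the Radeon Pro SSG) from Python, but the nearest everyday software analogy is memory-mapping a file: the OS pages SSD-backed data into RAM on demand, so a dataset bigger than physical RAM stays addressable. A minimal sketch (the file name is hypothetical):

    import mmap

    # 'big_dataset.bin' is a hypothetical multi-GB file sitting on an SSD.
    with open("big_dataset.bin", "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
            # Touching a byte faults its page in from the SSD; untouched
            # regions never occupy physical RAM.
            first, last = m[0], m[-1]
            print(f"mapped {len(m):,} bytes; first={first}, last={last}")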

    Post edited by nonesuch00 on
  • kyoto kid Posts: 40,589

    ...indeed, it would be wonderful to have a beast of a system running on, say, a 750W PSU instead of a 1,500W one.

    Massively multi-threaded CPUs and streaming SSD memory would allow for so much power and overhead that even my epic level scenes would render in minutes. It might even bring LuxRender's times down to something reasonable.

    Again, we need to be sure that the render engines involved can support that many CPU threads.  Carrara tops out at 100 CPU cores/threads per node; I'm not sure of the limit for Daz Iray.
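    To make the cap concrete: a CPU renderer typically sizes its worker pool at min(hardware threads, engine limit), something like the sketch below. The 100-thread Carrara figure is as reported in this thread, not something I've verified:

    import os
    from concurrent.futures import ThreadPoolExecutor

    ENGINE_THREAD_CAP = 100                  # Carrara's reported per-node limit

    def render_bucket(bucket_id: int) -> str:
        return f"bucket {bucket_id} done"    # stand-in for real bucket work

    # On a 96-thread TR2 box the cap never binds; on a hypothetical
    # 128-thread machine, 28 hardware threads would sit idle.
    workers = min(os.cpu_count() or 1, ENGINE_THREAD_CAP)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(render_bucket, range(256)))
    print(f"rendered {len(results)} buckets on {workers} worker threads")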

    Would be so nice to have that "renderfarm in a box" which also serves as a workstation.

  • nonesuch00 Posts: 17,929
    kyoto kid said:

    ...indeed, it would be wonderful to have a beast of a system running on, say, a 750W PSU instead of a 1,500W one.

    Massively multi-threaded CPUs and streaming SSD memory would allow for so much power and overhead that even my epic level scenes would render in minutes. It might even bring LuxRender's times down to something reasonable.

    Again, we need to be sure that the render engines involved can support that many CPU threads.  Carrara tops out at 100 CPU cores/threads per node; I'm not sure of the limit for Daz Iray.

    Would be so nice to have that "renderfarm in a box" which also serves as a workstation.

    That was probably an OS restriction, or a license restriction on the compiler used to build Carrara, and not a limitation of Carrara itself.

    With these new CPUs, the old model of having to buy 'enterprise OSes' along with 'enterprise tools' and 'enterprise support contracts' to use that many CPU cores will have to be dropped. MS and other businesses reliant on such contracts will have to expect to get paid for support, not for something as non-relevant as the number of CPU cores or the amount of usable RAM available on the host machine.

    That said, I don't know if Carrara's code itself would need to be rewritten or recompiled to use more cores. I'm guessing probably recompiled but not rewritten, unless it's to add Genesis 3 & Genesis 8 functionality.

  • kyoto kid Posts: 40,589

    ...no G3/G8 functionality in the Carrara code yet, but Mistra did create a script to import G3F.  Not sure if the incompatibility has to do with the change from TriAx to normal weight mapping.

    Yeah, I can see where the OS could be the bottleneck. I find it odd that today, in the 64 bit era, OSes still have memory and core caps based on price. Why can't Home Edition address just as much memory as Pro or Enterprise?  Revenue, that's why (I had to upgrade to W7 Pro just to use all 24 GB of the memory upgrade I installed in my old system, as Home maxed out at 16).

    It's just like why I don't think Nvidia would offer a 16 GB GTX card: it would compete with several of their pro ones while being more attractive to freelancers in price. This happened in the Maxwell days with the M6000 and Titan X, both having the same SM count and VRAM when originally released.  Nvidia afterwards doubled the memory of the M6000 to 24 GB (and kept the price the same) to set it a more significant notch above. The Titan X actually performed slightly better than the M6000 with the 2015 Iray release, due to both cards having the same single precision performance (7 TFlops) but the Titan having a better clock speed.  Where the 12 GB M6000 was ahead was in double precision performance and a different driver set, though many small studios and freelancers couldn't see that justifying an additional $4,000.

     

  • nonesuch00 Posts: 17,929
    kyoto kid said:

    ...no G3/G8 functionality in the Carrara code yet, but Mistra did create a script to import G3F.  Not sure if the incompatibility has to do with the change from TriAx to normal weight mapping.

    Yeah, I can see where the OS could be the bottleneck. I find it odd that today, in the 64 bit era, OSes still have memory and core caps based on price. Why can't Home Edition address just as much memory as Pro or Enterprise?  Revenue, that's why (I had to upgrade to W7 Pro just to use all 24 GB of the memory upgrade I installed in my old system, as Home maxed out at 16).

    It's just like why I don't think Nvidia would offer a 16 GB GTX card: it would compete with several of their pro ones while being more attractive to freelancers in price. This happened in the Maxwell days with the M6000 and Titan X, both having the same SM count and VRAM when originally released.  Nvidia afterwards doubled the memory of the M6000 to 24 GB (and kept the price the same) to set it a more significant notch above. The Titan X actually performed slightly better than the M6000 with the 2015 Iray release, due to both cards having the same single precision performance (7 TFlops) but the Titan having a better clock speed.  Where the 12 GB M6000 was ahead was in double precision performance and a different driver set, though many small studios and freelancers couldn't see that justifying an additional $4,000.

     

    It's an old assumption that only big entities like NGOs, governments, and businesses could afford more than 1 CPU core and more than 4GB of RAM; those old assumptions had to change, and so too will the new ones eventually. It's not that customer buying power in the industrialized nations has increased (it has decreased, and been put on credit plans and such), but they'll have to price these new CPUs and RAM cheaper if they want customers to buy. One thing is certain: they lose money on all that core and RAM R&D if they restrict it too tightly with big-entity pricing.
