Digital Art Zone

 
   
1 of 3
Building a Fast Graphics Computer for Poser 9/2012 Renders & Carrara
Posted: 26 January 2013 03:25 PM
Member
Total Posts:  174
Joined  2009-07-22

NOTE: Post 15, below, has a direct link to compare Intel processors (laptop, desktop, server) that has turned out to be helpful.
.
.


I would like to build/spec out a computer to handle Poser 9 and above.  Cost is an issue and I’m looking for a guide as to where I really should not skimp.


My main objective for the new system would be to minimize render times. I would also like to create larger models without using up all the available memory and leaving nothing for the renderer. 


I would like to know what people are currently using and what they think the critical components for fast graphics are.

What do you consider state of the art, and what have you all experienced with newer systems? What would you change?

Any idea what kind of rendering time improvements I can reasonably expect over the older XP system I have now?

.
.

I don’t know much about Poser 9 system details. I am told it can access 64-bit for working, but that I can’t efficiently render with it no matter what new computer I build [edit: no: it runs 32-bit only; it can run as a 32-bit program under Windows 7]. If that is the case, what do ‘normal’ 3D programs (or Poser 2012) run best on? Does Poser 9 access more than one core if you have a multi-core system? [edit: no; 32-bit only]

Here are my basic question areas with the techy details, but feel free to ignore them and just tell me what you think. Also, I may not even be asking the pertinent questions:

(1) Intel or AMD? How many cores? Core memory size?
(2) If Intel, i5 or i7?
(3) Graphics card (GPU). How important? How much does dedicated memory make a difference? Any particular brand/vendor?
(4) Are busses important? [Ivy or Sandy].


I welcome input from people who use other 3D Graphics software, too. Daz users!!!  C4, Rhino, Maya, Lightwave, Autodesk and Blender.  Shade, Hexagon…. What am I leaving out?

Any places I should look (magazine articles you may have seen, threads, forums)?

Thanks!

Posted: 26 January 2013 06:54 PM   [ # 1 ]
Member
Total Posts:  191
Joined  2003-10-09

Use a 64-bit program (such as PP2012), pick a CPU with the most cores, and put in as much memory as the motherboard can hold.
If you want to do CUDA GPU rendering (Octane or similar), buy an Nvidia CUDA card in addition to your standard graphics card, with as much VRAM as possible, as many GPU cores as possible, and as many texture slots as possible (for example, a GTX 680 4GB). If you don’t, just use or buy a “standard” gaming card, since it won’t help you with render speed. A current-generation card is fast enough for OpenGL.

In other words, it all depends on your budget. For “normal” rendering, render speed is determined by the number of cores, memory, and CPU speed. For GPU rendering it is the number of GPU cores, and the size of the scene you can render depends on the VRAM and texture slots. This is the case for all 3D software.

Is GPU rendering faster? Doing the same thing, yes. But once you have it, you want to render more realistically, and in the end render times are just as slow.

The real thing you probably want to know is what the best price/performance ratio is. Generally that would be last year’s top of the line.

 

 

Posted: 30 January 2013 09:40 AM   [ # 2 ]
Member
Total Posts:  174
Joined  2009-07-22

These last few days I’ve been evaluating your comments. I looked at them right away and was hoping someone else would post, too, so I could do some research before following up.

GPU

CUDA and the GPU are a world of their own, and I think it’s important that any new graphics computer be built with that upgrade in mind.

About 18 months ago I came to the conclusion that a rendering computer had to be one to one-and-a-half orders of magnitude faster (10x-15x) than my current computer. On the existing motherboards I didn’t see how I would get that; I’d be lucky to get 4x-6x. But it looks like Nvidia is the leader, having developed a monster parallel-processor system specialized to render graphics for video and gaming, and now that it works for real-time gaming, the graphics spill-over is a variant of, “why can’t we just print out what we see? Why do we have to render on the motherboard?” Hence CUDA and the up-and-coming software Octane and Reality (I want, I want). Your comments helped point me in that direction, and I appreciated the specific model number to look at. 

NVidia has a conference coming up in March.

CHIPSET

I seem to have settled on an Intel i7 3700-series “K” version Ivy Bridge chip. The i5 seems to be only dual core; the i7 is quad core. Ivy Bridge seems to be the 22nm equivalent of the larger Sandy Bridge; that is, how Intel got Sandy technology to work on a smaller chip. They say it’s not a radically new design, but rather the stepping stone from one chip size to another. The next technology (H-something) will build a new design at 22nm. The “3” in the 3700 seems to mean it is the 3rd-generation, or most recent, version of the i7 on the market, and the “K” seems to mean it is the unlocked (tweakable speed) version. I’m not sure how the “K” differs from a straight 3700, or if there is such a thing as a straight 3700. There is a “T” that has significantly lower power consumption (how do they do that? Do they slow the chip down when it gets too hot?) and an “S” for performance. If I get the chance I’ll attach some links.

AMD vs Intel, 2013

Why not AMD? Hate to say it, but in all my searches AMD really does not seem to come up at the top. They don’t seem to have anything that is being touted as equivalent. I have really liked the AMD machines I’ve used, and in the past, if Intel had something on top, AMD would have its staunch defenders. I’m not finding the obvious links or references to get me down that path this time.


Steps to building a Powerful graphics Computer that I can’t afford all at once

My approach will be to build the minimum computer that can do the job and that I can upgrade over the course of 12-24 months. So, I’m trying to choose a chipset that won’t be outdated immediately and a motherboard that can be upgraded. Right now CUDA seems to be six months or more away because of cost, which is fine, because the motherboards I’m looking at seem to come with on-board video. But I need to make sure the motherboard has the slots. 8GB seems to be a starting point, but it looks like I want the slots to upgrade it to at least 32GB. I haven’t really seen 64GB too much.

In going through this I’m also coming up with questions about USB 2.0 and USB 3.0. Why do motherboards carry both? It seems there are still problems and quirks with USB 3.0?

Sources

Right now one of the top custom gaming-computer companies seems to be Alienware (part of Dell now), so I went to see what they were doing. One of the questions to be answered (based on your comment above) is “What is last year’s best technology?” I don’t have that yet, but:

http://www.alienware.com/

And I’m looking at bundles on mwave:

http://www.mwave.com/mwave/index.asp?

I’m really not impressed with the selection I seem to get at mwave these days. I think it’s their website software. Right now I go to bundles, choose the chipset (i7) and the type (Ivy Bridge), and I get about 2 motherboard bundles in the $300 range (with chipset and 8GB memory [expandable to 32], it totals about $700). But it has video to start with and seems to have the PCIe expansion slots that I need.

Mwave has a lot of motherboards, but they seem to have slipped up in making me hunt for the ones that match the i7 and Ivy Bridge. Maybe I’m doing something wrong.

Still looking. Still appreciative of comments.

Posted: 30 January 2013 10:51 AM   [ # 3 ]
Member
Total Posts:  191
Joined  2003-10-09

I built a machine a few months ago. I selected the fastest components: an i7 3930K, an NVidia GTX 680, 32GB of memory, an SSD drive and two 4TB hard drives. It has 2 gigabit network cards. That was the fastest I could get at that time. I buy at least 1 machine a year and use the previous machine as my web and email machine and as a secondary Poser machine.
I have been building machines since the early 80s, so this is probably generation 15 or later. I don’t play games anymore, so the game-specific features don’t interest me, but with CUDA there is for me a renewed interest in capable video cards. I did have a GTX 580 card with 1.5GB and that really did not play well with the Poser Octane plugin - there was simply too little VRAM on board and there were not enough texture slots. 90% of my scenery would not load due to these constraints.
With the GTX 680 with 4GB and using the GTX 580 as a secondary card, the situation has improved a lot. Now I can load 80% of the scenes and render them.
But GPU rendering has a trade-off. Poser Firefly is a very capable renderer in its own right and has features which do not translate well to Octane (or Lux, for that matter). This means that any material-room tinkering (texture rotation, image offsets, many of the special nodes) will not translate and is in some cases impossible to render without changing textures. So it is not a perfect match. But on many occasions you get really great results. It is definitely worth the money.

Regarding AMD - I have never been a fan of AMD, so I have always used Intel. Motherboard - I define what I want on it, and that pretty much defines which boards I can use. Which make? I have always been reasonably happy with Asus, so that is what I usually go for. Sometimes I do run into problems (mostly because it is all pretty new), but that usually fixes itself with a BIOS update.

Posted: 30 January 2013 11:24 AM   [ # 4 ]
Power Member
Total Posts:  1009
Joined  2007-11-28

I have a USB 3.0 motherboard, and it has both 3.0 and 2.0 ports.  However, if you plug a USB 2.0 cable into a 3.0 port, it acts like a 2.0 port.

 Signature 

——————————————————————————————————————————————————————————————-
Who gives a Rat’s Keister how many posts I had at the old Forum? LOL

Posted: 02 February 2013 11:37 AM   [ # 5 ]
Member
Total Posts:  174
Joined  2009-07-22
WimvdB - 30 January 2013 10:51 AM

... I did have a GTX 580 card with 1.5GB and that really did not play well with the Poser Octane plugin - there was simply too little VRAM on board and there were not enough texture slots. 90% of my scenery would not load due to these constraints.
With the GTX 680 with 4GB and using the GTX 580 as a secondary card, the situation has improved a lot. Now I can load 80% of the scenes and render them…

You mentioned it before. What are you defining as a texture slot? I’m not really familiar with the term.

Also, what is the advantage of the SSD drive? I’m not sure how that comes into play these days. I still think of it as a slower HD, but non-mechanical.

And, any reason for the 3900? I’m thinking the previous-generation 3700, with a really good CUDA card 6-8 months from now, will be fine. My immediate budget will use whatever video is on the motherboard, and so Firefly it will be.

Asus. Yes, that is what my tendency would be for building. Glad to hear it reinforced. Surprised you never used AMD. For a while they had the best price/performance.

By the way, I’m building this for 3D modeling and fast rendering, and potentially some video editing, not so much gaming; it just seems like the drive behind the GPU is the gaming movement.

Finally, sounds like your machine 3 years ago would blow away anything I have.  50 Bucks for it?  (Said just for the smile).

Posted: 02 February 2013 12:09 PM   [ # 6 ]
Member
Total Posts:  191
Joined  2003-10-09

CUDA has a limited number of textures you can use. This depends on the chip. The GTX 80 allows 64 RGB textures and 32 grayscale textures. This is not a lot. A gen4 (or Genesis) figure will occupy 6-18 slots (body, head, limbs, eyes, mouth, eyelashes, each with possible bump and specular maps); add hair and clothes and you will use a large percentage of the available slots. Bump, displacement, specular and transmaps are all grayscale maps. A Stonemason scene can use well over a hundred texture maps.
With CUDA/Octane you do not have the option to fall back to CPU rendering. With a single video card, CUDA and the OS share the video memory. Windows takes 300-400MB on a dual-screen setup. The VRAM is mostly used by CUDA for texture maps, so 4000x4000 color maps fill up VRAM fast.
There are many other considerations, but this is the main one when deciding to buy a video card.
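To get a feel for these limits, here is a back-of-envelope budget sketch. The per-figure map counts and the assumption of 4-channel uncompressed maps are illustrative guesses, not Octane’s actual accounting:

```python
# Rough CUDA texture-slot and VRAM budget, based on the limits quoted
# above (64 RGB slots, 32 grayscale slots). The per-figure map counts
# are illustrative assumptions, not measured values.

RGB_SLOTS, GRAY_SLOTS = 64, 32

def vram_mb(width, height, channels=4, bytes_per_channel=1):
    """Uncompressed size of one texture map in megabytes."""
    return width * height * channels * bytes_per_channel / (1024 ** 2)

# Assume one clothed figure uses ~8 color maps plus ~8 grayscale
# (bump/spec/trans) maps - a hypothetical middle of the 6-18 range.
figure_rgb, figure_gray = 8, 8

figures = 3
rgb_used = figures * figure_rgb
gray_used = figures * figure_gray

print(f"RGB slots used:  {rgb_used}/{RGB_SLOTS}")
print(f"Gray slots used: {gray_used}/{GRAY_SLOTS}")
print(f"One 4000x4000 4-channel map: {vram_mb(4000, 4000):.0f} MB")
```

With just three clothed figures, the grayscale slots are already most of the way gone, which matches the experience described above of scenes refusing to load.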

An SSD is much faster than a traditional drive. The lifetime of the SSD is a bit uncertain; most manufacturers claim the lifetime of an ordinary disk under normal usage. The key factor here is that there is a limit on how often a block on an SSD can be written to, so it is advised to move the temp files to a RAM disk or an ordinary disk. The big advantage of the disk is its speed: booting takes a few seconds, and installing an app will go 10 times as fast.

The 3900 was the fastest at the time I bought it, and the price difference with the next best was not too much.

AMD - Compatibility was the main reason. I used to tinker a lot with the OS and did not want to be distracted by quirks of the processor.

Posted: 02 February 2013 03:13 PM   [ # 7 ]
Member
Total Posts:  174
Joined  2009-07-22

Can you recommend an Asus board to look at?    I’m having some trouble sorting out paired PCI express slots that will work. A number of boards I look at say the slots are shared and that to me means reduced performance.

Do you have a recommended vendor for paired Motherboard / CPU combos?    I liked mwave because in the past they would assemble and test the board and put on the CPU heatsink which I understand can be a pain to get right if you only do it once.

I understand more about the texture slots now. But I’m not sure how you know what the specs for the board are (also, did you leave out a digit above, or is that generic to the whole series?). That’s nice info you shared. I went to look at CUDA specs to try to understand, and also to the Nvidia site. Neither CUDA nor the specs for the boards seem to talk about texture slots. I wouldn’t have picked up the importance on my own without your mention of it.

Thank you.

Posted: 03 February 2013 05:15 PM   [ # 8 ]
Active Member
Total Posts:  622
Joined  2004-12-14
Consumer573 - 02 February 2013 03:13 PM

Can you recommend an Asus board to look at?    I’m having some trouble sorting out paired PCI express slots that will work. A number of boards I look at say the slots are shared and that to me means reduced performance.

Do you have a recommended vendor for paired Motherboard / CPU combos?    I liked mwave because in the past they would assemble and test the board and put on the CPU heatsink which I understand can be a pain to get right if you only do it once.

I understand more about the texture slots now. But I’m not sure how you know what the specs for the board are (also, did you leave out a digit above, or is that generic to the whole series?). That’s nice info you shared. I went to look at CUDA specs to try to understand, and also to the Nvidia site. Neither CUDA nor the specs for the boards seem to talk about texture slots. I wouldn’t have picked up the importance on my own without your mention of it.

Thank you.

To wring all of the performance out of Poser 2012, keep the following in mind:

Max memory, max cores - fastest thruput

Memory - You want a lot of memory capability - think big - as in maxing out a board at 32GB or higher. You don’t have to start with 32GB, but you are definitely going to want more than 16GB. Even if Poser isn’t using all of it, you want the head room available. More memory = larger buckets come render time.

Video Card  I wouldn’t recommend spending more than $100 on a video card until you decide if you want to use a GPU assisted render engine. 

All of the capabilities of the video card are irrelevant, if your system slows to a crawl because Poser is starved for memory.  Example - Start Poser, check memory usage - Add V4, check memory usage, add M4, check memory - you can run out of memory, really, really fast. 

Cores - Poser will use all cores available.  It doesn’t do you any good to have lots of memory and a great GPU, if your CPU only has 4 cores.  You want to look for a dual CPU board.  My next computer will have a minimum 12 physical (24 virtual) cores.  More cores = more buckets come render time.

Thruput - Don’t be waiting on the computer, let the computer wait on you.  I am currently running 2 SSDs (2x OWC 240gb) in a RAID 0 - I can saturate the bus on my current computer - that improves the performance of everything, not just Poser.

My recommendation is to look at a Xeon workstation - more expensive up front, but TCO is actually lower, because the life cycle is so much longer. The life cycle of my current system is 5 years, and I can easily extend that a couple more years via a CPU upgrade for about $100.
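The “check memory as you add figures” experiment above can be mimicked with a toy estimator. All per-item costs below are made-up placeholder numbers, purely to illustrate how fast a 32-bit address space fills up:

```python
# Toy scene-memory estimator mirroring the "add V4, add M4, watch the
# memory climb" experiment described above. The per-item costs are
# hypothetical placeholders, not measured Poser figures.

COST_MB = {"figure": 600, "hair": 250, "clothing": 150}

def scene_memory_mb(items):
    """Sum the estimated footprint of everything loaded in the scene."""
    return sum(COST_MB[kind] for kind in items)

# Two clothed figures with hair:
scene = ["figure", "figure", "hair", "hair", "clothing", "clothing"]
used = scene_memory_mb(scene)
print(f"Estimated scene footprint: {used} MB")

# Against a ~3 GB ceiling (roughly what a 32-bit process can use),
# this leaves little for the renderer itself:
headroom = 3 * 1024 - used
print(f"Headroom left for the renderer: {headroom} MB")
```

Even with these invented numbers, two dressed figures eat most of a 32-bit budget, which is the point being made above: you can run out of memory really, really fast.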

 

 Signature 

“Facts do not cease to exist because they are ignored” - Aldous Huxley

Posted: 04 February 2013 07:53 AM   [ # 9 ]
Member
Total Posts:  174
Joined  2009-07-22

Thread link to a recent (Jul 2012) dual-CPU discussion:

http://www.overclock.net/t/1277467/dual-socket-x79-lga-2011-intel-motherboard

This has some excerpted nuggets:

Post # 3/28 (on dual CPUs):

In things like CAD there will be a large difference in having two CPUs. However, in everything else there will be almost no difference whatsoever, so you need to make up your mind on what’s more important to you. The second thing is overclocking - a single 3930K overclocked to 4.8GHz will be on par in CAD with two 2.4GHz X79 Xeons, while being much better at gaming and cheaper. However, if this is a ‘workstation’ where you will be doing CAD-like work that your livelihood depends on, you might not overclock at all, for reliability reasons. In that case a single six-core Xeon might be best, as it gives you ECC memory support. So it would help to outline what you do and why a little more. Personally, however, it seems that unless you have an unlimited budget or do hours of CAD and Maya a day, a single 3930K would be best.

Post # 5, same reader: The Asus ROG Rampage boards are pretty popular, and so far I haven’t found any reason to dislike my Extreme.

Post # 8/28 (on why you can’t use the i7 for dual processors and have to go to the Intel Xeon series):

The reason you can’t run two 3930Ks in a 2P board is that they only have one QPI (QuickPath Interconnect). Xeon chips have 2: one for the system and one to go to a second processor. The 3930Ks don’t physically have a way to communicate with each other.

As far as a single cpu board you want to look at the Rampage IV Formula http://www.newegg.com/Product/Product.aspx?Item=N82E16813131808

You can look at the Extreme, but I don’t think there is anything to justify the extra $100 for it.

I just picked up an ASRock X79 Extreme6 for my 3930K that I hope to get up and running soon.

Posted: 04 February 2013 01:00 PM   [ # 10 ]
Member
Total Posts:  174
Joined  2009-07-22

Is it possible to buy a dual-CPU board and only put one CPU on it to start with? [Edit: Yes. Intel’s server line, the Xeon processor, often appears in computers, such as Dell’s Precision series, that come with dual-socket motherboards with only one CPU installed.]


Do you have a dual-CPU board recommendation that can take a dual-slot GPU?

.
.
.

Edit: 6 cores per Xeon (Westmere) 5600 processor, 32nm technology. July 2010 review:

http://techreport.com/review/19196/intel-xeon-5600-processors

Posted: 04 February 2013 01:15 PM   [ # 11 ]
Member
Total Posts:  174
Joined  2009-07-22

From Creative Cow thread:

http://forums.creativecow.net/thread/61/863196

A fellow building a fast graphics computer for Autodesk/Maya:



Josh Buchanan Computer for maya and 3ds max, more cores or faster cores?
by Josh Buchanan on May 27, 2012 at 9:35:45 pm

Hey guys,
I plan on buying a new workstation for the Autodesk programs, Maya and 3ds Max, and other 3D rendering programs. My question is: should I aim for a workstation with a lot of cores, or fewer cores that are much faster? Say, an Intel setup with 6 at 4.5GHz, or AMD with like 24 at like 2.1. How much do the Autodesk programs take advantage of multiple cores?
Thanks


 

 

Steve Sayer Re: Computer for maya and 3ds max, more cores or faster cores?
by Steve Sayer on Jun 4, 2012 at 1:47:30 pm

Very generally speaking, faster cores will help YOU while you’re working in the software, while more cores will help the RENDERER when you leave it running on its own or in the background.

If you’re going to be doing a lot of processor-intensive work, like animating extremely complex characters or running elaborate simulations, faster cores will make that less painful. However, if you’re going to be a doing a lot of rendering on that box, the more cores you have the less time you’ll spend waiting for those renders to finish.
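That trade-off can be sketched with a crude Amdahl-style model, where the serial (interactive) part of the work sees only one core and the parallel (rendering) part sees all of them. The workload units and the 95% parallel fraction are arbitrary assumptions, chosen only to illustrate the split:

```python
# Crude render-time model for the "faster cores vs. more cores"
# trade-off described above. Amdahl-style: the serial fraction runs
# on one core, the parallel fraction runs on all of them.
# The numbers are arbitrary illustrative assumptions.

def render_time(work, cores, ghz, parallel_fraction=0.95):
    """Estimated time: serial part sees one core, rest sees all cores."""
    serial = work * (1 - parallel_fraction) / ghz
    parallel = work * parallel_fraction / (cores * ghz)
    return serial + parallel

work = 1000  # arbitrary units of rendering work

fast_six = render_time(work, cores=6, ghz=4.5)    # few fast cores
slow_many = render_time(work, cores=24, ghz=2.1)  # many slow cores

print(f"6 cores @ 4.5 GHz:  {fast_six:.0f} units of time")
print(f"24 cores @ 2.1 GHz: {slow_many:.0f} units of time")
```

With these numbers, the 24 slow cores finish the render sooner, while the serial portion alone runs a bit over twice as fast on the 4.5GHz chip, which is exactly the split described above: faster cores help you, more cores help the renderer.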

Posted: 05 February 2013 11:38 AM   [ # 12 ]
Member
Total Posts:  174
Joined  2009-07-22

Xeon chips are typically for workstations and servers; the Intel “i” series is typically for home (but there’s still a lot to be said for going with multiple processors to build a fast 3D graphics render machine):

Thinking out loud here, leaving a partial research trail: Xeon chips are workstation chips and can be as new as the “i” series. It is a parallel line made for servers. They typically have more cores, come with ECC (error-correcting code) memory for stability, and are more expensive. It looks like if you want to run multiple processors, the Xeon series really is set up to take advantage of that, whereas the “i” series is not.

Googling the phrase “Asus, dual Xeon, GTX580” to try and come up with a motherboard I can start with and upgrade brings more info from an Adobe forum (but no motherboard). This fellow is not as concerned with 3D rendering, so his conclusion (going with an overclocked i7 and a GPU) may ultimately be different from mine:

He’s trying to build a fast graphics computer to run Adobe, in October 2012:

http://forums.adobe.com/message/4755559

Also, this article from Tom’s Hardware favors a six core i7 over Xeon (EXCEPT when using a high number of threads, as in rendering):

http://www.tomshardware.com/forum/331975-28-xeon-2690-3930k
.
.
.
Credit: link above. I found this excerpt a very helpful commentary for the discussion here (date: April 2012):


“i7 3930K (or i7-3960X) vs. Xeon E5 2690 eight core

Archi_B
Hello,

I’m wondering how the i7-3930K (or i7-3960X) holds up to the Xeon E5 2690 eight-core; I saw that HP released their new workstation z820, which supports a variety of 8-core processors (even dual 8-cores = 16 cores), but the system is very expensive.
So in terms of performance, how do the six-core i7-3930K (or i7-3960X) compare with 8-core processors? Are the 8-core E5s really worth it? Are they really that powerful and fast? Because, as I said, in financial terms an i7-3930K system would be half the money (or more).
* I couldn’t find a proper benchmark where the E5s were listed.

Any help would be much appreciated

blazorthon   04-16-2012 at 09:48:05 AM

The 8-cores with reduced clock speed are fairly similar to the X79 i7 six-core CPUs in highly threaded performance and inferior in lightly threaded performance. The Xeons are more expensive because of their more server/workstation-oriented features (ECC memory compatibility, more stable, multi-CPU-per-board compatibility, etc.), not because they are faster, except for the fastest of the Xeons (i.e., an eight-core Xeon with a clock frequency higher than 3GHz, and the ten-core Xeons).

 

Archi_B 04-16-2012 at 09:57:42 AM

Thanks for your input, blazorthon.
Regarding people’s general opinion, this is what I found out:

“For the money that you spent, dual E5s do not perform anywhere near that much faster than systems equipped with single i7-39xx CPUs. In fact, dual E5s might actually perform slower than single i7s in H.264 encodes due to the excessive latencies in the switching in dual-CPU systems (and the more CPUs within the single system, the greater the latency).”

 


Archi_B
04-16-2012 at 09:59:24 AM

“Here is one major problem with all dual-CPU setups (not just dual e5s):

No dual-CPU system performs anywhere near twice as fast as an otherwise comparable single-CPU system. In fact, without all of the latencies and bottlenecks that switchers, disk systems and graphics systems impose on the system, a dual-CPU system performs at best 41 percent faster than a single-CPU system. (In fact, one would need a quad-CPU system just to theoretically double the overall performance of a given single-CPU system.) Add in the chipset, disks and GPU, and the performance advantage could plummet to less than 20 percent. That’s way too small of a performance improvement for such an astronomical increase in total system cost (which could amount to double or even triple the cost of an otherwise comparable single-CPU system). And that’s not to mention that the second CPU increases the total system cost by at least $2,000 up to a whopping $6,000. No wonder why dual-CPU systems are relatively poor values (bang-for-the-buck).”
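The “at best 41 percent faster” figure quoted above follows from a square-root scaling rule of thumb (speedup ≈ √n for n sockets), which also yields the “need a quad-CPU system to double performance” claim. A quick sanity check of that arithmetic; note this is the quoted poster’s pessimistic model, not a general law of multiprocessor scaling:

```python
# Square-root scaling rule of thumb implied by the quote above:
# speedup(n) = sqrt(n) for n CPU sockets. A sketch of that model
# only - real scaling depends heavily on the workload.
import math

def speedup(sockets):
    """Estimated speedup over a single-socket system."""
    return math.sqrt(sockets)

two_socket_gain = (speedup(2) - 1) * 100   # percent faster than 1 socket
quad_socket = speedup(4)                   # factor vs. 1 socket

print(f"2 sockets: {two_socket_gain:.0f}% faster")
print(f"4 sockets: {quad_socket:.1f}x")
```

Under this model, two sockets gain about 41% and four sockets are needed to double throughput, matching both numbers in the quote.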


blazorthon   04-16-2012 at 10:34:36 AM

The E5-2670 and the i7-3930K and the i7-3960X should all be about equal in highly threaded performance (12/16 threads in this context) and the i7s pull ahead significantly in anything that uses less than 16 threads. Same goes for the E5-2690. If these are your CPU choices, then get an i7-3930K or an i7-2700 and overclock it to about 5GHz (or whatever it will go to at below 1.4v). If you are also willing to overclock the i7-3930K, then you can probably get it up to about 4.5 or 4.6GHz (maybe a little higher) with an $80-$100 cooler.


Archi_B

Thanks, blazorthon. Well, I’m looking for a new computer for work, and I must say I have little experience with building one; most of my computers, laptops, and my current workstation are HP (I am an HP fan).
So what I currently have in mind is this:
HPE Phoenix h9se series
- 2nd Generation Intel(R) Core(TM) i7-3960X six-core processor [3.3GHz, Shared 15MB Cache]
- 16GB DDR3-1333MHz SDRAM [4 DIMMs]
- 256GB Solid state drive
- 1GB DDR5 NVIDIA GeForce GTX 550 Ti [2 DVI, mini-HDMI, VGA adapter]

HP: total cost $2400

Archi_B   04-16-2012 at 11:01:00 AM

Is this worth it, or should I consider a custom build (ask a friend to help) that, for the same amount of money, gets better hardware?

 

blazorthon 04-16-2012 at 05:33:24 PM (shortened from link)

Get the i7-3930K instead of the i7-3960X. It has nearly identical performance to the 3960X at all workloads for $400 less money. Also, I recommend getting 1600MHz memory. It should cost about the same as 1333MHz memory does if you buy the memory yourself from a site such as newegg. Make sure that you get 1.5v quad channel memory if you do.

Build it Yourself

...OEM computers tend to get more overcharged as they go up in performance. For example, for a low-end machine you might get the same performance from a home-built machine, but for a high-end machine you might pay 50% to several times what it would cost to build it yourself. The memory and video card are usually the worst offenders in cost. Since you were opting for a fairly low-end graphics card (at the bottom of the middle-end class today; it won’t be long before it is considered the top of the low-end class), you probably weren’t getting a price as bad as a similar gaming machine would have, but it still seems like too much money for such a system.

Considering that Tom’s built an X79 computer with some frills for looks and noise reduction that had a 3930K and Radeon 7970 (the 7970 alone was about $600 of that budget), I’d say that you should try either a home built or a partially home built such as what I suggested. If you have a friend that can help, then it would just be even easier.

Going for something similar except with less frills (cheaper case, PSU, CPU cooler) and the GTX 550 TI instead of the powerhouse of a 7970, you should be able to get something similar (or greater than) the specs that you listed for about $1200-$1400…..

 

 

 

Posted: 05 February 2013 12:25 PM   [ # 13 ]
Member
Total Posts:  174
Joined  2009-07-22

Models

Dual Xeon E5 2687W on Asus Z9 PE-D8 WS - Cinebench 11.5

http://www.youtube.com/watch?v=ZdkDsI8wihI

Asus Review (Below) from September 2012 Motherboards.org:
http://www.motherboards.org/


ASUS Maximus V Extreme vs Maximus V Formula Thunder FX (a good explanation of the names and differentiation laying out the Asus line, not just the comparison between these two specific boards):

http://www.youtube.com/watch?v=x7G-hZ7BNbc


Nvidia vs AMD GPU (note: from 2011; starting to get old):

http://hexus.net/tech/reviews/graphics/31451-asus-rog-dual-gtx-580-mars-ii/

 

Posted: 05 February 2013 03:55 PM   [ # 14 ]
Member
Total Posts:  174
Joined  2009-07-22

One motherboard possibility (looks like it might be able to handle the latest 6-core i7 chip [not sure] as well as the Nvidia GPU) at a reasonable cost ($295):

Asus P9X79 Pro:


http://www.asus.com/Motherboard/P9X79_PRO/

Retailers & Prices:

http://www.google.com/shopping/product/3644098954551914614?hl=en&sugexp=les;&gs_rn=2&gs_ri=hp&cp=14&gs_id=6&xhr=t&q=asus+p9x79+pro&pf=p&output=search&sclient=psy-ab&oq=Asus+P9X79+Pro&gs;_l=&pbx=1&bav=on.2,or.r_gc.r_pw.r_qf.&bvm=bv.41934586,d.dmQ&biw=1024&bih=608&tch=1&ech=1&psi=vnwRUcGgCuil0AHynIHgCQ.1360100575625.1&sa=X&ei=wHwRUe3RHsHw0gHOzYD4Bg&sqi=2&ved=0CLgBEMwD

.
.
Review: Asus P9X79 Pro (from Google Link Above)
- Jan 6, 2012
Having recently looked at the Sabretooth X79, it’s now the turn of a motherboard a little lower down the food chain, the Asus P9X79 Pro.

This board is lower down in relative terms only though, as the X79 is a high-end, enthusiast chipset. That means any motherboard featuring this chipset isn’t going to be exactly cheap.

The P9X79 Pro is about a tenner cheaper than the Sabretooth X79. And, you’d better sit down for this, nearly a hundred quid cheaper than Asus’s flagship Republic of Gamers X79.

The P9X79 Pro is still packed with up-to-the-minute features, such as PCIe 3.0 support, USB 3.0 boost technology, SSD caching, Asus’s new USB BIOS Flashback, an updated UEFI BIOS with new features and eSATA 6Gbps. So in fact you’re getting an awful lot of board for that price tag and with a fair degree of future-proofing built in as a bonus.

The new processors feature a quad-channel memory controller, which goes some way to explaining the crazy physical size of the chip and its corresponding socket on the board. Small it ain’t and while some companies are happy to stick with four DIMM slots, Asus being Asus has gone the whole hog and, as it did with the Sabertooth X79, the P9X79 Pro sports a complete set of eight DIMM slots, two per channel. In theory, that means you can load up the board with a maximum of 64GB of memory.

In performance terms too the P9X79 Pro impresses. The Sabertooth X79 may be seen as the home overclocker’s board of choice, but the P9X79 Pro seems just as capable of hitting 4.8GHz. An excellent result considering the Sabertooth only managed another 100MHz more. It may not have the looks, but it’s still got the performance chops.

Simon Crisp       TechRadar Labs, UK

 

Posted: 05 February 2013 04:22 PM   [ # 15 ]
Member
Total Posts:  174
Joined  2009-07-22

See the i3, i5, i7, Xeon and more!


Intel Processor Comparison
(Click on the tabs once you’re on the web page for Laptop, Desktop and Server):

http://www.intel.com/content/www/us/en/processor-comparison/compare-intel-processors.html


or just follow me:

Desktop: (i3, i5, i7 series, etc.)

http://www.intel.com/content/www/us/en/processor-comparison/compare-intel-processors.html?select=desktop

Server (Latest Xeon):

http://www.intel.com/content/www/us/en/processor-comparison/compare-intel-processors.html?select=server

This is a pretty nicely laid-out group of web pages. When you click on “compare” (you’re allowed to compare 5 processors at once), it takes you to a more detailed comparison page where you can see things like the date the processors came on line (how old they are) and more. I compared an i3, the latest 6-core i7, and the latest Xeon, for example. They color the rows so you can see at a glance which values differ across cells and which are the same.
