Jan 29, 2010, 09:33 PM // 21:33
|
#1
|
Wark!!!
Join Date: May 2005
Location: Florida
Profession: W/
|
GPU Based Rendering
I saw this the other day and wanted to pass it on. It looks like the big thing for 2010 is GPU-based rendering. Most render programs I know of (like Poser) work from the CPU instead, so maybe this will speed things up.
Games, I'm not sure of.
Jan 29, 2010, 10:34 PM // 22:34
|
#2
|
über těk-nĭsh'ən
Join Date: Jan 2006
Location: Canada
Profession: R/
|
games are rendered on the graphic card to begin with. that's the whole point of having these cards in the first place.
Jan 29, 2010, 10:37 PM // 22:37
|
#3
|
Wark!!!
Join Date: May 2005
Location: Florida
Profession: W/
|
I figured as much, but you never know. I mean you'd think that the powerful rendering programs would be using them by now, but they aren't.
Jan 29, 2010, 10:41 PM // 22:41
|
#4
|
über těk-nĭsh'ən
Join Date: Jan 2006
Location: Canada
Profession: R/
|
that's because of the wildly different GPU architectures out there. it's extremely hard to write a program that can take advantage of every available architecture.
Jan 29, 2010, 10:45 PM // 22:45
|
#5
|
Wark!!!
Join Date: May 2005
Location: Florida
Profession: W/
|
Which I guess is the reason some games don't work well with every GFX card at release.
Jan 29, 2010, 11:29 PM // 23:29
|
#6
|
Forge Runner
|
Maybe someone can explain this to me. I always thought a GPU was basically an underclocked CPU, so why would a GPU be better than *utilizing* your CPU?
Jan 30, 2010, 12:41 AM // 00:41
|
#7
|
über těk-nĭsh'ən
Join Date: Jan 2006
Location: Canada
Profession: R/
|
GPUs are dramatically different from CPUs. one is a general computational device, the other is a specialized chip designed to accelerate the rendering of 3D objects on a 2D screen. you cannot compare the two based on clock speed, since they are so radically different.
the modern GPU is made up of a number of scalable graphics "cores", each of which can be programmed to execute generic code. this is a step up from the older, fixed-function "shaders" design. this means that, while GPUs are still primarily about accelerating graphics, they can also be used to do whatever you want them to. right now, the fastest graphics card on the market, AMD's radeon HD5870, has a whopping 1600 graphics cores. compare that with the modern CPU, which is more or less limited to 4 cores. this means that graphics cards are the best processors for running parallelized code. if you can divide up your task to take advantage of this parallel processing power, a graphics card will do it dramatically faster than a CPU. in fact, a $5000 desktop computer with four nvidia graphics cards in parallel can outperform a multi-million dollar computing cluster by a few orders of magnitude on certain tasks.
video rendering/encoding/decoding are tasks that can easily be divided up into little chunks, which means that graphic cards, if properly utilized, are dramatically faster than CPUs in these tasks.
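The "divide the task into little chunks" idea above can be sketched with a toy example. This is just an illustration in Python, with CPU threads standing in for GPU cores — all the names here (`shade_pixel`, `render`) are made up for the sketch, not from any real renderer:

```python
from concurrent.futures import ThreadPoolExecutor

def shade_pixel(i):
    # stand-in for an independent per-pixel computation;
    # each pixel can be computed without knowing about the others
    return (i * i) % 256

def render(width, workers=8):
    # hand the pixel range to a pool of workers, the way a GPU
    # spreads independent pixel work across its many cores
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(shade_pixel, range(width)))

frame = render(1024)
```

Because each pixel is independent, the work scales with the number of workers — swap 8 threads for 1600 GPU cores and the same structure is why graphics cards win at this kind of job.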
Jan 30, 2010, 02:36 AM // 02:36
|
#8
|
Forge Runner
|
Quote:
Originally Posted by moriz
GPUs are dramatically different from CPUs. one is a general computational device, the other is a specialized chip designed to accelerate the rendering of 3D objects on a 2D screen. you cannot compare the two based on clock speed, since they are so radically different.
the modern GPU is made up of a number of scalable graphics "cores", each of which can be programmed to execute generic code. this is a step up from the older, fixed-function "shaders" design. this means that, while GPUs are still primarily about accelerating graphics, they can also be used to do whatever you want them to. right now, the fastest graphics card on the market, AMD's radeon HD5870, has a whopping 1600 graphics cores. compare that with the modern CPU, which is more or less limited to 4 cores. this means that graphics cards are the best processors for running parallelized code. if you can divide up your task to take advantage of this parallel processing power, a graphics card will do it dramatically faster than a CPU. in fact, a $5000 desktop computer with four nvidia graphics cards in parallel can outperform a multi-million dollar computing cluster by a few orders of magnitude on certain tasks.
video rendering/encoding/decoding are tasks that can easily be divided up into little chunks, which means that graphic cards, if properly utilized, are dramatically faster than CPUs in these tasks.
|
That is actually pretty damn amazing, I had no idea. Will we possibly see CPUs with this technology in the coming years?
Jan 30, 2010, 03:08 AM // 03:08
|
#9
|
Wark!!!
Join Date: May 2005
Location: Florida
Profession: W/
|
Right now, there are quad core CPUs. After clock speeds got to about 2.5 GHz or so, it became easier and more efficient to add more cores and let each one handle different tasks.
I'm not really sure when we'll see a 1600-core CPU for the home user.
Jan 30, 2010, 07:39 AM // 07:39
|
#10
|
Grotto Attendant
Join Date: Jan 2007
Location: Niflheim
Profession: R/
|
The biggest CPUs currently have 4 cores and 8 threads, but Intel is already making experimental CPUs with as many as 48 cores. The individual cores wouldn't be as powerful as today's quads and i7's, but the potential is huge.
Still, GPUs are much more powerful when it comes to gaming and editing graphics/movies.