Graphics card for video editing
  • 129 Replies
  • Newest Nvidia cards have quite good energy management (AMD cards reportedly not so much, and still buggy), and the Kepler design is more energy efficient than older designs. AFAIK, for apps such as DaVinci, the amount of GPU RAM comes in very handy.

  • A used GTX 460 (1GB!) is a real steal for the money now: 336 cores.

  • @zcream Funny thing is GPUZ doesn't report my GTX460's memory controllers load as ever higher than 1%, and the GPU load is rarely above 10%. Wouldn't the memory controller be involved in transfer over the PCIe lane?

  • My understanding is that Premiere only uses one card max; if this is different, let me know. So the extra horsepower is to run apps that access more than one GPU. At this point I'm interested in stats for DaVinci, SpeedGrade, etc., and whether they benefit. Maybe NeatVideo. After Effects supposedly uses all cards.

    For reference: After Effects CS6 does use the CUDA cores of all installed cards. Details here:

    Premiere Pro will only use the CUDA cores on one GPU. Details here:

  • @Sage

    1GB is about the minimum for CUDA apps like CS6. I have the 1GB 460, and for things like grading I definitely feel like upgrading quite a bit.

  • It is not the best; it is the cheapest option atm, if you find it for a few dollars.

    670 ++

  • Personally I went for the 670, as I need something that is very low noise during 2D work. So I got one with a non-stock cooler.

  • Can anyone confirm a card that will run Speedgrade with DPX files? Premiere is happy with a basic card. Tx.

  • I use two GTX 580s in SLI and performance is pretty darn good - comparable to the 680 when overclocked (well ... nearish). @stip is correct that you have to enable certain cards from Nvidia by editing the plist on a hackmac, or the equivalent on the PC. The fact is the 680/690 is merely a stopgap before their 2013 'revolutionary' cards come out, where apparently the whole ballgame is gonna change yet again. Codenamed Maxwell, Nvidia will have a CPU coupled with the GPU, manufactured on a 22nm process; Maxwell is reportedly approximately 7.6 times faster than the Fermi C2070.

    If you can afford it, the GTX 690 has 3072 CUDA cores - basically double the 680.

    Here's a good Win64 test proggie for seeing how well your card setup operates on CUDA and OpenCL:

  • use two GTX 580s in SLI and performance is pretty darn good - comparable to the 680 when overclocked

    Issue with these guys is that they are hot and consume quite a lot of power :-)

    Nvidia will have a CPU coupled with the GPU, manufactured on a 22nm process; Maxwell is reportedly approximately 7.6 times faster than the Fermi C2070.

    I have big doubts about this. All of them have trouble even with 28nm. So, it is much better to have something real.

    Here's a good win64 test proggie for seeing how well your card setup operates on Cuda and OpenCL

    Best idea is to test them in the actual applications you use, as GPU apps differ significantly.
    For the GTX 670 / 680 it is also very important whether the application uses double-precision math. In that case, the best idea is to get a GTX 580. Otherwise the 670 / 680 will be faster, and as a gaming card it will be faster too.

  • I currently have a silent Sapphire HD 6670. I was interested in the silent Zotac GT 640 for use with CS6. Unfortunately, I think the card will not fit in my cramped Dell Inspiron. The heat sink extends out and above from the end of the card.

    I have also read rumors that a GDDR5 version of the GT 640 will be released in August. Are people satisfied with the performance they are getting with the current DDR3 GT 640 in Premiere CS6?

    Another factor that makes me hesitant to pull the trigger on the current GT 640 offerings is the rumor that CS6 will be patched to support AMD cards like the one I currently own.

  • To follow up, I purchased the EVGA GT 640 from Amazon:

    Rendering video adjustments instantly with CUDA in Premiere CS6 is night and day compared to the wait time I previously endured using software only. The downside is that when scrubbing footage on my modest PC (7.1 rating across the board on Windows Experience Index) there is a noticeable buffer time. Changing playback resolution seems to have little effect on the issue. This is something I can live with because the advantage of CUDA far outweighs this minor annoyance. I imagine better hardware configurations mitigate the problem.

    As far as noise goes, coming from a fanless "silent" card, the difference does not bother me. Perhaps it gets louder when gaming, I have not tried it. If I were to game I would have headphones on anyway.

    UPDATE: My initial test was using Sedna AQ1 footage and that was apparently too much for my PC. I can scrub quite smoothly with Flowmotion V2.02 footage. I guess this settles which hack I should use.

  • Hi, I'm new to CUDA. I read some posts recently recommending the Nvidia GT 240 for extra horsepower in programs like Premiere Pro, Adobe AE, etc.

    Now to the meat of the question: I got confused by all those cards - Quadro, GT, GTX consumer, etc. So basically I want to ask, what's the difference, and what do you need/want if you're buying anyway?

    E.g. the Quadro FX 4800 has 192 CUDA cores, and I looked up that the new 660 Ti has about 1300 CUDA cores. What does that mean? Will the 660 Ti be faster than the $2k Quadro FX 4800?

  • @simple

    Check the first post in the topic first.

  • The Geforce 640 is a good deal for a low-end card, but its memory bandwidth is limited by its DDR3 RAM and 128-bit bus. For a mid-range GDDR5 card, I think the best bang for the buck these days is the Geforce 660 (but not the Ti version). Here's a comparison between the 640 and 660:

    If you have a beefy power supply, the Geforce 560 Ti 448 is a cheaper alternative to the 660:

    EDIT: I found docs on a GDDR5 version of the 640 with a 128-bit bus, but I can't find it for sale anywhere:

  • @matthewcarr Strange, as I've noticed intra-frame stuff is quicker to decode in an NLE. What's your PC like with ProRes then?

  • I purchased the ASUS GTX 650 thinking my life would be better. It has lots of CUDA cores (384) and GDDR5 memory rather than DDR3. I expected great things, as opposed to my ancient GT 240. In actuality, most things are the same. Rendering out a sequence to MP4 is about the same - OK, a few seconds faster, but no biggie. In CS5/5.5/6 I monitor the GPU usage in real time with GPU-Z. The 240 runs 30-35 percent, sometimes a bit more, sometimes a bit less. The GTX 650 runs 7-15 percent, with some spikes. So the fact is, the card is not the bottleneck. You don't usually use all of it, and if you buy a fancy one, you are throwing your money away. EXCEPT:
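
    On Linux or from a script, the same utilization counters GPU-Z shows can be read with Nvidia's nvidia-smi tool, which ships with the driver. This is a small sketch, not part of anyone's setup above; the parsing is split into its own function so it can be checked without a GPU present.

```python
# Sketch: read GPU and memory-controller utilization (the counters GPU-Z
# displays) from nvidia-smi's CSV output instead of a GUI.
import subprocess

def parse_utilization(csv_line):
    """Parse one 'utilization.gpu, utilization.memory' CSV line,
    e.g. '35 %, 12 %' -> (35, 12)."""
    gpu, mem = (field.strip().rstrip('%').strip()
                for field in csv_line.split(','))
    return int(gpu), int(mem)

def query_utilization():
    # One output line per installed GPU.
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,utilization.memory",
         "--format=csv,noheader"],
        text=True)
    return [parse_utilization(line) for line in out.strip().splitlines()]
```

    Polling this in a loop while scrubbing or rendering gives the same kind of "is the card actually the bottleneck?" numbers discussed in the post above.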

    If you have say six-eight cams and you lay on the effects, the GTX 650 will allow you to pile them on more. I don't really do that, but I tried it out, and you can layer on a few more effects, for sure. Plus I suppose rendering MPEG might be faster, but I never do that anymore. Maybe one DVD bounce a year or so.

    Noise: I got the ASUS 650 with the tricked out cooler (DirectCU) and fan. It is very quiet, and you can control the fan speed in software. It is a very good design, and noise is not an issue.

    There is one more difference I would mention. The older GT 240 cards need to use older drivers. If you download the latest drivers, you will get random crashes. Uninstall the auto update thingy. The new card so far, no crashes with the latest drivers.

    Was it worth the $100 chipolatas for the 650? No, but I'm leaving it in there. I would rather have it than the silent 640 for the same price, unless absolute silence is required. A little extra GPU power may come in handy--doesn't do anything now, but may down the line. I will test it with Speedgrade, but I have not used the program that much.

    Good news is there are lots of choices. Above a certain spec, you may see no difference at all, unless you really lay on the effects.

  • I'm building an editing station: i7-3930k, Asus Sabertooth x79, 16 or 32GB RAM, 256GB SSD, 800W PSU, + 5TB of footage drives. I will be editing a feature-length film on it, but I want it to be ready for GH3, Blackmagic Design 2.5K and maybe even 4K footage. The only thing left for me to purchase (apart from RAM) is the video card (hopefully, not very expensive). So I came here and I've read this whole thread.

    I must say, after reading this thread, now I'm even more confused than before. I was under the impression that (for rendering, realtime skimming over the timeline, realtime playback etc.) Adobe PP as well as After Effects and SpeedGrade rely mostly on the CPU, so while the vcard has to be Nvidia, even relatively modestly priced cards like the GT240 will be enough, and that there is really little performance difference in Adobe CS6 between a GT240 and a GTX670, for instance. But from reading this thread, it seems like some people are saying quite the opposite - that the more expensive contemporary GTX670 and 680 are the ones to go for. So which one is it?

    If I get a GT240, will that be okay on a system like this? What would be the most likely drawbacks?

    What should we pay attention to when deciding which vcard to buy? CUDA core count? GPU's memory? GPU's core clock - speed? or anything else?

    After all, there is quite a price difference between a £40 card and a £650 card, but when one decides to spend money one way or another, one must know what he/she is paying for.

  • There are some editing platforms that are notoriously picky when it comes to compatible video cards - Avid Media Composer (my choice) being a prime culprit. It's a difficult topic, trying to generalise about video cards - first identify your NLE of choice, I would say, then go from there.

  • @mrbrill Adobe Premiere Pro, with minor usage of After Effects. As for color correction, I have not yet decided (SpeedGrade, etc).

  • @Lpowell... what's wrong with Geforce 660ti? I have that card... (Gigabyte version) seems to perform pretty well....

  • Simple answers are:

    Best bang for the buck editing system today: GT640 GPU, Asus P8B75M-LE motherboard, Intel 3570 processor (default cooler), 8GB RAM (2x4GB) 1600MHz - or 16GB (2x8GB) if you will load more software at the same time - a 120GB SSD for the system, a 7200 rpm HD for files and scratch disk, two 7200 rpm HDs in Windows software RAID for video, and a Blu-ray burner. Cineform NeoScene to transcode video to the Cineform codec (much better real-time playback, fluid without stutter, and better quality preview at full, half and quarter resolution). Forget other motherboard brands; ASUS is always stable, with no system crashes or freezes.

    If you just need A/B roll - two layers in 1080p - a GT240 or any other card with 96 CUDA cores (or more) and 1GB RAM is enough for all effects including color/image grading, chroma key, dissolves... This is for the Nvidia 200, 400 and 500 series. The Nvidia 600 series uses a different CUDA core design - more cores, less power in each core - so for the 600 series get 384 CUDA cores or more; the GT640 is good. Always get 1GB RAM or more on the GPU. DDR3 vs GDDR5 makes no significant difference for real-time preview.

    The PSU (power supply unit) must have a single 12-volt channel. Some PSUs divide the 12V into two channels, one for the processor and one for the rest of the system, and this is not good, because the processor does not use all the power from its channel and you can't use the remaining power from that channel for other components like the GPU. Some single-channel 12V PSUs are labeled VE (video edition). This online calculator allows you to find the PSU power you need:
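
    The single-rail point can be made concrete with back-of-envelope arithmetic. The wattages below are illustrative assumptions for a 3570-class CPU and GT 640-class GPU build like the one described above, not measured values; check your own parts' TDPs.

```python
# Sketch of a 12 V power budget with assumed (not measured) wattages.
components = {
    "cpu": 77,         # Intel 3570-class TDP (assumed)
    "gpu": 65,         # GT 640-class TDP, no 6-pin connector (assumed)
    "drives": 4 * 10,  # four drives at ~10 W each under load (assumed)
    "fans_misc": 20,   # fans, burner, motherboard overhead (assumed)
}
total_watts = sum(components.values())
amps_at_12v = total_watts / 12.0

print(total_watts)            # estimated total draw in watts
print(round(amps_at_12v, 1))  # amps the 12 V side must supply

# On a split-rail PSU, the CPU channel may be rated well above the ~6.4 A
# the CPU actually draws (77 W / 12 V), but that spare capacity cannot be
# shifted to the GPU channel. A single 12 V rail can spend all of its
# rated amperage wherever it is needed.
```

    With these assumed numbers the build draws around 200 W, so the online calculator's recommendation will mostly reflect headroom and efficiency margins rather than raw demand.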

    As the CPU is the piece of hardware that decodes the video files, for two layers with a key or dissolve in Cineform 1920x1080 24p, 30p or 60i files you need an Intel processor with 4 cores at 3.6 GHz or more to do full-resolution preview (the 3570 with turbo enabled). Half-resolution preview works well with slower processors; when you change from full to half, the processor usage decreases more or less from 90% to 20%, or from 50% to 20%, depending on the system.

    Cineform NeoScene is very useful: it helps to get more fluid playback at full or half resolution, and is less processor-intensive than AVCHD, so it's better for real-time preview, and when decreased to half resolution the image is still good to look at. AVCHD used in half-resolution preview is ugly. Also, NeoScene installs a plugin in Premiere which makes fluid real-time playback much, much better.

    Also, for two Cineform video layers at full resolution you need RAID; just a single HD will not be enough to feed the system with two Cineform 1080p videos at the same time. Two 7200 rpm HDs in Windows software RAID is enough. Today SSDs are getting large in capacity and very fast, so maybe one can replace a RAID array just for the files you are playing; other files can stay on a default HDD.

    A better option for full-resolution preview is a 6-core Intel CPU (a must for the BMCC). Overclocking is unstable most of the time, but if you want to give it a try with a 4-core "K" CPU on a low budget, the Z77 chipset has overclock options and the entry-level motherboards are affordable. Without overclocking, the Asus B75 chipset motherboards with the 3570 processor are the best bang for the buck.

    Half-resolution preview is pretty good for editing Full HD or BMCC footage transcoded to Cineform, and will work great with any 4-core CPU without RAID and without overclocking. You can also use quarter-resolution preview for 4K video. Full-resolution preview is only needed for intense chroma key work (green/blue screen), because you need to see the edge between foreground and background. Working with half- or quarter-resolution preview, you just need one monitor.

    Cineform at half resolution can play 4 picture-in-picture layers on a 3570 Intel processor with two 7200 rpm HDs in RAID, or two layers at full resolution.

    If you transcode the AVCHD footage to MPEG-2 25Mbps, you can play more layers in real time, because the MPEG-2 codec is less processor-intensive than AVCHD and than Cineform. With a GT240 it is possible to play something like 9 layers of MPEG-2 25Mbps at half resolution; maybe it will need 3 HDs in RAID to do this - it needs testing.
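
    One common way to do that MPEG-2 transcode is with ffmpeg. This is just a sketch: the filenames are placeholders, and while the flags used here are standard ffmpeg options, you should verify the result plays well in your NLE. The command is only assembled, not run, so the pieces are easy to inspect.

```python
# Sketch: build an ffmpeg command line that transcodes an AVCHD clip to
# MPEG-2 at 25 Mbps for lighter real-time playback. Filenames are
# placeholders; adjust the bitrate to taste.
import subprocess

def mpeg2_proxy_cmd(src, dst, mbps=25):
    return [
        "ffmpeg", "-i", src,
        "-c:v", "mpeg2video",               # MPEG-2 video encoder
        "-b:v", f"{mbps}M",                 # target bitrate, e.g. 25 Mbps
        "-maxrate", f"{mbps}M",             # cap the peak rate
        "-bufsize", f"{2 * mbps}M",         # rate-control buffer
        "-c:a", "copy",                     # keep the original audio
        dst,
    ]

cmd = mpeg2_proxy_cmd("clip.mts", "clip_proxy.mpg")
# subprocess.run(cmd, check=True)  # uncomment to actually transcode
```

    Batch-running this over a folder of .mts files gives a set of lighter proxies for multi-layer editing, which can be swapped back to the originals before the final render.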

    If you need 4K, take a look at the manufacturer specifications; most GPUs only support a maximum of 2560 x 1600 (enough for the BMCC, but not for 4K). This is important if you will buy a 4K monitor in the future.

    The only thing I know of that really needs a powerful GPU is NeatVideo noise reduction. If you want fast denoising, the GTX660 is also the best balance of price/performance and watts/performance today for NeatVideo CUDA; less than this makes no sense for NeatVideo. The GTX660 is the most powerful GPU that uses just a single 6-pin power connector. Take a look at the NeatVideo forum.

    If you will work with other editing/grading software like DaVinci Resolve, Magic Bullet and so on, take a look at the manufacturer specifications and forums so you can tell whether they need a powerful GPU or not. That kind of software is outside my knowledge.

    This site is full of information and benchmarks about Premiere and GPUs (if you want to know about render time and encoding time, they have some tests):

    There are compatibility issues between motherboards and GPUs. Some GPUs work fine in one motherboard but not in another. If you start the computer and it beeps, it is a motherboard/GPU compatibility issue, and the only solution is to change the GPU or motherboard.

  • @qwerty123 The 660 Ti is also a good card, but compared to the 660, the Ti is optimized more for 3D video games (more CUDA cores) and less for data throughput (lower core speed):