GPU-accelerated decoding of video files on NVIDIA hardware

I know this was an added feature in Version 12. My question is: Does this happen automatically or is there a switch that needs to be enabled to make it happen? Also, does anyone have any examples of how much acceleration happens with different levels of NVIDIA cards?

Comments

  • Triem23 Moderator
    edited June 27

    It should be automatic (Check in File>Options?). As far as comparison examples, that's probably up to the devs. I doubt most users have multiple machines to test on. Plus one would have to compare HF 11 vs 12. The "8x" speed increase on the product page is based on the following footnotes:

    • 1. Decoding UHD MPEG-4 AVC on Windows 10 machine with 8th generation Intel® Core™ i5-8600 Processor and NVIDIA GeForce® GTX 1080.
    • 2. Decoding UHD MPEG-4 AVC on Windows 10 machine with 6th generation Intel® Core™ i7-6770HQ and Intel® Iris™ Pro Graphics 580.
    • 3. Decoding UHD MPEG-4 AVC on Windows 10 machine with 6th generation Intel® Core™ i7-6770HQ.

    So the baseline comparison is a software decode on an i7-6770HQ (Hey! That's MY machine!) vs an Nvidia 1080.

  • @Triem23 I couldn't find anything in File -> Options, so I just assumed it must detect the graphics card automatically and act accordingly. Then I started wondering: what if I missed it somewhere, and years later I discover that I could have been working eight times faster but never bothered to turn the feature on! :) I couldn't find anything in the manual, so I just thought I would ask.

    Then I started wondering where the 8x number comes from, and what I could realistically expect based on what I have. I couldn't find any other information regarding different cards, so again, I thought I would ask.

    Thanks for answering Mike!

  • Triem23 Moderator

    In your case, moving from a 2012-13-era CPU to something current (and you got a 1080, right?), you should be seeing significantly smoother mp4 editing on your timeline.

    Remember, hardware acceleration currently applies only to mp4 performance on the timeline, not (yet) to rendering. Hardware acceleration will remain under development until it's cross-platform (PC/Mac), cross-vendor (AMD/NVIDIA), and (I think) covers mp4 rendering.

    I'm pretty sure it's always on, but I'll tag, oh @JavertValbarr this time, to fact-check me here. 

  • @Triem23 Yes, I have gone from...

    16 GB DDR3 RAM, Intel Core i5-4460 (3.20 GHz, 4 cores), Intel HD Graphics 4600

    to...

    16 GB DDR4-3200 RAM, AMD Ryzen 7 2700X (8 cores, 3.7 GHz, 4.3 GHz max boost)
    MSI Gaming GeForce GTX 1080 8 GB graphics card

    It's a pretty big improvement, and it definitely shows in HitFilm, Blender, and all the other applications I use. Even though hardware decoding is not yet a factor in rendering, even rendering is massively improved. I was just wondering whether I had missed something important along the way. Thanks again, Mike!

  • To turn off hardware decoding, you can do either of the following:

    1. File -> Options -> Render -> uncheck "Use hardware decoding if available"
    2. In the Media panel, right-click the video file -> Properties, and uncheck "Use hardware decoding"

    Otherwise, it's on if it's supported.

  • I am a new HF user, primarily using it for very simple videos with almost no effects other than some text overlays and fades. I find the Viewer in HF essentially unusable because performance is so poor whenever there is a transition to a new clip. If I play just one clip, it takes maybe 10-20 seconds for the clip to begin playing somewhat smoothly, but as soon as there is another clip to transition to, the Viewer doesn't update quickly enough to even show the footage. Here are my specs:

    16 GB of RAM, Intel i7 at 2.4 GHz x 4, Intel HD Graphics 5500

    Based on that information, is this just a video card/overall spec issue for me, or could there be other issues as well?

  • Hi,

    Sorry for interrupting the discussion, but I have similar questions regarding hardware decoding:

    It is widely known that HitFilm does not use CUDA, RTX, etc. (so no advantage for NVIDIA there), and that it is based on the OpenGL API (which is supported by both NVIDIA and AMD cards). HitFilm uses hardware decoding on NVIDIA cards, and while thinking about that, a question occurred to me.

    Imagine two otherwise identical setups running HitFilm Pro 12 with the same system specs. The only difference is the GPU: one is from AMD (no hardware decoding) and the other is from NVIDIA (hardware decoding turned on), but let's assume BOTH CARDS HAVE THE SAME SPECS (except unique features like RTX, etc.).

    - Roughly what performance improvement would the NVIDIA card give over the AMD one, purely as a result of hardware decoding?

    - In other words, is there any quantitative information regarding the benefits of hardware decoding?

    Thanks.

  • edited July 22

    "...is there any quantitative information regarding the benefits of hardware decoding?"

    Yes, and it is listed on the Hitfilm Pro product webpage. There is a reference to the increased decode performance of the HW decoder relative to a few different CPUs. Now if the decode is stated to be 8x faster, that does not mean Hitfilm will be 8x faster. Throughput is the sum of many different parts. Hardware decode really eliminates bottlenecks when decoding typical AVC/H.264 media. In my experience the HW decode is very fast.

    HW decode only works on 8-bit 4:2:0 AVC/H.264 media. Everything else is still decoded via software decoders, so 10-bit or 4:2:2 AVC media still goes through the software path.
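
    If you want to see what a given clip actually contains, one quick way (assuming you have the FFmpeg tools installed; ffprobe is part of those, not of HitFilm) is a little Python sketch like this, where "clip.mp4" is just a placeholder path:

    ```python
    # Minimal sketch: use ffprobe (from the FFmpeg tools, installed separately) to see
    # what a clip actually contains. h264 + yuv420p is the 8-bit 4:2:0 AVC case that
    # qualifies for HW decode here; anything else falls back to the software decoders.
    import json
    import subprocess

    def probe_video(path):
        """Return codec name, pixel format, and frame size of the first video stream."""
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-show_entries", "stream=codec_name,pix_fmt,width,height",
             "-of", "json", path],
            capture_output=True, text=True, check=True,
        ).stdout
        stream = json.loads(out)["streams"][0]
        return stream["codec_name"], stream["pix_fmt"], stream["width"], stream["height"]

    codec, pix_fmt, w, h = probe_video("clip.mp4")  # placeholder path
    hw_candidate = (codec == "h264" and pix_fmt == "yuv420p")
    print(f"{codec} {pix_fmt} {w}x{h} -> HW decode candidate: {hw_candidate}")
    ```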

    "It is widely known that hitfilm does not cuda..."

    That is a big "so what." Hitfilm uses OpenGL, which has the GLSL shader language. CUDA is just a programming language to expose the shader/CUDA/SP cores (ALUs); GLSL does the same thing. Hitfilm has full access to the compute cores of the GPU via GLSL. CUDA, OpenCL, and GLSL are all programming languages that provide access to the GPU compute cores.
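
    Just to make that concrete, here is a tiny illustration (my own sketch, nothing to do with HitFilm's actual code) of GLSL doing general-purpose per-pixel work, the same role a CUDA kernel plays. It assumes the third-party moderngl Python package and an OpenGL 4.3 capable driver:

    ```python
    # Illustrative only: a GLSL compute shader doing arbitrary per-pixel math, the same
    # job a CUDA kernel would do. This is NOT HitFilm's code; it assumes the third-party
    # moderngl package (pip install moderngl) and an OpenGL 4.3 capable driver.
    import struct
    import moderngl

    ctx = moderngl.create_standalone_context(require=430)

    SHADER = """
    #version 430
    layout(local_size_x = 64) in;
    layout(std430, binding = 0) buffer Pixels { float px[]; };
    void main() {
        uint i = gl_GlobalInvocationID.x;
        px[i] = clamp(px[i] * 1.2, 0.0, 1.0);  // a simple gain, applied per pixel
    }
    """

    pixels = [0.1, 0.5, 0.9, 1.0] * 16                   # 64 fake pixel values
    buf = ctx.buffer(struct.pack("64f", *pixels))        # upload to GPU memory
    buf.bind_to_storage_buffer(0)

    ctx.compute_shader(SHADER).run(group_x=1)            # 1 group of 64 invocations
    print(struct.unpack("64f", buf.read())[:4])          # brightened, clamped values
    ```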

  • Triem23 Moderator

    @NormanPCN One last note, the "8x" faster number compared two specific GPUs. Intel Iris Pro 530 vs Nvidia 1080. So if a theoretical user, say @FilmSensei, went from an Intel HD 4000 to an Nvidia 1080, that user might see more than an 8x increase.

    Of course hardware acceleration is only for 8-bit, 4:2:0 mp4 decode on the Timeline. I believe renders will be added later (after AMD acceleration?). Perhaps higher bit-depth or subsampling as well? The vast majority of cameras shooting mp4 are 8-bit 4:2:0, so it's a significant feature for many. 

  • From my reading of the chart, I believe the 8x compares CPU-only decode (the 6770HQ, HF 11) with HF 12 on a GTX 1080.

    "Perhaps higher bit-depth or subsampling as well?"

    Nope. The HW decoders are all pretty much limited to 8-bit 4:2:0 for AVC. Nvidia for sure. Also limited to 4096x4096 max.

    Again, not too sure about Intel Quicksync decoder limits.

    As for HEVC decode, if/when it's added: Nvidia supports 10- and 12-bit decode there, plus 4:4:4 (but no 4:2:2), with a max dimension of 8192.
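
    If it helps to have those numbers in one place, here is a rough rule-of-thumb check based only on the limits quoted in this thread, not on NVIDIA's official support matrix (which varies by GPU generation, see the NVDEC tables NVIDIA publishes):

    ```python
    # Rough rule of thumb only, using the limits quoted in this thread: AVC is 8-bit
    # 4:2:0 up to 4096x4096; HEVC allows 8/10/12-bit and 4:4:4 (but not 4:2:2) up to
    # 8192. Real NVDEC capabilities vary by GPU generation, so check NVIDIA's tables.

    def nvdec_can_decode(codec, bit_depth, chroma, width, height):
        codec = codec.lower()
        if codec in ("avc", "h264"):
            return bit_depth == 8 and chroma == "4:2:0" and max(width, height) <= 4096
        if codec in ("hevc", "h265"):
            return (bit_depth in (8, 10, 12)          # 8-bit assumed alongside 10/12
                    and chroma in ("4:2:0", "4:4:4")  # no 4:2:2 support
                    and max(width, height) <= 8192)
        return False

    print(nvdec_can_decode("h264", 8, "4:2:0", 3840, 2160))   # True
    print(nvdec_can_decode("h264", 10, "4:2:2", 3840, 2160))  # False -> software decode
    print(nvdec_can_decode("hevc", 10, "4:2:0", 7680, 4320))  # True, per the limits above
    ```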

  • Triem23 Moderator

    @NormanPCN, you are, of course, correct in your reading of the chart, and I needed to pull my head from between my legs.

    In my defense, I happen to HAVE an Intel 6770HQ CPU and was in the process of doing an Intel driver upgrade while posting my prior comment. ;-)

    Otherwise, I trust you on the limitations of hardware decoders more than I trust myself. :-)

  • @Triem23 If you are curious about Nvidia decode support (NVDEC), scope this page out. It has an  NVDEC table listing support for various GPU architectures.

    https://developer.nvidia.com/nvidia-video-codec-sdk

  • @NormanPCN @Triem23

    Thanks for all the info.

    But the more info I get about hardware, the more confused I become. I've just upgraded my old HP Z600 rig to an i9-9900K with 32 GB of DDR4-3000, but I still have to buy the GPU, and my maximum budget is 550€.

    Within that budget, I could afford one of these graphics cards:

    - RTX 2060 Super (430€)
    - RTX 2070 Super (530€)
    - Radeon RX 5700 XT (430€)

    The AMD RX 5700 XT (430€) is known to outperform the RTX 2070 Super (530€), and therefore also the 2060 Super.

    So here are my questions:

    - Considering that HF is my main software, is it worth paying 100€+ more for an NVIDIA GPU that performs worse than the AMD one, ONLY FOR GETTING HARDWARE DECODING?

    - Which GPU would you choose for HF?

    Thanks.
