Tagged with hdr - Personal View Talks https://www.personal-view.com/talks/discussions/tagged/hdr/feed.rss Sat, 27 Apr 24 17:38:39 +0000 Tagged with hdr - Personal View Talks en-CA Asus ProArt 32UCX VS Apple Pro Display XDR https://www.personal-view.com/talks/discussion/23242/asus-proart-32ucx-vs-apple-pro-display-xdr Fri, 13 Dec 2019 07:48:11 +0000 jonpais 23242@/talks/discussions While awaiting delivery of my Asus PA32UCX-K this afternoon, I thought I’d compare a few of the features and specs of the two displays, bearing in mind that the XDR has not yet been tested.

The XDR boasts 6K, while the Asus is a 4K monitor; however, only the 2019 Mac Pro and 16-inch MacBook Pro can run the XDR at full 6K resolution. Both the Asus and the Apple are true 10-bit displays.

Apple claims 500 nits for their display in SDR mode, and 1,000 nits sustained and 1,600 nits peak brightness in HDR mode. The Asus can deliver anywhere from 620 to 656 nits in SDR and from 1,400 to 1,534 nits maximum brightness in HDR, depending on which reviewer you read.

The Pro Display XDR has 576 local dimming zones, whereas the Asus has 1,152.

On his YouTube channel, Marques Brownlee says his Pro Display XDR is virtually silent, while the Asus is supposed to get quite noisy.

According to Asus, the maximum viewing angle of the PA32UCX-K is 178 degrees both horizontally and vertically (Apple claims a 180-degree viewing angle for the XDR), and according to tests, hue, saturation, brightness, contrast and color temperature remain virtually unchanged even at extreme viewing angles. I expect Apple’s display will be no slouch in this department either.

As far as connectivity goes, Apple’s monitor has one TB3 port and three USB-C ports. The Asus comes with dual TB3 ports, one DisplayPort and three HDMI 2.0 ports.

The Apple Pro Display XDR runs $6,500 in the USA when purchased with stand and AppleCare+. Throw in the nano-texture glass option and that figure skyrockets to $7,500. In Vietnam, the ProArt 32UCX-K costs half as much ($3,300) and comes with stand, a three-year warranty and an X-Rite i1 Display Pro colorimeter (a $250 value in the USA, $325 in VN). With the Asus, you will however have to factor in another grand for Blackmagic’s UltraStudio 4K Mini. Ouch!

]]>
Comparison of Vegas Pro 16 and DaVinci Resolve Studio 14 for Producing HDR Videos https://www.personal-view.com/talks/discussion/20434/comparison-of-vegas-pro-16-and-davinci-resolve-studio-14-for-producing-hdr-videos- Fri, 14 Sep 2018 07:06:19 +0000 markr041 20434@/talks/discussions I created a 4K HDR10 video from each software package using the same clips, shot in Slog2.

Vegas Pro produced an HEVC HDR10 video playable on any HDR TV and on Windows 10 computers capable of HDR (I have a 10-bit HDR monitor).

Resolve creates an Avid 10-bit 4K HDR video with the correct HDR10 metadata. It does not play back in HDR on any hardware, and no software I know of plays it in HDR either, but the file is recognized by YouTube as HDR. Resolve has no HEVC or even H.264 10-bit render options!

For the exact same HDR10 4K video, the Vegas-produced file was 408 KB. The Resolve-produced HDR10 4K video from the same clips was 8.6 GB. Yes: almost 9 gigabytes versus half a megabyte! The Vegas Pro video took 25 minutes to render, the Resolve video 7 minutes (because the file is only lightly compressed).

Here is the Resolve version:

Here is the Vegas version:

I do not see much of a difference. The Resolve video is brighter (more DR) because I was able to use the scopes to max out the DR.

I think I understand the scopes (designed specifically for HDR) and the controls in Resolve. But I do not yet understand either of them in Vegas when working in ACES for HDR (I have mastered them outside ACES and in SDR).

These HDR10 videos were both made essentially using only the built-in (ACES or Resolve CM) transforms from Slog2 to HDR10 for 1,000-nit screens, with little tweaking by me, except for increasing the DR a bit in Resolve.
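For reference, the ST 2084 (PQ) curve that HDR10 transforms like these target can be sketched in a few lines of Python. This is a generic illustration of the standard's encoding formula, not anything Vegas or Resolve exposes:

```python
# Sketch of the SMPTE ST 2084 (PQ) encoding curve behind HDR10.
# Maps absolute luminance in nits (cd/m^2) to a normalized 0..1 code value.
# Constants come from the ST 2084 specification.

def pq_encode(nits):
    m1 = 2610 / 16384
    m2 = 2523 / 4096 * 128
    c1 = 3424 / 4096
    c2 = 2413 / 4096 * 32
    c3 = 2392 / 4096 * 32
    y = max(nits, 0.0) / 10000.0       # PQ is defined up to 10,000 nits
    y_m1 = y ** m1
    return ((c1 + c2 * y_m1) / (1 + c3 * y_m1)) ** m2

# A 1,000-nit mastering peak lands at about 75% of the code range:
print(round(pq_encode(1000), 3))   # 0.752
print(round(pq_encode(10000), 3))  # 1.0
```

This is why a 1,000-nit grade still leaves the top quarter of the PQ code range unused.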

]]>
Workflow HLG HDR to YouTube https://www.personal-view.com/talks/discussion/18825/workflow-hlg-hdr-to-youtube Fri, 19 Jan 2018 23:32:09 +0000 jonpais 18825@/talks/discussions https://jonpais.wordpress.com/2018/01/20/how-to-upload-hlg-hdr-to-youtube-in-two-minutes/

]]>
Netflix publishes "Meridian" as original, 89GB sized, 4k HDR MXF file https://www.personal-view.com/talks/discussion/15757/netflix-publishes-meridian-as-original-89gb-sized-4k-hdr-mxf-file Mon, 19 Sep 2016 14:48:32 +0000 karl 15757@/talks/discussions In what seems like a really noble gesture, Netflix has published a large sample of professionally produced 4k HDR video in Material Exchange Format container under a Creative Commons license.

“Meridian” is 4K HDR video shot at 60 frames per second, with a peak brightness level of 4,000 nits and Dolby Atmos audio. In other words: it pushes the boundaries on all spec fronts, and includes a number of visuals that can trip up encoders.

You can download the MXF and some auxiliary files at https://media.xiph.org/video/derf/meridian/MERIDIAN_SHR_C_EN-XX_US-NR_51_LTRT_UHD_20160909_OV/

]]>
Have you got an Affordable (~ € 700) UHD 40"+ HDR IPS TV / Monitor recommendation? https://www.personal-view.com/talks/discussion/16183/have-you-got-an-affordable-700-uhd-40-hdr-ips-tv-monitor-recommendation Wed, 14 Dec 2016 10:35:42 +0000 cas1 16183@/talks/discussions Hi,

I'm saving up for a G85, and with that I'm looking for a new 4k screen.

The screen is going to be used as a monitor for my PC. I'd like it to be 40"+.

My priorities are:

  1. viewing angle
  2. colour accuracy
  3. no flickering when set to low brightness
  4. latency
  5. HDR ready

My budget hovers about € 700.

Any recommendations?

Thanks!
]]>
Question for timelapse photographers https://www.personal-view.com/talks/discussion/16017/question-for-timelapse-photographers Wed, 09 Nov 2016 06:49:16 +0000 MischaB 16017@/talks/discussions Hello everyone,

I'm working on my graduation project. I want to create the following, and I'd like to hear what you think about it:

"A mobile app, with information about how to make the best timelapses, using any technique (1080p, 4k, HDR, stereoscopic 3d etc) with tips and tricks for the settings."

It will not be an app that makes the timelapse, nor will it be an app that tells you exactly which settings to use; it should be more of a reference work, a helping hand.

Now I'm trying to use the design thinking method to build this idea, so that's why I need opinions. What do you guys think? Is it worth working on? What do you think should definitely be included or excluded? I'm thinking about an app because I suppose it's nice to have it with you when you're shooting a timelapse, but maybe that's a wrong assumption?

]]>
mpv player now able to replay V-Log and HDR content directly with tone-mapping https://www.personal-view.com/talks/discussion/15488/mpv-player-now-able-to-replay-v-log-and-hdr-content-directly-with-tone-mapping Sun, 07 Aug 2016 15:20:20 +0000 karl 15488@/talks/discussions I thought I should mention here that my favourite video player software, mpv, was recently improved to be able to directly replay...

  • V-Log video material, applying the inverse transformation on the fly, as well as tone-mapping and color-space conversion.
    This allows you to review video clips as recorded e.g. by a GH4R directly, without having to go through a lengthy "video editor software import / rendering cycle".
    For example, to replay this sample V-Log video file posted by a forum member earlier, you could use an mpv command line like this:

    mpv --vf=format=gamma=v-log:colorlevels=full:primaries=v-gamut -vo opengl-hq:hdr-tone-mapping=clip SHOGUN_S001_S001_T008b.mov

... to obtain a video display like this:

image

  • HDR (SMPTE-ST-2084) video, as used (along with BT.2020) on UHD BluRay discs and generated by the latest PC and game console GPUs (see here for sample files).
    As with V-Log, both the electro-optical transfer function and colorspace conversions are done on the fly, and tone-mapping helps to display the content well on non-HDR-capable displays, too. Unlike V-Log, whose use in a video file cannot be detected automatically, HDR content is detected automatically by mpv, and reasonable default parameters are chosen for the transformation.

Please note that both HDR and V-Log display transformation support are implemented in the OpenGL video output module of mpv, which is the default on Linux/Windows/Mac. If you use another video output module, these on-the-fly transformation features are not available.

See here for mpv binaries for different platforms.

]]>
HDR Video https://www.personal-view.com/talks/discussion/242/hdr-video Thu, 16 Jun 2011 17:08:42 +0000 Toronto 242@/talks/discussions
This example seems to show what the near future of HDR video will be like: not the typical HDR look, but a slight HDR-type gain in dynamic range that is subtle enough to still look natural. It is from an Epic HDRx camera.

http://blog.localheropost.com/main/2011/6/3/red-epic-hdrx-test-examples.html

"The Epic sensor without HDRx is already in the 11 to 13 stop range, depending on who you ask. With HDRx, however, you get an entirely separate exposure, 2 to 6 stops darker than your original capture...."

]]>
4K HEVC 10 Bit Mastering and Exporting https://www.personal-view.com/talks/discussion/15071/4k-hevc-10-bit-mastering-and-exporting Thu, 12 May 2016 08:25:55 +0000 Tron 15071@/talks/discussions Seeing as how things are finally moving in the direction of HEVC and 10 bit (Rec.2020) on the delivery side of things, I'm curious as to what 10 bit camera recording bitrates we could expect to see coming down the road. ProRes intra codecs seem unnecessarily overweight at 4K resolution and will probably not be offered by Canon, Sony, Panasonic and JVC for internal acquisition (prosumer class). I'd be OK with a long-GOP HEVC codec seeing as how the new GPUs all seem to support hardware decode/encode capability.

Would 200 Mbps be a reasonable bit rate to expect for next-gen cameras like the GH5 if they support internal 10 bit? Any opinions on whether companies like Blackmagic or Atomos will eventually offer a recorder with an HEVC option that's easier on storage media? Anyone currently working with 10 bit HEVC delivery pipelines? Thanks.

]]>
Looking for high dynamic range content for HDR experiments https://www.personal-view.com/talks/discussion/13746/looking-for-high-dynamic-range-content-for-hdr-experiments Sat, 19 Sep 2015 09:37:51 +0000 karl 13746@/talks/discussions Hi, I am looking for some sample video material which I could use to experiment with generating HDR videos compatible with contemporary TVs.

At http://www.demo-uhd3d.com/categorie.php?tag=hdr one can find only two commercial samples; that's all I could find anywhere.

I have been able to generate videos which my 4k OLED TV recognizes as "HDR" by using:

x265 --preset ultrafast --output-depth 10 --transfer smpte-st-2084 \
  --master-display 'G(13200,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1)' \
  --max-cll '10000,400' input.y4m output.hevc

ffmpeg -i output.hevc -c copy final.mp4

When replaying "final.mp4", the TV switches into HDR mode, and the content is displayed with what appears to be very high contrast.

I haven't experimented much with the master-display and max-cll settings yet, but that wouldn't make much sense with the input videos I currently have at hand, which are simply of low dynamic range and only 8 bits per channel.

If you have some sample video material with a very high dynamic range (like shots of scenery with surfaces emitting 400 or more cd/m^2, for example a mixed shot of dimly lit indoor objects together with some brightly sunlit plants visible through a window), you would make me happy by making it available (in a format conveying at least 10 bits per color channel).

Actually, anyone with a Magic Lantern patched full frame camera (or an Arri Amira) at hand should easily be able to create such a sample shot.
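Lacking real high-dynamic-range footage, a synthetic test clip can at least exercise the encoding pipeline. Here is a hedged Python sketch (file name, resolution and frame count are my own arbitrary choices) that writes a 10-bit grayscale gradient as a Y4M file, the input format the x265 command above consumes:

```python
# Write a tiny 10-bit 4:2:0 gradient clip in YUV4MPEG2 (Y4M) format.
# All parameters here are illustrative, not anything from the original post.
import struct

W, H, FRAMES = 256, 64, 5

with open("gradient10.y4m", "wb") as f:
    # C420p10 marks 4:2:0 chroma with 10-bit samples in little-endian 16-bit words
    f.write(b"YUV4MPEG2 W256 H64 F25:1 Ip A1:1 C420p10\n")
    for frame_no in range(FRAMES):
        f.write(b"FRAME\n")
        # Luma plane: a horizontal ramp over the full 10-bit range (0..1023)
        for y in range(H):
            for x in range(W):
                f.write(struct.pack("<H", x * 1023 // (W - 1)))
        # Two quarter-size chroma planes held at neutral gray (512)
        for _ in range(2 * (W // 2) * (H // 2)):
            f.write(struct.pack("<H", 512))
```

The resulting file can be fed straight to `x265 --output-depth 10 gradient10.y4m out.hevc`, though of course a synthetic ramp says nothing about how real scenes compress.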

]]>
Checker Board HDR Sensor Idea https://www.personal-view.com/talks/discussion/13447/checker-board-hdr-sensor-idea Thu, 23 Jul 2015 14:35:04 +0000 apefos 13447@/talks/discussions I think the checkerboard design for hdr sensor can be better to avoid aliasing and moire and more easy for image reconstruction in missing pixels in each exposure (dark/bright) compared to alternate lines (simpler algorithm):

Maybe this can be good for 4K HDR video on a 3840 x 2160 pixel sensor, and much better for 1080p, because each group of four pixels becomes one pixel after downsizing, so the debayer and interpolation will be very good. This design on a 7680 x 4320 pixel sensor would be amazing for 4K HDR, because again each group of four pixels becomes one pixel after downsizing.

3840 x 2160 will be great for 1080p HDR (8,294,400 pixels, easy to do in M43)

7680 x 4320 will be great for 4K HDR (33,177,600 pixels, easy to do in a Sony full frame mirrorless)

If you look along the diagonals, you will see that for red, green and blue there are two pixels beside each empty pixel: the same diagonal direction for red and blue, and the other diagonal direction for green. This makes the interpolation easier and better than with the alternate-line-pairs design.
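As a toy illustration of the reconstruction step, here is a hedged numpy sketch (monochrome, ignoring the Bayer pattern entirely) that splits a frame into checkerboard dark/bright exposures and fills each exposure's holes from its four same-exposure edge neighbours:

```python
import numpy as np

def split_checkerboard(frame):
    """Split one frame into dark/bright exposures with NaN holes."""
    h, w = frame.shape
    odd = (np.add.outer(np.arange(h), np.arange(w)) % 2).astype(bool)
    dark = np.where(odd, frame, np.nan)     # dark-exposed pixels sit on odd cells
    bright = np.where(~odd, frame, np.nan)  # bright-exposed pixels on even cells
    return dark, bright

def fill_holes(img):
    """Fill NaN pixels with the mean of their valid edge-adjacent neighbours,
    which on a checkerboard layout belong to the same exposure."""
    padded = np.pad(img, 1, constant_values=np.nan)
    stack = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],    # up, down
                      padded[1:-1, :-2], padded[1:-1, 2:]])   # left, right
    valid = ~np.isnan(stack)
    sums = np.where(valid, stack, 0.0).sum(axis=0)
    counts = valid.sum(axis=0)
    means = sums / np.maximum(counts, 1)
    return np.where(np.isnan(img), means, img)

frame = np.arange(16.0).reshape(4, 4)
dark, bright = split_checkerboard(frame)
print(fill_holes(dark)[1, 1])  # 5.0 -- interior holes reconstruct exactly here
```

A real sensor would of course have to interpolate per color channel along the diagonals the post describes; this sketch only shows why the checkerboard layout leaves every missing pixel surrounded by usable same-exposure neighbours.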

]]>
Music video I shot with the gh2 and driftwood hack https://www.personal-view.com/talks/discussion/3515/music-video-i-shot-with-the-gh2-and-driftwood-hack Fri, 08 Jun 2012 20:31:58 +0000 josh_chiara 3515@/talks/discussions So I finally finished this music video, and since I'm on this forum so much and I find it so useful, I wanted to post this video here.

Let me know what you guys think. The compression on YouTube and Vimeo is awful, so if you want to see it for real, you can download the file from Vimeo. There's also an annoying glitch at the end that I couldn't fix. I uploaded the video several times and every time it did the exact same thing.

It looks a lot better if you download it from vimeo.

]]>
HDR video camera with 2 sensors https://www.personal-view.com/talks/discussion/11130/hdr-video-camera-with-2-sensors Tue, 26 Aug 2014 18:30:02 +0000 apefos 11130@/talks/discussions let us see the dark and bright areas...

HDR video: this is what the big manufacturers need to do with m43 or apsc sensors:

http://www.jai.com/en/newsevents/news/new2-ccdcameraprovideshdr

http://www.alt-vision.com/color_prisms_tech_data.htm

]]>
Dual Iso H264 software for HDR video https://www.personal-view.com/talks/discussion/9073/dual-iso-h264-software-for-hdr-video- Thu, 12 Dec 2013 17:12:17 +0000 apefos 9073@/talks/discussions

]]>
HDR solution for future GH3, GH4... https://www.personal-view.com/talks/discussion/2560/hdr-solution-for-future-gh3-gh4... Mon, 12 Mar 2012 10:31:25 +0000 apefos 2560@/talks/discussions after see magic lantern hdr video working i got this idea a future version of gh2 (maybe GH4) could have two iso in same image to build a hdr stream half pixels at iso 100, half pixels at iso 800 and merge two video streams into one hdr video same shutter speed, same fps, same lens aperture, a perfect hdr solution customizable two iso selection for user to get hdr video magic lantern use one frame at iso 100 and another frame at iso 800, but a better hdr video will use same frame half pixels at iso 100 and half pixels at iso 800 (or another iso selected by user)

Canon cameras have the Highlight Tone Priority function; it works by setting a different ISO for the highlights. What we need manufacturers to implement is this same function, but with an option to choose a difference of 1, 2, 3, 4 or 5 f-stops.
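The merge step the post asks for can be sketched as a numpy toy. The ISO gap, the clipping threshold and the weighting ramp below are all my own illustrative choices, not anything from Magic Lantern or any manufacturer:

```python
import numpy as np

STOPS = 3  # illustrative gap, e.g. ISO 100 vs ISO 800

def merge_exposures(low_iso, high_iso):
    """Blend two aligned frames into one extended-range frame (high-ISO scale).

    high_iso: normal exposure (0..1) that clips in the highlights.
    low_iso: same scene captured STOPS stops darker, so highlights survive.
    """
    low_scaled = low_iso * (2 ** STOPS)        # bring to the same linear scale
    # Weight falls to 0 as the high-ISO pixel approaches clipping (>= 0.95)
    w = np.clip((0.95 - high_iso) / 0.25, 0.0, 1.0)
    return w * high_iso + (1.0 - w) * low_scaled

scene = np.array([0.1, 0.5, 2.0, 4.0])        # "true" linear luminance
high = np.clip(scene, 0.0, 1.0)               # high-ISO capture: clipped highlights
low = np.clip(scene / 2 ** STOPS, 0.0, 1.0)   # low-ISO capture: 3 stops darker
print(merge_exposures(low, high))             # recovers [0.1, 0.5, 2.0, 4.0]
```

In this noiseless toy the blend recovers the full scene range; on a real sensor the interesting part is that the dark pixels trade shadow noise for highlight headroom.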

]]>
Different times at one location: Extended dynamic range solution for 1080p Videos https://www.personal-view.com/talks/discussion/2232/different-times-at-one-location-extended-dynamic-range-solution-for-1080p-videos Thu, 09 Feb 2012 07:41:57 +0000 tida 2232@/talks/discussions MARCH-09-2012 - I did put an update of TimelapseVid.py, ExposureFusionVid.py and SlowMoVid.py (new) in the attatched Macros v2 file.

Below you will find a user friendly freeware solution where you can extend the dynamic range for AVCHD timelapses, slow motions and freeze frame videos. This solution invites your creativity to work with different time measures at one location.

Don't worry that it's based on Avisynth scripts: they are created automatically and in most cases you don't even need to modify them. You also need AvsPMod, which is the basis of this user-friendly method.

Install the TimelapseVid.py and ExposureFusionVid.py files into the \AvsPMod\macros folder or a subfolder. You will find them in the "macros" menu as soon as you restart the AvsPMod application. For the ExposureFusionVid script you need to download the fuse/fusemask plugin, which you will find at http://forum.doom9.org/showthread.php?t=152109. Save the fusion.dll file into the Avisynth\plugins folder.

In my opinion this macro method has a lot of potential to be optimized by this community. I hope you will have a lot of suggestions to improve it. Maybe you have ideas for other procedures....

The following video should demonstrate how it works.

==== TimelapseVid.py ====

This procedure is focused on a simple 360-degree shutter angle timelapse workflow. It has similarities with the methods used by @GH13Timelapser discussed at http://www.personal-view.com/talks/discussion/595/mjpeg-fps-setting-for-timelapse-on-gh2/p1 but can now run with AVCHD files like *.MTS. @balazer also described AviSynth Overlay methods in http://www.personal-view.com/talks/discussion/1559/avchd-1080p-time-lapse/p1. The resulting script of this macro is based on Avisynth TemporalSoften.

First you will be asked for the video file which should be converted into a timelapse or averaged frame video.

The second dialog box (see attached picture) will ask you to enter the shutter speed used, the frame rate of the resulting video, the timelapse rate and the GOP. The generated script will take care of the correct GOP for the given shutter speed. Usage of Group of Pictures (GOP) at different shutter speeds with 24p and the Panasonic GH2 was discussed by @duartix at http://www.personal-view.com/talks/discussion/comment/39686#Comment_39686 and http://www.personal-view.com/talks/discussion/2396/gh2-motion-fluidity-360%C2%BA-shutter-test-on-slow-shutter-speeds./p1.

In the third dialog box, enter the name of your script. It is recommended to save the *.avs script under a name similar to that of the video, so that you can later trace back what you have done.

In AvsPMod you can check what you have done. Please also read the comments in the script to learn how to optimize your results.

Now you can use this *.avs script in VirtualDub, MeGUI or whatever to create the final timelapse video file.

Comment: You can create an "averaged frame video" by simply setting TimelapseRate to Framecount. You end up with a video of just one frame. This can help, for example, to clean up a market place full of moving people. This averaged frame should be over- or underexposed and can be used for the subsequent Exposure Fusion procedure, where it is converted into a freeze frame video with exactly the same number of frames as the oppositely exposed timelapse.
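The averaged-frame idea can be sketched outside Avisynth too. This hedged numpy toy (all values invented) shows why averaging every frame suppresses a transient "pedestrian" pixel while the static background survives:

```python
import numpy as np

def average_frames(frames):
    """frames: iterable of equally sized uint8 arrays -> one averaged frame."""
    acc = None
    n = 0
    for f in frames:
        acc = f.astype(np.float64) if acc is None else acc + f
        n += 1
    return (acc / n).astype(np.uint8)

# Static background of 100; one pixel is briefly bright in 1 of 10 frames:
frames = [np.full((2, 2), 100, np.uint8) for _ in range(10)]
frames[3][0, 0] = 250
print(average_frames(frames)[0, 0])  # 115: the transient is almost gone
```

With more frames the transient fades further (its contribution shrinks as 1/n), which is exactly the market-place cleanup effect described above.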

==== ExposureFusionVid.py ====

ExposureFusionVid is a user friendly solution to improve the dynamic range of the timelapse. You can even work with different time units at one location, combining timelapse, slow motion or freeze frame videos with each other. The important thing is that one video has to be overexposed and the other underexposed.

First you will be asked for the first video and subsequently for the second. If you have created an "averaged frame video", you have to open it via the first dialog box.

Then you will be asked for the name of the *.avs script which should be saved.

In AvsPMod you need to check the script for further optimization. Read the comments. You will see that there are two different mask methods which can be applied for the fuse method. Play with them until you reach the best result.

Again, you can now use this *.avs script in VirtualDub, MeGUI or whatever to create the final video file.
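The over/under blend that the fuse/fusemask plugin performs can be illustrated with a simplified, Mertens-style well-exposedness weighting. This numpy sketch is my own toy stand-in, not the plugin's actual algorithm:

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Mertens-style weight: highest for pixels near mid-gray (0.5)."""
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(over, under):
    """Per-pixel weighted blend of an over- and an underexposed frame (0..1)."""
    w_o = well_exposedness(over)
    w_u = well_exposedness(under)
    return (w_o * over + w_u * under) / (w_o + w_u)

over = np.array([0.5, 0.98])   # good shadows, blown highlight
under = np.array([0.1, 0.55])  # dark shadows, good highlight
print(fuse(over, under))
```

Each output pixel leans toward whichever source frame exposed it better, which is the intuition behind the mask methods the generated script lets you choose between.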

==== SlowMoVid.py ====

SlowMoVid.py can be used for progressive as well as interlaced videos. If the source is progressive, motion estimation is derived by mvtools. If the video is interlaced, QTGMC or Yadif will first take care of deinterlacing; motion estimation is then carried out by mvtools.

==================

You need to install following plugins in your Avisynth plugin folder:

GScript.dll download from http://forum.doom9.org/showthread.php?t=147846

MVtools2.dll download from http://avisynth.org.ru/mvtools/mvtools2.html

RemoveGrainSSE3.dll download from http://www.removegrain.de.tf

Yadif.dll download from http://avisynth.org.ru/yadif/yadif.html

Fusion.dll download from http://forum.doom9.org/showthread.php?t=152109

]]>
Another topic on HDR for GH2 https://www.personal-view.com/talks/discussion/1872/another-topic-on-hdr-for-gh2 Fri, 30 Dec 2011 15:48:44 +0000 juliand 1872@/talks/discussions Asher Roth Feat. Akon Music video shot with GH13 https://www.personal-view.com/talks/discussion/1605/asher-roth-feat.-akon-music-video-shot-with-gh13 Wed, 30 Nov 2011 23:04:40 +0000 magnus387 1605@/talks/discussions
I shot this video at the end of July with a GH13 and a Red MX. All of the body-mounted shots of the artist running around were done with the GH13.

The concept of the video was inspired by graphic novels, and a bleeding of reality with dynamic illustrations. As a result I attempted to give the footage an HDR, almost drawn-in look. Enjoy!

]]>
Color Grading: A Conversation Starter https://www.personal-view.com/talks/discussion/1603/color-grading-a-conversation-starter Wed, 30 Nov 2011 21:13:15 +0000 magnus387 1603@/talks/discussions
I shoot with a hacked GH13, and have used this little "Monster" of a camera on 3 music videos, including my most recent video for Asher Roth's "Last Man Standing" feat. Akon. The most beneficial part of the hack for me has always been what I can subsequently do with the image in post without worrying about the degradation of image quality.

I generally have 2 methods of color grading, and I have included examples of both. The 1st way is incredibly time consuming, but produces very otherworldly, HDR-looking results. I begin by exporting an image sequence from QuickTime, then I open one of the resulting TIFF files in Photoshop and adjust the levels to my liking. 3 rounds of unsharp masking later, as well as a boost to the Vibrance of the image, and I'm ready to create an Action and batch process all of the images. After all of the images have been batched through Photoshop, I drag them onto a Final Cut sequence (making sure that each image is only 00:00:01 long) and export a raw QuickTime movie. Obviously this is not for everyone, as it drains one's mental sanity faster than you can say High Dynamic Range Video, and you may have some sync issues on longer clips because of the 23.98 and 24 fps disparity - but the results are cool and not achievable in FCP.
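The batch recipe above can also be scripted. Here is a hedged numpy sketch (grayscale, with invented parameter values) of the levels-then-repeated-unsharp-mask idea, as a stand-in for the Photoshop Action:

```python
import numpy as np

def levels(img, black=0.05, white=0.95):
    """Stretch [black, white] to [0, 1], clipping outside, like a levels adjust."""
    return np.clip((img - black) / (white - black), 0.0, 1.0)

def unsharp(img, amount=0.6, radius=1):
    """One round of unsharp masking, using a box blur as the low-pass."""
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode="edge")
    blur = sum(pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(k) for dx in range(k)) / k**2
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

frame = np.linspace(0.0, 1.0, 25).reshape(5, 5)
out = levels(frame)
for _ in range(3):          # "3 rounds of unsharp masking later..."
    out = unsharp(out)
```

With a real clip you would run every frame of the exported image sequence through the same two functions instead of batching an Action by hand.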

My second method is what I use to make video look more like 16mm film - it's fast, easy, and works with almost anything you can shoot. It utilizes Magic Bullet Looks and Final Cut's 3-way color corrector. I begin by applying "Looks", starting off with the "Blockbuster" preset. I delete the diffusion filter and the vignette filter, and I slightly lower the highlight part of the "Curves" filter. Then I add the "3-Strip Process" filter and adjust the strength to around 6%. Based on how the image looks now, I will usually adjust the individual color channels of the "Curves" filter to make sure my blacks are not too blue and my highlights are not too yellow. Add some "Film Grain" at 3% and turn off the color mode, and you're done with the "Looks" part. Back in FCP, I add the 3-way color corrector and use the Auto Balance Color function to, again, make sure that my blacks aren't too blue and my highlights aren't too yellow. Then I crush the blacks slightly, raise the mids a touch, and actually crush the whites a tiny amount. Adjust the saturation to taste, and then I'm done.

I realize this is long winded, but I hope it starts a fun conversation, and I am more than willing to answer any questions about my process. ]]>
Bracketing -> MultiExposure "middle level research" Setting in Ptool https://www.personal-view.com/talks/discussion/161/bracketing-multiexposure-middle-level-research-setting-in-ptool- Wed, 01 Jun 2011 23:10:29 +0000 Toronto 161@/talks/discussions
It just says "changes bracketing settings to multiple exposure function", but how is "bracketing" different from "multiple exposure"? On an unhacked GH1, when you use bracketing, it takes multiple photos at different exposure levels, so the two terms seem to imply the same thing. I need a clearer explanation from someone who understands what is actually going on and what changes by enabling this setting.

Also, as far as I know, the GH1 does not combine the photos taken when bracketing into one HDR image, correct? So why would the note say "interface working but image combining not working"? Does that mean Vitaliy is attempting to turn the bracketing function into an automatic HDR photo creation mode? If that were possible it would be very high on my list, as it would be amazing to have that feature instead of using something like Photomatix to combine all the multiple exposures taken in the (unhacked) bracketing mode into one HDR jpeg.

Thanks]]>