Human Visual System Models in Computer Graphics Tunç O. Aydın MPI Informatik Computer Graphics Department HDR and Visual Perception Group
Outline Reality vs. Perception – Why even bother modeling visual perception The Human Visual System (HVS) – How the “wetware” affects our perception HVS models in Computer Graphics – Visual Significance of contrast – Contrast Detection Our contributions – Key challenges
Invisible Bits & Bytes – Reference (bmp, 616K) vs. compressed (jpg, 48K); difference image (color coded, low to high)
Variations of Perception – There is no one-to-one correspondence between visual perception and reality! – Work with "perceived visual data" instead of luminance or arbitrary pixel values
The Human Visual System (HVS) Experimental Methods of Vision Science – Micro-electrode – Radioactive Marker – Vivisection – Psychophysical Experimentation
HVS effects (1): Glare Disability Glare (blooming) Video Courtesy of Tobias Ritschel
Disability Glare – Model of light scattering: – Point Spread Function in the spatial domain – Optical Transfer Function in the Fourier domain [Deeley et al. 1991]
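The Deeley et al. [1991] optical transfer function is commonly written as a frequency-domain falloff parameterized by pupil diameter; a minimal sketch (the default pupil diameter of 3 mm is an assumption for illustration):

```python
import numpy as np

def deeley_otf(rho, pupil_d=3.0):
    """Optical transfer function of the eye after Deeley et al. [1991].

    rho: spatial frequency in cycles/degree; pupil_d: pupil diameter in mm.
    Returns the modulation transfer: 1 at DC, falling with frequency.
    """
    return np.exp(-(rho / (20.9 - 2.1 * pupil_d)) ** (1.3 - 0.07 * pupil_d))
```

Multiplying an image's spectrum by this OTF simulates the intraocular scattering that produces the blooming effect shown in the video.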
(2): Light Adaptation – The adaptation level changes over time (e.g., from 10⁻⁴ cd/m² to 17 cd/m²)
Perceptually Uniform Space – Transfer function: maps luminance [cd/m²] to just noticeable differences (JNDs). [Mantiuk et al. 2004, Aydın et al. 2008]
(3): Contrast Sensitivity – CSF(spatial frequency, adaptation level, temporal frequency, viewing distance, …)
Contrast Sensitivity Function (CSF) – Steady-state CSF_S: returns the sensitivity (1/threshold contrast), given the adaptation luminance and spatial frequency [Daly 1993]. November 6, 2011
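Daly's full CSF model has many terms; as an illustrative stand-in, the classic Mannos & Sakrison [1974] band-pass approximation captures the qualitative shape (peak sensitivity at mid frequencies, falloff at both ends):

```python
import math

def csf_mannos(f):
    """Band-pass CSF approximation (Mannos & Sakrison 1974), used here as a
    simple stand-in for Daly's steady-state CSF.

    f: spatial frequency in cycles/degree; returns relative sensitivity.
    """
    return 2.6 * (0.0192 + 0.114 * f) * math.exp(-(0.114 * f) ** 1.1)
```

Note this approximation ignores adaptation luminance; Daly's CSF_S additionally depends on it, as stated above.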
(4): Visual Channels Cortex Transform
(5): Visual Masking Loss of sensitivity to a signal in the presence of a “similar frequency” signal “nearby”.
Modeling Visual Masking Example: JPEG’s pointwise extended masking: C’: Normalized Contrast
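The masking equation itself is not reproduced on the slide; a minimal, uncalibrated sketch of pointwise threshold elevation by self-masking (the exponent value is illustrative, not JPEG's calibrated parameter):

```python
def threshold_elevation(c_norm, epsilon=0.7):
    """Pointwise self-masking sketch: a contrast at or below its detection
    threshold (|c_norm| <= 1, normalized by the threshold) causes no
    elevation; stronger maskers raise the detection threshold as a power
    function of the normalized contrast."""
    return max(1.0, abs(c_norm) ** epsilon)
```

Dividing a distortion's contrast by this elevation factor models the reduced visibility of distortions superimposed on strong, similar-frequency content.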
HVS Models in Graphics/Vision – Tone mapping (HDR → LDR) – Compression – Quality assessment ("rate the quality") – Panorama stitching
Visual Significance Pipeline – Channel responses are pooled by Minkowski summation: R̂ = ( Σ_{k=1}^{N} (R_k^tst − R_k^ref)^n )^{1/n}
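A minimal sketch of the pooling step, assuming Minkowski summation of per-channel response differences; the exponent n = 4 is an illustrative default, not a value from the slides:

```python
def minkowski_pool(r_tst, r_ref, n=4.0):
    """Minkowski summation of per-channel response differences between a
    test and a reference image at one pixel location.

    r_tst, r_ref: sequences of N channel responses; n: pooling exponent.
    """
    return sum(abs(t - r) ** n for t, r in zip(r_tst, r_ref)) ** (1.0 / n)
```

Larger n weights the strongest channel difference more heavily; n → ∞ approaches a max over channels.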
Contrast Detection Pipeline – Log contrast → contrast difference → log threshold elevation → probability of detection; per-channel probabilities are combined by probability summation: P̂ = 1 − Π_{n=1}^{N} (1 − P_n)
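The probability summation step can be sketched directly (independence between channels is assumed):

```python
def prob_summation(probs):
    """Probability summation across visual channels: the signal is detected
    if it is detected in at least one channel, so the overall miss
    probability is the product of the per-channel miss probabilities."""
    p_miss = 1.0
    for p in probs:
        p_miss *= (1.0 - p)
    return 1.0 - p_miss
```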
CONTRIBUTIONS: VISUAL SIGNIFICANCE
Visually Significant Edges Key Idea: Use the magnitude of the HVS model’s response as the measure of edge strength, instead of gradient magnitude. Result (1): Significant improvement in application results, especially for HDR images Result (2): Only minor improvements observed in LDR retargeting and panorama stitching. [ Aydın , Čadík , Myszkowski, Seidel. 2010 ACM TAP ]
Calibration Procedure CSF from the Visible Differences Predictor [Daly’93] JPEG’s pointwise extended masking Calibration: CSF derived for sinusoidal stimuli, not for edges. Perceptual experiment for measuring edge thresholds
Calibration Function – Polynomial fits relate metric responses to subjective measurements. R: visual significance for a sinusoidal stimulus; the calibration function maps R to R′: visual significance for edges.
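A sketch of such a calibration fit; the measurement pairs and the polynomial order below are hypothetical placeholders, since the actual values come from the perceptual experiment:

```python
import numpy as np

# Hypothetical pairs: metric response R for sinusoids vs. subjectively
# measured edge significance R' (illustrative values only).
R = np.array([0.1, 0.5, 1.0, 2.0, 4.0])
R_edge = np.array([0.05, 0.4, 1.1, 2.6, 5.9])

coeffs = np.polyfit(R, R_edge, deg=2)  # low-order polynomial fit
calibrate = np.poly1d(coeffs)          # calibration function: R -> R'
```

Applying `calibrate` to the raw sinusoid-derived response yields an edge-strength estimate consistent with the edge-threshold measurements.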
Image Retargeting
Visual Significance Maps (color coded, low to high)
Display Visibility under Dynamically Changing Illumination Key Idea: Extending steady-state HVS models with a temporal adaptation model Result: A visibility class estimator integrated into software that simulates illumination inside an automobile. [ Aydın , Myszkowski, Seidel. 2009 EuroGraphics ]
cvi for Steady-State Adaptation – Contrast vs. Intensity (cvi): threshold contrast as a function of background luminance, assuming perfect adaptation: cvi: L → ΔC – Contrast vs. Intensity and Adaptation (cvia): also takes the adaptation luminance, and so accounts for maladaptation: cvia: (L, L_a) → ΔC
cvia for Maladaptation (threshold curves: cvia vs. cvi)
Adaptation over Time – Visual significance maps at t = 0, 0.2 s, 0.4 s, 0.8 s (color coded, low to high)
Rendering Adaptation Dark Adaptation Bright Adaptation [ Pająk , Čadík , Aydın , Myszkowski, Seidel. 2010 Electronic Imaging ]
CONTRIBUTIONS: CONTRAST DETECTION
Quality Assessment (IQA, VQA) – "Rate the quality": + reliable, − high cost
Perceptually Uniform Space Key Idea: Find a transformation from luminance to pixel values, such that: – An increment of 1 pixel value corresponds to 1 JND in luminance, in both HDR and LDR domains. – The pixel values in the LDR domain should be close to sRGB pixel values Result: Common LDR quality metrics (SSIM, PSNR) extended to HDR through the PU space transformation [ Aydın , Mantiuk, Seidel. 2008 Electronic Imaging ]
Perceptually Uniform Space Derivation: for i = 2 to N: L_i = L_{i−1} + tvi(L_{i−1}); end for. Fit the absolute value and sensitivity to sRGB within the CRT luminance range.
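A runnable sketch of the derivation loop, using an illustrative Weber-law tvi with an absolute floor; the real derivation uses a measured threshold-vs-intensity function, so the constants here are assumptions:

```python
def tvi(L, weber=0.01, floor=1e-4):
    """Illustrative threshold-vs-intensity function: Weber-law behavior at
    photopic levels, with an absolute detection floor at low luminance."""
    return max(weber * L, floor)

def pu_levels(L0=1e-4, L_max=1e4):
    """Luminance levels spaced one JND apart: L_i = L_{i-1} + tvi(L_{i-1}).
    The index of the level nearest a given luminance then serves as its
    (approximately) perceptually uniform pixel value."""
    levels = [L0]
    while levels[-1] < L_max:
        levels.append(levels[-1] + tvi(levels[-1]))
    return levels
```

The subsequent fitting step rescales these indices so that, within the CRT luminance range, they stay close to sRGB pixel values.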
Dynamic Range Independent IQA Key Idea: Instead of the traditional contrast difference, use distortion measures agnostic to dynamic range difference. Result: An IQA that can meaningfully compare an LDR test image with an HDR reference image, and vice versa. Enables objective evaluation of tone mapping operators. [ Aydın , Mantiuk, Myszkowski, Seidel. 2008 SIGGRAPH ]
HDR vs. LDR – comparison of LDR and HDR image luminance
Problem with Visible Differences – Local Gaussian blur (contrast loss) in an LDR test image vs. an HDR reference: HDR-VDP predicts detection probabilities of 25–95%.
Distortion Measures – Contrast loss – Contrast amplification – Contrast reversal (each defined between reference and test)
Novel Applications Inverse Tone Mapping Tone Mapping
Video Quality Assessment – HDR video (tone mapped for presentation) – DRIVQM [Aydın et al. 2010] – DRIVDP [Aydın et al. 2008] (frame-by-frame)
Dynamic Range Independent VQA Key Idea: Extend the Dynamic Range Independent pipeline with temporal aspects to evaluate video sequences. Result: An objective VQM that evaluates rendering quality, temporal tone mapping and HDR compression. [ Aydın , Čadík , Myszkowski, Seidel. 2010 SIGGRAPH Asia ]
Contrast Sensitivity Function CSF: (ω, ρ, L_a) → S – ω: temporal frequency, – ρ: spatial frequency, – L_a: adaptation level, – S: sensitivity. Two measured components: the spatio-temporal CSF_T and the steady-state CSF_S.
Contrast Sensitivity Function – The CSF at an arbitrary adaptation level is assembled from the two components: CSF(ω, ρ, L_a) = CSF_T(ω, ρ, L_a = 100 cd/m²) × f(ρ, L_a), where f(ρ, L_a) = CSF_S(ρ, L_a) / CSF_S(ρ, 100 cd/m²)
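The decomposition can be sketched with the two CSF components passed in as callables; `csf_T` and `csf_S` below are placeholders for the measured models, and 100 cd/m² is the reference adaptation level from the slide:

```python
def csf_spatiotemporal(w, rho, L_a, csf_T, csf_S, L_ref=100.0):
    """Sensitivity at an arbitrary adaptation level, combining a
    spatio-temporal CSF measured at L_ref with a steady-state CSF:

        CSF(w, rho, L_a) = CSF_T(w, rho) * CSF_S(rho, L_a) / CSF_S(rho, L_ref)

    w: temporal frequency, rho: spatial frequency, L_a: adaptation level.
    """
    return csf_T(w, rho) * csf_S(rho, L_a) / csf_S(rho, L_ref)
```

At L_a = L_ref the correction factor is 1, so the combined model reduces to CSF_T, as required for consistency.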
Extended Cortex Transform – Spatial channels extended with sustained and transient temporal channels [Winkler 2005]
Evaluation of Rendering Methods – predicted distortion maps with vs. without temporal filtering [Herzog et al. 2010]
Evaluation of Rendering Qualities – predicted distortion maps for high- vs. low-quality renderings
Evaluation of HDR Compression – medium vs. high compression
Validation Study – Distortion types: noise, HDR video compression, tone mapping – Stimuli: "2.5D videos" – Comparisons: HDR-HDR, HDR-LDR, LDR-LDR
Psychophysical Validation (1) Show videos side-by-side (2) Subjects mark regions on an HDR display where they detect differences [ Čadík , Aydın , Myszkowski, Seidel. 2011 Electronic Imaging ]
Validation Study Results

Stimulus   DRIVQM    PDM       HDRVDP    DRIVDP
1          0.765    -0.0147    0.591     0.488
2          0.883     0.686     0.673     0.859
3          0.843     0.886     0.0769    0.865
4          0.815     0.0205    0.211    -0.0654
5          0.844     0.565     0.803     0.689
6          0.761    -0.462     0.709     0.299
7          0.879     0.155     0.882     0.924
8          0.733     0.109     0.339     0.393
9          0.753     0.368     0.473     0.617
Average    0.809     0.257     0.528     0.563
Conclusion Starting Intuition: Working on “perceived” visual data, instead of “physical” visual data.
Limitations and Future Work What about the rest of the brain? – Visual Attention – Prior Knowledge – Gestalt Properties – Free will – … User interaction? Depth perception
Acknowledgements Advisors – Karol Myszkowski, Hans-Peter Seidel Collaborators – Martin Čadík , Rafał Mantiuk, Dawid Pająk , Makoto Okabe AG4 Members – Current and past AG4 Staff – Sabine Budde, Ellen Fries, Conny Liegl, Svetlana Borodina, Sonja Lienard. Thesis Committee – Phillip Slusallek, Jan Kautz, Thorsten Thormählen. Family – Süheyla and Vahit Aydın , Irem Dumlupınar
Tunç O. Aydın <tunc@mpii.de> THANK YOU.