
A user model for JND-based video quality assessment: theory and applications

2018-07-28
Haiqiang Wang, Ioannis Katsavounidis, Xinfeng Zhang, Chao Yang, C.-C. Jay Kuo

Abstract

Video quality assessment (VQA) technology has attracted a lot of attention in recent years due to the increasing demand for video streaming services. Existing VQA methods are designed to predict video quality in terms of the mean opinion score (MOS) calibrated by humans in subjective experiments. However, they cannot predict the satisfied user ratio (SUR) of an aggregated viewer group. Furthermore, they provide little guidance for video coding parameter selection, e.g., the quantization parameter (QP) of a set of consecutive frames, in practical video streaming services. To overcome these shortcomings, the just-noticeable-difference (JND) based VQA methodology has been proposed as an alternative. It is observed experimentally that the JND location is a normally distributed random variable. In this work, we explain this distribution by proposing a user model that takes both subject variabilities and content variabilities into account. This model is built upon a user's capability to discern the quality difference between video clips encoded with different QPs. Moreover, it analyzes video content characteristics to account for inter-content variability. The proposed user model is validated on the data collected in the VideoSet. It is demonstrated that the model is flexible enough to predict the SUR distribution of a specific user group.
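As a rough illustration of the idea (a sketch under stated assumptions, not code from the paper): if the JND location of a given source clip is modeled as a Gaussian random variable over the QP axis with hypothetical parameters mu and sigma, then the SUR at a candidate QP can be read off as the fraction of viewers whose JND lies above that QP, i.e., the Gaussian complementary CDF. The function names and example numbers below are illustrative assumptions.

```python
# Sketch (assumptions, not the paper's implementation): JND ~ N(mu, sigma^2)
# along the QP axis; SUR(QP) is the fraction of viewers who do not yet notice
# a quality difference at that QP, i.e., the Gaussian survival function.
from scipy.stats import norm

def sur(qp, mu, sigma):
    """Satisfied user ratio at quantization parameter `qp`,
    assuming the JND location follows N(mu, sigma^2)."""
    return norm.sf(qp, loc=mu, scale=sigma)  # sf = 1 - CDF

def max_qp_for_target_sur(target, mu, sigma):
    """Largest QP whose predicted SUR still meets `target` (e.g., 0.75)."""
    return norm.isf(target, loc=mu, scale=sigma)  # inverse survival function

if __name__ == "__main__":
    mu, sigma = 30.0, 4.0               # hypothetical per-content JND parameters
    print(sur(27, mu, sigma))           # predicted SUR if encoding at QP = 27
    print(max_qp_for_target_sur(0.75, mu, sigma))
```

In this reading, choosing an encoding QP for a target SUR reduces to inverting the assumed JND distribution, which is the kind of coding-parameter guidance the abstract contrasts with MOS-based prediction.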

URL

https://arxiv.org/abs/1807.10894

PDF

https://arxiv.org/pdf/1807.10894
