LAST LINES ///
A More Empirical QoE


/// By Simen Frostad


Evolution doesn't always proceed in a linear fashion. There are offshoots, backwaters, blind alleys even, as well as the mainstream. Some of these offshoots result in organisms that are so well adapted to life that they don't need to evolve any more. The shark is arguably one of these.

However, most offshoots are really up a blind alley. They are organisms that leave themselves nowhere to go, or perhaps have developed specialist adaptations to suit one environment and can survive for a time in another when the conditions change -- but only while the going is good.

In the field of technology, evolution occurs at a vastly more rapid pace than in the natural world, so we see plenty of evolutionary dead-ends. Many last only a decade or less. A lot of these dead-ends are the result of commercial competition, and it's not always the best technical evolution that wins in a commercial context. But there are also dead-ends that arise from conceptual mistakes, where a technology is applied in an inappropriate context and fails because it's poorly evolved for that environment.

In the world of digital media delivery, there is one such example right now: Quality of Experience (QoE) testing.


Digital media providers want to ensure their customers are getting a good quality service. They want to keep those subscriptions coming in, and they want the subscriber base to grow. Part of the strategy for ensuring a good service is to monitor the performance of the delivery mechanisms, so that any actual or incipient failures are corrected quickly. This is the obvious part. But there is also the quite reasonable desire to test quality as experienced by the user.

QoE testing began in the telecoms industry, where telcos needed a way to evaluate what the ‘average subscriber' would think of the sound quality delivered over the phone. There was no objective measure that could deliver this evaluation; it was a subjective assessment that the telcos had to make.

To simulate the ‘average subscriber', a panel of expert assessors would listen to test communications over the telephone service and note their evaluation of quality while a set of stock phrases such as ‘You will have to be very quiet', ‘There was nothing to be seen', and ‘They worshipped wooden idols' were transmitted. The experts would record their scores, assessing any impairments in subjective terms on a scale of 1-5, from ‘imperceptible' to ‘very annoying'. The scores would then be averaged and weighted using the kind of statistical manipulations common in the social sciences and market research.
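As a rough illustration of that averaging step, the sketch below computes a plain opinion score from a set of panel ratings on the 1-5 impairment scale. The actual studies applied their own weighting and screening of assessors, which varied from case to case, so a simple arithmetic mean is assumed here and the panel values are invented.

```python
# Illustrative sketch only: a plain mean of panel ratings on the 1-5 scale.
# Real P.800-style studies applied their own weighting of assessors; a simple
# arithmetic mean is assumed here.

def mean_opinion_score(ratings):
    """Average a list of 1-5 impairment ratings into a single score."""
    if not ratings:
        raise ValueError("at least one rating is required")
    if any(r < 1 or r > 5 for r in ratings):
        raise ValueError("ratings must lie on the 1-5 opinion scale")
    return sum(ratings) / len(ratings)

# A hypothetical panel listening to 'They worshipped wooden idols':
panel = [4, 5, 4, 3, 4, 5, 4]   # 5 = imperceptible, 1 = very annoying
print(f"MOS = {mean_opinion_score(panel):.2f}")   # MOS = 4.14
```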


This methodology became known as MOS (Mean Opinion Score). It was standardised by the ITU-T (the Telecommunication Standardization Sector of the International Telecommunication Union) in Recommendation P.800.

For its original context and environment, this was a perfectly reasonable approach. But when the QoE concept was applied to digital media delivery, the MOS-based, subjective methodology lost its justification and QoE became an evolutionary backwater.


In an attempt to grow the right feathers, MOS mutated into ‘VideoMOS' and ‘AudioMOS' criteria, which tried to make opinion-scoring methodologies look well adapted to the media landscape. Rather than using panels of expert assessors, MOS-based QoE evaluation became ‘robotised', with complex algorithmic simulations of those subjective reactions from ‘imperceptible' to ‘very annoying'.

In the case of television, MOS/QoE robots ‘watch' the service and feed the data into an algorithmic engine, which attempts to simulate the subjective reaction of the viewer with scores for factors such as ‘jerkiness', ‘blurriness' and ‘blockiness'.
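For illustration only, the toy model below shows the general shape of such a ‘robotised' score: a handful of measured picture impairments are folded into a single opinion-like number on the 1-5 scale. The metric values and weights are invented for this sketch and do not represent any vendor's actual algorithm.

```python
# Toy sketch of a 'robotised' MOS: fold measured picture impairments into a
# single opinion-like score. The weights and values are invented for this
# illustration; real VideoMOS-style engines are far more elaborate.

# Hypothetical impairment measurements, normalised 0.0 (none) .. 1.0 (severe)
impairments = {"jerkiness": 0.10, "blurriness": 0.35, "blockiness": 0.05}

# Invented weights expressing how much each impairment is assumed to annoy a viewer
weights = {"jerkiness": 1.5, "blurriness": 1.0, "blockiness": 2.0}

def robotised_mos(impairments, weights):
    """Map weighted impairment severity onto the 1-5 opinion scale."""
    penalty = sum(weights[name] * level for name, level in impairments.items())
    score = 5.0 - penalty             # 5 = imperceptible degradation
    return max(1.0, min(5.0, score))  # clamp to the 1-5 MOS range

print(f"simulated opinion: {robotised_mos(impairments, weights):.1f}")  # 4.4
```

The point of the sketch is that the weights are themselves an opinion about what viewers find annoying, baked into code rather than gathered from people.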


Like a dinosaur growing an enormously long neck to prolong its survival in a changed and more challenging environment, this algorithmic evolution of an opinion-gathering methodology produces a less than successful organism. As the word ‘opinion' in the term MOS suggests, subjectivity is still at the heart of the concept, even if it's now a robotic imitation of subjective human reactions.


But subjectivity is complex and nuanced. No human viewer would assess a transmission of Captain Phillips using the same criteria as when watching Chaplin's The Great Dictator. The viewer knows and accepts that one is a recently made movie and will be brighter, steadier, sharper and more colourful than a black-and-white production of 1940. Yet in a robotised QoE ‘subjective' assessment based on MOS criteria, one would score highly, while the other would be marked way down for ‘blurriness', scratches and other artefacts, and for its lack of resolution and colour.

Such distorted results, arising from the attempt to simulate human subjectivity, have made QoE a much less valuable tool than it should be. And yet, unlike the sound heard over a telephone, the experience of digital media services can be evaluated in a completely objective way. There is no need to use the evolutionary dead-end of MOS-based evaluation.


The Objective QoE solution launched by Bridge Technologies dispenses with the robotised ‘average viewer' in favour of a QoE evaluation built from empirical testing of factors that diminish the quality of digital media services, such as lost packets, timeouts and buffering. When you can accurately and continuously detect when a viewer is experiencing any of these errors, you no longer need to confect an algorithmic ‘opinion' about it. You have a completely trustworthy, objective set of data on which to assess, in real time, the quality of each user's experience.
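To make the contrast concrete, here is a minimal sketch of the objective approach, assuming hypothetical per-viewer impairment events (lost packets, timeouts, buffering) collected by a monitoring probe. It simply reports what was measured, with no simulated opinion attached; the event format and alerting threshold are invented for illustration and are not Bridge Technologies' implementation.

```python
# Minimal sketch of objective QoE reporting: count what actually happened to
# each viewer's stream instead of simulating an opinion about it.
# The event records and threshold below are invented for illustration.

from collections import Counter

# Hypothetical impairment events emitted by a monitoring probe:
# (viewer_id, event_type) with event_type in 'lost_packet', 'timeout', 'buffering'
events = [
    ("viewer-17", "lost_packet"),
    ("viewer-17", "lost_packet"),
    ("viewer-17", "buffering"),
    ("viewer-42", "timeout"),
]

def per_viewer_report(events):
    """Tally measured impairments per viewer: objective data, no opinion score."""
    report = {}
    for viewer, kind in events:
        report.setdefault(viewer, Counter())[kind] += 1
    return report

for viewer, counts in per_viewer_report(events).items():
    degraded = sum(counts.values()) > 2   # illustrative alerting threshold
    print(viewer, dict(counts), "degraded" if degraded else "ok")
```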


By dispensing with subjectivity, digital media providers can get a reliable and meaningful assessment of the quality users are experiencing, and QoE monitoring can get back into the mainstream of evolutionary progress. ///


Simen Frostad is Chairman of Bridge Technologies (www.bridgetech.tv)



