FEATURE BROADCAST WORKFLOW


Touch is the latest incarnation of Sony’s all-in-one system. On the audio side it offers six embedded stereo inputs and three stereo outputs, with full limiting and EQ capability and four hours of recording onto an internal HDD. Roland’s VR-50HD is an HDMI, SDI, and 3G-SDI video unit with a nine-fader audio mixer that can produce linear PCM for SDI, HDMI, and USB audio.

Gough views online webcasting as “another means of getting content out there” and so “does not treat it any differently from broadcasting – just because it’s online doesn’t mean the quality is any less”.

Webcasting specialists are now using next-generation streaming protocols and media players, as well as proven technologies such as HTTP (Hypertext Transfer Protocol), to ensure stable audio transmission. Craig Moehl, chief executive of online video provider Groovy Gecko, maintains that having a constant audio stream is vital, because viewers can cope with missed picture frames more easily than with poor-quality or missing sound.

HTTP underpins the MPEG DASH (Dynamic Adaptive Streaming over HTTP) standard, which also accommodates AAC (Advanced Audio Coding). While this allows for surround formats such as Dolby Digital Plus, 5.1 is not yet a major consideration for streaming to multiple platforms and devices.
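The rationale Moehl describes can be sketched in a few lines: a DASH-style client adapts its video bitrate to the throughput it measures, while the audio representation is left alone so the sound never drops out. The bitrate ladder, safety margin, and function below are illustrative assumptions rather than anything taken from the article.

```python
# Illustrative sketch of DASH-style adaptive selection: video representations
# adapt to measured throughput, while the audio representation stays fixed so
# the sound remains continuous even when picture quality has to drop.

# Hypothetical bitrate ladders, in kbit/s.
VIDEO_REPRESENTATIONS = [400, 800, 1500, 3000, 6000]
AUDIO_REPRESENTATION = 128        # e.g. a stereo AAC stream, kept constant

SAFETY_MARGIN = 0.8               # leave headroom for throughput swings


def choose_video_bitrate(measured_throughput_kbps: float) -> int:
    """Pick the highest video rung that, together with the fixed audio
    stream, still fits inside a safety margin of the measured throughput."""
    budget = measured_throughput_kbps * SAFETY_MARGIN - AUDIO_REPRESENTATION
    candidates = [r for r in VIDEO_REPRESENTATIONS if r <= budget]
    return candidates[-1] if candidates else VIDEO_REPRESENTATIONS[0]


if __name__ == "__main__":
    for throughput in (600, 2000, 5000, 12000):
        print(f"{throughput} kbit/s link -> {choose_video_bitrate(throughput)} kbit/s video")
```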


STAYING IN SYNC
A big issue is ensuring that sound and pictures are synchronised, with lipsync an important factor. Mark Pascoe, senior technical marketing manager at Dolby Laboratories, comments: “I think it would be nice to find a way to better ensure that AV synchronisation is more carefully considered, although with video and audio essences increasingly being wrapped together closer to the end of CDN [content delivery network] systems for online delivery, the quality control process must be accompanied closely with a test and measurement process to ensure that the content rendered there maintains the synchronisation achieved further upstream.”

In outside broadcasts there has been a shift towards working in discrete audio on site. Dolby E continues to be used to carry multiple channels of sound to broadcast and play-out centres, but Paul Fournier, head of sound at OB company NEP Visions, says the component format, with 16 channels of audio in every video feed, is set to take over in the near future.


In TV studios there are different workflows for live broadcasts and recorded programmes. Andy Tapley, a sound supervisor with BBC Studios and Post Production, works on both types and says that, in the case of a big live prime-time show like Strictly Come Dancing, the last series of which was produced at Elstree Studios just outside London, the aim is to achieve both a stereo mix and a surround output.

“Stereo is relatively straightforward,” he says, “but surround is six-channel so we use Dolby E as the transfer mechanism. It’s a way of transferring the multiple channels as an AES pair, but the downside is that every time you encode and decode the signal a frame delay is introduced. So as the signal is sent from Elstree to Broadcasting House and then on to Red Bee Media for play-out, the pictures and stereo feed have to be delayed by one frame to match the 5.1.”
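The delay management Tapley describes is simple arithmetic, sketched below: each Dolby E encode or decode pass costs one video frame, and the picture and stereo feeds must be held back by the same amount to stay in step with the 5.1. The 25 fps frame rate and 48 kHz sample rate are standard UK broadcast assumptions, and the number of passes in the example is hypothetical.

```python
# Minimal sketch of the compensation arithmetic: each Dolby E encode or decode
# pass adds one video frame of latency, so picture and stereo must be delayed
# by the same amount to stay in sync with the 5.1.

FRAME_RATE = 25          # fps (UK/EBU)
SAMPLE_RATE = 48_000     # Hz


def compensation(frames_of_delay: int) -> tuple[float, int]:
    """Return the required delay in milliseconds and in audio samples."""
    ms = frames_of_delay * 1000 / FRAME_RATE
    samples = frames_of_delay * SAMPLE_RATE // FRAME_RATE
    return ms, samples


if __name__ == "__main__":
    # Hypothetical chain: one encode at the studio, one decode at play-out.
    for hops in (1, 2):
        ms, samples = compensation(hops)
        print(f"{hops} frame(s): delay picture/stereo by {ms:.0f} ms "
              f"({samples} samples at 48 kHz)")
```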


When the feeds arrive at Red Bee Media they are decoded from Dolby E and then re-encoded in Dolby Digital with metadata so that the correct configuration reaches the viewers’ TV receivers. Tapley describes the creation of metadata for the Dolby E streams as taking place in parallel to the main mix during the live show. He adds that “everything is now moving to embedded audio”, often with four groups of four channels, with the stereo mix on channels one and two and the surround in Dolby E on channels three and four.

“Everything is now moving to embedded audio.” Andy Tapley
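As a rough illustration of that embedded layout, the sketch below writes out the channel assignment Tapley outlines: stereo on channels one and two and the Dolby E pair on three and four within the first group. The contents of the remaining groups are hypothetical placeholders, not something stated in the article.

```python
# Sketch of the embedded-audio layout described above: four groups of four
# channels in the SDI stream, with the stereo mix on channels 1-2 and the
# Dolby E-encoded 5.1 carried as an AES pair on channels 3-4. Groups 2-4
# are hypothetical spares for illustration only.

EMBEDDED_AUDIO = {
    1: {1: "stereo left", 2: "stereo right",
        3: "Dolby E (5.1) A", 4: "Dolby E (5.1) B"},
    2: {5: "spare", 6: "spare", 7: "spare", 8: "spare"},
    3: {9: "spare", 10: "spare", 11: "spare", 12: "spare"},
    4: {13: "spare", 14: "spare", 15: "spare", 16: "spare"},
}

if __name__ == "__main__":
    for group, channels in EMBEDDED_AUDIO.items():
        for ch, contents in channels.items():
            print(f"group {group}, channel {ch}: {contents}")
```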


IN THE STUDIO
For studio recordings of TV panel shows, such as Channel 4’s 8 Out of 10 Cats, the workflow is extended to give greater flexibility for post production. Tapley says that while he records the main stereo mix as the recording progresses, ISO feeds are also taken onto five or six VT machines, for example HDCAM. These have four audio tracks that can accommodate a “variety of stereo streams”, particularly isolated feeds from the participants’ radio microphones. “The aim is to give post production all the components so they can rebuild parts of the show if anything went wrong on the evening of the recording,” Tapley explains.

A consideration here, Tapley observes, is to put the same equalisation and dynamics on the pre-fade mic feed so that it matches the stereo mix, making splicing in a replacement section easier and less detectable. Dynamic noise reduction is also used, with productions now adopting the new Cedar DNS 8 Live noise suppressor.


BBC S&PP uses Pyramix, although Tapley says the same workflow applies equally to Avid Pro Tools and other workstations, and to JoeCo multitrack recorders. All audio components are sent to post production in Broadcast Wave format. While VT is still in use now, Tapley comments that studio production will “move more to file-based” working in the future. The Digital Production Partnership (DPP) has set October this year as the target for UK broadcasters either to move entirely to file-based operations or, in the case of the BBC, to begin the move to this way of working.
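What Broadcast Wave adds over a plain WAV file is a bext metadata chunk carrying a description, an originator, and a sample-accurate time reference, which is how delivered components stay identifiable and in place when they reach post production. The sketch below, using only the Python standard library and a hypothetical filename, reads those fields as laid out in EBU Tech 3285; it is an illustration, not part of the BBC workflow described here.

```python
# Sketch of reading the "bext" chunk that makes a WAV file a Broadcast Wave
# file (EBU Tech 3285): description, originator, and a sample-accurate time
# reference travel with the audio to post production.
import struct


def read_bext(path: str):
    with open(path, "rb") as f:
        riff, _size, wave = struct.unpack("<4sI4s", f.read(12))
        if riff != b"RIFF" or wave != b"WAVE":
            raise ValueError("not a RIFF/WAVE file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                return None                          # no bext chunk present
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            if chunk_id == b"bext":
                data = f.read(chunk_size)
                desc, orig, orig_ref, date, time, t_low, t_high = struct.unpack(
                    "<256s32s32s10s8sII", data[:346])
                return {
                    "description": desc.rstrip(b"\x00").decode("ascii", "replace"),
                    "originator": orig.rstrip(b"\x00").decode("ascii", "replace"),
                    "originator_reference": orig_ref.rstrip(b"\x00").decode("ascii", "replace"),
                    "origination": f"{date.decode()} {time.decode()}",
                    "time_reference_samples": (t_high << 32) | t_low,
                }
            f.seek(chunk_size + (chunk_size % 2), 1)  # skip other chunks (word-aligned)


if __name__ == "__main__":
    print(read_bext("studio_mix.wav"))               # hypothetical delivered file
```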


Every broadcaster and facility has its own methodology for both studio recording and post production. Austrian public broadcaster ORF follows the basic workflow that has emerged recently in terms of file ingest, but has its own approach to recording voiceovers. The sessions are run almost as live, with the voice artist recording the narration as the audio is mixed to the pictures.

Senior sound supervisor Florian Camerer says this is an advantage because the voice talent and director get “an immediate idea of how something sounds”, while saving time into the bargain. Camerer comments that in “99% of cases” audio post will receive M&E (music and effects) tracks from the video editing room; these start as MXF (Material eXchange Format) files from the Apple Final Cut Pro video workstation, with a QuickTime reference file, and are imported into Pro Tools.


The sessions are then mixed, after which, as befits a facility where the chairman of the EBU PLOUD group works, they are measured for loudness compliance with EBU R128 using a software system. Once the mix is complete the audio tracks are combined with the picture back into FCP; both are then exported in MXF format into the broadcast centre’s media asset management system.
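The article does not name the software ORF uses for that measurement, but the check itself can be sketched with the open-source pyloudnorm and soundfile packages standing in: integrated loudness is measured to ITU-R BS.1770 and compared with the R128 target of -23 LUFS. The filename and tolerance below are illustrative assumptions.

```python
# Minimal sketch of an EBU R128 compliance check using open-source stand-ins
# (pip install soundfile pyloudnorm); not the specific tool used at ORF.
import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -23.0     # EBU R128 integrated loudness target
TOLERANCE_LU = 0.5      # typical tolerance for non-live programmes


def check_r128(path: str) -> bool:
    data, rate = sf.read(path)              # float samples, any channel count
    meter = pyln.Meter(rate)                # ITU-R BS.1770 loudness meter
    loudness = meter.integrated_loudness(data)
    ok = abs(loudness - TARGET_LUFS) <= TOLERANCE_LU
    print(f"{path}: {loudness:.1f} LUFS "
          f"({'within' if ok else 'outside'} {TARGET_LUFS} +/- {TOLERANCE_LU} LU)")
    return ok


if __name__ == "__main__":
    check_r128("final_mix.wav")             # hypothetical mixed programme file
```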


Whether the term workflow sets your teeth on edge or not, it is now an integral part of how broadcasting is done. Formats and standards such as MXF, Broadcast Wave, and iXML, for metadata in audio files, have emerged as the foundation stones, and while there is some commonality between approaches, there is probably enough flexibility for personal customisation.

www.allen-heath.com
www.bbcstudiosandpostproduction.com
www.cedaraudio.com
www.dolby.com
www.joeco.co.uk
www.merging.com
www.noisesoff.biz
www.sadie.com



