By Gary Learner, CTO
With the continued consolidation of broadcast operations, centralization of staff, and ever-increasing pressure to improve efficiency, broadcasters need affordable, reliable, and flexible solutions for remote monitoring. As always, the ability to perform quality checks proactively at A/V service handoffs is key in ensuring the highest quality of experience (QoE) for customers. The definition of comprehensive remote monitoring, however, and the challenges inherent in establishing such technology within broadcast operations, have undergone radical changes since the first solutions were implemented.
Not all that long ago, the monitoring operations of broadcast stations were focused on viewing aired content across the conventional distribution chain. To establish some form of remote monitoring, such facilities often resorted either to recording the remote transmission to tape or using the equivalent of a modified TiVo system to capture aired audio and video. While neither approach offered an ideal solution, both were preferable to dedicating staff to monitoring at remote sites.
Using one of these recording methods, engineers gained access to all aired content. They could go back and review the content that had been broadcast on a specific channel on a particular date and time, determining both video quality and compliance through visual inspection. The process of finding specific content, however, was inefficient, requiring a significant investment of a staff member’s time. So tedious and time-consuming was this approach that it often took hours or even days before problems in aired advertising and program content were detected.
Given the practical and financial difficulties of making this approach work, the introduction of digital video monitoring and logging systems was a welcome advance. Enabled by improvements in broadcast and network infrastructure, as well as increased use of IP networks for communications, such monitoring systems allowed broadcasters and other media companies to establish much more efficient and cost-effective remote monitoring models and to gain real-time visibility into the end-to-end transmission path.
Though the emergence of digital television (DTV) brought with it several high-definition (HD) resolutions, the industry standardized on 720p and 1080i at a number of different frame rates. By embracing MPEG-2 compression for video and AC3 and AAC compression for audio, the industry further expanded the resolutions and metadata involved in broadcasting. Nevertheless, the broadcast product itself often remained a single product delivered to a specific region. Consequently, the shift to DTV made relatively little impact on how broadcasters performed monitoring. Whether analog or digital, many broadcasters chose to apply a “fire-and-forget” approach; once content was aired or delivered to the cable operator, QoE was out of their hands, and the job was done.
As the Internet emerged as a means of delivering content, broadcasters recognized the potential this groundbreaking technology held in enabling them to extend their reach worldwide and to countless viewing devices. While IP-based content delivery — OTT and Internet streaming services — opened the door to a host of new services and revenue-generation opportunities, it also presented significant new challenges in assuring QoE. Thus, in taking advantage of the opportunity afforded by OTT, broadcasters considerably expanded the scope and complexity of services requiring monitoring.
Practical Need for Comprehensive Monitoring
Today, the services being monitored by broadcast operations both large and small include not only linear broadcasts, on-demand content, and interactive services, but also a growing volume of OTT services. While on-demand services cause exponential jumps in the amount of content being provided, the introduction of OTT services boosts the number of portals through which content can be consumed. Add to this the plethora of viewing devices — PCs, tablets, and smartphones — and associated “flavors” of content they require, and the challenge of assuring the best possible experience (within the constraints of all components) can be enormous.
Even as broadcasters provide a growing number of services and a rising volume of content in an increasingly complex environment, tough competitive and economic conditions are pushing operations of all sizes to do more with fewer resources. The need for effective monitoring across all these linear and nonlinear services is especially keen for smaller broadcasters, which lack the sizeable infrastructure (or a whole division dedicated to running a proprietary CDN) that makes internal monitoring of all services both practical and economically feasible.
Smaller facilities must manage all monitoring tasks themselves; no one else will do it. The very nature of OTT services calls for centralized monitoring. With media being delivered directly to viewers, who may be scattered across the country or even across different countries, there no longer is a “middle man” to share responsibility for QoE. Thus, even a smaller operation must be able to monitor, from a single location, the availability and quality of services across platforms, CDNs, and video service providers.
OTT service providers must offer QoE that meets a high standard. This is not easy, however, when the service relies on a heterogeneous delivery chain, with different companies controlling elements that affect the viewer experience. Without an effective means of monitoring all services, the quality of delivery could be compromised for an extended period before engineering is alerted to the fact. Given the limited patience of today’s consumer, who has plenty of other options available, the service provider cannot count on a phone call to alert it to poor video quality and other such issues. Thus, even with a multitude of different delivery points, the content provider must implement a monitoring mechanism that addresses them all.
Without staff and budgets dedicated to monitoring, it’s important that smaller broadcasters capitalize on solutions that can offer a window into all different portals — not just over-the-air, cable, and OTT, but even the different CDNs and how they deliver content. This investment also needs to justify itself over the long haul, offering the flexibility and extensibility necessary to adapt to the continued evolution of media delivery. Doing business in a dynamic marketplace, with existing standards evolving and new standards emerging, particularly for OTT, broadcasters must adopt monitoring solutions that can adapt and continue to support the full range of current service offerings.
Technical Challenges of Comprehensive Monitoring
Many of the technical challenges associated with remote monitoring of linear, on-demand, and interactive services already have been addressed by the industry. Advanced monitoring solutions today can scan hundreds of channels around the clock and automatically test signal integrity, issue alerts (via email and SNMP), and capture the problematic content when channels do not conform to prespecified limits. Positioned “behind” the set-top box (STB), such solutions give operators a single system and location from which to access and monitor the video output continuously. Engineers typically enjoy remote monitoring capabilities that enable them to review video and audio for issues such as static or black screen, as well as errors in closed-captioning and audio levels.
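The detection-and-alerting loop described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the thresholds, the channel name, and the assumption that decoded luma and PCM audio samples are already available are all hypothetical.

```python
# Minimal sketch of automated channel checks, assuming decoded video frames
# arrive as lists of 8-bit luma samples and audio as normalized PCM samples.
# The thresholds below are illustrative, not taken from any product.

BLACK_LUMA_MAX = 20        # average luma below this: likely black screen
SILENCE_RMS_MAX = 0.01     # normalized RMS below this: likely silence

def is_black_frame(luma_samples):
    """Flag a frame whose average luma falls below the black threshold."""
    return sum(luma_samples) / len(luma_samples) < BLACK_LUMA_MAX

def is_silent(pcm_samples):
    """Flag an audio window whose RMS level falls below the silence floor."""
    rms = (sum(s * s for s in pcm_samples) / len(pcm_samples)) ** 0.5
    return rms < SILENCE_RMS_MAX

def check_channel(name, luma_samples, pcm_samples):
    """Return alert strings for one monitoring pass on one channel."""
    alerts = []
    if is_black_frame(luma_samples):
        alerts.append(f"{name}: black screen suspected")
    if is_silent(pcm_samples):
        alerts.append(f"{name}: audio silence suspected")
    return alerts
```

In a deployed system the returned alerts would be dispatched via email or SNMP traps, and the offending content captured for later review, as described above.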
Operators also can use today’s sophisticated monitoring systems to access and play out on-demand content to ensure content availability and system capacity, record digital program insertion (DPI) ads for proof of conformance, or monitor interactive STB guides to verify the customer experience. With complete STB command access, broadcasters can perform troubleshooting more effectively and use a historical review of content and services to address intermittent yet chronic issues. Refined to enable intuitive use, such systems often combine controls familiar from VCR units (and now also from video player software) with color cues that clearly indicate the channels being displayed, whether live or recorded. Layouts and source selection are managed with simple mouse clicks, and a built-in clock facilitates navigation to the desired time stamp.
With these tools, remote monitoring systems have eliminated the need for expensive and time-consuming manual and visual channel inspections. Broadcasters thus have been able to adopt active approaches to remote monitoring and respond to faults they identify rather than waiting for customer complaints. Given its benefits, remote monitoring technology already has been employed in applications ranging from competitive news analysis to monitoring of “out-of-footprint” broadcast and distribution channels. Now, the challenge facing broadcasters is to apply that technology to their OTT services.
Extending Monitoring to OTT
In the linear broadcast world, operators have been able to monitor their services passively, assessing the customer’s experience and, for high-value services, continuously logging and recording them to verify metadata and A/V quality. Working within this model, broadcasters needed to monitor only a single output, but much has changed with the rise of Internet-delivered content. The days in which a preset, pre-packaged schedule of programming was consumed only on a TV set, typically within the confines of the living room, are a thing of the past.
Broadcasters now must ensure delivery of a personalized user experience alongside video content. This task poses a significant QoS/QoE (quality of service/quality of experience) challenge. When providing OTT services, broadcasters effectively multiply their outputs, sometimes generating dozens of versions that require monitoring. Working with a variety of distribution platforms, broadcasters may need to deliver content to consumers via a number of CDNs. The situation is further complicated by the fact that the groups of files delivered to each CDN must accommodate a wide range of devices, each with its own profile. There is no practical way to monitor all of these outputs and versions all of the time, but broadcasters can still institute OTT service monitoring strategies that work.
The most viable monitoring strategy looks at key areas of the overall delivery workflow: ingest, encode, packaging, delivery, and the signal received by the viewer (albeit in a controlled environment free of the vagaries of ISP service).
Monitoring at ingest is straightforward, as it typically requires continuous monitoring of just one feed, most likely a studio SDI uncompressed feed or an MPEG transport stream. It is essential to monitor the files generated at ingest because they frequently serve as the high-bit-rate coded reference for all future transcodes of that content.
It is during the encoding stage that monitoring becomes more demanding; this is the point at which the broadcaster creates multiple versions of the reference file at the different bit rates and resolutions required for multiplatform distribution. Working with anywhere from several to more than a dozen files, the broadcaster must shift to passive monitoring methods. Examination of data about the file, rather than the video itself, is a far more cost-effective approach when dealing with large numbers of files. By looking at bit rates, syntax, reference timestamps, and alignment of each of the files, the broadcaster can make a sound determination of the files’ integrity.
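A passive check of this kind can be sketched as a comparison across rendition metadata. The field names (`bitrate_kbps`, `measured_kbps`, `first_pts`, `duration`) and the tolerances are illustrative assumptions, not a standard schema.

```python
# Hedged sketch: passive integrity checks on encoder output, driven by
# metadata about each rendition rather than by inspecting the video itself.

def check_renditions(renditions, pts_tolerance=0.04):
    """Compare renditions of one asset; return a list of problem strings.

    Each rendition is a dict carrying its declared and measured bit rate,
    first presentation timestamp, and total duration (illustrative fields).
    """
    problems = []
    base = renditions[0]
    for r in renditions:
        # Measured bit rate should sit close to the declared target.
        if abs(r["measured_kbps"] - r["bitrate_kbps"]) > 0.1 * r["bitrate_kbps"]:
            problems.append(f'{r["name"]}: bit rate off target')
        # Renditions must be time-aligned for clean adaptive switching.
        if abs(r["first_pts"] - base["first_pts"]) > pts_tolerance:
            problems.append(f'{r["name"]}: timestamps misaligned')
        if abs(r["duration"] - base["duration"]) > pts_tolerance:
            problems.append(f'{r["name"]}: duration mismatch')
    return problems
```

Because it touches only metadata, a check like this scales to every rendition of every asset at negligible cost, which is the point of the passive approach described above.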
Once files have been packaged in the appropriate formats for the target CDNs and platforms, they are delivered along with a manifest describing those files. The simplest way for the broadcaster to confirm that packaging is being performed properly and that delivery is correct and on schedule is to “accept” files in the same way that a CDN would.
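"Accepting" a package the way a CDN would comes down to parsing the delivered manifest and confirming that every file it references actually arrived. A minimal sketch using basic HLS playlist syntax; the segment names are hypothetical.

```python
# Sketch: verify packaging/delivery by reading the manifest like a CDN would.

def manifest_uris(playlist_text):
    """Return the non-comment lines of a playlist, i.e. the file URIs."""
    uris = []
    for ln in playlist_text.splitlines():
        s = ln.strip()
        if s and not s.startswith("#"):
            uris.append(s)
    return uris

def verify_delivery(playlist_text, delivered):
    """Return the URIs named in the manifest but absent from `delivered`."""
    return [uri for uri in manifest_uris(playlist_text) if uri not in delivered]

# Illustrative HLS media playlist with two segments.
playlist = """#EXTM3U
#EXT-X-TARGETDURATION:6
#EXTINF:6.0,
seg0.ts
#EXTINF:6.0,
seg1.ts
#EXT-X-ENDLIST"""
```

Here `delivered` might be the set of filenames visible at the CDN ingest point; any URI the manifest promises but the set lacks indicates a packaging or delivery fault.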
After this point, the broadcaster loses control over file-based content. Having worked to assure that the content delivered to each CDN is packaged correctly in the formats compatible with target devices, the broadcaster must hope that the CDNs and, subsequently, ISPs that carry media to the viewer will do so without introducing any faults or compromising video quality.
Though they can offer details on bandwidth usage, few CDNs provide the depth of reporting that would be necessary to uncover an error in packaging or a video quality issue. Once the CDN hands files off to the consumer’s ISP for delivery, the viewing experience can be influenced by a variety of factors ranging from a poor or slow mobile connection to a weak or spotty network in the home.
Adopting a Practical OTT Monitoring Model
The rise of OTT service delivery brings a widespread move from strictly passive monitoring models toward models that apply sampled active monitoring on the tail end of the chain. This approach acknowledges not only the complexity of the multiplatform media delivery chain, but also the exponential leap in the volume of media being delivered.
A sampling of outputs can yield an accurate representation of service quality as experienced by the majority of the broadcaster’s OTT viewing audience. Thus, for a relatively modest investment of time and money, particularly as compared with the cost of monitoring all outputs all of the time, the broadcaster can assure that most of its customers are enjoying quality service most of the time.
At the end of the delivery chain, broadcasters are also taking advantage of “round robin” emulation with different devices, rates/resolutions, and CDNs. In this instance, the broadcaster alternately checks the lower, middle, and upper bit rates; examines Apple HLS, Microsoft Smooth Streaming, and Adobe HDS formats; and monitors content for quality. (Naturally, the greatest value is realized when the bulk of sampling involves the most popular viewing devices.) When these tasks are undertaken in a controlled environment, the broadcaster can easily separate the issues over which it has control from the issues that can occur once content is subject to the variable conditions of ISP delivery.
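The round-robin rotation can be sketched as a simple generator over the combinations named above. The device list and the ordering are illustrative; a real deployment would weight sampling toward the most popular devices, as noted.

```python
import itertools

# Sketch of a round-robin sampling plan over formats, bit-rate tiers, and
# device profiles. Lists are illustrative assumptions.
FORMATS = ["Apple HLS", "Microsoft Smooth Streaming", "Adobe HDS"]
TIERS = ["low", "mid", "high"]
DEVICES = ["smartphone", "tablet", "PC"]

def sampling_plan():
    """Cycle endlessly through every format/tier/device combination."""
    return itertools.cycle(itertools.product(FORMATS, TIERS, DEVICES))

plan = sampling_plan()
combo = next(plan)   # first combination to emulate and check
```

Each pass hands the monitoring system one (format, tier, device) triple to emulate; over a full cycle, every combination gets checked, so no output goes unsampled indefinitely.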
The combination of active sampling and round robin emulation allows the broadcaster to set itself up as a consumer of its own OTT services. When these monitoring tasks are automated and alerts have been configured to warn engineers of any issues, the broadcaster is able to maintain a proactive approach to monitoring across its full complement of services.
Supporting this model, the broadcaster’s video monitoring and logging system must touch on a multitude of MPEG transport stream points and monitor adaptive bit rate streaming of both encrypted and unencrypted media. To facilitate monitoring of media delivered through apps, the system can employ an external device to emulate and record app behavior. This method accommodates both decryption and authentication while illustrating the user experience. With this functionality, the broadcaster can effectively monitor streaming media (encrypted or unencrypted) in any format.
Though maintaining QoE is a must for OTT service providers, the need to monitor for compliance purposes extends beyond conventional services and into the OTT realm. Fortunately, the very monitoring tools and techniques that support QoE monitoring of OTT services also enable broadcasters to make sure that their services comply with regulations governing accessibility to media, namely the FCC rules stemming from the Twenty-First Century Communications and Video Accessibility Act of 2010 (CVAA).
The FCC’s implementation of the CVAA demands that both broadcast and OTT content include descriptive video and captioning. With the aforementioned approach to monitoring OTT services, broadcasters can be proactive in ensuring that streamed content includes the requisite SMPTE Timed Text (SMPTE-TT) format.
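One proactive spot check is to confirm that each stream's manifest declares a caption or subtitle rendition at all. The sketch below scans an HLS master playlist for EXT-X-MEDIA subtitle declarations; SMPTE-TT carriage differs across packaging formats, so treat this as a single hedged check, not full CVAA verification. The playlist content is hypothetical.

```python
# Hedged sketch: scan an HLS master playlist for declared subtitle or
# closed-caption renditions. Absence of such a declaration is a red flag
# worth an engineer's attention; presence alone does not prove compliance.

def declared_caption_media(master_playlist_text):
    """Return EXT-X-MEDIA lines declaring subtitles or closed captions."""
    hits = []
    for ln in master_playlist_text.splitlines():
        if ln.startswith("#EXT-X-MEDIA:") and (
                "TYPE=SUBTITLES" in ln or "TYPE=CLOSED-CAPTIONS" in ln):
            hits.append(ln)
    return hits

def has_captions(master_playlist_text):
    return bool(declared_caption_media(master_playlist_text))

# Illustrative master playlist with one subtitles rendition declared.
master = """#EXTM3U
#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="English",URI="subs/en.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=1200000,SUBTITLES="subs"
mid.m3u8"""
```

Folded into the sampled active-monitoring rotation, a check like this lets the same system that guards QoE also flag streams that have lost their accessibility tracks.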
Variables including advertising, encoding, delivery mechanisms, and target devices combine to make monitoring across OTT services a challenging but necessary task. Broadcasters and other providers must have the means to log and monitor all of their outputs, and the simplest and most cost-effective solution is, where possible, the extension of installed monitoring and logging technology to address new OTT services. In this way, broadcasters can take advantage of proven technology and workflows to assure that they are delivering the high-quality personalized content desired by today’s media consumers.