
Observability: The Mindset And Resources OTT Must Bring Onboard To Achieve It

The Growth of OTT Demands a New Mindset From Streaming Providers

Streaming has grown immensely over the past 12 months. Viewing time for U.S. streaming services was 50 percent above 2019 levels in June, likely the result of services launched before and during the pandemic, including Apple TV+, Disney+, HBO Max (AT&T), and Peacock (Comcast). A new forecast projects SVOD subscribers will nearly double from 650 million worldwide at the end of 2020 to 1.25 billion by the end of 2024. And as the number of subscribers grows, so does the revenue opportunity: in 2020, the global video streaming market was valued at USD 50.11 billion and is expected to expand at a compound annual growth rate (CAGR) of 21.0 percent from 2021 to 2028.

So, what does this continued growth actually mean for the industry?

The growth in the number of subscribers and the resulting revenue is forcing OTT providers to take a hard look at how they provide their service. Unlike broadcast television, where the network operator had visibility all the way down to the set-top box and could guarantee a three-, four-, or five-nines level of service, OTT providers don't have that luxury. They must often cobble together monitoring systems that link dozens of distinct and separate technologies.

To Support These Growing Services, OTT Providers Must Put Data First

Variables. Countless variables to track and monitor. This is the new norm in streaming.

When OTT first began, monitoring (and the data needed for observability into the streaming experience) was really a secondary thought. The primary focus was reliability: keep the service up and running by whatever means necessary. Sometimes that involved a little bubblegum and duct tape. But, as the stats above show, OTT providers are now global. The demands of providing a consistent and reliable service on a global scale mean that data must become a primary focus. This is the mindset driving today's streaming platforms.

But this situation is more complicated than just elevating the importance of data in the operation of streaming platforms. There is no consistency around the data. Different vendors, providing different technologies within the video stack, all have their own data points. Sometimes, those data points, although named differently, represent the same variable. It’s an issue that complicates the entire process of holistically monitoring the workflow.

Without a commitment to standardizing, enriching, and centralizing information, the industry is contending with an explosion of data and no real way to put it to use. It's like trying to capture all of the water from a leaky dam with a thimble. The result is that many operators are locked out of the OODA loop (Observe, Orient, Decide, Act). To put video stack data to use, platform operators must be able to see the forest and the trees: they need to identify patterns while also being able to track down individual user sessions. Doing that requires consolidating the massive amount of fragmented data coming out of the workflow.

Just consider these examples. At the device level, operators have to be mindful of device OS, screen size, memory capacity, and supported protocols, all of which can affect client-side caching algorithms, adaptive bitrate ladders, and the choice of delivery protocol. Networks can experience sudden spikes in congestion, so delivery adaptability, such as choosing a different CDN within a specific region, becomes critical to ensuring a great viewer experience. And the content itself can be transformed to accommodate different connection speeds and bitrates.

Addressing any of those examples requires not only a lot of data, but data whose variables have been standardized and normalized. Without that, there is no way to get a clear picture of how efficiently or effectively the technologies within the stack are operating, how well third-party partners such as CDNs are performing, and how the viewer is experiencing the video.
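
To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical field names and thresholds) of how normalized device and network variables might drive delivery decisions such as capping the bitrate ladder or switching CDNs:

```python
# Illustrative sketch only: hypothetical field names for normalized device and
# network data driving delivery decisions (ABR ladder cap, CDN selection).

def select_delivery_profile(session: dict) -> dict:
    """Pick a bitrate ceiling and CDN for one viewer session."""
    profile = {"max_bitrate_kbps": 8000, "cdn": "cdn-a"}

    # Cap the ladder for small screens or memory-constrained devices.
    if session.get("screen_height_px", 1080) < 720 or session.get("memory_mb", 2048) < 1024:
        profile["max_bitrate_kbps"] = 3500

    # Route around regional congestion reported by network telemetry.
    if session.get("region_congestion_score", 0.0) > 0.8:
        profile["cdn"] = "cdn-b"

    return profile


print(select_delivery_profile({"screen_height_px": 640, "region_congestion_score": 0.9}))
```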

Standardized Data + Insights = Observability

The ultimate goal of elevating the importance of data within streaming platform operations is achieving observability. Just having a lot of data is nice. Having a lot of standardized data is better. But being able to derive insights from it in real time gives the streaming operator the observability they need to provide the consistent, reliable, and scalable service their viewers expect. Observability also gives the business the ability to make critical decisions about advertising, marketing, and subscriber retention with more certainty and accuracy.

The Datatecture is a Data Landscape for Streaming Operators

Today, our industry needs to know how to optimize the creation, delivery, transformation, and operationalization of streaming video. Doing so requires harnessing all the data from within the video technology stack: operations, infrastructure, and the delivery workflow components. Each of these technologies, whether vendor-supplied or open-source, can throw off data critical to seeing both the delivery and QoE big picture and the individual viewer sessions. But which technology throws off which data?

That’s where the Streaming Video Datatecture comes in.

After seven years in the industry, it was clear that to talk about data there needed to be a way to map out what was actually happening. There wasn’t a single resource that showed all of the components in those three main stack categories, or that kept pace with the ever-changing technologies. Having a resource that provides an up-to-date picture of the technology landscape was a critical first step to harnessing all the data within the workflow.

But the concept of the datatecture is more than just an industry landscape. It works as a tool which streaming operators can use to build their own datatectures. Because there is no standardization within the streaming industry, most OTT platforms are different; every operator has figured out a way to make their technology stack work for them. But the increasing need for observability isn’t specific to one provider. Every provider needs to put data first, and doing that means understanding all of the technologies in the stack which can contribute data to that observability. This industry-wide datatecture is a map which providers can use to build their own and to envision what their datatecture could be.

Release Video Data From Its Silos: What Senior Leaders Can Do

Although it’s the engineers who will make the most use of the datatecture, executives and managers within the organization need to help with the “data first” transition. Hiring the right people, and ensuring they are focused on data, will help make sure new software has data collection as a priority. Another strategy for senior leaders is to make sure all future hires, no matter the department, understand that the stability and growth of the business rely on observability to drive business decisions. If all teams, whether in advertising, marketing, security, or content groups, understand the importance of freeing data from its silos, then informed business decisions can be made.

In many ways, data can help bring teams together. When everyone can speak the same language and have access to the same information, it naturally sparks more collaboration. That’s not to say everyone needs to be aware of everything, but context is important. There’s nothing like data to provide context in decisioning to ensure an organization, with all its stakeholders and moving parts, is going in the same direction.

Make Data (And the Datatecture) The Core of Your Streaming Business

The datatecture we’ve created (and the datatecture you will create) should be at the epicenter of your streaming operations, software development, and business objectives. Without a deep understanding of the role each component within your video stack plays in your observability, it will be far more difficult to make the business and technology decisions that drive your platform forward.


Don’t Let Video Analytics Keep You From Seeing the Bigger Picture

Are You Seeing the Forest For the Trees?

If you talk to streaming operators about what video data is important, they will often point to QoE metrics: rebuffer ratio, time-to-first-byte, cache-hit ratio. And, yes, those metrics are definitely important. But having access to player-level video analytics alone isn’t entirely helpful. It only scratches the surface, showing the output of dozens of technologies used to deliver and service video viewer requests. It’s like knowing nothing about how a car moves through the manufacturing line and only seeing what rolls out at the end. What if it doesn’t work? How can you diagnose the problems if you don’t know how the guts were assembled?
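
For reference, here is a minimal sketch of how those player-level metrics are commonly computed; the field names and sample values are illustrative, not taken from any particular player SDK:

```python
# A minimal sketch of the player-level QoE metrics mentioned above, using
# commonly assumed definitions (field names here are hypothetical).

def rebuffer_ratio(rebuffer_seconds: float, watch_seconds: float) -> float:
    """Share of the session the viewer spent waiting on the buffer."""
    return rebuffer_seconds / watch_seconds if watch_seconds else 0.0

def cache_hit_ratio(cache_hits: int, total_requests: int) -> float:
    """Share of segment requests served from CDN cache."""
    return cache_hits / total_requests if total_requests else 0.0

session = {"rebuffer_s": 4.2, "watch_s": 1800, "ttfb_ms": 310}
print(f"rebuffer ratio: {rebuffer_ratio(session['rebuffer_s'], session['watch_s']):.2%}")
print(f"time-to-first-byte: {session['ttfb_ms']} ms")
print(f"cache-hit ratio: {cache_hit_ratio(9200, 10000):.2%}")
```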

End-to-End Instrumentation is Key to Visualizing Video Analytics

In order to understand how all of the component technologies within the workflow influence a metric like rebuffer ratio, it’s crucial to monitor everything within the stack. You need to collect data from encoders, from packagers, from DRM servers, from server-side ad insertion, from CDN delivery, and more. Everything that’s involved in the workflow, from content ingestion to playback, is critical to getting a true picture of everything. In keeping with the title of this post, all of those data sources are the trees and your workflow is the forest.

So how do you see the forest? There are three key steps to any end-to-end instrumentation: data collection, data normalization, and data storage.

Data Collection

This is the most basic step to seeing the forest. If you can’t get to the data from each of the workflow components, you can’t get a complete picture. This may require a programmatic connection, in the case of technologies like virtualized encoders which provide API access, or it may require third-party software or hardware, such as a probe, to monitor the technology. If a technology doesn’t expose data, or a third party doesn’t allow data to be consumed programmatically (as with some CDN logs), then it might be time to look at a replacement. You can’t have a data black hole in your workflow instrumentation.
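
As a rough illustration, a collection step might look something like the sketch below. The encoder endpoint and CDN log URL are hypothetical placeholders, not real vendor APIs:

```python
# A sketch of programmatic collection, assuming a hypothetical encoder REST
# endpoint and a CDN log file exposed over HTTPS; neither URL is a real vendor API.
import requests

def poll_encoder_stats(base_url: str, api_key: str) -> dict:
    """Pull current encoder health/output metrics over its HTTP API."""
    resp = requests.get(
        f"{base_url}/v1/stats",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def fetch_cdn_log_lines(log_url: str) -> list[str]:
    """Download a CDN access-log file made available over HTTPS."""
    resp = requests.get(log_url, timeout=30)
    resp.raise_for_status()
    return resp.text.splitlines()
```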

Data Normalization

Once the data has been collected, it has to be normalized. You can probably surmise that most workflow technology vendors are not coordinating with each other regarding data representation. They employ different fields and different values, sometimes for the same metric! So to make sense of it all, to ensure there is a relationship between the encoding data about a chunk and that same chunk in the cache, all of the data being collected should be normalized against some standardized schema. Doing so will ensure that the forest you see has all the same types of trees.
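
A minimal sketch of that normalization step, assuming two hypothetical vendors that report the same metrics under different field names (nothing here reflects a real vendor schema):

```python
# Two vendors report the same metric under different field names; both are
# mapped onto one canonical schema. Field names and vendor keys are illustrative.

FIELD_MAP = {
    "vendor_a": {"chunkDuration": "segment_duration_s", "br": "bitrate_kbps"},
    "vendor_b": {"seg_len": "segment_duration_s", "bitrateKbps": "bitrate_kbps"},
}

def normalize(vendor: str, record: dict) -> dict:
    """Rename a vendor's fields to the standardized schema, dropping unknowns."""
    mapping = FIELD_MAP[vendor]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

print(normalize("vendor_a", {"chunkDuration": 6, "br": 4500}))
print(normalize("vendor_b", {"seg_len": 6, "bitrateKbps": 4500}))
# Both print: {'segment_duration_s': 6, 'bitrate_kbps': 4500}
```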

Data Storage

Of course, collecting and normalizing all this data without a place to store it doesn’t make much sense. You need a repository that is both flexible and programmatically accessible. This could be a data lake provided by a cloud operator, like Google BigQuery, supported by an additional, transient storage mechanism, like Memcached, for lightning-fast retrieval.
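
As a sketch of that storage pattern, assuming a BigQuery table and a local Memcached instance (the project, table, and key names are placeholders):

```python
# Stream normalized rows into a BigQuery table and cache the latest value in
# Memcached for fast reads. Project/table IDs and the cache key are placeholders.
import json
from google.cloud import bigquery
from pymemcache.client.base import Client as MemcacheClient

bq = bigquery.Client()
cache = MemcacheClient(("localhost", 11211))

def store_event(event: dict) -> None:
    # Durable, queryable copy in the data lake / warehouse.
    errors = bq.insert_rows_json("my-project.video_data.player_events", [event])
    if errors:
        raise RuntimeError(f"BigQuery insert failed: {errors}")
    # Keep the most recent event per session hot for dashboards.
    cache.set(f"session:{event['session_id']}", json.dumps(event), expire=300)
```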

With The Forest in View, Your Video Analytics Can Make You a Data Ranger

With end-to-end instrumentation of all the workflow technologies, you can get down to making sense of it all. For those just getting started with this kind of approach, that will require a lot of manual connections. You will spend your time tending the forest, pruning trees, grouping them together, and relating them. That work, of course, will pay dividends in the future as your video analytics visualizations and dashboards become ever smarter. But making manual connections between data sets within your storage isn’t scalable. Most streaming operators will look for ways to automate this through machine-learning or artificial intelligence systems. These systems, once trained, could propose connections on their own, making suggestions about the nature of a data value. For example, if your rebuffer ratio is high and your encoder is throwing errors, a system like this could bubble up a recommendation that one of the bitrates in the bitrate ladder is corrupt. An intelligent system might even analyze each of the bitrates and identify the one causing the higher rebuffer ratio.
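
The sketch below is not a trained model, just a heuristic placeholder for the kind of rule such a system might learn; the thresholds and rendition names are illustrative:

```python
# A heuristic stand-in for a learned rule: correlate a high rebuffer ratio with
# encoder errors on a specific rendition and surface a recommendation.

def recommend(rebuffer_ratio: float, encoder_errors_by_rendition: dict) -> str | None:
    if rebuffer_ratio <= 0.02:  # illustrative threshold
        return None
    worst = max(encoder_errors_by_rendition, key=encoder_errors_by_rendition.get)
    if encoder_errors_by_rendition[worst] > 0:
        return (f"Rebuffering is elevated and rendition {worst} is throwing encode "
                f"errors; inspect that rung of the ladder.")
    return "Rebuffering is elevated but the encoder looks clean; check delivery (CDN/cache) next."

print(recommend(0.05, {"1080p@6000": 0, "720p@3500": 17, "480p@1800": 0}))
```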

Let the Forest Tend Itself

With a continual flow of data coming from throughout the workflow, normalized and visualized for quick decisions, you are well on your way to taking the next step in streaming operations: automation. Edge-based, serverless processes, such as Lambda functions, could analyze results from different data sets in real-time (leveraging that machine-learning or AI layer mentioned previously) and take action against them based on pre-determined thresholds. For example, if viewers in a specific geographic region were experiencing high TTFB values, the system could automatically switch to an alternate CDN. If that did not fix the problem, the system could then serve a lower bitrate, overriding the player logic. You get the idea. A system like this not only provides granular, robust analysis to operations (through real-time, dynamic visualizations), it also participates in continuous improvement and self-healing. Automation within the streaming video workflow could even get predictive by comparing the real-time video analytics being collected with historical heuristics. What if the system knew that on Mondays, CDN A usually had a tough time in a certain geographic region? Rather than waiting on the analysis of data to make a switch, why not automatically switch to CDN B during that time frame?
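
A sketch of that kind of automation, written as a Lambda-style handler; the switch_cdn and cap_bitrate helpers are hypothetical stand-ins for whatever control APIs a given platform exposes, and the thresholds are illustrative:

```python
# A Lambda-style handler that reacts to regional QoE degradation. The helpers
# below are placeholders, not real control-plane APIs.

TTFB_THRESHOLD_MS = 800  # illustrative threshold

def switch_cdn(region: str, cdn: str) -> None:
    print(f"[action] routing {region} traffic to {cdn}")

def cap_bitrate(region: str, max_kbps: int) -> None:
    print(f"[action] capping {region} at {max_kbps} kbps")

def handler(event, context=None):
    region = event["region"]
    ttfb_ms = event["p95_ttfb_ms"]
    if ttfb_ms > TTFB_THRESHOLD_MS:
        switch_cdn(region, "cdn-b")
        if event.get("still_degraded_after_switch"):
            cap_bitrate(region, 2500)
    return {"region": region, "ttfb_ms": ttfb_ms}

handler({"region": "us-east", "p95_ttfb_ms": 950})
```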

Data Enables Decisions, Video Analytics Visualizes Them, But You Have to Make Them

Don’t be the streaming provider that just sees the numbers. That’s looking just at the trees. To truly make informed business decisions that affect QoE and QoS, which in turn affect subscriber satisfaction and churn rates, you need end-to-end instrumentation. With a system that collects all the data, normalizes it, and visualizes it, you can be assured that your operations personnel can see the forest to make better, holistic decisions rather than fixing the value for a single data point.


A Video Data Platform First, Analytics Second

It’s becoming more generally accepted in the streaming video industry that having more data can provide greater insight. With better graphs, better algorithms, better analytics, more improvements can be made. As such, many streaming operators look for new tools or providers to make more data from their technology stack available. However, the result can be overwhelming: a dozen different interfaces all providing detail and insight into a different aspect of the streaming workflow. Of course, none of them are connected together.

Unfortunately, this is an all-too-common approach to streaming video data monitoring. Operators put analytics first, seeking a way to visualize a data source without having an overall strategy for data in general.

What’s needed first, before meaning can be derived, before any attempt at analysis, is a video data platform.

What Are Analytics?

Analytics, simply put, is the use of visualization tools to display data in a form that can be analyzed. Although it’s possible to analyze data points within a table, the concept of analytics, especially within streaming video, usually includes some sort of dashboard or graphical interpretation of the data.
But analytics does not necessarily map to insight. That’s because analytics, in the current state of the streaming industry, is often carried out against siloed data. Each source of data often has its own tool or dashboard that makes “sense” of the data itself. Of course, these are helpful; just looking at data tables doesn’t reveal much. Yet because each dashboard is independent, true insight must be inferred by looking across multiple tools. This increases not only the complexity of deriving meaning from the data, such as root-cause analysis, but also significantly increases the time it takes.

Putting the Cart Before the Horse

When streaming operators focus on analysis without a video data platform strategy, they get short-term gains at the cost of long-term benefits. For example, with access to CDN log data through the CDN visualization, a network operations engineer may be able to ascertain a low cache efficiency on a particular piece of content. But the cause of that may not be the CDN at all. Rather, it may be a bad encode for a specific bitrate in the bitrate ladder. Without an overall data platform strategy, which would include a means to relate the CDN log data to the encoder data and even other sources like the player, the analysis is only partially helpful. The low cache-hit ratio reveals a problem. With some help from the CDN operations engineers, the streaming operator may be able to discover that it’s a result of the encoder. This kind of approach is repeated over and over again as new data sources from streaming stack technologies are made available. It’s an analysis-first mindset.

When You Put Strategy First

Rather than looking at how to visualize each type of data, streaming operators need to implement a video data platform. A good video data platform comprises three layers:

  • Data retrieval and transport (Level 1)

  • Data normalization and standardization (Level 2)

  • Data relating and visualization (Level 3)

Video Data Retrieval and Transport (Level 1)

The first layer of a video data platform is getting the data from the tools. In many cases, this means programmatic access to log files or the tool’s database. Once access has been achieved, the data must be transported to a common location (i.e., a data lake). Most often, this is cloud-based, such as through a provider like Amazon Web Services or Google Cloud Platform, and has programmatic access built-in. Key to this as well is the speed at which data can be transported. Some data should be provided in real-time, such as QoE data from the player, while other data can take longer.
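
As a sketch of this first layer, assuming a CDN log file exposed over HTTPS and a Google Cloud Storage bucket acting as the data lake (the bucket name and source URL are placeholders):

```python
# Level 1 sketch: pull a log file from a workflow component and land it in a
# cloud data lake bucket, partitioned by time for downstream processing.
import datetime
import requests
from google.cloud import storage

def transport_to_lake(source_url: str, bucket_name: str = "video-data-lake") -> str:
    """Fetch raw log data and write it to the lake; returns the object path."""
    raw = requests.get(source_url, timeout=30).text
    path = f"raw/cdn_logs/{datetime.datetime.utcnow():%Y/%m/%d/%H%M%S}.log"
    storage.Client().bucket(bucket_name).blob(path).upload_from_string(raw)
    return path  # downstream layers pick the object up from here
```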

Data Normalization and Standardization (Level 2)

The second layer of the video data platform is a process by which to normalize and standardize the data. Many tools collect similar data points. For example, the average video player utilizes over 15 software development kits (SDKs) from various technology vendors. These may collect data points that are duplicative and need to be scrubbed, normalized, and de-duped. The streaming operator can build a machine-learning system on top of the data lake to take care of this normalization.
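
A minimal sketch of that de-duplication step, assuming multiple player SDKs reporting the same event under a shared session ID (all keys and field names are illustrative):

```python
# Level 2 sketch: several SDKs report the same player event, so records are
# keyed on (session, event, timestamp) and de-duplicated after field mapping.

def dedupe(events: list[dict]) -> list[dict]:
    seen, unique = set(), []
    for e in events:
        key = (e["session_id"], e["event"], e["ts"])
        if key not in seen:
            seen.add(key)
            unique.append(e)
    return unique

events = [
    {"session_id": "s1", "event": "rebuffer_start", "ts": 1000, "source": "sdk_a"},
    {"session_id": "s1", "event": "rebuffer_start", "ts": 1000, "source": "sdk_b"},
]
print(dedupe(events))  # only one record survives
```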

Data Relating and Visualization (Level 3)

The final layer of the video data platform is making the connections between elements in different data sources and carrying out the calculations needed to derive meaning. This usually involves a mapping of data elements or a master table (based on standardized data elements) which links data elements from different sources together under a single master element. This can often be accomplished through third-party tools like Tableau, Datadog, or Looker. These tools also provide visualization features so streaming operators can create customized dashboards for different business groups or roles.
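
As a sketch of that master-table approach, assuming hypothetical CDN and encoder identifiers that both map onto one canonical segment key:

```python
# Level 3 sketch: a master table maps each source's identifier onto a canonical
# key so CDN and encoder records can be related. All identifiers are illustrative.

MASTER = {
    ("cdn", "/v/abc/720p/seg_42.ts"): "title-abc/720p/42",
    ("encoder", "abc_720p_00042"): "title-abc/720p/42",
}

def relate(cdn_rows: list[dict], encoder_rows: list[dict]) -> dict:
    joined: dict[str, dict] = {}
    for row in cdn_rows:
        key = MASTER[("cdn", row["path"])]
        joined.setdefault(key, {}).update(cache_status=row["cache_status"])
    for row in encoder_rows:
        key = MASTER[("encoder", row["segment_id"])]
        joined.setdefault(key, {}).update(encode_errors=row["errors"])
    return joined

print(relate(
    [{"path": "/v/abc/720p/seg_42.ts", "cache_status": "MISS"}],
    [{"segment_id": "abc_720p_00042", "errors": 3}],
))
```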

A Streaming Video Data Platform Grows With the Business

The best part about making analytics a product of your video data platform is that you don’t have to rebuild everything when you want to include new data in your visualizations. A video data platform is flexible by nature. The architecture is intended to accommodate new data sources by simply connecting them through an API (a function of the platform itself). New logic can be added to the normalization and standardization layer, again without any “rip and replace,” enabling new relationships to be created between different data sets and exposed through enhanced visualizations. The video data platform, then, becomes the foundation for all monitoring and analytics across the organization.

Datazoom: A Ready-to-Implement Video Data Platform

Of course, you can build all of this yourself. However, is building a video data platform your core business? As a streaming operator, probably not. Furthermore, you can’t just rely on any provider to supply something so fundamental to the health and success of your streaming business. You need a fire-tested, battle-hardened, proven platform to ensure that the data you need to provide the best possible video experience is available quickly, normalized to your business needs, and visualized for actionable business decisions.
