
CDN Management: The Secret to the Future Success of Streaming Video Platforms

Content delivery networks (CDNs) are critical to the success of streaming platforms. Without their huge networks and experienced engineers, streaming video experiences might be spotty at best. Resilience, consistency, scalability… achieving those streaming platform attributes requires the use of multiple CDNs. But managing multiple delivery networks is difficult when each provides its own logs and its own visualization tools. Savvy streaming providers often build complex systems and tools not only to switch quickly between CDNs but also to collect all that log data, normalize it, and visualize it for use by operations engineers.

But CDN management means more than just understanding which CDN is performing well. Identifying root cause is often a collaborative effort between multiple CDNs, streaming operations, and other engineers. To facilitate that, though, everyone has to be able to trace data from each system to the same playback session…and that means access to a unified set of data.

Step 1 of CDN Management: Set Up a Video Data Platform for Your CDNs

The first step is to implement a video data platform. This platform serves the central purpose of aggregating all of the data, from different providers and sources, and normalizing it against a standard, agreed-upon set of data elements. In most cases, the owner of this platform will be the streaming operator. As the owner of all the data for their streaming video technology stack, the operator can provide access to any third parties, such as the various CDNs. In an ideal world, the video data platform will support programmatic access so that the data can be consumed by other services, like a visualization tool.
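
To make the normalization step concrete, here is a minimal sketch assuming two hypothetical CDN log formats and an invented standard schema; none of the field names reflect any particular vendor's logs.

```python
# Minimal sketch of normalizing two hypothetical CDN log formats into one
# agreed-upon set of data elements. All field names are invented for
# illustration and do not reflect any specific vendor's logs.

STANDARD_FIELDS = ["timestamp", "session_id", "cdn", "status_code", "ttfb_ms", "cache_hit"]

def normalize_cdn_a(entry: dict) -> dict:
    # cdn_a is assumed to report time-to-first-byte in seconds
    return {
        "timestamp": entry["ts"],
        "session_id": entry["sid"],
        "cdn": "cdn_a",
        "status_code": entry["edge_status"],
        "ttfb_ms": entry["time_to_first_byte"] * 1000,
        "cache_hit": entry["cache_result"] == "HIT",
    }

def normalize_cdn_b(entry: dict) -> dict:
    # cdn_b is assumed to report the same measurement in milliseconds
    return {
        "timestamp": entry["datetime"],
        "session_id": entry["client_session"],
        "cdn": "cdn_b",
        "status_code": entry["http_code"],
        "ttfb_ms": entry["ttfb"],
        "cache_hit": entry["hit"] == 1,
    }
```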

Step 2 of CDN Management: Create Shared Dashboards

Now that you have a video data platform collecting, normalizing, and storing all of the data from the workflow, you can create dashboards representing different aspects of performance monitoring. For example, you might have a QoE dashboard which includes a CDN provider in addition to all of the telemetry coming from the player. By giving that CDN access to the dashboard in which it is reflected, the streaming operator and the provider can work hand-in-hand to identify root-cause issues involving that delivery network. What’s more, CDN network and operations engineers can see other data to which they might not normally be privy. They can see how potential issues in their delivery may be impacting aspects of the viewer experience such as start-up time, video exits (because of latency or buffering), etc.
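
As a rough illustration of what such a dashboard might aggregate, here is a hedged sketch that rolls up normalized player telemetry by CDN; the event fields (startup_time_ms, exit_reason) are assumptions, not a real schema.

```python
from collections import defaultdict

def qoe_by_cdn(events: list[dict]) -> dict:
    """Roll up per-session telemetry into per-CDN QoE figures for a dashboard."""
    totals = defaultdict(lambda: {"sessions": 0, "startup_ms": 0.0, "buffer_exits": 0})
    for e in events:
        t = totals[e["cdn"]]
        t["sessions"] += 1
        t["startup_ms"] += e["startup_time_ms"]
        t["buffer_exits"] += 1 if e.get("exit_reason") == "buffering" else 0
    return {
        cdn: {
            "avg_startup_ms": t["startup_ms"] / t["sessions"],
            "buffer_exit_rate": t["buffer_exits"] / t["sessions"],
        }
        for cdn, t in totals.items()
    }
```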

A Video Data Platform: The Modern Day Rosetta Stone

This approach to creating a unified solution for streaming video telemetry data is as much about enabling collaboration as it is about gaining consensus. Part of the issue with a streaming video operator sharing data with others in its provider ecosystem is the disconnect on the data itself. Providers may have different measurements, different data names, different values, etc. This can all create confusion when the streaming operator complains about a low value which the provider is measuring as normal or high. The video data platform can be like a Rosetta Stone within a provider ecosystem. Through normalization against a standard set of data elements, there is no longer any need to argue over whose data sets or data values are correct.
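
A Rosetta Stone in miniature might look like the sketch below: each source reports the same measurement under a different name and unit, and a single mapping translates everything into the standard data element. The names and units here are hypothetical.

```python
# Hypothetical field map: provider-specific names/units on the left,
# the standard data element ("ttfb_ms") on the right.
FIELD_MAP = {
    "cdn_a":  {"time_to_first_byte": ("ttfb_ms", lambda v: v * 1000)},  # seconds -> ms
    "cdn_b":  {"ttfb":               ("ttfb_ms", lambda v: v)},         # already ms
    "player": {"firstByteDelay":     ("ttfb_ms", lambda v: v)},
}

def to_standard(source: str, raw: dict) -> dict:
    """Translate a raw record from one source into standard data elements."""
    out = {}
    for field, value in raw.items():
        mapping = FIELD_MAP.get(source, {}).get(field)
        if mapping:
            std_name, convert = mapping
            out[std_name] = convert(value)
    return out
```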

Not Just a Unified Solution for Ecosystem Providers

Of course, having this kind of solution is great for getting third-party providers, like CDNs, onto the same page as the streaming operator. But a video data platform, collecting data from throughout the streaming workflow, both internal and external components, can also help to standardize the metrics, measurements, and values that are used to drive the business. Marketing, for example, may already have its own way of calculating a viewer engagement rate. But if that rate is calculated within the video data platform by employing specific metadata collected from the player, marketing no longer has to do that work or justify and validate its approach. The team can simply point to a shared dashboard that captures and visualizes the metrics important to them.
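
For illustration only, here is one way an engagement rate could be computed inside the platform from player-supplied session data; the definition used here (total watch time over total content duration) is an assumption, not a standard formula.

```python
def engagement_rate(sessions: list[dict]) -> float:
    """Share of available content actually watched, across a set of sessions."""
    watched = sum(s["watch_time_s"] for s in sessions)
    available = sum(s["content_duration_s"] for s in sessions)
    return watched / available if available else 0.0
```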

Part of the Value of This CDN Management Approach Is In the Conversation Itself

Of course, having all of the ecosystem providers like CDNs working from the same data is ideal. But setting up this kind of solution can also be collaborative. Streaming operators can involve a variety of invested parties, such as CDN operations and network engineers, to help identify what’s important, the thresholds for values, and the data elements themselves. When the solution is approached in such a manner, when conversations happen between streaming operator resources and third-party resources, everyone has skin in the game. The conversation about the solution itself, even before conversations involving the shared dashboard, serves to align everyone more closely. And that kind of collaboration is just as valuable as having a unified data solution.


Why Video Data Telemetry is Critical to DevOps Success (And How to Get It)

DevOps has been a revolution in software development. To support agile methodologies, in which developers incrementally release features that are tested by users in real time, developers needed an increasingly larger role in operationally supporting their code. Waiting for operations to make configuration changes to environments or push out new code didn’t support the new scrum worldview. So the role of DevOps was born, and how software was created and released never looked the same. But the success of this new role, part programmer and part operations, depends on data, and that is especially relevant in streaming video. New features or application versions can negatively impact Quality of Service (QoS) or Quality of Experience (QoE) and ultimately influence whether subscribers keep paying. Video data telemetry is critical for DevOps teams managing components within the streaming video technology stack.

Not All Data is Created Equal

There are several kinds of data which can influence software development:

  • User feedback. This is the kind of direct feedback from users which can be instrumental in understanding the impact of a new feature or version. And although this data is very helpful, it’s not quantitative, which makes it difficult to use objectively. Oftentimes, the mechanism by which this kind of data is collected (e.g., a survey) can affect its usefulness.

  • Performance metrics. This data is very useful in determining how changes to software impact the overall user experience and how well new software is operating. For example, if parts of a feature or interface are dynamic, are network factors inhibiting the loading of certain elements? How quickly does the software respond to user interaction? With this information, DevOps can ensure that software is operating within acceptable levels of speed.

But when it comes to streaming video, there is another kind of data which is far more valuable in understanding the impact of new features or software on the end-user experience: experience data.

The Data of Experience

In streaming video, the software experience is concentrated in the player which can reside on a variety of client devices ranging from computers to smartphones to boxes that plug directly into the television. This player is full of additional software, encapsulated in Software Development Kits (SDKs), that provide both functionality and data collection. Often supplied by third-parties, these SDKs are installed directly within the player. This software architecture is well understood by DevOps. Additional functionality within the player can easily be encapsulated in SDKs and deployed quickly.

But these SDKs can also provide that crucial experience data. Many of the components loaded by the video player also send back telemetry data which is critical for understanding overall QoS and QoE. This data can tell operations, or DevOps, how critical KPIs such as rebuffer ratio are performing, which provides insight into the overall user experience. Of course, that is subjective, as each user has a different threshold for certain metrics, but the data ultimately provides a general indicator of the positive or negative impact on the user experience.
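
As a hedged sketch of what that telemetry enables, the function below derives a rebuffer ratio (time stalled divided by total session time) from a stream of player events; the event names and fields are assumptions.

```python
def rebuffer_ratio(events: list[dict]) -> float:
    """Stalled time as a share of total session time, from hypothetical player events."""
    stalled = 0.0
    playing = 0.0
    stall_start = None
    for e in sorted(events, key=lambda ev: ev["t"]):
        if e["type"] == "buffering_start":
            stall_start = e["t"]
        elif e["type"] == "buffering_end" and stall_start is not None:
            stalled += e["t"] - stall_start
            stall_start = None
        elif e["type"] == "heartbeat":          # assumed periodic "still playing" event
            playing += e["interval_s"]
    total = playing + stalled
    return stalled / total if total else 0.0
```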

A System for Collecting and Normalizing Data

Rather than building new SDKs or other player functionality just to collect that data, DevOps needs access to technology that can be easily integrated into existing systems. When a streaming operator has already embraced the idea of a video data platform, the mechanisms by which data is collected from components within the streaming technology stack, such as a player, are well defined. DevOps needs access to such a system so that approved and proven approaches can be employed as part of the development process to ensure user feedback, performance metrics, and experience data can be collected.

Collecting data isn’t the only consideration, though. For the data to be useful to DevOps, and the rest of operations, it must be normalized first. When there’s a video data platform, this is easily accomplished through a standardized schema that can take data elements from a variety of SDK sources and process them before dropping the data into a storage solution such as a cloud data warehouse like Google BigQuery.
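
A minimal sketch of that pipeline, assuming a made-up standard schema and table name; the streaming-insert call comes from the google-cloud-bigquery client and requires credentials to be configured.

```python
from google.cloud import bigquery

# Invented standard schema; real deployments would define this centrally.
STANDARD_SCHEMA = ["event_time", "session_id", "device", "metric", "value"]

def normalize(sdk_name: str, payload: dict) -> dict:
    row = {key: payload.get(key) for key in STANDARD_SCHEMA}
    row["source_sdk"] = sdk_name
    return row

def load(rows: list[dict]) -> None:
    client = bigquery.Client()
    # Table name is hypothetical.
    errors = client.insert_rows_json("my_project.video_data.player_events", rows)
    if errors:
        raise RuntimeError(f"BigQuery insert failed: {errors}")
```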

Without the Right Tools…

DevOps is the way software is created now: programmers with operational knowledge and access to the tools and systems for deployment. But to be truly effective, their efforts must be tied into larger systems like video data platforms. In this way, the software they create, whether new versions, new applications, or new features, can also be instrumented to collect the data necessary to evaluate the impact on both quality of service and quality of experience.


Understanding the Datatecture Part 1: The Core Categories

The relationship of technologies within the streaming video stack is complex. Although there might seem to be a linear progression of components, from content acquisition to playback, in many workflows the connections between the technologies are often far from linear. Data from one piece of technology may be critical for another to function optimally. APIs can be leveraged to connect optimization data from different technologies to one another, and to higher-level systems like monitoring tools and operational dashboards. That’s why the datatecture was created: to better visualize the interconnection between the technologies and ultimately document the data they make available.

How to Visualize the Datatecture

Think of the datatecture as a fabric which lays over the entire workflow and represents the underlying flow of data within the technology stack. How the datatecture is organized, then, is critical not only to understanding the basis of that fabric but also to categorizing your own version of it, suited specifically to your business. Regardless of the individual technologies you end up employing in your datatecture, they will ultimately be categorized into three major areas: Operations, Infrastructure, and Workflow.
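
One simple way to start cataloguing your own version is as plain data; the sketch below just mirrors the subgroups discussed in this article and is by no means exhaustive.

```python
# The three core categories and the subgroups covered below.
DATATECTURE = {
    "Operations": ["Analytics", "Configuration Management", "Monitoring"],
    "Infrastructure": ["Containers and Virtualization", "Storage and Caching", "Queueing Systems"],
    "Workflow": ["Delivery", "Security", "Playback", "Transformation",
                 "Monetization", "Content Recommendations"],
}
```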

Datatecture Core Category: Operations

A major category within any streaming workflow is the group of technologies which help operate the service. The subgroups and technologies within this group are critical to ensuring a great viewer experience:

  • Analytics. One of the primary categories within the Operations group, this subgroup includes a host of components found in any streaming video technology stack. The technologies found here may include tools for analyzing logs (such as for CDN logs), tools for analyzing data received from the video player, and even data useful in understanding viewer behavior regarding product features and subscriber identity. Without these and other technologies in this subgroup, it would be nearly impossible to provide the high-quality viewing experience subscribers demand.

  • Configuration Management. Although not as sexy as analytics, this is a critical subgroup of the Operations category as it covers such technology as multi-CDN solutions. Many streaming providers employ multiple CDNs to deliver the best experience. But switching from one CDN to another can be complex. The technologies in this subgroup can help provide that functionality in a much easier way (a minimal switching sketch follows this list).

  • Monitoring. Perhaps one of the linchpins of the streaming video technology stack, this subgroup enables operational engineers to continually assess the performance of every other technology within the stack, whether on-prem, cloud-based, or even third-party. The data pulled into these monitoring tools ensures operations and support personnel can optimize and tune the workflow for the best possible user experience.
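
The sketch below is not any vendor's switching logic, only a minimal illustration of the idea behind a multi-CDN configuration: weight traffic toward whichever CDN has the better recent QoE score and shift away as performance degrades.

```python
import random

def pick_cdn(recent_qoe: dict[str, float]) -> str:
    """Weighted random selection; recent_qoe maps CDN name -> score (higher is better).
    Assumes at least one CDN is present."""
    total = sum(recent_qoe.values())
    if total <= 0:
        return random.choice(list(recent_qoe))
    r = random.uniform(0, total)
    running = 0.0
    for cdn, score in recent_qoe.items():
        running += score
        if r <= running:
            return cdn
    return cdn  # floating-point edge case: fall back to the last CDN
```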

Read more about the Operations category.

Datatecture Core Category: Infrastructure

Underlying the entire workflow is the infrastructure. From databases to storage to virtualization, these fundamental technologies power the very heart of the streaming stack:

  • Containers and Virtualization. As the streaming stack has moved to the cloud, virtualization, along with the tools to manage containers and instances, has become a crucial technology. These technologies ensure scalability and elasticity and provide a means to quickly and easily deploy new versions of workflow components.

  • Storage and Caching. At its heart, streaming video is about data. Whether it’s the segments which comprise an HTTP-chunked file or the data gathered from throughout the workflow, it’s all about bits and bytes. The challenge is how to store them and, in the case of caching, how to make them available to the users and applications that need them. The subgroups and technologies in this group are critical to building and managing that data.

  • Queueing Systems. Scale is a major challenge for streaming providers. How do you handle all the user requests for content, and the influx of QoE and QoS data, when parallel sessions climb into the millions or tens of millions? Queueing systems provide a means by which to organize and handle those requests so that systems such as caches or databases aren’t overrun and tipped over (see the sketch below).
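
A toy sketch of the idea, using an in-process queue; a real deployment would use a broker such as Kafka or RabbitMQ, and the storage call here is a placeholder.

```python
import queue
import threading

events: queue.Queue = queue.Queue(maxsize=100_000)  # buffer absorbs bursts

def write_batch_to_storage(batch: list) -> None:
    pass  # placeholder for the actual database or cache write

def writer(batch_size: int = 500) -> None:
    # Drain the queue in batches so the backing store sees steady, bounded load.
    while True:
        batch = [events.get()]
        while len(batch) < batch_size and not events.empty():
            batch.append(events.get_nowait())
        write_batch_to_storage(batch)

threading.Thread(target=writer, daemon=True).start()
```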

Read more about the Infrastructure category.

Datatecture Core Category: Workflow

This core category is where the magic happens. It’s all of the subgroups and technologies which enable the transformation, security, delivery, and playback of streaming video, so it makes sense that it’s the deepest category with the most technologies:

  • Delivery. From CDNs to Peer-to-Peer, this subgroup deals with well-known, established technologies for getting the streaming segments to the users who request them. It also contains other technologies, such as Multicast ABR and ultra-low latency, which are becoming increasingly important in delivering a great viewer experience, with high-quality video, at scale, especially for live events such as sports.

  • Security. The streaming industry has long contended with piracy. That’s because of the nature of streams: they are just data. It is much more difficult to pirate a broadcast feed because the signal is far harder to capture and redistribute. But with streaming, which employs well-known web-based technologies like HTTP, that’s not the case. So this subgroup includes technologies like DRM, watermarking, and Geo IP to do everything from encrypting the content to determining where it can be played (a toy geo-check sketch follows this list).

  • Playback. Without a player, there would be no streaming video. This subgroup addresses the myriad playback options, from mobile devices to gaming consoles. But players also come in many shapes and sizes. While some are commercially available and provide a lot of support, others are open-source and present highly configurable options for streaming providers that want a greater degree of control, with less support, than may be available with commercial players.

  • Transformation. Content for streaming doesn’t come ready-made out of the camera. Just like broadcast, it must be encoded into bitrates that are appropriate for transmission to viewers. But unlike broadcast, the players and devices used to consume those streams may require very specific packages or formats, some of which require licenses to decode and others of which are open-source. The subgroups in this category cover everything from encoding to packaging and even metadata, the information that is critical for streaming providers to categorize and organize content.

  • Monetization. Of course, most streaming providers aren’t giving away their content for free. They have some sort of strategy to generate revenue. These can range from subscription services to including ads. The subgroups in this category cover a broad range of monetization technologies ranging from subscription management to the many components of advertising integration, such as SSAI and DAI, and tracking.

  • Content Recommendations. This small subgroup is becoming increasingly important in streaming platforms. Suggesting content to viewers, whether based on their past viewing behavior or the viewing behavior of similar users, is critical to keeping users engaged, which can ultimately reduce attrition.
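
To close out the Security point above, here is the toy geo-check sketch mentioned earlier; the rights table and country codes are hypothetical.

```python
# Hypothetical rights table: title -> countries where playback is licensed.
RIGHTS = {"title_123": {"US", "CA", "GB"}}

def may_play(title_id: str, viewer_country: str) -> bool:
    """Allow playback only where the content is licensed (Geo IP lookup assumed upstream)."""
    return viewer_country in RIGHTS.get(title_id, set())
```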

Read more about the Workflow category.

No One Core Category is More Important Than Another In Datatecture

You may be wondering if you can scrimp on one core category, like Operations, for the sake of another, such as Workflow. The short answer is, “no.” These Datatecture core categories are intricately connected, hence the Venn diagram structure. Operations depends on Infrastructure to store all of the data, while Workflow depends on Operations to find and resolve root-cause issues. Of course, there are countless other examples, but you get the picture: these core categories are joined at the hip. So when you are examining your own streaming platform or planning your service, building your own datatecture depends on understanding the relationship between these core categories and ensuring you give each the proper balance.
