
Improve Streaming Video Ad Revenue Blog Series

Making The Most of Streaming Video Advertising

As streaming video has grown, so too has the opportunity for advertising. Although many brands still use traditional television for their ad spots, a growing number are shifting ad spend toward streaming platforms as those platforms adopt ad-supported (AVOD) subscription tiers. Hulu, Peacock, and Paramount all offer both ad-supported and ad-free subscriptions. And as the recent news about Netflix’s subscriber losses and its likely launch of an ad-supported tier shows, everyone is acknowledging, or coming around to the conclusion, that ads are a viable part of the revenue stream. Traditional SVOD providers adding ad-funded tiers, combined with pure-play ad-supported services like PlutoTV and Tubi, and with more people watching more online video, results in an ever-growing pool of ad opportunities for brands (which is perhaps why more brands are shifting television ad budgets to streaming video). But a lot of ads doesn’t automatically mean a lot of ad revenue. Errors and a lack of insight can leave ad dollars on the table, and both have to be addressed to improve streaming video ad revenue.

But, let’s face it, streaming video advertising is nothing like broadcast television advertising.

Sure, there are similarities, and streaming video has even adopted broadcast advertising technologies like ad signaling. But the process of inserting ads into streaming video is an order of magnitude more complicated than broadcast. There are so many systems in play, from DSPs to ad servers to the player itself, which means there are countless ways something could go wrong. Still, there are major benefits to streaming video advertising, the most notable being the data. Unlike traditional broadcast, an incredible amount of data is available to advertisers when delivering via streaming video, allowing them to target ads against viewer demographics and viewer behaviors. Whether in aggregate or individually, advertising in streaming video can be distilled down to the very session between a viewer and the content. Imagine if you could do that in traditional broadcast.

The Complexity of Streaming Video Ad Data

Yet that data is also part of the complexity. With so many systems throwing off so much data, it can be very difficult to achieve the visibility needed to take advantage of the promise of targeting (we have tried to illustrate the variety of systems in the Datazoom Datatecture for Streaming Video). So what is it about the data that makes things so difficult?

  • Fragmentation. There is no standard way to store or transmit data, especially amongst different technology vendors within the ad technology stack. That fragmentation creates a lot of issues for companies that want to remain nimble and agile. That’s why we’ve created a data dictionary to help standardize values and close the fragmentation gap (a minimal sketch of that kind of field normalization follows this list).

  • Post-processing. When data is really fragmented, ad, product, and operations teams have to spend countless hours piecing together a picture of behavior or errors by post-processing the data so it aligns. That’s time that could instead be spent resolving the issue or acting on viewer behavior.

  • Visibility. Although many technology vendors within the data ecosystem of the streaming video stack make data available programmatically, they all use proprietary visualization tools. And even if you pull the data into your own dashboard, how many sources does it take until the dashboard is so complex that it defeats the purpose of data visualization?
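To make the fragmentation and post-processing points concrete, here is a minimal sketch of the kind of field normalization a data dictionary enables. The vendor names, field names, and "standard" names below are illustrative assumptions, not the actual Datazoom data dictionary.

```python
# Minimal sketch: map vendor-specific ad event fields onto one standard schema.
# The vendor names, field names, and "standard" names here are hypothetical.

FIELD_MAP = {
    "vendor_a": {"adId": "ad_id", "errCode": "ad_error_code", "posSec": "ad_position_sec"},
    "vendor_b": {"creative_id": "ad_id", "error": "ad_error_code", "offset": "ad_position_sec"},
}

def normalize(event: dict, vendor: str) -> dict:
    """Rename a raw vendor event's fields to the standard dictionary names."""
    mapping = FIELD_MAP[vendor]
    return {mapping.get(key, key): value for key, value in event.items()}

raw_a = {"adId": "1234", "errCode": 402, "posSec": 15}
raw_b = {"creative_id": "1234", "error": 402, "offset": 15}

# Both vendors now produce the same shape, so no per-source post-processing is needed.
assert normalize(raw_a, "vendor_a") == normalize(raw_b, "vendor_b")
```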

A Blog Series To Help You Improve Streaming Video Ad Revenue Through Better Use of Data

In this blog series, we will explore best practices, using a Datazoom integration with Looker as an example, to overcome those challenges and enable streaming operators and brands to improve the revenue and impact of their advertising. Five posts will cover everything from collecting the data to optimizing performance:

  • Step 1: Combining ad data from different sources into a single feed

  • Step 2: Setting and monitoring KPIs to improve ad performance

  • Step 3: Creating a visualization of advertising KPIs across the business

  • Step 4: Identifying and reducing ad errors to decrease opportunity cost

  • Step 5: Understanding and optimizing the relationship between ad success and content

Each of these blogs is based upon the Datazoom webinar featuring Alex Savage, Director of Digital Advertising for ABS-CBN.

There’s no need for you to leave money on the table. With the right data visualized through meaningful KPIs, you can increase CPM, decrease errors, and generate more revenue from your streaming video ads.
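As a preview of the kinds of KPIs the series will work with, here is a minimal sketch of computing fill rate, error rate, and effective CPM from a handful of ad-opportunity records. The field names and KPI formulas are illustrative assumptions; your own definitions may differ.

```python
# Rough sketch of ad KPIs computed from a list of ad-opportunity records.
# Field names and the KPI formulas used here are illustrative, not prescriptive.

ad_events = [
    {"requested": True, "filled": True,  "played": True,  "error": False, "revenue": 0.012},
    {"requested": True, "filled": True,  "played": False, "error": True,  "revenue": 0.0},
    {"requested": True, "filled": False, "played": False, "error": False, "revenue": 0.0},
]

requests = sum(e["requested"] for e in ad_events)
fills = sum(e["filled"] for e in ad_events)
errors = sum(e["error"] for e in ad_events)
impressions = sum(e["played"] for e in ad_events)
revenue = sum(e["revenue"] for e in ad_events)

fill_rate = fills / requests                          # opportunities that were filled
error_rate = errors / fills if fills else 0.0         # filled ads that failed to play
effective_cpm = (revenue / impressions) * 1000 if impressions else 0.0

print(f"fill rate {fill_rate:.0%}, error rate {error_rate:.0%}, eCPM ${effective_cpm:.2f}")
```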


How To Put The Datatecture to Real Use

Over the last several posts, we’ve talked about different areas of the Datatecture and why they are important. But putting together a Datatecture for your own streaming video stack is more than just selecting technologies. The companies and technologies listed in the many Datatecture categories represent the lifeblood of a streaming service: data. From ingest to playback and encoding to security, every selected component is a means to derive greater insight into what’s happening within the workflow.

Having that insight can mean the difference between growth and churn. But even if you want to collect and use as much data as possible, there are significant challenges in doing so.

Using The Datatecture to Overcome The Challenges of Collecting All That Data

One of those challenges is data fragmentation. Each of those components within the Datatecture is an island unto itself. There is no impetus for an encoding vendor to share data with a CDN and yet, as the Datatecture illustrates, all of that data is connected. Deriving insights from data within the streaming video technology stack requires linking those datasets, which means supporting, and ensuring compliance with, common data elements such as a shared session or user ID. Getting every vendor to accommodate an individual customer’s requirements can feel like herding cats. And even when it is possible, it requires a tremendous amount of data post-processing, valuable time that could instead be spent optimizing the workflow to improve viewer QoE.
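To illustrate why a common session or user ID matters, here is a minimal sketch of linking two sources once they share one. The sources and field names are hypothetical; the point is that without the shared key, this join isn’t possible at all.

```python
# Minimal sketch: relate delivery data to playback experience via a shared session ID.
# Both sources and their field names are hypothetical.

cdn_logs = [
    {"session_id": "abc", "cdn": "cdn-1", "throughput_kbps": 4200},
    {"session_id": "def", "cdn": "cdn-2", "throughput_kbps": 1800},
]
player_events = [
    {"session_id": "abc", "rebuffer_count": 0},
    {"session_id": "def", "rebuffer_count": 3},
]

# Index one source by session ID, then fold the other source into it.
by_session = {row["session_id"]: dict(row) for row in cdn_logs}
for event in player_events:
    by_session.setdefault(event["session_id"], {}).update(event)

for session in by_session.values():
    print(session)   # each record now spans both datasets
```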

Another challenge is a lack of interoperability. Vendors within the stack have no reason to work with one another, which creates a rather thorny problem: variable naming. With no standardization of names, vendors can end up collecting the same data using different nomenclature (or worse, different data using the same nomenclature), making it very hard to normalize and reconcile during post-processing. It can become almost a jigsaw puzzle to ferret out the relationships between data sources.

A final challenge is data accessibility. Even as many of the technologies within the stack have become virtualized and available as cloud-based services or software, there is still no standard way to access the data. Some vendors provide proprietary interfaces, others provide ways to export data, and still others provide APIs for programmatic access. And even when the majority offer APIs through which to gather the data, there is no guarantee those APIs are consistent: some may be REST, some SOAP, and some based on OpenAPI. The lack of a standard way to access data again puts a tremendous strain on the streaming operator’s engineering resources, as developers must build and maintain connectivity to each data source while also contending with normalization.
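One common way to contain that strain is to hide each vendor’s access method behind a single connector interface, so downstream code never cares whether the source is REST, SOAP, or an export. Below is a rough sketch under that assumption; the class names and endpoints are placeholders, not any vendor’s actual API.

```python
# Sketch: one connector interface over heterogeneous data-source APIs.
# Endpoints, payload shapes, and class names are placeholders.

from abc import ABC, abstractmethod

class SourceConnector(ABC):
    """Every connector exposes the same method, regardless of transport."""
    @abstractmethod
    def fetch_events(self, since: str) -> list[dict]:
        ...

class RestConnector(SourceConnector):
    def __init__(self, base_url: str):
        self.base_url = base_url

    def fetch_events(self, since: str) -> list[dict]:
        # e.g. GET {base_url}/events?since=... and parse the JSON response
        return []

class SoapConnector(SourceConnector):
    def fetch_events(self, since: str) -> list[dict]:
        # e.g. build a SOAP envelope, post it, and parse the XML response
        return []

def collect(connectors: list[SourceConnector], since: str) -> list[dict]:
    """Downstream code only ever sees one normalized list of events."""
    events: list[dict] = []
    for connector in connectors:
        events.extend(connector.fetch_events(since))
    return events
```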

From Metrics to Raw Data to Observability

There are many issues within the Datatecture that can create challenges to deriving insight but, ultimately, it comes down to post-processing, which requires serious data engineering effort. The more challenges there are, the more time is required to deal with the data after it has been collected. And the more time that has to be spent on connectivity issues, normalization, and fragmentation, the longer business decisions take to make, like which part of the workflow to optimize or where the root cause of an issue resides.

Many technology vendors provide a visualization tool to look at the data. Unfortunately, these tools often surface a metric rather than expose raw data. That metric is typically a calculation carried out against one or more data points using an equation or algorithm the vendor has created but not shared. Although metrics can be useful, they are only meaningful within the context of that tool, which doesn’t help in identifying patterns and relationships between data sources or insights seen in other tools. When looking to derive real insight across the streaming workflow, the viewer experience, and viewer behavior, metrics alone are insufficient.

The natural evolution, though, has been to offer direct access to the data itself. In most cases, this is done programmatically (although, as mentioned before, not necessarily in a standardized way). But pulling the data from dozens of technologies within the streaming video stack still runs up against the challenge of normalization. Yes, you can create a giant datalake with all of the data from all of those sources and, yet, post-processing can still be a nightmare.

What streaming operators want, and what many industries have already begun to build towards, is observability. This is the concept of using filtering or normalizing middleware to handle data relationships before the data hits the lake. What that means is that when an operations engineer pulls up the primary visualization tool, like a custom Looker or Datadog dashboard, they don’t have to figure out how the data is related. They can see bigger patterns across many of the data sources within the Datatecture and even derive remarkable insights that can positively impact the business, like the effectiveness of certain ad types on certain devices within specific content. Combined with rich metadata about ads, viewers, and content, this can be truly insightful and end up optimizing ad targeting and, ultimately, CPM.

How Data Enrichment Can Improve Observability

With all of your data gathered from throughout your Datatecture, you can begin your journey towards enabling observability within your business. One way to do that more efficiently is through data enrichment. Remember that you’ll need some middleware or other process between gathering and storing the data to normalize and relate data sources. Instead of relying on computation to infer relationships, you can simply add data into the stream from other sources. For example, when ad events are captured from the player, you could enrich that stream with data from your ad manager, such as Google Ad Manager. Because the enrichment keys off an ad ID that is already shared between the ad server (for delivery), the player (for display), and the ad campaign (within GAM), there is no post-processing left to do: the stream of data is immediately useful when it hits the datalake.
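Here is a minimal sketch of that enrichment step, keyed on the shared ad ID. The campaign fields are illustrative; a real pipeline would pull them from your ad manager’s reporting data rather than a hard-coded lookup.

```python
# Minimal sketch: widen a player ad event with campaign data before it reaches
# the datalake, joining on the ad ID both systems already share.
# The campaign fields below are hypothetical stand-ins for ad manager data.

campaign_lookup = {
    "ad-1234": {"campaign": "spring-sale", "advertiser": "Acme", "deal_type": "direct"},
}

def enrich(player_event: dict) -> dict:
    """Attach campaign metadata to a raw player ad event using the ad ID."""
    enriched = dict(player_event)
    enriched.update(campaign_lookup.get(player_event["ad_id"], {}))
    return enriched

event = {"session_id": "abc", "ad_id": "ad-1234", "event": "ad_complete"}
print(enrich(event))
```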

A Video Data Platform Can Help You Actualize Your Datatecture

The Datatecture is a fundamental layer within the streaming video technology stack. In fact, it sits right above your infrastructure, a river of data flowing between components and technologies that can provide valuable insight for both the business and the streaming service. But gaining the observability you need can require a lot of upfront development, which raises the question, “is building observability a differentiator for your business?” The answer is probably no. The success of a streaming service isn’t gauged by how well it can gain insight from its Datatecture. Rather, success comes from unique content, reliability, availability, and the user experience.

Datazoom is a powerful Video Data Platform which enables you to connect all the components within your Datatecture into a single stream of data, enrich it with third-party sources, and deliver it anywhere you need it to be, such as a datalake or existing visualization tools.


To learn more, visit and explore the Datatecture site.


Understanding the Datatecture Part 4: Workflow Deep-Dive

In Part four of this series, we dig into some of the deeper layers of the Streaming Video Datatecture in the Workflow category, defining many of the individual sub-categories and explaining their purpose in the broader workflow.


As we covered in the first post of the series, the Datatecture is governed by three main categories: Operations, Infrastructure, and Workflow. Within these categories are a myriad of sub-categories, often branching into even more specific groups. This structure isn’t intended as a parent-child hierarchy. Rather, it is just a way of illustrating relationships between specific components and categories of functionality. For example, there are many systems and technologies within analytics that don’t compete against each other because they handle different sets of data, from video player metrics to customer behavior.

What is Workflow?

As was discussed in the initial blog post, Workflow refers to the core systems which enable a stream to be ingested, secured, delivered and played.

Delivery, Security, Playback, Transformation, Monetization, and Content Recommendations

Within the Workflow category, there are six primary sub-categories which were outlined in the first post of this blog series. Let’s dig past those and go deeper into Workflow to understand the individual systems involved in this area of the Datatecture.

Delivery

At the heart of streaming is delivering a video stream to a viewer’s player. In technical terms, this most often means a web server sending video segments in response to an HTTP request. But there are many ways to accomplish that, as evidenced by the sub-categories within this Datatecture group:

  • Content Delivery Network (CDN). A CDN is a cache-based network which improves the act of responding to user requests for video segments by placing popular segments closer to the user and reducing the round-trip time. Most streaming operators employ multiple CDNs which have strengths in specific regions (because of network saturation and size) or overall global penetration. CDNs often work hand-in-hand with network operators (ISPs) by existing within their networks (as caching boxes) or terminating at their networks in peering fabrics. There are three primary types of CDNs: private networks, cloud deployments, and algorithm-based (Akamai being the notable example). Private networks often employ leased wavelengths with their own optical gear to build a private loop network. Cloud deployments leverage existing Cloud Service Providers (CSPs) to provide distribution and scale without having to build physical infrastructure.

  • Ultra-Low Latency Streaming. Certain use cases, such as online gambling, which require real-time interaction, need sub-second delivery. Often relying on non-traditional streaming technologies like WebRTC, these services (sometimes offered by traditional CDNs) ensure super-fast round-trip times at the cost of scalability.

  • Multicast ABR. Streaming has historically been a unicast approach: each user that requests a stream gets their own unique copy of it. This is because streaming is often delivered over-the-top (OTT) and relies on the public internet for last-mile delivery. The distributed nature of the internet doesn’t provide the network services to manage that delivery like a traditional broadcast network (multicast). So, when there are millions of concurrent users, the unicast approach can require significant bandwidth and ultimately force a reduction in quality to meet bandwidth constraints. Multicast Assisted Adaptive Bitrate, or Multicast ABR, is a suite of technologies to enable the use of multicast (a single stream that is consumed by every viewer) over the internet.

  • Peer-to-Peer (P2P) Streaming. P2P streaming is the use of peers, such as a viewer, to deliver content to other viewers in a very limited geographic region. The technology “seeds” peers with video segments. These peers act as local caches for other peers within the P2P network. This network approach can significantly reduce bandwidth requirements for a platform operator by taking advantage of available viewer bandwidth they might not be using. P2P can be an especially useful approach for live content when working in conjunction with a traditional CDN.

Security

Unlike traditional broadcast, which has a closed end-to-end system (from network operator to set-top box), streaming is a more open ecosystem. As such, content rights holders must utilize other technologies to ensure the security of their content when delivered via streaming. These methods can include:

  • Geo IP. This security technology attempts to limit access to only those viewers who meet specific geographic requirements. For example, a streaming operator may only have rights to distribute content in a specific geography. If viewers from outside that geography attempt to access the content, they can be blocked by resolving their IP address to a geographic location and comparing it against allowlisted locations (a minimal sketch of this check follows this list).

  • Digital Rights Management (DRM). This security technology employs encryption and decryption to keep content secured. A viewer that has purchased rights to watch content can be provided a license. When they request to watch DRM-encrypted content, the license is checked against a licensing server to verify rights. If rights are verified, the player can decrypt the content.

  • Watermarking. In some cases, such as live content, DRM may not be a viable option (as it can introduce additional latency). In these cases, watermarking can be a significant deterrent. The watermark is layered into the frames of a video pixel-by-pixel. The resulting pattern of pixel manipulation is a binary hash representing critical data about the content such as who originally purchased it, the IP address of the purchasing user, etc. If watermarked content is found on the internet, forensic technologies can pull the data from the watermark to identify how the content was made available.
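As a simple illustration of the Geo IP check described above, here is a minimal sketch. The lookup function is a stand-in for a real geolocation database or service, and the allowed territories and IP addresses are hypothetical.

```python
# Sketch of a Geo IP gate: resolve the client IP to a country and compare it
# against the territories the operator holds rights for. `lookup_country` is a
# placeholder for a real Geo IP database or service.

ALLOWED_COUNTRIES = {"US", "CA"}   # illustrative rights territory

def lookup_country(ip_address: str) -> str:
    """Placeholder lookup; a real implementation would query a Geo IP database."""
    return {"203.0.113.7": "US", "198.51.100.9": "DE"}.get(ip_address, "UNKNOWN")

def is_playback_allowed(ip_address: str) -> bool:
    return lookup_country(ip_address) in ALLOWED_COUNTRIES

print(is_playback_allowed("203.0.113.7"))   # True
print(is_playback_allowed("198.51.100.9"))  # False
```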

Playback

This is where the rubber meets the road. Unlike traditional broadcast, in which there is a single type of endpoint, streaming supports an almost infinite number of endpoints from which the viewer can consume content. In fact, any device with a screen and an operating system that can support a video player can be an endpoint. This includes SmartTVs, mobile phones, tablets, gaming consoles, and more. As such, this Datatecture category is broken down into a multitude of sub-categories which reflect both the endpoints and the player technologies themselves:

  • Devices. The sub-categories within this category represent the endpoints on which a video player might exist and allow playback of streaming content. These endpoints can include:

    • Connected TVs. These are TVs with a software platform that allows the installation of applications, such as a streaming service, which would include a player.

    • Gaming Consoles. Many gaming consoles, such as Microsoft XBox, Sony Playstation, and Nintendo Switch, include video player software for content playback.

    • Mobile. Not only do the main operating systems provide a player, but each OS also supports an application ecosystem which may include other players as well.

    • Set-Top Boxes/OS. These companies create IP-based STBs, which include a player component, as well as STB operating systems that can be installed on generic hardware, include built-in video player software, and sometimes support the installation of third-party players.

    • Connected Streaming Devices. Perhaps the newest entrant in the endpoint category, these represent self-contained platforms for users to consume video from a variety of service providers. They are similar to a SmartTV but portable, so they can be moved from television to television. They include built-in video player software as well as supporting third-party applications, such as a streaming service, that can also include proprietary video player software.

  • Players. The sub-categories within this category represent the three main flavors of player implementation:

    • Commercial. These are companies which have created and support video player software that can be installed within an application or as a standalone implementation.

    • Open-Source. Similar to commercial but without the price tag, open source player technology includes software created and supported by a community of developers.

    • Offline. A key functionality of many streaming platforms is the ability for the viewer to download a movie and watch offline (rather than streaming). To facilitate this, the player functionality needs to support it. Rather than building such functionality into a commercial or open-source player, some streaming operators opt for a commercial player that can support download-to-go functionality.

Transformation

Unlike traditional broadcast, streaming video must be transformed (encoded and packaged) prior to delivery to provide a stream which does not consume all the available bandwidth. What’s more, different player implementations on different devices (often a reflection of licensing costs) require different formats. All in all, this can significantly complicate the video workflow by requiring operators to support multiple codecs and packaging formats. The sub-categories within this Datatecture group represent the technologies which streaming platforms use to ensure the content is consumable at the viewer endpoints. This can sometimes happen in real-time.

  • Encoding. This is the process by which the source material, say from camera acquisition, is converted into a format playable by an endpoint. This requires a specific codec which is often optimized for the kind of delivery, such as broadcast versus streaming. Once encoded, the endpoint player will also need the same codec to do the decoding. There are a variety of ways to encode, ranging from on-premise equipment (most often with traditional broadcast) to virtualized encoders (offering scalability) to encoding-as-a-service providers (which obviate the need to keep the encoding software up-to-date).

  • Transcoding. This technology represents the re-encoding of content into a different format without changing the underlying aspect ratio of the content. Transcoding is the primary technology employed in adaptive bitrate (ABR) ladders, allowing endpoint players to “switch” between bitrates depending upon the current parameters of the environment, such as available bandwidth, CPU, and memory. Transcoding can happen via commercial and open-source software (e.g., FFmpeg) as well as service providers (a minimal sketch of building an ABR ladder follows this list). Unlike encoding, it can also happen in real-time, enabling streaming operators to deliver specific renditions when requested.

  • Packaging. Packaging is a group of technologies to “wrap” encoded or transcoded content into a format that is playable by the endpoint. There are a host of popular packaging formats including Apple HLS, MPEG-DASH, and CMAF. Streaming operators can build their own packaging services or opt to utilize a service provider. In the latter case, there is little maintenance involved for the streaming provider, and they can rest assured that the packages are always up-to-date.

  • Metadata. One of the fundamental differences between streaming and broadcast content is metadata. This data, which is part of the streaming package, represents information about the content, from the title to the content creator to the actors and other details. Metadata is crucial to streaming platforms as it provides the means by which content can be organized and recommended. The providers within this Datatecture group represent stores of content metadata from which a streaming provider can draw to add metadata to their content.
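To make the transcoding step concrete, here is a minimal sketch of producing a small ABR ladder by driving FFmpeg from Python. It assumes FFmpeg is installed and that "source.mp4" is a placeholder mezzanine file; real ladders are tuned per codec, content type, and delivery target, and packaging into HLS or DASH would follow as a separate step.

```python
# Minimal sketch: build three ABR renditions of a placeholder source file with FFmpeg.
# Assumes the ffmpeg binary is on the PATH; bitrates and heights are illustrative.

import subprocess

RENDITIONS = [        # (height, video bitrate)
    (1080, "6000k"),
    (720, "3000k"),
    (480, "1500k"),
]

for height, bitrate in RENDITIONS:
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", "source.mp4",
            "-vf", f"scale=-2:{height}",          # keep aspect ratio, set height
            "-c:v", "libx264", "-b:v", bitrate,   # encode video at the target bitrate
            "-c:a", "aac", "-b:a", "128k",        # re-encode audio
            f"rendition_{height}p.mp4",
        ],
        check=True,
    )
# Packaging (e.g. into HLS or DASH) would happen as a separate step or service.
```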

Monetization

The transition from broadcast distribution to streaming distribution is fraught with technical challenges. One of those is monetization, especially for streaming operators that have opted for advertising-based distribution models (rather than, or in conjunction with, subscriptions). The delivery of advertising in a traditional television broadcast is based on numerous standards with technology that has been tested and improved over time. With streaming, though, monetization of a video platform, such as embedding advertising into the videos, can involve a multitude of technologies which often aren’t built to interoperate. Furthermore, streaming operators are still gathering data to better understand the translation of the broadcast television advertising model to the streaming ecosystem. The sub-categories within this Datatecture group reflect the myriad of technologies involved in monetizing streaming video.

  • Paywall. As the name suggests, this is a barrier between free content and content which the viewer must pay to watch. This monetization strategy can often complement an advertising-based approach and be used to create FOMO, which can drive viewers toward more consistent and predictable revenue, such as a subscription.

  • Advertising Systems.

    • Supply-side Platforms (SSPs). SSPs are software used to sell advertising in an automated fashion and are most often used by online publishers to help them sell display, video, and mobile ads. SSPs are designed for publishers to do the opposite of a DSP: to maximize the price their impressions sell for. SSPs and DSPs utilize very similar technologies.

    • Ad Exchange. An ad exchange is a digital marketplace which enables advertisers and publishers to buy and sell advertising space, often through real-time auctions. They’re most often used to sell display, video, and mobile ad inventory.

    • Video Ad Insertion. Getting advertisements into a video stream is in no way as easy or straightforward as doing so in broadcast television. Streaming workflows which want to monetize content through advertising need technology to stitch the ad into the video stream. This process can happen server-side (SSAI) or client-side (CSAI). SSAI is often used for live content while CSAI is more commonly used for on-demand content.

    • Buy-Side Ad Servers. Buy-Side Ad Servers are video ad servers utilized by the advertiser.

    • Ad Network. An ad network aggregates ad inventory from many publishers and sells it to advertisers, often packaged by audience, content category, or format.

    • Video Ad Servers. An ad server is a technology which manages, serves, tracks, and reports on online advertising campaigns. The process by which ad servers operate is relatively simple. First, a user starts watching a video and the publisher’s ad server receives a request to display an ad. Second, once the ad server receives the request, it examines the data to choose the most appropriate ad for the viewer. The ad tag contains an extensive list of criteria supplied by the advertiser, and ads are selected based on factors such as age, geography, size, and behavior (a minimal sketch of this matching step follows this list). Third, once the best match has been made, it’s passed to the video ad insertion technology (again, client-side or server-side), which delivers it to the player for playback. Finally, the player gathers information about the user’s interaction with the ad, such as clicks, impressions, and conversions.

    • Demand-Side Platforms (DSPs). Demand-side Platforms (DSPs) are used by marketers to buy ad impressions from exchanges as cheaply and as efficiently as possible. These are the marketer’s equivalent to the SSP.
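To illustrate the matching step described above, here is a toy sketch of selecting a campaign against viewer and slot criteria. The fields and the highest-CPM-wins policy are simplifications for illustration, not any particular ad server’s decisioning logic.

```python
# Toy sketch of ad decisioning: keep campaigns whose targeting fits the viewer
# and the slot, then prefer the highest-paying one. All fields are illustrative.

campaigns = [
    {"id": "cmp-1", "min_age": 18, "countries": {"US"}, "duration": 30, "cpm": 22.0},
    {"id": "cmp-2", "min_age": 13, "countries": {"US", "CA"}, "duration": 15, "cpm": 15.0},
]

def select_ad(viewer, slot_seconds):
    eligible = [
        c for c in campaigns
        if viewer["age"] >= c["min_age"]
        and viewer["country"] in c["countries"]
        and c["duration"] <= slot_seconds      # the creative must fit the ad slot
    ]
    return max(eligible, key=lambda c: c["cpm"], default=None)

print(select_ad({"age": 25, "country": "US"}, slot_seconds=30))   # picks cmp-1
```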

Content Recommendation

Perhaps one of the most exciting aspects of delivering video via streaming rather than broadcast is data. With streaming video, there is a myriad of data generated from each view, data that is not available in a broadcast environment. As such, streaming platform operators can tailor the viewing, content, and even advertising experience more tightly to each individual viewer, providing a far more personalized experience. One of the technologies that enables this is content recommendation. Often packaged as an “engine,” these software components sit within the delivery workflow, analyze data and, using the metadata attached to each piece of content, recommend content for the viewer to watch based on what they, or people like them, have watched. This can significantly improve engagement metrics, such as viewing time, as well as revenue.
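As a toy sketch of metadata-driven recommendation, the snippet below scores unwatched titles by how many genre tags they share with what a viewer has already watched. The catalog is hypothetical, and real engines layer collaborative signals, recency, and business rules on top of this kind of overlap.

```python
# Toy sketch: recommend unwatched titles that share the most genre tags with
# the viewer's watch history. Catalog and titles are hypothetical.

catalog = {
    "title-a": {"genres": {"drama", "crime"}},
    "title-b": {"genres": {"comedy", "romance"}},
    "title-c": {"genres": {"crime", "thriller"}},
}

def recommend(watched, top_n=2):
    watched_genres = set().union(*(catalog[t]["genres"] for t in watched))
    candidates = [t for t in catalog if t not in watched]
    return sorted(
        candidates,
        key=lambda t: len(catalog[t]["genres"] & watched_genres),
        reverse=True,
    )[:top_n]

print(recommend({"title-a"}))   # titles sharing the most genres come first
```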

The Workflow is a Process

Unlike the other two categories, Infrastructure and Operations, the Workflow category of the Datatecture represents a somewhat linear progression: content is transformed, secured, delivered, played back, and monetized. Of course, some of the individual technologies may be integrated within different functional components of the workflow (such as watermarking happening during transformation) but there is generally a flow within the workflow pipeline. What this demonstrates, like in the other categories, is a very intricate web of technologies which must all work in harmony to provide a scalable, resilient, and high-performing streaming service.


To learn more, visit and explore the Datatecture site.
