
When Viewers Chromecast, Are You Left In The Dark?

When Google Chromecast entered the market in 2013, it exploded in popularity because it gave viewers a way to watch streaming services, like YouTube, on the big screen at a time when there was no easy way to access them on smart TVs. And although the landscape has matured significantly over the subsequent nine years, casting remains a popular way to watch on the big screen. In fact, according to Conviva’s State of Streaming Q1 2022, Chromecast accounts for 4.3% of total streaming activity.

So casting is probably part of your streaming traffic, at least 4.3% of it. But what happens when a viewer starts a content session on their mobile phone and elects to cast it to their smart TV? Who knows, because you are now in the dark.


Understanding the technology of casting to Chromecast

When a viewer initiates a casting session, the application in which they started the video and clicked or tapped the “cast” button is called the Sender. The Sender essentially slings the video stream from the device to a player on the Chromecast. That player comes in one of three types:

  • Default Receiver. This is the basic player app hosted by Google.

  • Styled Receiver. This is Google’s basic player app plus some customized user interface changes so the player that appears on the TV better matches the publisher’s brand.

  • Custom Receiver. This is a dedicated HTML5 page built & hosted by the publisher, giving them advanced control of the playback experience on the Chromecast device.

This would be very similar to a viewer opening up Chromecast on the TV and typing a URL into a player or selecting an app and launching a content title. But, of course, that’s not how Chromecast works.
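To make that Sender-to-Receiver relationship concrete, here is a minimal sketch of how a web Sender typically targets a Custom Receiver using Google’s Cast Web Sender framework. The receiver application ID and manifest URL are placeholders, and the exact options your app needs may differ.

```javascript
// Minimal web Sender sketch using the Google Cast Web Sender framework.
// 'YOUR_CUSTOM_RECEIVER_APP_ID' is a placeholder for the app ID assigned
// when you register your Custom Receiver with Google.
window['__onGCastApiAvailable'] = function (isAvailable) {
  if (!isAvailable) return;

  // Point the Cast framework at the Custom Receiver rather than the
  // Default Media Receiver.
  cast.framework.CastContext.getInstance().setOptions({
    receiverApplicationId: 'YOUR_CUSTOM_RECEIVER_APP_ID',
    autoJoinPolicy: chrome.cast.AutoJoinPolicy.ORIGIN_SCOPED
  });
};

// Once the viewer taps the cast button and a session exists, the Sender
// "slings" the stream by telling the Receiver what to load.
async function castStream(manifestUrl) {
  const session = cast.framework.CastContext.getInstance().getCurrentSession();
  const mediaInfo = new chrome.cast.media.MediaInfo(manifestUrl, 'application/x-mpegurl');
  await session.loadMedia(new chrome.cast.media.LoadRequest(mediaInfo));
}
```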


What happens to the player data during a Chromecast session?

The challenge is that when the user initiates a “cast” session, the burden of capturing behavioral data (like stopping, starting, pausing, fast-forwarding, ads watched, etc.) is usually handed off to the Chromecast receiver, since that is now the player the viewer is actually interacting with. It’s possible for the receiver to send some data back to the sender (from the Chromecast to the app where the cast began), but, depending on the implementation of this relationship, the sender may disconnect during playback, which would stop even that data stream. And although you can capture the cast activity within the sending application (from which the user initiated the “cast”), you can’t capture data after the cast unless you have some code running on the Chromecast itself. It’s probably pretty clear that this can get complicated, as you need to maintain collection code on multiple players.

That’s where the Datazoom SDK comes in.

The Datazoom SDK is a piece of JavaScript code you can install in a sender application and within a Chromecast custom receiver page. This will allow Datazoom to collect playback data during a user’s casting session.
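For context, a Custom Receiver is just an HTML5 page hosted by the publisher, so the collection code runs alongside the Cast receiver framework on that page. The sketch below shows only the receiver-side startup; the Datazoom collector’s actual script URL and initialization call are SDK-specific and not shown here.

```javascript
// Custom Receiver page sketch (runs on the Chromecast).
// The page loads the Cast Application Framework (CAF) receiver library and,
// alongside it, the publisher's data-collection script (the Datazoom
// collector's real include and init call are omitted; they are SDK-specific).
const context = cast.framework.CastReceiverContext.getInstance();

// Any player customization or data-collection setup happens before start().

// Start the receiver so Senders can connect and load media.
context.start();
```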


How the Datazoom Chromecast collector system works

Whenever a Datazoom Collector SDK is instantiated, an app_session_id is generated to help tie together all of the data that belongs to that session. When media is requested, a content_session_id is also created to help identify events and data that belong to a specific media playback session. During casting, both the Sender & Receiver will have active App & Content sessions.
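For illustration, an event emitted by a Collector might carry both identifiers so downstream analysis can group it, roughly like the sketch below. Only app_session_id and content_session_id come from the description above; the event name and other fields are hypothetical.

```javascript
// Hypothetical shape of a single collected event.
// Only app_session_id and content_session_id are taken from the text above.
const playbackEvent = {
  event: 'playback_start',        // illustrative event name
  app_session_id: 'a1b2c3d4',     // generated when the Collector SDK is instantiated
  content_session_id: 'e5f6g7h8', // generated when media playback is requested
  timestamp: Date.now()
};
```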

When the user chooses to connect to a Chromecast and begin casting the content to the biggest available screen, the Datazoom SDKs on both the Sender & Receiver coordinate a data collection hand-off. This signals that the user has begun casting and that the casting application has the Datazoom SDK as well. This process tells you, during data analysis, that data about the viewing session will come from the Chromecast rather than the player application on the device.

We call this orchestration.

When the Sender Collector SDK & the Receiver Collector SDK complete a successful orchestration handshake, each fires an event to Datazoom called cast_transfer. This event signals that a handoff has taken place between the Sender Collector and the Receiver Collector. It will only fire if a Datazoom Collector SDK is present on both the Sender & Receiver applications.

So do you absolutely need the Datazoom SDK JavaScript on both the Chromecast and in your player? The short answer is yes…if you want visibility.

Three use cases for data collection during a Chromecast session

To understand the relationship between the sender and the Chromecast, think about these three use cases:

  • Datazoom SDK is only installed on the sender device

  • Datazoom SDK is installed on both Chromecast and sender device

  • Datazoom SDK is only installed on the Chromecast

#1: Datazoom SDK is only installed on the sender device.

Maybe you’ve elected to use the Default or Styled Chromecast receiver. Or maybe you don’t think you need any programmatic hooks on that device. Regardless, when casting begins, the Datazoom SDK on the sender fires but there is no orchestration so all you’ll see, in your data analysis, is the cast_start event followed by any data from the sender. If the sender disconnects from the Chromecast device, data collection will stop.

#2: Datazoom SDK is on both devices

You’ve decided that you want to collect viewer behavioral data on both your sending device and the Chromecast. When casting begins, the handshake is initiated and the following takes place:

  1. The sender application sends its session identifiers to the Chromecast and the Chromecast sends its session identifiers to the sender

  2. The sender application sends a cast_transfer event to Datazoom which contains the Chromecast’s identifiers which were provided during the handshake

  3. The Chromecast sends a cast_transfer event to Datazoom which contains the sender’s session identifiers which were provided during the handshake

  4. The sender application stops sending more events until after casting ends

  5. The Chromecast sends all selected Collector data to Datazoom, including both the sender and Chromecast session identifiers (to facilitate data analysis)
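As a rough sketch of what that exchange produces, the two cast_transfer events might look like the following, with each side carrying the other side’s identifiers. Field names beyond the session identifiers are illustrative, not the actual Datazoom schema.

```javascript
// Hypothetical cast_transfer payloads after a successful handshake.
// Field names other than the session identifiers are illustrative.

// Sent by the Sender application: carries the Chromecast's identifiers.
const senderCastTransfer = {
  event: 'cast_transfer',
  app_session_id: 'sender-app-123',
  content_session_id: 'sender-content-456',
  receiver_app_session_id: 'receiver-app-789',
  receiver_content_session_id: 'receiver-content-012'
};

// Sent by the Chromecast receiver: carries the Sender's identifiers.
const receiverCastTransfer = {
  event: 'cast_transfer',
  app_session_id: 'receiver-app-789',
  content_session_id: 'receiver-content-012',
  sender_app_session_id: 'sender-app-123',
  sender_content_session_id: 'sender-content-456'
};
```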

#3: Datazoom SDK is only installed on the Chromecast

In this case, the Chromecast collector will function just like a normal video player collector, sending data back to the Datazoom platform. But with no cast_transfer event linking it to a playback session on a sending device, there’s no way to reliably attribute the behaviors during the Chromecast session to a specific user.

How the Chromecast collector allows you to see the big picture about casting viewership behavior

You’ve hopefully come to the conclusion that it doesn’t make sense to install the Datazoom SDK on just one end of this relationship. To understand the Chromecast viewing session in relation to the viewer who cast it, you’ll need the Datazoom SDK integrated both with the sender (your app or website video player) and the Chromecast (only via the Custom Receiver). So what does analysis look like?

The Sender & Receiver data are represented by unique app & content sessions. Although their data, including events, metadata, and timers, can be analyzed independently (treating them as separate users) it is possible to join the two datasets together and create a comprehensive view showing the user journey, QoE, and time spent viewing across both devices. You can then derive all sorts of interesting insights, such as the amount of time a user casts versus watching through the sender device, behavior while casting (as it differs from watching on the sender app), etc.
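For example, a simplified way to stitch the two datasets together is to use the identifiers exchanged in the cast_transfer event as the join key. The sketch below assumes event objects shaped like the earlier examples; in practice this join would usually happen in your warehouse or BI tool rather than in application code.

```javascript
// Minimal sketch: attribute Chromecast (Receiver) events back to the
// Sender session that initiated the cast, using the identifiers carried
// by the Sender's cast_transfer event. Event shapes are illustrative.
function joinCastSessions(senderEvents, receiverEvents) {
  // Map receiver content_session_id -> sender content_session_id.
  const receiverToSender = new Map();
  for (const ev of senderEvents) {
    if (ev.event === 'cast_transfer') {
      receiverToSender.set(ev.receiver_content_session_id, ev.content_session_id);
    }
  }

  // Re-attribute each Receiver event to the originating Sender session.
  return receiverEvents.map(ev => ({
    ...ev,
    originating_content_session_id:
      receiverToSender.get(ev.content_session_id) ?? null
  }));
}
```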

Don’t be left in the dark when your viewers cast to Chromecast. Let Datazoom light your way

Although casting may not be a dominant way that users watch streaming video, it’s not inconsequential. As such, you need insight into the behavior that happens after the user has cast a video from their device to the big screen. This will help you not only create a better picture of your viewers but also provide insight to advertisers and other business partners. It can even inform your content strategy, for example by recommending commonly cast titles to users who cast more often.


Interested in learning more about how you can join data collection from end devices to the Chromecast? Contact Datazoom today.


Improve Streaming Video Ad Revenue Blog Series

Making The Most of Streaming Video Advertising

As streaming video has grown, so too has the opportunity for advertising. Although many brands still use traditional television for their ad spots, a growing number are shifting ad budgets toward video streaming platforms as those platforms adopt AVOD subscription tiers. Hulu, Peacock, and Paramount all offer both ad-supported and ad-free subscriptions. And as the recent news about Netflix’s loss of subscribers and its likely launch of an ad-supported tier shows, everyone is coming to the conclusion that ads are a viable part of the revenue stream. Traditional SVOD providers supporting ad-funded tiers, pure-play ad-supported streaming offerings like PlutoTV and Tubi, and more people watching more online video all add up to an ever-growing pool of ad opportunities for brands (which is perhaps why more brands are shifting television ad budgets to streaming video). But a lot of ads doesn’t automatically equate to a lot of ad revenue. Errors and lack of insight can leave ad dollars on the table. Improving streaming video ad revenue starts with better use of data.

But, let’s face it, streaming video advertising is nothing like broadcast television advertising.

Sure, there are similarities, and streaming video has even adopted broadcast advertising technologies like ad signaling. But the process of inserting ads into streaming video is an order of magnitude more complicated than broadcast. There are so many systems in play, from DSPs to ad servers to the player itself, meaning there are countless ways something could go wrong. Still, there are major benefits to streaming video advertising, the most notable being the data. Unlike traditional broadcast, there is an incredible amount of data available to advertisers when delivering via streaming video. This allows those advertisers to target ads against viewer demographics and viewer behaviors. Whether in aggregate or individually, advertising in streaming video can be distilled down to the very session between a viewer and the content. Imagine if you could do that in traditional broadcast.

The Complexity of Streaming Video Ad Data

Yet that data is also part of the complexity. With so many systems throwing off so much data, it can be very difficult to achieve the visibility that’s needed to take advantage of the promise of targeting (we have tried to illustrate the variety of systems in the Datazoom Datatecture for Streaming Video). So what is it about the data that makes it so difficult?

  • Fragmentation. There is no standard way to store or transmit data, especially amongst different technology vendors within the ad technology stack. That fragmentation creates a lot of issues for companies that want to remain nimble and agile. That’s why we’ve created a data dictionary to help standardize values and close the fragmentation gap.

  • Post-processing. When data is really fragmented, ad, product, and operations teams have to spend countless hours piecing together a picture of behavior or errors by post-processing the data to align better. That’s time spent that could be used to actually resolve the issue or take action against some viewer behaviors.

  • Visibility. Although many technology vendors within the data ecosystem of the streaming video stack make data available programmatically, they all use proprietary visualization tools. And even if you pull the data into your own dashboard, how many sources does it take until the dashboard is so complex that it defeats the purpose of data visualization?

A Blog Series To Help You Improve Streaming Video Ad Revenue Through Better Use of Data

In this blog series, we will explore best practices, using a Datazoom integration with Looker as an example, to overcome those challenges and enable streaming operators and brands to improve the revenue and impact of their advertising. Five posts will cover everything from collecting the data to optimizing performance:

  • Step 1: Combining ad data from different sources into a single feed

  • Step 2: Setting and monitoring KPIs to improve ad performance

  • Step 3: Creating a visualization of advertising KPIs across the business

  • Step 4: Identifying and reducing ad errors to decrease opportunity cost

  • Step 5: Understanding and optimizing the relationship between ad success and content

Each of these blogs is based upon the Datazoom webinar featuring Alex Savage, Director of Digital Advertising for ABS-CBN.

There’s no need for you to leave money on the table. With the right data visualized through meaningful KPIs, you can increase CPM, decrease errors, and generate more revenue from your streaming video ads.


How To Put The Datatecture to Real Use

Over the last several posts, we’ve talked about different areas of the Datatecture and why they are important. But putting together a Datatecture for your own streaming video stack is more than just selecting technologies. The companies and technologies listed in the many Datatecture categories represent the lifeblood of a streaming service: data. From ingest to playback and encoding to security, every selected component is a means to derive greater insight into what’s happening within the workflow.

Having that insight can mean the difference between growth and churn.  But even as you may want to collect and use as much data as possible, there are significant challenges in doing so.

Using The Datatecture to Overcome The Challenges of Collecting All That Data

One of those challenges is data fragmentation. Each of those components within the Datatecture is an island unto itself. There is no impetus for an encoding vendor to share data with a CDN and yet, as the Datatecture illustrates, all of that data is connected. Deriving insights from data within the streaming video technology stack requires linking those datasets, which means supporting and ensuring compliance of data elements such as a common session or user ID. It can be like herding cats to get all the vendors to accommodate an individual customer’s requirements. And even when it is possible, it requires a tremendous amount of data post-processing, valuable time that could instead be spent optimizing the workflow to improve viewer QoE.

Another challenge is a lack of interoperability. Vendors within the stack have no reason to work with one another which creates a rather thorny problem: variable naming. With no standardization of names, vendors can end up collecting the same data using different nomenclature (or worse, different data using the same nomenclature), making it very hard to normalize and reconcile when post-processing. It can become almost a jigsaw puzzle to ferret out the relationships between data sources.

A final challenge is data accessibility. Even as many of the technologies within the stack have become virtualized and available as cloud-based services or software, there is still no standard way to access their data. Some vendors provide proprietary interfaces, others provide ways to export data, and still others provide APIs for programmatic access. And even when the majority enable APIs through which to gather the data, there is no guarantee that the APIs are consistent. Some can be REST, some can be SOAP, and some can be based on OpenAPI. The lack of a standard way to access data again puts a tremendous strain on the streaming operator’s engineering resources, as developers must be tasked with building and maintaining connectivity to each data source even while contending with normalization.

From Metrics to Raw Data to Observability

There are many issues within the Datatecture that can create challenges to deriving insight but, ultimately, it comes down to post-processing, which requires serious data engineering effort. The more challenges there are, the more time must be spent dealing with the data after it has been collected. And the more time that must be spent on connectivity issues, normalization, and fragmentation, the longer business decisions, like which part of the workflow to optimize or where the root cause of an issue resides, take to make.

Many technology vendors provide a visualization tool to look at the data. Unfortunately, these tools can sometimes reflect a metric rather than expose raw data. This metric is often a calculation carried out against one or more data points using some equation or algorithm the vendor has created but not shared. Although metrics can be useful, they are only useful within the context of that tool, which doesn’t help in identifying patterns and relationships between data sources or insights seen in other tools. When looking to derive real insight across the streaming workflow, the viewer experience, and viewer behavior, metrics alone are insufficient.

The natural evolution has been, though, to offer direct access to the data itself. In most cases, this is done programmatically (although, as mentioned before, not necessarily in a standardized way). But, again, pulling the data from dozens of technologies within the streaming video stack still runs up against the challenge of normalization. Yes, you can create a giant datalake with all of the data from all of those sources and, yet, post-processing can still be a nightmare.

What streaming operators want, and what many industries have already begun to build towards, is observability. This is the concept of using some filtering or normalizing middleware to handle data relationships before the data hits the lake. What that means is that when the operations engineer pulls up the primary visualization tool, like a custom Looker or Datadog dashboard, they don’t have to figure out how the data is related. They can see bigger patterns across many of the data sources within the Datatecture and even derive remarkable insights which can positively impact the business, like the effectiveness of certain ad types on certain devices within specific content elements. Combined with rich metadata about ads, viewers, and content, this can be truly insightful and end up optimizing ad targeting and, ultimately, CPM.
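As a rough illustration of that kind of middleware, the sketch below normalizes vendor-specific field names to a common dictionary before events reach the datalake. The field mappings and event shapes are hypothetical; a real implementation would follow your own data dictionary.

```javascript
// Hypothetical normalization middleware: map vendor-specific field names
// onto a common data dictionary before events land in the datalake.
const FIELD_MAP = {
  cdnA: { bitrateKbps: 'bitrate_kbps', sessionGuid: 'content_session_id' },
  playerB: { currentBitrate: 'bitrate_kbps', sessionId: 'content_session_id' }
};

function normalizeEvent(source, rawEvent) {
  const mapping = FIELD_MAP[source] ?? {};
  const normalized = {};
  for (const [key, value] of Object.entries(rawEvent)) {
    // Rename known fields; pass unknown fields through unchanged.
    normalized[mapping[key] ?? key] = value;
  }
  return normalized;
}

// Two vendors reporting the same fact under different names normalize
// to the same shape.
normalizeEvent('cdnA', { bitrateKbps: 4500, sessionGuid: 'abc' });
normalizeEvent('playerB', { currentBitrate: 4500, sessionId: 'abc' });
```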

How Data Enrichment Can Improve Observability

With all of your data gathered from throughout your Datatecture, you can begin your journey towards enabling observability within your business. One of the ways to do that more efficiently is through data enrichment. Remember that you’ll need some middleware or other process between gathering and storing the data to normalize and relate data sources. Instead of using a purely computational means, you can simply add data into the stream from other sources. For example, when ad events are captured from the player, you could enrich that stream with data from your ad manager, like Google Ad Manager. Because the enrichment is based on an ad ID that is already shared between the ad server (for delivery), the player (for display), and the ad campaign (within GAM), very little extra processing is needed. This stream of data is immediately useful when it hits the datalake.
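As a simplified sketch of that idea, the enrichment step below merges a player ad event with campaign metadata keyed on the shared ad ID. The field names and the metadata lookup are illustrative, not a specific Google Ad Manager integration.

```javascript
// Hypothetical enrichment step: join a player ad event with campaign
// metadata from the ad manager, keyed on the shared ad ID.
const campaignMetadataByAdId = {
  'ad-123': { campaign: 'Spring Promo', advertiser: 'Acme', cpm: 18.5 }
};

function enrichAdEvent(adEvent) {
  const metadata = campaignMetadataByAdId[adEvent.ad_id] ?? {};
  // The enriched event carries both player-side and campaign-side fields,
  // so no additional join is needed once it reaches the datalake.
  return { ...adEvent, ...metadata };
}

// Example ad event captured from the player during playback.
enrichAdEvent({ event: 'ad_impression', ad_id: 'ad-123', content_session_id: 'xyz' });
```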

A Video Data Platform Can Help You Actualize Your Datatecture

The Datatecture is a fundamental layer within the streaming video technology stack. In fact, it sits right above your infrastructure, a river of data flowing between components and technologies that can provide truly valuable insight for both the business and the streaming service. But gaining the observability you need can require a lot of upfront development, and it always raises the question, “is building observability a differentiator for your business?” The answer is probably no. A streaming service’s success isn’t gauged by how well it can gain insight from its Datatecture. Rather, success comes from unique content, reliability, availability, and even the user experience.

Datazoom is a powerful Video Data Platform which enables you to connect all the components within your Datatecture into a single stream of data, enrich it with third-party sources, and deliver it anywhere you need it to be, such as a datalake or existing visualization tools.


To learn more, visit and explore the Datatecture site.
