The scene was set. Two heavyweights, ready to go at each other in prime time with millions of PPV subscribers tuned in to watch. In one corner, a legit heavyweight who has spent his career dealing blows to his opponents. In the other, a YouTube sensation who spent the last year training for the bout.
By all estimates, it was going to be a short exhibition, and broadcasters assumed their standard streaming architecture would be sufficient.
And then it failed.
Not only did it fail, it dealt a crushing blow to a network that had already suffered multiple failures before.
The simple cause: the thundering herd.
It’s hard to predict how popular an event might be with streaming audiences. In some regions, a particular team or athlete may have a massive following and end up overwhelming local servers. But for an event that featured not only one of the biggest and most successful names in boxing, but also one of the biggest YouTube stars, how could the network not predict a thundering herd of online viewers?
The massive rush to deliver content to millions of viewers simultaneously can easily overload servers that rely on HTTP architectures and attempt to deliver in sync with broadcast. When all those requests come in at once, you get streaming issues such as lag, dropped packets, or viewers who can’t join the broadcast at all.
CDNs use edge caching, which stores a portion of the content close to viewers for delivery. Putting a CDN between the origin server and the clients essentially adds a caching or storage layer between the origin and the viewers. However, with HTTP delivery, if a packet or segment is lost, the client requests it again. If, for some reason, the cache has already purged that content, the client either gets stuck in a request loop, disconnects and tries to establish a new connection to the stream, or the player fails to build a playlist.
Now, amplify that by thousands of requests and actions at the same time.
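To see why that loop hurts at scale, here’s a minimal sketch of how a player might cap and randomize its retries so that thousands of clients don’t re-request a purged segment in lock-step. The function name, URL handling, and parameters are illustrative, not taken from any particular player or CDN.

```typescript
// Hypothetical segment fetcher: retries with capped exponential backoff plus
// full jitter so that thousands of clients retrying at once don't hammer the
// origin in a synchronized wave.
async function fetchSegmentWithBackoff(
  url: string,
  maxAttempts = 5,
  baseDelayMs = 250,
  maxDelayMs = 4000,
): Promise<ArrayBuffer> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      const res = await fetch(url);
      if (res.ok) return await res.arrayBuffer();
      // A 404/410 here often means the edge cache already purged the segment.
    } catch {
      // Network hiccup; fall through to the backoff below and try again.
    }
    // Full jitter: every client waits a different random slice of the window,
    // so retries spread out instead of arriving as one burst.
    const ceiling = Math.min(maxDelayMs, baseDelayMs * 2 ** attempt);
    await new Promise((resolve) => setTimeout(resolve, Math.random() * ceiling));
  }
  throw new Error(`Gave up fetching ${url} after ${maxAttempts} attempts`);
}
```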
While CDN caching can help meet the demand for VOD content, live sports broadcasting is much more difficult. That’s because caches equate to time lost. The stream is latent, sometimes 30–60 seconds behind the real-time action, which is unacceptable for live sports that need real-time delivery to sync broadcasts or enable in-game betting. It’s even more unacceptable to the fans who paid upwards of $50 per stream to watch it live.
Can You Afford to Fail?
First, let’s crunch some numbers.
It’s estimated that 1 million people subscribed to the Mayweather vs. Logan Paul fight, netting the fighters just over $40m in guarantees and PPV royalties. Based on the deal structure, that means the PPV network netted just $20m for streaming the fight. That’s a nice chunk of change.
Not every sporting event or PPV broadcast would have the record numbers of subscribers that boxing or UFC fights consistently garner.
However, what happens if customers demand refunds because the stream failed? The performance of the stream doesn’t affect the royalties owed to the fighters, meaning the PPV network could have to “eat” the cost of streaming the event. The same goes for live concerts, premium streaming services, and other real-time events.
Since January 2021, with much of the world locked down, more people have been attempting to stream live events from the comfort of their homes. Yet fans are increasingly experiencing failure after failure, further eroding the trust they have in streaming services. From failed MLB streams in Philly to the Glastonbury festival livestream in the UK, fewer fans believe they’ll have a satisfying streaming experience if they subscribe.
How It Gets Solved
The Mayweather vs. Paul broadcasters should have prepared better, especially knowing that a YouTube star with 23M subscribers was on the card and having experienced multiple failures in the past. But even without a blockbuster fight on the card, live event producers can’t rely on legacy HTTP infrastructures anymore.
HTTP streaming is fragile: it doesn’t enable real-time streams, and it causes more playback issues whenever packet-loss hiccups hit. Users get degraded experiences and inconsistent playback between devices.
Phenix’s WebRTC-based solution solves this with a unique combination of real-time delivery, scale, quality, and synchronous viewing, unlike any other offering in the market. Phenix scales to millions of concurrent users and has delivered the largest WebRTC audiences in history.
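For context, here’s what a bare-bones real-time viewer looks like using the standard browser WebRTC API. This is a generic sketch, not the Phenix SDK; the /subscribe signaling endpoint and the response shape are hypothetical.

```typescript
// Generic WebRTC "viewer" using the standard RTCPeerConnection API.
// The /subscribe endpoint is a hypothetical signaling service that accepts
// an SDP offer and returns an SDP answer from the media server.
async function watchRealTimeStream(video: HTMLVideoElement): Promise<void> {
  const pc = new RTCPeerConnection();

  // Receive-only: the viewer only consumes audio and video.
  pc.addTransceiver("video", { direction: "recvonly" });
  pc.addTransceiver("audio", { direction: "recvonly" });

  // Attach incoming media to the <video> element as soon as tracks arrive.
  pc.ontrack = (event) => {
    video.srcObject = event.streams[0];
  };

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // Hypothetical signaling exchange: POST the offer, get the answer back.
  const res = await fetch("/subscribe", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sdp: pc.localDescription?.sdp }),
  });
  const { sdp } = await res.json();
  await pc.setRemoteDescription({ type: "answer", sdp });
}
```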
With a global network of 32 PoPs across 5 continents, the delivery architecture doesn’t cache at the edge; instead, it employs a proprietary, scalable edge computing infrastructure to deliver real-time HD content. Phenix also uses patent-pending early detection and management of flash-crowd events in real-time streaming, with AI algorithms that automatically provision resources to reach global audiences and stay ahead of the demand wave.
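To make the “stay ahead of the demand wave” idea concrete, here’s a toy heuristic, not Phenix’s patent-pending algorithm: watch the viewer join rate over a short sliding window and request extra capacity before the spike peaks. The class name, thresholds, and provisioning hook are all illustrative.

```typescript
// Toy flash-crowd detector: tracks recent viewer joins and asks a
// hypothetical orchestration hook for more capacity when the join rate
// crosses a threshold, i.e., before the audience curve reaches its peak.
class FlashCrowdDetector {
  private joins: number[] = []; // timestamps (ms) of recent joins

  constructor(
    private windowMs = 10_000,          // sliding window length
    private joinsPerSecThreshold = 500, // spike threshold
    private provisionCapacity: (extraViewers: number) => void = () => {},
  ) {}

  recordJoin(now = Date.now()): void {
    this.joins.push(now);
    // Drop joins that have fallen out of the sliding window.
    while (this.joins.length && this.joins[0] < now - this.windowMs) {
      this.joins.shift();
    }
    const joinsPerSec = this.joins.length / (this.windowMs / 1000);
    if (joinsPerSec > this.joinsPerSecThreshold) {
      // Extrapolate the current rate a minute forward and provision ahead of it.
      this.provisionCapacity(Math.ceil(joinsPerSec * 60));
    }
  }
}
```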
Live event producers can’t afford to have failures. Both the financial ramifications and brand impacts of a failure could deliver a knock-out blow.
The question should be: can you afford not to plan for the herd?