
“deepstream’s functionality, performance, flexibility and scalability is truly disruptive and is driving amazing new innovations in our technology stack and team collaboration.
The features and plugins offer great opportunities to build and integrate with many different existing techs, tools, platforms and services.”

With profits from album sales dropping by as much as 40% over the last 15 years, the music industry is shifting towards tickets and live events as its primary source of revenue.

Standing at the forefront of this trend is the global leader in live entertainment ticket software, sales and distribution: Ticketmaster. Through its constantly evolving platform, fans can quickly discover and securely purchase live event tickets at any time, on any device.

But with the steady growth of mobile and the addition of increasingly data-heavy features such as Virtual Venue and interactive seat maps, Ticketmaster began reevaluating how to optimize its back-end while continuing to serve millions of fans at scale during major onsales. This meant rethinking elements of its system architecture to meet growing client and consumer demands and retain its position as marketplace leader.

"deepstream’s performance, durability and scalability rocks the house during the day and lets our devops sleep peacefully through the night."

At the core

At the core of the Ticketmaster ticketing software stack is a series of 50+ "Hosts", which are not only the ultimate Source of Record for transactional ticket sales but also simultaneously serve multiple sources of data to non-transactional components. During major onsales, high ticket demand requires prioritizing the transactional side over select data sources. Conversely, excessive polling of data can drain resources from the transactional side.

To make things even more challenging, all access to these Hosts goes through proprietary protocols, which hinders the adoption of new microservice clients built on modern software stacks.

The solution: offload the non-transactional functionality of the Hosts into a new real-time Source of Record that is accessible via standard protocols and can be scaled to handle any amount of load without impacting ticket sales.

One Solution: Phase 1

Multiple solutions were put into place to combat these issues; this use case focuses on one of them. The first phase involved creating a B2B service that provided a friendly JSON-based RESTful endpoint for upstream clients while hiding the details of the lower-level proprietary protocols. This opened up access to the non-transactional parts of the Hosts to modern clients that otherwise might have had trouble accessing that data.
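The facade pattern described above can be sketched in a few lines. This is an illustrative Python sketch, not Ticketmaster's actual code: `host_query` is a hypothetical stand-in for the proprietary Host protocol, and the command and response formats are invented for the example.

```python
import json

def host_query(command: bytes) -> bytes:
    # Hypothetical stand-in for the proprietary, binary Host protocol.
    # Real command and response formats are not public; these are invented.
    if command == b"EVT 123":
        return b"123|Arena Tour|on-sale"
    raise ValueError("unknown command")

def get_event(event_id: str) -> str:
    """REST-facing handler: conceptually GET /events/<id> -> JSON.

    Upstream clients see only friendly JSON; the proprietary protocol
    stays hidden behind host_query().
    """
    raw = host_query(f"EVT {event_id}".encode())
    evt_id, name, status = raw.decode().split("|")
    return json.dumps({"id": evt_id, "name": name, "status": status})

print(get_event("123"))
# {"id": "123", "name": "Arena Tour", "status": "on-sale"}
```

The key design point is that the translation lives in one place: when the proprietary protocol changes, only the facade is updated, and every modern client keeps consuming plain JSON.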

This phase didn't alleviate any notable load from the Hosts, though, and so work on the second phase started in earnest.

One Solution: Phase 2

The next phase involved the creation of an In-Memory Data Grid (IMDG) that stored all applicable data from the Hosts in a scalable store, kept up to date in near real time. The original implementation used Apache Geode as the IMDG, with Kafka providing the notifications and push updates. This solution scaled well, but the implementation had its own set of problems, primarily that state and messaging were decoupled - meaning that a lot of development effort was needed to keep the data grid and messaging in sync, and changes proved costly.

The next implementation replaced Apache Geode with deepstream, which became both the object store and the access point for realtime updates. This introduced a number of benefits:

Dramatically reduced load during high-transaction periods:

Calls to generate sales reports, query seat statuses for a real-time Seat Map, or run scores of other operations that historically competed with ticket sales on the Hosts are now served directly by deepstream. This is a win on both sides: the data queries become much faster, and the Hosts can sell tickets faster because they no longer have to compete for resources.

Reduced complexity and development time through data-sync:

The initial implementation - a separate datastore and messaging layer - caused a lot of development overhead and complexity, as both systems had to be kept in sync and updates from one had to be stored in the other. deepstream solves this with an approach called data-sync: schema-less JSON documents called 'records' that can be manipulated and observed by millions of clients and backend processes simultaneously. Any change is both stored and synchronized across all subscribed endpoints within milliseconds.
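deepstream's real client API is JavaScript-based; the following is a minimal, language-agnostic sketch in Python of the data-sync idea only: a single write operation that both stores a change and notifies every subscriber, so the store and the messaging can never drift apart. The `Record` class and its method names are illustrative, not deepstream's API.

```python
import json

class Record:
    """Illustrative sketch of a data-synced record: a schema-less JSON
    document where every write is stored and fanned out to subscribers
    in one step (not deepstream's actual API)."""

    def __init__(self, name):
        self.name = name
        self._data = {}
        self._subscribers = []

    def subscribe(self, callback):
        # Each subscribed endpoint receives every subsequent change.
        self._subscribers.append(callback)

    def set(self, path, value):
        # A single operation both persists the change and notifies all
        # subscribers, so state and messaging cannot get out of sync.
        self._data[path] = value
        for callback in self._subscribers:
            callback(path, value)

    def get(self):
        return json.loads(json.dumps(self._data))  # defensive copy

# Usage: a seat record observed by a seat-map client (record name invented)
seat = Record("seat/arena-A/row-4/12")
updates = []
seat.subscribe(lambda path, value: updates.append((path, value)))
seat.set("status", "sold")
print(updates)     # [('status', 'sold')]
print(seat.get())  # {'status': 'sold'}
```

Contrast this with the Geode-plus-Kafka setup, where the write to the store and the publish to the topic were two separate operations that application code had to keep consistent.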

This creates a stateful, distributed architecture that updates in realtime. For Ticketmaster, this meant it could move the data that was previously accessible only in the static Hosts directly into deepstream and stream delta messages from Kafka straight into the grid. This significantly reduced complexity and development effort and created a durable, scalable, fast in-memory Source of Truth that can be accessed concurrently by many clients across the organisation. deepstream's scalability allowed high-demand access during peak "onsales" without impacting the object store's performance or transactional timing.
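The delta-streaming step above can be sketched as follows. To stay self-contained, the sketch replaces an actual Kafka consumer with an inlined list of messages; the record names and delta format are invented for illustration and are not Ticketmaster's actual schema.

```python
# Sketch: applying Host delta messages (as they might arrive via a Kafka
# topic) to an in-memory record store. Record names and the delta shape
# are illustrative assumptions, not a real schema.
records = {}  # record name -> dict of fields

def apply_delta(delta):
    """Merge one delta message into the matching record, creating it
    if it does not exist yet."""
    record = records.setdefault(delta["record"], {})
    record.update(delta["fields"])

# In production these would be consumed from Kafka; inlined here so the
# sketch runs on its own.
deltas = [
    {"record": "event/123/seat/A1", "fields": {"status": "held"}},
    {"record": "event/123/seat/A1", "fields": {"status": "sold", "price": 79}},
]
for delta in deltas:
    apply_delta(delta)

print(records["event/123/seat/A1"])  # {'status': 'sold', 'price': 79}
```

Because each delta is merged rather than replacing the whole record, out-of-band readers always see the latest complete state without replaying the full message history.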

High availability through dynamic endpoint allocation:

With a constantly growing number of clients, incurring downtime while reconfiguring or extending their systems was simply not an option for Ticketmaster. deepstream lets them add new microservices dynamically at runtime - services that answer remote procedure calls, send updates into the cluster and actively publish dynamic subjects as and when they are requested - all without downtime or restarting the system.
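The runtime-registration idea can be sketched like this. Again a hedged Python sketch of the pattern, not deepstream's API: the `RpcRegistry` class, its `provide`/`make` methods, and the subject names are all invented for the example.

```python
# Sketch of dynamic endpoint allocation via RPC: providers register for a
# subject at runtime, and requests are routed to whichever provider is
# currently registered - no restart required. All names are illustrative.

class RpcRegistry:
    def __init__(self):
        self._providers = {}

    def provide(self, subject, handler):
        # A newly deployed microservice can register (or replace) a
        # provider for a subject while the system keeps running.
        self._providers[subject] = handler

    def make(self, subject, payload):
        # Route the request to the current provider for this subject.
        if subject not in self._providers:
            raise LookupError(f"no provider for {subject!r}")
        return self._providers[subject](payload)

registry = RpcRegistry()

# Later, a newly deployed pricing service starts answering requests:
registry.provide("ticket/price", lambda req: {"event": req["event"], "price": 59})

print(registry.make("ticket/price", {"event": "123"}))
# {'event': '123', 'price': 59}
```

The point of the pattern is that the caller only knows the subject name; which process answers, and when it was deployed, is decided at runtime.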

Like to discuss a use case?