
PODCAST
Reinventing Master Control: Cloud Workflows with BBC, RTÉ & Zixi
Tom Everest, Head of Architecture and Supply Chain,
Broadcast and End-User Technology, BBC
Rostam Kilgour, Manager, First Line Response, RTÉ
Chris Fellows, Director of Global Solutions Engineering, Zixi
And we’re here to talk about the IBC Accelerator project that we’ve all been involved with this year. So, Tom, do you want to give us a bit of an overview of what the Accelerator project’s about?

Yeah. So it’s a program that IBC runs every year; I think they’ve done it for five, six, seven years, something like that. Essentially, the idea is that champions, as we call them, broadcasters and media companies, set a challenge to the industry, and then over the course of four or five months we work with participant vendors to solve that challenge. This year, one of the challenges that we set from the BBC was: could we make a master control room for the cloud? We’ve got a real need for one of those in the way that we do things back at base. So, to try and get a bit of enthusiasm and a bit of expertise behind it, we set the challenge to the industry, and Zixi and other companies joined in.

Yeah, it’s an interesting project, and one that we’ve been involved with for the last three or four years now. It’s an interesting concept to get all the different vendors to work together and ultimately try to accomplish a goal that the champions have set. So, Rostam, why is it important for us to move master control into the cloud? Why is that an important goal for you?

Well, I think master control for most big broadcasters is that central point where everything goes. If we consider SDI environments, it is literally a centralized router. Everything passes through it, and it is what stitches together the entire broadcasting chain for a broadcaster. It connects all the studios, it connects them to the presentation areas, and it essentially allows you to route anything to anywhere within the broadcasting realm.
But it’s normally physical, and it normally involves a lot of extra technology attached to it: contribution technologies, for example. If we’re talking about video, it might be some sort of video codec, or audio contribution for sports. And the challenge is that you have cables everywhere, all over a building. Tom will be able to talk in more detail about Broadcasting House, but in RTÉ we still have power infrastructure there; we still have vast amounts of copper within the building. And trying to change that while you’re still live in the building, while you’re still broadcasting, is a real challenge.

So being able to extract that from the physical and get it up into the cloud provides a real opportunity for broadcasters, both current broadcasters and potentially future broadcasters who want to enter the environment, to not have to deal with that headache and that capital investment, which is required every so often and which is significant for a big broadcaster such as RTÉ or the BBC.

Okay. And do you think there’s more of a requirement for it now than, say, five or ten years ago? Or is it just that the technology’s caught up so that it’s possible to do it now?

I think it’s definitely the fact that the technology is possible that makes it more interesting for broadcasters. The technology is just moving so rapidly nowadays that where broadcasters historically might have taken a pause, seen how the technology environment was developing, and then made their longer-term decisions, it’s moving so fast that that’s becoming really challenging. Tom, you can probably talk more about that.
And I think for us, we’re receiving signals now in the cloud, right? And we’re doing quite a lot of production activities on those signals in the cloud. So we’re in this slightly mad situation where we get something in the cloud, we bring it on-prem to do some manipulation or some routing or some monitoring of it, and then immediately send it back to the cloud for the next step. That seems to me like a massive waste of time. If we’ve got something in the cloud and it’s going to the cloud, we need to be able to do everything on it in the cloud. That’s the real unlock for us, so that we can start to address our real challenges over technology replacement and technology debt. Those core routers are absolutely physically embedded inside our organizations; they’re going to be a nightmare to replace. So let’s try and do as much as we possibly can in a software way in the cloud, and the tech seems to be there to do that.

Yeah. From our experience at Zixi, by moving stuff to the cloud you save on a couple of aspects. Number one, you can save on latency. If you’ve got a workflow like you described, where you’re going in and out of the cloud, which doesn’t really make use of the cloud, you’re adding a lot of latency, because you’ve got to get it back down and then get it back up to the cloud. You’re also quite often adding encode and decode steps, which makes the stream that you’re finally sending out a lower-quality production.

And for minimum value, right? One of the use cases we do quite a lot is we take agency feeds and put those out onto the BBC’s iPlayer and onto the News app, and what we’re really doing is adding some production graphics. Now, all of that kind of stuff happens in the cloud. We’re already doing that in the cloud.
We’re doing all the assembly of that stream in the cloud, but at the moment we downlink that feed on premises and then push it back up to the cloud. It’s adding no value whatsoever.

Yeah, it’s interesting. And it’s about learning where the capabilities of software in the cloud are and where the restrictions are. I guess that was why this was quite an important Accelerator to run, because we need to understand the different vendors’ capabilities. And quite often, the whole concept of running things in the cloud is the best-of-breed concept, right? Being able to pick and choose people’s software and use that software for its ideal purpose. Which brings us to one of the major challenges that we saw for this Accelerator: how do you take different vendors’ equipment and get it to work together? Was that an issue during this project?

Well, we had a lot of conversations, a lot of meetings, and eventually we just decided, let’s make an Excel spreadsheet, put all of the information in there, and define each connection as, you know, SRT one, SRT two. Look, I’ve always described it that for the proof of concept we rather hand-cranked it; it was quite a manual process in regards to setting up. But at least in my mind, the story we were trying to paint was: it’s possible. For example, the proof of concept was a sporting event at Santry in Dublin, the National Athletics. We got it up into the cloud from the OB truck, and we didn’t want it to come down again, effectively. That was the challenge in my mind that I wanted to set for us at RTÉ. And, you know, we succeeded.
We succeeded in getting that directly into our player environment, in a way that we’d never done before.

And I think the orchestration of it, the joining of the vendors together, is really easy in one part and really hard in the other. It’s really easy in one part because SRT gives us a way of just passing streams to each other. What we’ve talked about in the Accelerator is essentially trying to create as long a chain of functions, as long a workflow as we possibly can, in each vendor’s tool, so that you’re not doing that constant recode and encode and constantly adding latency. You do as much as you can in the vendor’s tool, but you do want to pass between them, and SRT gives you a way to do that. The real challenge, the real headache, is the orchestration piece. How do you make one vendor’s component talk to the next vendor’s component? As Rostam says, we’ve got a big diagram on a bit of paper and an Excel spreadsheet. Well, that doesn’t scale, right? So the next step for us is to find a way that we can scale that out. My personal belief is that pretty much every vendor we’ve worked with in this Accelerator is approaching it in the same sort of way: you build a workflow in their tool, and then it gives you an opportunity to save that as a template or a blueprint or a concept or whatever the phrase is. What I would really love to be able to do is just call off those workflows, those templates, in a standard way. If I can do that in a standard way across multiple vendors’ tools, then the vendor keeps the ability to show their best, to innovate, without being constrained by some universal API, but we get the ability to instantiate that quickly when we need to.
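The "save a workflow as a template, then call it off in a standard way" idea described above can be sketched as a thin adapter layer. Everything below, the `WorkflowAdapter` interface, `FakeAdapter`, the template names, and the SRT URLs, is a hypothetical illustration of the concept, not any vendor's real control API:

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Endpoint:
    """Where a vendor workflow listens for, or emits, an SRT stream."""
    srt_url: str  # e.g. "srt://host:9000"


class WorkflowAdapter(Protocol):
    """Minimal common surface a per-vendor adapter would implement.

    Vendors keep their own richer tools and APIs; the broadcaster only
    needs "instantiate this saved template and report its endpoints".
    """

    def instantiate(self, template: str) -> tuple[Endpoint, Endpoint]:
        """Spin up a saved workflow; return its (input, output) endpoints."""
        ...


class FakeAdapter:
    """Stand-in for a real vendor adapter, used only to show the shape."""

    def __init__(self, name: str, base_port: int):
        self.name, self.base_port = name, base_port

    def instantiate(self, template: str) -> tuple[Endpoint, Endpoint]:
        # A real adapter would call the vendor's control plane here.
        return (Endpoint(f"srt://{self.name}:{self.base_port}"),
                Endpoint(f"srt://{self.name}:{self.base_port + 1}"))


def build_chain(stages: list[tuple[WorkflowAdapter, str]]):
    """Instantiate each vendor template, then report how stage N's
    output should be routed into stage N+1's input over SRT."""
    links = [adapter.instantiate(template) for adapter, template in stages]
    for (_, out_ep), (in_ep, _) in zip(links, links[1:]):
        print(f"route {out_ep.srt_url} -> {in_ep.srt_url}")
    return links


links = build_chain([(FakeAdapter("ingest", 9000), "clean-feed"),
                     (FakeAdapter("graphics", 9100), "news-gfx")])
```

The point of the sketch is the seam: each vendor is free to innovate behind `instantiate`, while the broadcaster composes chains without needing a universal do-everything API.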
And I guess with vendors there’s a tendency to try and do everything sometimes. As a customer, you probably don’t want that; you want someone to tell you what they do and be good at that specific thing.

Well, look, we’re at IBC. There are so many vendors here, so many different companies, and all of them have their own specialties and strengths. From a broadcaster’s point of view, we don’t want to curtail that. We want to be able to pick from the best.

I want LEGO bricks, right? I want to be able to bolt these things together in the right order, because at the moment we’ve all got this problem where we’re knitted into ecosystems in one way or another. That just slows us down, and it means that if one part of the ecosystem fails, then the whole ecosystem is out, and it’s a nightmare to change it. So, yeah: LEGO bricks that I can compose together to achieve an outcome. That’s exactly what we have to do.

And I know that one of the components of this Accelerator was to think about using an API to control those, a common API across different vendors. Is that something that you think will ever take off?

I really hope so. I think we can make an argument for why it’s necessary. I think we can show through the Accelerator the real practical headache of trying to orchestrate multiple vendors together, and how opinionated you have to be. At the moment, if a thing gets redeployed and its IP address changes, the next person in the chain doesn’t know where they’re picking it up from, and it becomes a bit of a rat’s nest.
We’ve got to find a better way of doing that, and I think we’ve had some interest in it. My absolute belief is that we need to make that way of orchestrating it as straightforward, as simple, and as easy to implement as possible. If we’re trying to define a universal API that covers everybody, we’re on a hiding to nothing. It’s just not going to happen.

Okay. So, in terms of the control aspect and the operators: obviously, when we did the track and field event, you had operators at the event, but there were also operators at home accessing the software externally. Where does the handover point happen? Is one of the challenges working out what people are doing in the different venues?

Yeah. Look, for the proof of concept, we tried to prove as big a concept as possible. Traditionally, an MCR is based in a location; it’s physically anchored, and we wanted to try and break away from that as much as possible. So not only did we want to make it so that the signals remained in the cloud, but also so that the functionality could be achieved from anywhere, because otherwise what’s the point in doing it in the cloud if you can’t take the benefit of being able to remotely control it from anywhere? Look, we probably made it slightly more complicated for ourselves than we needed to, but it was to try and prove a point: that what we could achieve was actually a collaborative affair. It could be achieved by multiple broadcasters; it could be achieved in more than one location. And from a broadcasting point of view, we’re also thinking about not just a function, but the resilience of a function.
The ability to do it from anywhere is massively attractive to broadcasters because we always think about our business continuity; we always think about what happens if. That flexibility and that agility is key. And I think we also proved that, with the fact that we had two MCRs in the UK with the BBC, an MCR in Dublin, and one person working from home. That’s great. And yeah, there are some learnings around the operation, about how you communicate and who does what, but that’s relatively trivial; that’s something we can easily solve. It’s the technical challenge that we wanted to explore on this.

I think there were two other interesting learnings that came out of the proof of concept. One was about how important a unified interface is for the MCR operators: having to work between different tools and different windows.

Exactly. Part of that can be put down to the way we did the Accelerator. In a real scenario, you wouldn’t have all of those vendors; you’d probably have a couple of those vendors doing set things. But opening up the different operational interfaces was a learning.

Exactly. And the other part was just the realization that each of the vendors, perfectly reasonably, has a slightly different response to failure, and different failure modes. As a traditional MCR operator, you can be quite used to what’s going to happen if something fails. Some vendors do error concealment, some vendors fail over hard, some fail over soft, some fail to black, or what have you.
And getting your head around all of those different strategies was a bit of a brain-melting thing for our colleagues.

It’s interesting, because if you take it back to the SDI router comparison, you’d normally have a bypass switch that just sends the signal through, so you’d always have something going through the system. Whereas if you’re running something in the cloud and the cloud instance fails, you’ve got no signal.

Exactly. And being able to trace it back, the use of a good mimic or a good graph in order to trace it all the way back, that’s super important.

But I guess the way you have to deal with that is at the edge, right? At the edge points you have to have a fallback slate or a fallback file that you can get to. And building more monitoring points into your workflows, so that even if you’ve said, I’m going to do as much as I possibly can inside a vendor’s tool and create a whole chain, you have monitoring points inside that chain so you can grab it before a particular process happens. That also turned out to be quite important.

Okay. So in terms of the technical output and the quality of the stream you had out, was there anything different in the output stream compared to a normal production? Was it the same standard and quality you’d expect?

I think probably the biggest learning from an operational perspective was latency. Latency was a real challenge when trying to coordinate getting on air and off air, where you’ve got real-time talkback but then super-latent streams. So when someone says take us off air, you actually need to be waiting for ten or fifteen seconds. From a quality point of view, though, the quality was as good as anything we’d seen through a normal process.
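The edge-fallback and monitoring-point pattern described above, a slate at the edge plus taps between the processes in a vendor chain, might look like this in outline. The stages, the slate, and the tap list are illustrative assumptions, not a real vendor chain:

```python
def run_chain(frame, stages, slate, taps):
    """Pass a frame through each stage of a vendor chain, recording a
    monitoring tap after every stage so the signal can be grabbed
    before the next process. If any stage fails, emit the slate at the
    edge rather than going to black."""
    signal = frame
    for stage in stages:
        try:
            signal = stage(signal)
        except Exception:
            signal = None  # this stage's cloud instance failed
        taps.append(signal)  # monitoring point between processes
        if signal is None:
            return slate  # edge fallback: slate, not silence
    return signal


def add_graphics(frame):
    # Stand-in for a working stage in the chain.
    return {**frame, "stages_done": frame["stages_done"] + 1}


def broken_stage(frame):
    # Stand-in for a failed cloud instance.
    raise RuntimeError("cloud instance lost")


taps = []
out = run_chain({"stages_done": 0}, [add_graphics, broken_stage],
                {"slate": True}, taps)
```

The taps are the important part: because the whole chain runs inside one tool, operators still need points where they can inspect or take the signal before a given process happens.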
Yeah. We didn’t see anything on our OTT platform that looked out of the norm in regards to quality, and I think in our MCR we also experienced that. So from a quality and delivery point of view, we definitely proved that you can do it; you can make this sort of thing work. It’s just doing it at scale, and ironing out some of the little operational nuances of doing it in the cloud.

I actually think some of the latency things you could probably get around by using things like SCTE markers. If you had SCTE markers being inserted at the venue, they could trigger automated actions in the cloud, right? So you could take to different slates or different streams. I think that’s really interesting. So, the kind of mandatory question at the end of a podcast: where do you think this is going to go in the next six to twelve months? Do you see yourselves using this more?

From the BBC’s perspective, yes. We need to do this, right? We need to start breaking the back of that disjointed workflow that I described earlier on. So over the next six to twelve months, I expect us to be buying and building the orchestration for those components. We’re already at a point where we’ve built enough inside the organization to be able to get a stream from one place in the cloud to another place in the cloud and do some monitoring of it. Adding the manipulation part into it, adding the MCR function into it, is absolutely what we’re planning to do.

And for RTÉ, we’ve recently gone IP within our TV operations, so the need isn’t as urgent for us. But we’re undergoing, or due to undergo, quite a significant transformation within the organization, and so we’re trying to look ahead.
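The SCTE-marker suggestion above, cues inserted at the venue driving automated switching in the cloud, might be sketched like this. Real SCTE-35 cues are binary splice commands carried in the transport stream; the event dictionaries and channel names here are simplified stand-ins:

```python
def handle_cues(events, current="programme"):
    """React to in-band cue events as they arrive with the stream, so a
    switch stays aligned with the pictures no matter how latent the
    stream is relative to real-time talkback."""
    outputs = []
    for ev in events:
        if ev.get("cue") == "break_start":   # venue signalled: leave air
            current = "slate"
        elif ev.get("cue") == "break_end":   # venue signalled: back on air
            current = "programme"
        outputs.append(current)  # what the cloud MCR emits at this point
    return outputs


# One cue starts the break, a cue-less event changes nothing, one ends it.
states = handle_cues([{"cue": "break_start"}, {}, {"cue": "break_end"}])
```

Because the cue travels with the stream itself, nobody has to count ten or fifteen seconds on talkback before taking the output off air.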
We’re trying to see what organizations like the BBC are doing and just trying to keep our finger on the pulse, so that when it comes to those long-term decisions about what we’re going to do, how we’re going to structure ourselves, and what technology we’re going to use, we know what’s out there and we know what’s feasible. That’s where we are.

Okay, great. Well, thanks for your time today; appreciate you coming and speaking to us. And obviously, hopefully, fingers crossed, we might win the awards. Who knows?

Right. We might win it. You never know. Cheers, guys. Thank you.
Overview
When live signals already originate in the cloud, why pull them back on-prem just to send them up again? In this Signal Shift episode, the BBC and RTÉ unpack their IBC Accelerator project with Zixi and other vendors, where they set out to build a Master Control Room that lives entirely in the cloud. They explore what it takes to keep signals in the cloud end-to-end, orchestrate “LEGO brick” vendor workflows, manage latency and failure modes, and prove that a cloud-first MCR can match traditional quality.
Key Takeaways
- Stop the cloud–prem–cloud loop
Keeping signals in the cloud avoids unnecessary encode/decode cycles, extra latency, and infrastructure overhead.
- Orchestration is the real headache
Passing streams between vendors is easy with SRT; the hard part is orchestrating multi-vendor workflows at scale with templates and APIs.
- Think in “LEGO bricks,” not monoliths
Broadcasters want best-of-breed components they can bolt together in different orders, not lock-in to a single ecosystem.
- Operations change in the cloud
Latency, vendor-specific failure behaviors, and multiple UIs require new operational thinking, better monitoring points, and more unified control surfaces.
- Quality is proven; now it’s about scale
The proof of concept delivered output comparable to traditional MCR, and the next 6–12 months are about industrializing orchestration and resilience for real-world deployment.
Contact our Sales team to find your perfect solution.
