While AI certainly has captured the most buzz over the past year, the ongoing shift to public cloud technology as a replacement for legacy, on-premises hardware is probably still the most important technology trend in the broadcast TV business. After successful implementations of the cloud in master control, ingest and archive workflows at several station groups, the cloud is now making solid progress in news production as well.
Driving that growth are improved latency, better interoperability between vendors and more efficient software that helps rein in the costs of running production workflows on cloud compute, said technology experts who gathered at TVNewsCheck’s NewsTECHForum in New York earlier this week for the panel “News Production and the Cloud,” moderated by this reporter.
They emphasized that no broadcaster is leaping into the cloud unless it makes financial sense. Even groups like Sinclair that have been aggressive in their overall cloud adoption are being conservative in moving control of their live newscasts to the cloud.
“We’re seeing every customer defining cloud a little bit differently on where they want to go, and it’s really a business decision still,” said Bob Valinski, sales manager for Vizrt Americas, which is providing cloud-based tools to both Sinclair and NBCUniversal for local newscasts.
NBCU’s ‘Virtual Production Control Room’
The early leader in producing local newscasts in the cloud is NBCUniversal Local, the station group comprising the NBCU-owned NBC and Telemundo affiliates as well as the NBC regional sports networks. NBCUniversal Local has launched cloud-based newscasts at three Telemundo stations: KBLR Las Vegas; KTDO El Paso, Texas; and KASA Albuquerque, N.M. It is using Vizrt’s Vectar cloud switcher working with Ross OverDrive automation in what NBCU calls a “virtual production control room.”
“We have been able to successfully port over a lot of your traditional devices such as graphics devices, video switchers, audio mixers, control room automation systems into this model,” said Michael Masek, senior director, production application engineering, NBCUniversal Operations and Technical Services. “Our historic ‘North Star’ or guiding principle has been to deploy everything in an infrastructure-as-code methodology. Because we want to make sure we can deploy things at will, as needed, from a DR [disaster recovery] perspective from operator stations. That is the intent of what we have.”
The cloud news launches at KBLR and KTDO dovetailed with moves to new facilities, while KASA made the jump in order to launch a brand-new newscast from scratch.
While the cloud tools have been very successful at the three Telemundo stations and NBCUniversal Local is bullish on the model, the group isn’t rushing to swap out legacy systems for cloud control rooms across its stations. Instead, it will look strategically at any control rooms that are due for a refresh, Masek said, and carefully weigh whether it makes sense “to re-up that capital investment” by replacing hardware or instead turn to something “more appealing” like the cloud model.
He did note the speed of spinning up a cloud control room was key in getting news launched within a tight timeframe at the newly acquired KASA.
“It allowed us to get that platform up quickly first and foremost, because nobody wants to have to spend six, seven, eight months to bring in potentially a third-party integrator to have to start implementing these things in a time-consuming fashion,” Masek said. “If we could deploy these things rapidly and at scale, I think that’s advantageous for us moving forward.”
Hybrid Model Drives Sports For Scripps
E.W. Scripps is also considering the cloud’s possibilities for local news. But the group first implemented cloud-based production for its sports programming, mainly in a hybrid model that pairs cloud tools like replay with on-premises hardware. Scripps uses this cloud/on-prem combination to produce pre- and post-game shows for sports franchises in markets including Las Vegas, Miami and Salt Lake City.
“That was something we needed to solve for first,” said Mark Gray, Scripps SVP of network and station operations. “We moved editing to the cloud for sports, then we started to move production to the cloud. We also have a number of Florida stations, and we have implemented it as a disaster recovery piece. And we are able to basically take newscasts and produce them outside of any physical location. If we need to abandon a studio, we can.”
As Gray explained, some of the buildings in Scripps’ Florida markets are very old, and staff has to evacuate for anything above a Category 2 hurricane. Cloud tools now allow a station to run a control room in another location, ingest what it needs to from the field and from its reporters, and then feed it back to the transmitter directly to stay on the air. Scripps can also repurpose satellite capacity typically used for distribution of network programming for DR purposes.
Gray is now looking at cloud production tools as a way to standardize functionality across the mishmash of systems currently deployed across Scripps’ stations.
“We were in an acquisition mode before COVID and so we have a lot of different types of equipment in our stations,” Gray said. “And as we’re due for a refresh, this is also a way to look at standardizing things across the company, which is something our company really needs to accomplish, since we sort of amalgamated a bunch of smaller groups over the course of those acquisitions.”
A Focus On TCO For Hearst
Hearst Television has already been relying on the cloud for news production for years, said Stefan Hadl, its SVP, broadcast engineering & technology, as the TVU Networks bonded systems its stations use for live feeds and content sharing are inherently cloud-centric. Hearst also has 28 interconnected data centers for master control functions like content ingest, and Hadl thinks of those resources as the group’s own private cloud. But he still has reservations about relying on someone else’s connectivity to connect to public cloud compute for essential functions like news control rooms.
“Cloud for us is something we’ve been involved with mostly on the acquisition side and getting things in and out of our buildings,” Hadl said. “As far as production goes, that’s one of the things we’re looking at. As we have to refresh the technology, we’re trying to find the right tool for the job.”
As a local broadcaster, Hearst historically hasn’t done a lot of sports, Hadl said. But it’s starting to do more big events at its stations in Louisville, Ky., and Boston, including coverage of the Boston Marathon, which WCVB Boston has been producing since 2023 in partnership with ESPN. To date, Hearst has done the marathon coverage in a traditional manner by hiring large production trucks, and it will do so again next year. But Hadl would also like to use the 2025 race to explore the potential of a cloud control room with a test production that would run side by side with the traditional workflow.
“This year I’d like to do a POC [proof of concept] where that whole control room is in the cloud,” Hadl said. “You set up all your cameras like you would, you set up the desk for the talent on site, and bring all those things back and have that not in a traditional control room, but in a separate control room so we can see what that looks like.”
Hadl is interested in cloud production platforms like Grass Valley’s AMPP, and said he would like to spin up such a system so Hearst staffers can get educated about its pros and cons. Most important, he would like to compare the cost of the cloud control room with the cost of producing the race in a traditional manner.
“Because for us everything is about total cost of ownership,” Hadl said. “Moving to the cloud makes sense in places, and sometimes it doesn’t. You’ve got to make sure you’re covering all of the bases.”
Sinclair Tackles Tickers
Sinclair was ahead of the curve in moving all of its content ingest and distribution to the cloud several years ago. It is now migrating playout at its local stations to the cloud, with some 60 channels up now and 100 total planned by mid-2025. It has also been exploring the feasibility of moving live news production to the cloud and has run POCs of several leading cloud switchers.
But the group is moving at a slower pace with news, said Walid Hamri, Sinclair Broadcast Group AVP, media systems engineering, mainly because the financial benefits aren’t as readily apparent for news as they were for workflows like content ingest. So rather than making a full-scale leap into cloud control rooms as NBCU did at those three Telemundo stations, Sinclair plans to start by moving a piece or two that seems feasible.
“Maybe we can move the content first to the cloud, maybe we can do graphics in the cloud,” Hamri said. “We’re trying to make sure what makes sense. It depends on the business model with the partner, it depends on the amount of assets, and it depends on the usages that we have for a specific component. Some of it makes sense to move to the cloud. It’s not all or nothing.”
One area that does make sense now is news tickers, which to date have been running off dedicated hardware servers at each station. Sinclair wants to get off that aging legacy hardware and centralize its tickers in the cloud and deliver them from there to each station’s newscast. It is adopting HTML5-based tickers from Vizrt that are being integrated with Amagi playout software. Two stations are already live with the new integrated ticker system, with more planned for next year.
“That was something that when we looked at it, it was a no-brainer,” Hamri said. “It was actually making the ROI, making savings, it’s easy to support and it’s easy to maintain. And now even if an election is coming and you want to change one ticker across multiple stations, you want to change the layout, you only have to do it in one space and one place, instead of coordinating the effort across multiple stations and multiple markets.”
Managing Latency: Location Matters
Latency has long been a stumbling block for cloud-based production operations, particularly when using a cloud switcher to cut between different cameras. Hamri was quick to distinguish the end-to-end latency of a cloud-produced newscast, which might run to several seconds glass-to-glass before reaching the consumer, from the latency inside a cloud control room after an operator presses a button to make a switch.
With the former, a one- or two-second delay is acceptable. But with the latter, minimizing delays is essential, and even a few milliseconds matter.
“When we’re talking about switchers, that’s something that’s extremely sensitive,” Hamri said. “Ten, 50, 100 milliseconds is what we’re working on.”
Fortunately, he added, vendors like Vizrt have been working hard at lowering latency and have made significant progress. For his part, Valinski said the delay in Vizrt’s cloud switcher is now less than a frame of video.
“It’s negligible; you don’t even see it on the display,” he said.
But Valinski noted that the laws of physics still limit how fast a signal can travel, no matter what Vizrt and other vendors do. As such, he tells customers that keeping operators as close as possible to the cloud compute makes a tangible difference in cloud-based switching.
“What we found with our customers is to have the cloud instance that’s running the switcher closest to whoever is controlling it,” Valinski said. “In the early days of cloud, we wanted it closest to the cameras, but now we want it closest to the operators. So, when they push a button, they see the change instantly. Otherwise, you can’t direct a live show, if you see a delay from the button.”