The evolution of the multimedia cloud


Multimedia creation and content delivery are prime candidates for the cloud. Already a rich source of information, creativity, and complex media-generation capability, the cloud's production, assembly, and migration features help the industry build applications, posts, advertising, blogs, and almost anything that requires a multi-step or iterative process.

Cloud-based multimedia can be useful for addressing global challenges, telling stories, linking business content to programs, or for any combination of these or similar elements. Not so long ago, videotape was the widely adopted and accepted method for packaging media into a consistent, linear format. That model has obviously changed.

After videotape, users transported content (in file form) through a facility's ecosystem and then to the home via various streaming methods, including the Internet and the Web. Today, even those processes are fading because of the cloud.

Before the rapid, scalable adoption of cloud services, media content consisted of segments assembled from a conglomerate of audio, video, and other data. Before desktop content creation took over, each of these processes was essentially an individual step, with conversions, transfers, and usually manual adjustments needed just to move from one stage of the process to the next.

Items were typically created by selecting from a myriad of applications: video and audio clips, software sequencing components, complete packages and application-suite extensions, and hybrid sets of computing solutions aimed at both the general public and professionals. The assembled elements took many paths, which essentially ended in a storage solution (tape, disk, or otherwise), where the materials waited on a linear path to go "on the air" or onto another transport.

None of the activities of that time needed the "cloud." There was no cloud at the time, despite facility operators trying to develop the concepts in their own central equipment rooms or, later, perhaps in a colocation data center they owned or rented.

Even before that time, the exchange of content was fairly consistent across the industry. Aside from a few rogue formats, linear videotape and file-based servers became the market winners for content storage, streaming, and distribution. Gradually, segments of these endpoints (e.g., storage) moved to the cloud, and backup/protection took place in an archive that no longer lived "on premises."

Although at that time the main medium for broadcasting such content was a "big stick" TV transmitter, other distribution was still done by microwave, satellite, or terrestrial transport over copper or fiber-optic media. Eventually cable and satellite took hold. Until video on demand became possible, linear live playout and home recording from OTA or cable were the basis for storing and replaying library content in real time. Once mobile devices appeared, even VCRs, CDs, and DVDs lost their popularity.

Web services once again changed this model. Private services became repositories that stored and hosted similar activities; while not necessarily called "cloud," they were essentially the basis for a multitude of cloud-like storage and/or streaming services.

Content playout, from a broadcaster's perspective, has been a constant evolution of the end-to-end application of elements. Take, for example, how the video server has evolved, not only physically but also in performance, acceptance, and the many applications made possible by its non-linear foundation.

Program content benefited from sequential reading of files from a storage bin to the MCR (master control room) or continuity playout platform. The server took one file, joined it smoothly and completely to the next, and delivered the result to an encoder that prepared the content for the end user. These workflows evolved into a "channel" that was consistent, repeatable, and reliable. Yet it still did not "leverage" the cloud beyond using it to store finished content, or its elements, as a protective backup medium.

Most of these workflows were built from a set of individual, discrete components housed in the central equipment room or network center of a broadcast station. Hard physical connections made this possible. The software elements were then tightly coupled with devices to obtain precise sequencing according to a "log" (a playlist) generated by another entity.
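The log-driven playout described above can be sketched in a few lines. This is only an illustration, not any real automation product's API: the entry fields, clip IDs, and the encoder stage (modeled as a plain list) are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class LogEntry:
    start: str     # scheduled start timecode, e.g. "18:00:00:00" (hypothetical format)
    clip_id: str   # house ID of the media file on the video server (hypothetical)
    duration: int  # clip length in seconds

def run_playout(log: list[LogEntry]) -> list[str]:
    """Play each clip in log order, back to back, handing the joined
    output to an encoder stage (represented here as a list of strings)."""
    encoded = []
    for entry in log:
        # In a real MCR, the video server cues entry.clip_id, rolls it at
        # entry.start, and switches seamlessly to the next item in the log.
        encoded.append(f"encoded:{entry.clip_id}")
    return encoded

playlist = [
    LogEntry("18:00:00:00", "NEWS-OPEN-001", 30),
    LogEntry("18:00:30:00", "PKG-WX-0412", 90),
]
print(run_playout(playlist))  # ['encoded:NEWS-OPEN-001', 'encoded:PKG-WX-0412']
```

The point of the sketch is the tight coupling: one sequencer walks one log in strict order, exactly as the hardwired facility model did.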

The model was no different from what happens in a CDN (content delivery network), but broadcast delivery still hadn't reached the cloud as a whole. Unshared physical data centers owned by content-creation entities began to see web-based distribution capability emerge, but not for long; outsourced cloud services were not far behind.

A common thread in this model was linearity, but it lacked any form of interaction. In the past (before the home VCR or DVR), viewers watched a program without distraction: with no ability to pause, rewind, or replay, and no ability to interact with, comment on, change, or modify the transport. The cloud and the web would change that model again.

Legacy broadcast models have since “left the dock” and will likely never return to port. Enter the next generation of content delivery.

Cloud streaming services are now well established. The cloud offers many options, is flexible, and is used by many content providers beyond local or network entities such as broadcast stations or networks.

Cable and satellite service providers saw new needs for workflow support and seized the opportunity, but had still not tackled the next major transition: the cloud. After years of building gigantic equipment spaces to support broadcast and distribution, providers would end up cutting enormous equipment costs and physical-space (and infrastructure) requirements to the point that some now have little or no "creative" production gear left in their facilities other than for live studio-type productions, which are themselves migrating to the cloud as well.

Individual entities that produce interstitial content (advertising) used by multichannel video program distributors have seen the cloud writing on the wall for longer than the MVPDs themselves, but that, too, takes a serious change of direction.

Like the multimedia workflow chain described earlier, the cloud enables the harmonization of interstitials with program content at a rapid pace. As with the shift to remote capabilities accelerated by COVID-19, the ability to create and manage end-to-end requirements is now cast in the cloud, handled by a workforce that, in many cases, need not be in the physical facility to do so. The cloud is helping to achieve these capabilities at a rapid pace. The discrete stages of the workflow have become "services," which users and non-users alike can access in various ways (see Fig. 1).

Fig. 1 (Image credit: Karl Paulsen)
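One way to picture "discrete stages becoming services" is a pipeline of loosely coupled functions, each standing in for an independent cloud service. The stage names (ingest, transcode, package) and the asset fields are assumptions for illustration, not any vendor's actual API.

```python
from typing import Callable

# Each workflow stage is modeled as an independent "service":
# a function that takes asset metadata and returns updated metadata.
Stage = Callable[[dict], dict]

def ingest(asset: dict) -> dict:
    return {**asset, "status": "ingested"}

def transcode(asset: dict) -> dict:
    return {**asset, "renditions": ["1080p", "720p"], "status": "transcoded"}

def package(asset: dict) -> dict:
    return {**asset, "format": "HLS", "status": "packaged"}

def run_pipeline(asset: dict, stages: list[Stage]) -> dict:
    # Steps that were once discrete hardware boxes wired together in an
    # equipment room now chain as interchangeable service calls.
    for stage in stages:
        asset = stage(asset)
    return asset

result = run_pipeline({"id": "PKG-0412"}, [ingest, transcode, package])
print(result["status"])  # packaged
```

The design point is that any stage can be swapped, scaled, or relocated without rewiring the rest of the chain, which is the flexibility the article attributes to the cloud model.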

Bringing all the pieces of content together is possible thanks to the many paths, options, processing speeds, and extended capabilities the cloud brings. Service providers can now manage workflows that become automated and intelligent along the way.

Organizations now manage both workflow and workforce factors in real time. The change has exceeded expectations, and flexibility is now a daily reality.

As end users discovered, legacy workflows that required merging applications to produce a single deliverable could take longer to execute than the entire creative process of developing the multimedia content itself. Once users and owners became familiar with the cloud's workflows, capabilities, and skills, there was no turning back.

Karl Paulsen is CTO at Diversified and a frequent contributor to TV Tech in storage, IP and cloud technologies. Contact him at [email protected]
