LAS VEGAS, April 24, 2017 /PRNewswire/ — IBM (NYSE: IBM) announced today at the National Association of Broadcasters Show plans to roll out a Watson-enabled cloud service designed to help companies extract new insights from video with a level of analysis not previously possible.
The service highlights IBM’s continued focus on combining artificial intelligence with the IBM Cloud to help media and entertainment companies make sense of unstructured data and make more informed decisions about the content they create, acquire and deliver to viewers.
This content enrichment service, expected to be available later this year, will use Watson’s cognitive capabilities to provide a deeper analysis of video and extract metadata such as keywords, concepts, visual imagery, tone and emotional context. The offering is unique in the market because it applies a range of artificial intelligence capabilities, including language, concept, emotion and visual analysis, to extract insights.
The service will use several Watson APIs, including Tone Analyzer, Personality Insights, Natural Language Understanding and Visual Recognition. In addition, it will use new IBM Research technology to analyze the data generated by Watson and segment videos into logical scenes based on semantic cues in the content. This capability identifies scenes through a deeper understanding of content and context than what’s available in current offerings in the market.
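The release does not describe a programming interface for the combined service, but the four named Watson APIs were each publicly available at the time. Purely as an illustrative sketch, the Python fragment below shows how they might be combined to enrich a single video segment, assuming the watson_developer_cloud SDK of that era; the credentials, version strings and metadata shape are placeholders, and the scene-segmentation research technology itself had no public API.

```python
# Illustrative sketch only: combines the publicly available Watson APIs
# named above (circa-2017 watson_developer_cloud SDK) to enrich one video
# segment. Credentials, version strings and paths are placeholders.
from watson_developer_cloud import (NaturalLanguageUnderstandingV1,
                                    ToneAnalyzerV3, VisualRecognitionV3)
from watson_developer_cloud.natural_language_understanding_v1 import (
    ConceptsOptions, EmotionOptions, Features, KeywordsOptions)

nlu = NaturalLanguageUnderstandingV1(version='2017-02-27',
                                     username='USERNAME', password='PASSWORD')
tone_analyzer = ToneAnalyzerV3(version='2016-05-19',
                               username='USERNAME', password='PASSWORD')
visual = VisualRecognitionV3('2016-05-20', api_key='API_KEY')

def enrich_segment(transcript, keyframe_path):
    """Return one segment's metadata record (a hypothetical shape)."""
    # Keywords, concepts and emotional context from the segment transcript.
    language = nlu.analyze(
        text=transcript,
        features=Features(keywords=KeywordsOptions(),
                          concepts=ConceptsOptions(),
                          emotion=EmotionOptions()))
    # Tone of the same transcript (e.g. joy, analytical, tentative).
    tones = tone_analyzer.tone(text=transcript)
    # Visual imagery tags from one representative keyframe.
    with open(keyframe_path, 'rb') as image:
        imagery = visual.classify(images_file=image)
    return {'language': language, 'tones': tones, 'imagery': imagery}
```

A production pipeline would first transcribe the audio (for example with Watson Speech to Text) and sample keyframes per scene; the sketch assumes both are already available.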
For example, the new offering can enable a sports network to more quickly identify and package specific basketball-related content that contains happy or exciting scenes based on language, sentiment and images, and work with advertisers to promote clips of those scenes to fans prior to the playoffs. Previously, someone would have had to review every video manually to identify relevant content and break it into scenes. Now each scene can be identified more quickly to attract viewers and advertisers for quick-turn campaigns. The new service can also be applied to repackaging specific scenes from years of TV shows for an advertiser that wants its brand associated with certain moments, like a family eating dinner or driving in a car.
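To make the sports example concrete, the downstream selection step might look like the short sketch below. The per-scene records and the score threshold are illustrative assumptions, not part of the announced service:

```python
# Hypothetical per-scene metadata, e.g. produced by a pipeline like the
# enrich_segment() sketch above. Records and scores are made up.
scenes = [
    {'id': 'game7_scene12', 'tags': ['basketball', 'crowd'],
     'emotion': {'joy': 0.84, 'sadness': 0.02}},
    {'id': 'game7_scene13', 'tags': ['interview'],
     'emotion': {'joy': 0.31, 'sadness': 0.11}},
]

def find_promotable_scenes(scenes, min_joy=0.7):
    """Select basketball scenes whose emotional analysis scores as joyful."""
    return [scene for scene in scenes
            if 'basketball' in scene['tags']
            and scene['emotion'].get('joy', 0.0) >= min_joy]

# Example: hand only the strongly joyful basketball scenes to an ad team.
print(find_promotable_scenes(scenes))  # -> the game7_scene12 record
```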
In addition, the new service could help media and entertainment companies better manage their content libraries. For example, a company might want to prioritize content that targets viewers who want more uplifting stories about world adventures. To address this need, the new service could help the company analyze its content library with a new level of detail to determine whether it is meeting this specific interest.
“We are seeing that the dramatic growth in multi-screen content and viewing options is creating a critical need for M&E companies to transform the way content is developed and delivered to address evolving audience behaviors,” said Steve Canepa, general manager for IBM Global Telecommunications, Media and Entertainment industry. “Today, we’re creating new cognitive solutions to help M&E companies uncover deeper insights, see content differently and enable more informed decisions.”
Today’s media and entertainment companies are delivering an increasing amount of content over the cloud to consumers through mobile phones, tablets, laptops and streaming media players, but extracting insights from that volume of content to further engage viewers is challenging. Whether the goal is attracting new viewers or new advertisers, this Watson-enabled service is designed to help media and entertainment companies identify such opportunities more quickly.
New Services Extend IBM R&D Capabilities for Media
This new service will build on previous projects by IBM to infuse Watson and other cognitive technologies into its cloud video solutions to uncover data and insights. Last year, IBM Research used experimental Watson APIs to create a “cognitive movie trailer.” The system learned from the trailers of previous thrillers what likely made them effective and identified relevant scenes in an unreleased movie to assemble an insight-driven trailer.
IBM also worked last year with the US Open to convert commentary to text with greater accuracy by having Watson learn tennis terminology and player names before the tournament. In addition, IBM has completed pilot projects that used cognitive technologies to segment video into scenes based on higher-level concepts and to provide consumer feedback on livestreamed events by analyzing social media sentiment.
Additionally, the Tribeca Film Festival and IBM recently announced “Tribeca Presents Storytellers with Watson: A Tribeca Film Festival competition for Innovation sponsored by IBM.” The 2017 Tribeca Film Festival, presented by AT&T, will take place from April 19-30 in New York City. Participants in the U.S. can submit ideas on how they would apply Watson to any storytelling medium, such as film and video, web content, gaming, augmented reality and virtual reality. For more details on the rules and how to enter, visit the competition website.
Contact:
Joe Guy Collier
IBM Media Relations
jgcollie@us.ibm.com
+1 248 990 4707