AI and image recognition can turn passive watching into a true lean-in experience
Advertisers and brand marketers have been abuzz about shoppable video recently. But their focus has been almost exclusively on selling products. It’s time to start thinking bigger.
Think of it this way: Do you go to a search engine just to find something to buy? Do you read articles only to click on embedded links that send you to a store? Do you surf the web only to make a purchase? Probably not. So why are links in video limited to shopping? Video has to become truly interactive.
What does that entail? Video links can go beyond simply connecting people to a place to buy a product. They can create a richer experience by helping viewers engage more deeply with the topic itself. Imagine the unprecedented data, unparalleled engagement and deep insight you could get from going beyond shoppable video. The technology is here, but it has not yet been widely adopted.
People do a lot more than shop
People are interested in a broad range of topics—celebrities, places, experiences, finance, education and, yes, retail products. But consumers of digital video—in-stream ads, short-form content, branded content—can only click through to buy what’s being sold. They have never had the opportunity to engage with and dive deeper into the content itself.
By 2020, online video will make up more than 80 percent of all consumer Internet traffic, according to Cisco. I find it illogical that web and mobile platforms—both fully interactive—offer almost no way to engage with the most popular form of content: video. In fact, ads are virtually the only form of video you can actually click on.
Being able to truly engage with video will be a paradigm shift for everyone. And that advanced interactivity requires automation and scale.
The HTML5 technology used for most shoppable video overlays today is a custom, manual process that can take weeks to implement per video. That time-consuming process is why interactivity has been limited to shopping: it simply doesn’t scale.
But with advances in AI and machine learning, we can now quickly identify, describe and link any person, place or thing within a video in a matter of hours. This has the potential to make video far more immersive and experiential because it can now all be done at scale. Consumers will be able to explore, learn more and, yes, even shop within a video.
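To make the contrast concrete, here is a minimal sketch (all names, data and coordinates are hypothetical, not KERV’s actual format) of the kind of timed hotspot metadata an interactive-video player consumes. The manual HTML5 approach means hand-authoring records like these for every object in every scene; an image-recognition pipeline would emit the same structure automatically, which is what makes the approach scale:

```javascript
// Hypothetical hotspot cues: each links a region of the frame, during a
// time window, to a destination. Coordinates are normalized (0–1) so the
// overlay can be positioned at any player size.
const cues = [
  { start: 12.0, end: 18.5, x: 0.62, y: 0.40, w: 0.15, h: 0.25,
    label: "Leather jacket", url: "https://example.com/jacket" },
  { start: 30.0, end: 41.0, x: 0.10, y: 0.55, w: 0.20, h: 0.30,
    label: "Eiffel Tower", url: "https://example.com/paris-guide" },
];

// Return the hotspots that should be rendered at playback time t (seconds).
function activeCues(t) {
  return cues.filter((c) => t >= c.start && t <= c.end);
}

// In a real player you would poll video.currentTime (or listen for
// "timeupdate") and position clickable elements over the frame using
// the normalized x/y/w/h values.
console.log(activeCues(15).map((c) => c.label)); // → ["Leather jacket"]
```

The data, not the rendering, is the bottleneck: drawing the overlay is straightforward, but producing accurate cues for every person, place and thing by hand is what takes weeks.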
Making video a true lean-in experience
A company with thousands of videos in its library can now, in a matter of minutes per video, make every individual element in the video interactive. Every person, place or thing can be quickly identified, described and linked out to. And those links can be native to the video and non-intrusive to the viewer.
Video has always been thought of as a lean-back medium. Now consumers can lean in as deeply as they want and create their own experiences.
This has the potential to change what advertisers and brands can learn from video viewership. For instance, they’ll be able to see how their video content is performing on a scene-by-scene level and how users are engaging with it. They can then use that data to help drive future content creation and personalization. Video becomes a format that is truly accountable.
Interactive video is a technology whose time has come. As an industry, we need to evolve beyond repurposing linear TV content and ads for online use while holding them to the same success metrics as other digital channels. At KERV, our goal is to revolutionize digital video. And that’s going to come from thinking of interactivity as something that goes beyond shoppable.
Jon Flatt is the founder and CEO of KERV Interactive, which uses a patented AI image recognition technology to identify products in a video stream, opening up new horizons for customer engagement and attribution. Before KERV, he was CEO and founder of Red McCombs Media, which was acquired by LIN Media.