Adobe Digital Experience Summit 2021 Sneaks: Catchy Content
Click here for the Turkish version of this article.
Every year at Adobe Summit Sneaks, Adobe reveals some of the innovations it is working on. One of the most interesting Sneaks of 2021 was Catchy Content, a feature built on Adobe Sensei, the artificial intelligence engine that powers many of Adobe's products.
Catchy Content is still in a beta program with some of Adobe's clients, and I am very curious to see whether it will achieve what is intended. If it does, it will make life easier for UX experts, content creators, and digital analytics and optimization experts. If you run a content-heavy site or app, you will get the most benefit.
According to Adobe, "Catchy content uses artificial intelligence to help digital marketers measure characteristics and attributes of content that engage audiences and then optimize and personalize content with that insight."
One of the hardest things for content creators is creating content that is relevant to their users. Content is one of the biggest tools in digital competition, and companies that can create engaging content have a better chance than their competitors. We all want our content to be engaging and effective for our audience, but how do we decide which content to choose? And how do we optimize it?
We have all been using A/B and multivariate testing, testing images, text, titles, and even different UX designs. This feature takes those tests to another level with artificial intelligence. Its main purpose is to make content testing faster, more effective, and highly automated.
When we do content testing, we have lots of tasks to complete: we need to tag our content to collect data, design our pages to be more trackable, set up test rules, and then analyze the data.
So this new product essentially takes over several manual jobs. It analyzes your online content, text, and images, and tells you how and why your customers engage with that content.
Adobe Sensei does pretty cool stuff to automate this process. For instance, you will not have to tag each image or give it a friendly name; Adobe Sensei will recognize the image elements automatically.
Adobe Sensei gathers data on the characteristics of content that customers interact with, then uses that information to find patterns and create personalized experiences based on that insight.
How does Adobe Sensei collect data?
Adobe tries to make data collection a little easier by collecting data automatically, without any extra tagging work, through its experience provision technology. Adobe Sensei automatically generates independent, detailed metadata to describe every element and identify characteristics in images, including objects such as sunsets, women, silhouettes, colors, animals, and plants.
It uses the latest computer vision technology to automatically identify image characteristics, and it creates a data layer called Visual Attributes. As you can see below, this technology can recognize many different characteristics of an image: ocean, summer, woman, sunset, and so on. That way you don't actually have to tag anything to collect data.
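Adobe has not published what the Visual Attributes layer actually looks like, so purely as an illustration, here is a minimal sketch of how such a data layer might be structured and queried. The field names, labels, and confidence scores are all invented; in a real system the labels would come from a computer-vision model rather than being hard-coded.

```python
# Hypothetical sketch of a "Visual Attributes" data layer. Adobe has not
# published the real schema; every name and value here is made up.
visual_attributes = {
    "image_id": "hero-banner-01",
    "attributes": [
        {"label": "ocean",  "confidence": 0.97},
        {"label": "summer", "confidence": 0.91},
        {"label": "woman",  "confidence": 0.88},
        {"label": "sunset", "confidence": 0.83},
    ],
}

def labels_above(data_layer, threshold):
    """Return the detected labels whose confidence meets the threshold."""
    return [a["label"] for a in data_layer["attributes"]
            if a["confidence"] >= threshold]

print(labels_above(visual_attributes, 0.9))  # ['ocean', 'summer']
```

The point of a structure like this is that downstream analytics can filter and group on machine-generated labels instead of manually entered tags.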
But what about the text and the copy? Adobe Sensei uses NLP (natural language processing) to read the text and identify the emotional sentiment, reading level, and tone of the copy. It creates a Language Attributes data layer, as you can see below.
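Adobe has not said which NLP techniques Sensei uses for this. As one concrete example of a "reading level" attribute, the classic Flesch-Kincaid grade formula can be computed in plain Python with a rough syllable-counting heuristic; this is only a sketch of the general idea, not Adobe's method.

```python
import re

def count_syllables(word):
    # Crude heuristic: count groups of vowels, drop one for a trailing silent 'e'.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

simple = "The cat sat on the mat. It was warm."
dense = "Considerable organizational complexity characterizes contemporary institutions."
print(fk_grade(simple), fk_grade(dense))
```

Short words and short sentences score a lower grade, so the second sentence scores much higher than the first.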
Now that we've collected this data, what's next?
I can now log in to Adobe Experience Cloud and see the results for each of my segments.
Here I can automatically see what drove customer engagement, and then, when I create new content, Adobe Sensei helps me choose it.
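Conceptually, "seeing what drove engagement" can be as simple as aggregating an engagement metric per detected attribute. The sketch below joins the (hypothetical) visual attributes from earlier with click data and computes a click-through rate per attribute; all records and field names are invented for illustration.

```python
from collections import defaultdict

# Hypothetical impression log: each record carries the attributes the
# AI detected in the shown image, plus whether the visitor clicked.
impressions = [
    {"attributes": ["ocean", "sunset"], "clicked": True},
    {"attributes": ["ocean", "woman"],  "clicked": True},
    {"attributes": ["city", "night"],   "clicked": False},
    {"attributes": ["ocean", "summer"], "clicked": False},
]

def ctr_by_attribute(records):
    """Click-through rate per detected visual attribute."""
    shown = defaultdict(int)
    clicks = defaultdict(int)
    for record in records:
        for attr in record["attributes"]:
            shown[attr] += 1
            clicks[attr] += record["clicked"]
    return {attr: clicks[attr] / shown[attr] for attr in shown}

rates = ctr_by_attribute(impressions)
print(rates["ocean"])  # 2 clicks out of 3 impressions with 'ocean'
```

A real system would use far richer models, but the grouping idea, engagement broken down by machine-detected attributes, is the same.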
As an example, say we want to choose a TITLE for our copy and decide on the best one without running any tests; we want to beat the competition without testing! We just type the TITLE we like, and Adobe Sensei shows us the likelihood of success for that title. For instance, here we see that our title is too long.
Then we shorten the title and try again, and this time we get a green light.
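Adobe's actual scoring model is not public, but the red/green length check shown in the demo could be approximated by a trivial rule. The 60-character budget below is my assumption, not Adobe's actual threshold.

```python
def title_feedback(title, max_chars=60):
    """Toy stand-in for an AI title check: flag titles over a length budget.
    The 60-character default is an assumed value, not Adobe's rule."""
    if len(title) > max_chars:
        return "red: title is too long ({} chars)".format(len(title))
    return "green: title length looks good"

print(title_feedback("Everything You Need to Know About Our Brand New Summer Collection of Swimwear"))
print(title_feedback("Short and catchy"))
```

The real feature presumably weighs many learned signals (sentiment, tone, historical engagement), not just length.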
We can also evaluate and choose visuals.
It is important to understand that the longer Adobe Sensei learns, the better its results will be. Machine learning is all about the machine collecting data and learning over time, like a baby. And over time the results can change, too: maybe a short title performs better now, but next month a long title might perform better.
We do not know all the features of this product yet. I think it would be a really powerful tool if we could also evaluate page design elements. Where should I put my checkout button? Is my product page too long? How should I present my product pages? Which elements are engaging? Which are cluttering? How much product information do I need?
If Adobe Sensei can do all of this, then we have a great product here. We will wait and see.