From Knowledge Graphs to Multimodal Search for E-Commerce websites
A Talk by Andrea Volpini and Bo Wang
About this Talk
Description
The practice of using knowledge graphs and deep learning to develop a fast and scalable multimodal search engine for fashion E-Commerce.
With the recent advancements in deep-learning models, we wanted to introduce a new multimodal fashion search paradigm designed to help users interact with an online catalog using different modalities: speech, text, and images.
We traditionally build product knowledge graphs using schema.org to exploit metadata in modern SEO and digital marketing. By adding semantic markup we help search engines understand the products being sold and the audience that best fits the offering. Can we use this same approach to create a novel search experience for users landing on a fashion E-Commerce website? Is doing SEO at scale helpful to create a new search experience?
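To make the idea of semantic markup concrete, here is a minimal sketch of schema.org Product markup expressed as JSON-LD; the product values are purely illustrative, not taken from the talk:

```python
import json

# A minimal, illustrative schema.org Product entity (JSON-LD).
# All values below are made up for the example.
product = {
    "@context": "https://schema.org/",
    "@type": "Product",
    "name": "Linen Summer Dress",
    "image": "https://example.com/images/linen-dress.jpg",
    "description": "A lightweight linen dress for warm weather.",
    "offers": {
        "@type": "Offer",
        "price": "79.00",
        "priceCurrency": "EUR",
        "availability": "https://schema.org/InStock",
    },
}

# Serialized, this JSON-LD is what gets embedded in a <script
# type="application/ld+json"> tag on the product page.
jsonld = json.dumps(product, indent=2)
print(jsonld)
```

Markup like this is what both search engines and a product knowledge graph can consume as structured product data.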
In this Masterclass, we will learn how to use product knowledge graphs to build a multimodal search engine that can more naturally help users find the product they want.
After this Masterclass, participants will be more aware of structured linked data, neural search, and their impact on the E-Commerce domain. They will be equipped with concrete strategies and techniques to leverage existing data within their organization to improve content discovery and search functionality while doing SEO.
We will also cover some essential elements of neural search using Jina AI and a combination of OpenAI’s CLIP (for matching text queries against images) and DistilBERT (for semantic text search).
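The core retrieval step behind this setup can be sketched in a few lines: queries and catalog items live in a shared embedding space, and search is nearest-neighbor by cosine similarity. In the real pipeline the vectors would come from CLIP (images) and DistilBERT (text) orchestrated by Jina; the toy sketch below substitutes small random vectors so it runs standalone:

```python
import numpy as np

# Stand-in embeddings: in the real system these would be CLIP/DistilBERT
# vectors produced through Jina; here they are random 8-dimensional toys.
rng = np.random.default_rng(0)
catalog = ["red dress", "blue jeans", "leather boots"]
item_vecs = rng.normal(size=(3, 8))

def cosine_top_k(query_vec, vecs, k=1):
    """Return indices of the k vectors most similar to query_vec."""
    q = query_vec / np.linalg.norm(query_vec)
    v = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    sims = v @ q
    return np.argsort(-sims)[:k]

# A query embedded very close to the first catalog item retrieves it.
query_vec = item_vecs[0] + 0.01 * rng.normal(size=8)
best = cosine_top_k(query_vec, item_vecs, k=1)[0]
print(catalog[best])  # → "red dress"
```

Whether the query started as speech, text, or an image only changes which encoder produces `query_vec`; the ranking step stays the same.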
Key Topics
- The interplay between the product knowledge graph and neural search (WordLift):
  - The role of semantically rich data in multimodal search
  - The importance of data curation in SEO
- Challenges of using state-of-the-art AI models in a multimodal search system (Jina AI):
  - AI engineering: from off-the-shelf pre-trained models to end-to-end applications
  - Intent detection: facilitating faceted search navigation on mobile devices
Target Audience
- Web Publishers
- Marketers with a data-centric approach
- Information Architects
- AI deep learning specialists and data scientists
- Data Modelers, content creators, and data practitioners who are (expected to be) involved in developing and/or enriching knowledge graphs to improve SEO and to enrich the user experience.
Goals
Get hands-on experience using product knowledge graphs and neural search to develop a multimodal search system.
Session outline:
From product feeds to multimodal search:
- Building a product knowledge graph
- Extracting features from images and metadata (structured vs unstructured)
- Enriching product description using GPT-3
- Building the UI of a multimodal search engine
- Evaluating the quality of results
- Take-Aways
- Additional resources
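The evaluation step in the outline above can be made concrete with a simple ranking metric such as precision@k, the fraction of the top-k results that are relevant. The result list and relevance judgments below are illustrative, not from the talk:

```python
# precision@k: of the top-k results returned, how many are relevant?
def precision_at_k(ranked_ids, relevant_ids, k):
    top_k = ranked_ids[:k]
    hits = sum(1 for doc_id in top_k if doc_id in relevant_ids)
    return hits / k

ranked = ["p3", "p1", "p7", "p2", "p9"]   # the engine's ranking for one query
relevant = {"p1", "p2", "p4"}             # human-judged relevant products

print(precision_at_k(ranked, relevant, 3))  # 1 relevant in top 3 → 1/3
```

Averaging such a metric over a set of test queries gives a quick, repeatable way to compare text-only against multimodal retrieval.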
Format
- This class will be highly collaborative and interactive.
- Participants will form small teams, each of which will work on the reference website and run a "search query test" against the reference website's Product Knowledge Graph and Jina AI in Colab.
- Each team will use Jina AI to experiment with text and image queries.
- Finally, we will see how to improve the quality of the experience by reviewing the set of suggested queries. Product pages might include machine-generated content created using an autoregressive language model and product data stored in the graph.
Level
Intermediate - Advanced
Prerequisite Knowledge
Google Colab and WordPress/WooCommerce will be used.
You need an access pass to attend this session: a Diversity Access Pass or a Full Access Pass applies.