Introduction
Our platform is designed to consume business customers' standard-format product feed APIs as well as our proprietary product catalog database infrastructure (see Fig. 2).
The system is based on the Reactive Microservices Architecture [@RMA_2016; @rmanifesto], implementing its core principles: elasticity, scalability, fault tolerance, high availability, message-driven communication, and real-time processing. Real-time processing in particular is crucial for providing tailored, high-quality recommendations that take into account not only the latest changes in in-session user behavior, but also changes in system performance. Not only are scores and recommendations calculated at request time; user representations are also updated and exposed to models after each event flowing through the event stream.
The conceptual diagram of the architecture is presented in Fig. 2. The system is accessible through an extensive API exposed by the recommendations facade. When a new recommendation request arrives, before it is passed to the recommendation logic module, it is validated by the facade and enriched with business rules via recommendation campaigns. Rules may include the type of recommendation, the recommendation goal, or filtering expressions formulated in our dedicated control language, the items query language (IQL).
The IQL custom query language provides a flexible framework for building new recommendation scenarios based on item metadata and recommendation request context. Fig. 3 shows a few examples of recommendation filtering rules. IQL expressions are handled by the items filter, which filters candidate items according to the given constraints. To achieve high throughput and low latency, the items filter uses its own compressed binary representation of items, serving thousands of requests per second while filtering sets of over a million items. For IQL expressions with low selectivity, transferring the data structure containing candidate item IDs over the network could be expensive, so a binary protocol between the filter and the logic module has been implemented. The model that will handle the request is selected by the Optimizer. The Optimizer implements a form of Thompson Sampling for multi-armed bandit problems, which allows us not only to A/B test new ideas and algorithms easily, but also to optimize the results of running recommendation campaigns. Finally, one of the models receives the request to score the available candidates and to update entity embeddings.
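To illustrate the Optimizer's routing logic, the following is a minimal sketch of Beta-Bernoulli Thompson Sampling over candidate models. All names (`ThompsonSamplingOptimizer`, `select_model`, `update`) are illustrative, not the actual production API; binary click/no-click rewards are assumed for simplicity.

```python
import random

class ThompsonSamplingOptimizer:
    """Minimal Beta-Bernoulli Thompson Sampling over candidate models.

    Each arm keeps success/failure counts; at request time we sample
    from each arm's Beta posterior and route to the highest draw.
    """

    def __init__(self, model_ids):
        # Beta(1, 1) prior (uniform) for every candidate model.
        self.stats = {m: {"alpha": 1, "beta": 1} for m in model_ids}

    def select_model(self):
        # Draw a plausible success rate per model, pick the best draw.
        draws = {
            m: random.betavariate(s["alpha"], s["beta"])
            for m, s in self.stats.items()
        }
        return max(draws, key=draws.get)

    def update(self, model_id, success):
        # Reward = 1 if the recommendation was clicked / converted.
        s = self.stats[model_id]
        if success:
            s["alpha"] += 1
        else:
            s["beta"] += 1
```

Because sampling replaces a hard argmax, poorly performing models keep receiving a small share of traffic, which is what makes continuous A/B testing of new algorithms cheap.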
Although most of the system works in real time, an offline part is also present, but it is limited to model training. Algorithms are trained on two main data sources. The first is a data lake into which events of different types and origins are ingested through an event stream; event types include a screen view from a mobile app, a product added to the cart on a web page, or an offline transaction from a POS system. The second source is a master item metadata database where items are kept along with their attributes and rich data types such as images.
Our algorithms can be fed with various kinds of input data. The system analyzes the long- and short-term interaction history of users and has deep insight into item metadata. For this purpose we use a multi-step pipeline, starting with unsupervised learning. For images and texts, off-the-shelf unsupervised models may be used. For interaction data, we identify graphs of user-entity interactions (e.g. user-product, user-brand, user-store) and compute multiple graph or network embeddings.
We developed a custom method[^1] for massive-scale network embedding, handling networks with tens of billions of nodes and hundreds of billions of edges. The task of network embedding is to map a network or graph into a low-dimensional embedding space while preserving higher-order proximities between nodes. In our datasets, nodes represent interacting entities, e.g. users, device IDs, cookies, products, brands, title words, etc. Edges represent interactions, with a single type of interaction per input network, e.g. purchase, view, hover, or search.
Similar network embedding approaches include Node2Vec, DeepWalk, and RandNE [@zhang2018billion]. These approaches exhibit several undesirable properties, which our method addresses. Thanks to a careful algorithm design and a highly optimized implementation, our method offers:
- a three-orders-of-magnitude improvement in time complexity over Node2Vec and DeepWalk;
- deterministic output: embedding the same network twice results in the same embeddings;
- stable output with respect to small input perturbations: small changes in the dataset result in similar embeddings;
- inductive behavior and dynamic updating: embeddings for new nodes can be created on the fly;
- applicability to both networks and hyper-networks: support for multi-node edges.
The input data is constructed from raw interactions as an edge (hyperedge) list, for both simple networks and hypernetworks. In the case of hypernetworks, where the cardinality of an edge is larger than 2, our algorithm performs implicit clique expansion in memory, to avoid the excessive storage needs of an exploded input file. For very wide hyperedges, star expansion results in fewer edges and can be used instead, via an input file containing virtual interaction nodes.
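The two expansion strategies can be contrasted in a small sketch. The helper names and the `virtual_` label scheme below are hypothetical; the point is the quadratic-versus-linear edge count.

```python
from itertools import combinations

def clique_expand(hyperedge):
    """Clique expansion: connect every pair of nodes in a hyperedge.

    Produces C(n, 2) simple edges -- quadratic in the edge width,
    which is why wide hyperedges explode the edge list.
    """
    return list(combinations(hyperedge, 2))

def star_expand(hyperedge, edge_id):
    """Star expansion: link each node to a virtual interaction node.

    Produces n simple edges -- linear in the edge width, so it is
    the cheaper choice for very wide hyperedges.
    """
    virtual = f"virtual_{edge_id}"
    return [(node, virtual) for node in hyperedge]
```

For a 4-node hyperedge, clique expansion yields 6 edges while star expansion yields only 4; the gap widens quadratically with edge cardinality.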
Our custom method works as follows. First, we initialize the node vectors (the Q matrix) via multiple independent hashes of node labels, mapping them to a constant interval; this yields vectors sampled from a uniform (-1, 1) distribution while keeping the sampling deterministic. Empirically, we determined that a dimensionality of 1024 or 2048 is enough for most purposes. Then we calculate a Markov transition matrix (M) representing network connectivity. In the case of a hyper-network, we perform clique expansion, adding virtual edges. Final node embeddings are obtained by multiplying $M*Q$ iteratively, L2-normalizing the result at each intermediate step. The number of iterations depends on the distributional properties of the graph; between 3 and 5 iterations is a good default range.
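The steps above can be sketched in dense NumPy as follows. This is a toy illustration, not the optimized production implementation: function names are invented, MD5 stands in for whatever hash family is actually used, and the transition matrix is dense rather than COO or memory-mapped.

```python
import hashlib
import numpy as np

def init_node_vectors(node_labels, dim):
    """Deterministic init: hash each (label, dimension) pair and map
    the digest to the uniform (-1, 1) interval."""
    q = np.empty((len(node_labels), dim))
    for i, label in enumerate(node_labels):
        for d in range(dim):
            digest = hashlib.md5(f"{label}:{d}".encode()).digest()
            # map 8 hash bytes to a float in [-1, 1)
            q[i, d] = int.from_bytes(digest[:8], "big") / 2**63 - 1.0
    return q

def embed(edges, node_labels, dim=8, iterations=4):
    """Iterate Q <- L2_normalize(M @ Q), where M is the random-walk
    (Markov) transition matrix of the undirected interaction graph."""
    n = len(node_labels)
    idx = {label: i for i, label in enumerate(node_labels)}
    m = np.zeros((n, n))
    for a, b in edges:
        m[idx[a], idx[b]] += 1.0
        m[idx[b], idx[a]] += 1.0
    m /= m.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
    q = init_node_vectors(node_labels, dim)
    for _ in range(iterations):
        q = m @ q
        q /= np.linalg.norm(q, axis=1, keepdims=True)  # L2-normalize rows
    return q
```

Because initialization is a pure function of node labels, rerunning the pipeline on the same graph reproduces the embeddings exactly, which is the determinism property claimed above.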
The algorithm is optimized for extremely large datasets:
- The Markov transition matrix M is stored in COO (coordinate) format, in RAM or in memory-mapped files on disk;
- all operations are parallelized with respect to the embedding dimensions, because the dimensions of the vectors in Q are independent of each other;
- the $M*Q$ multiplication is performed with dimension-level concurrency as well;
- clique expansion for hyper-graphs is performed virtually, only filling the entries of the $M$ matrix;
- star expansion is performed explicitly, with a transient column for the virtual nodes in the input file.
The algorithm's results are the entity embeddings contained in the $Q$ matrix. Inductive embeddings (for new nodes) can be created from raw network data using the formula $M'*Q$, where $M'$ represents the links between existing and new nodes and $Q$ holds the embeddings of the existing nodes.
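The inductive formula $M'*Q$ amounts to one more propagation step for the unseen nodes; a minimal sketch (assuming row-stochastic $M'$ and unit-normalized stored embeddings, with an invented function name):

```python
import numpy as np

def inductive_embeddings(m_new, q_existing):
    """Embed unseen nodes from their links to existing nodes.

    m_new:      (n_new, n_existing) row-stochastic matrix of
                interaction weights between new and existing nodes.
    q_existing: (n_existing, dim) embeddings of existing nodes.
    """
    q_new = m_new @ q_existing
    # keep new embeddings on the unit sphere, matching training
    return q_new / np.linalg.norm(q_new, axis=1, keepdims=True)
```

A new node's embedding is thus the normalized average of its neighbors' embeddings, so it can be computed on the fly without retraining.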
It is worth noting that the algorithm performs well not only on interaction networks, but also on short text data, especially product metadata. In this setting, we treat the words in a product title as a hyperedge. This corresponds to star expansion, where product identifiers are the virtual nodes linking title words.
However, our general pipeline can easily use embeddings calculated with the latest language modeling techniques, e.g. ELMo or BERT embeddings, especially for longer texts.
Another data source is visual data (shape, color, style, etc.), i.e. images. To prepare the visual data feed for our algorithm, we use state-of-the-art deep neural networks [@kucer_detect-then-retrieve_2019; @dodds_learning_2018] customized for our use case [@wieczorek2020strong].
Indeed, any unsupervised learning method that outputs dense embeddings can serve as input to our general pipeline.
Given unsupervised dense representations coming from multiple, possibly different algorithms, representing products or other entities the customer interacts with, we need to aggregate them into fixed-size behavioral profiles for every user.
As most representation learning methods make no guarantees about embedding compositionality (beyond the simple assumptions made by Bag-of-Words models), we developed a custom compositionality mechanism that allows meaningful summation of multiple items.
Our algorithm performs multiple feature-space partitionings via vector quantization. It combines ideas derived from Locality-Sensitive Hashing and the Count-Min Sketch algorithm with geometric intuitions. The sparse representations resulting from this approach exhibit additive compositionality, due to Count-Sketch properties: for a set of items, the sketch of the set is equal to the sum of the individual item sketches.
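A minimal sketch of this idea, using random-hyperplane (LSH-style) quantizers to fill a Count-Min-style depth-by-width array; the function names, the sign-bucket scheme, and the parameter choices are illustrative assumptions, not the production design:

```python
import numpy as np

def make_partitioners(dim, depth, width, seed=0):
    """One random-hyperplane quantizer per sketch row (LSH-style).

    Each quantizer maps a dense vector to one of `width` buckets
    via the signs of log2(width) hyperplane projections.
    """
    rng = np.random.default_rng(seed)
    k = int(np.log2(width))  # hyperplanes per row: 2**k = width buckets
    return [rng.standard_normal((k, dim)) for _ in range(depth)]

def sketch_item(vec, partitioners, width):
    """Count-Min-style sparse sketch of a single item embedding."""
    s = np.zeros((len(partitioners), width))
    for row, planes in enumerate(partitioners):
        bits = (planes @ vec > 0).astype(int)
        bucket = int("".join(map(str, bits)), 2)
        s[row, bucket] += 1.0
    return s

def sketch_set(vectors, partitioners, width):
    """Additive compositionality: the sketch of a set of items is
    the sum of the items' individual sketches."""
    return sum(sketch_item(v, partitioners, width) for v in vectors)
```

Nearby embeddings tend to fall into the same buckets under each partitioning, so the sparse codes remain locality-sensitive while summing cleanly.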
All modalities and views of the data (all embedding vectors) are processed in this way, and their sketches are concatenated.
One of the central advantages of the algorithm is its ability to squash representations of multiple objects into a much smaller joint representation, which we call a sketch. This allows for easy and fast retrieval of the participating objects, analogously to a Count-Min Sketch. For example, the purchase history of a user can be represented as a single sketch, the website browsing history as another, and the two sketches concatenated.
Subsequently, the sketches containing squashed user behavioral profiles serve as input to relatively shallow (1-5 layers) feed-forward neural networks. The output of the neural network is also structured as a sketch, with the same layout.
Training is done with a cross-entropy objective in a depth-independent way (output sketches are normalized to sum to 1 across the width dimension). During inference, we perform a sketch readout operation, as in a classic Count-Min Sketch, but replacing the minimum operation with a geometric mean, effectively averaging log-probabilities.
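The readout step can be stated compactly. The snippet below is a sketch under the stated assumptions: each row of the network's output sums to 1 across the width dimension, and `item_buckets` holds the item's precomputed bucket index per row; the function name is hypothetical.

```python
import numpy as np

def readout(output_sketch, item_buckets):
    """Sketch readout with a geometric mean instead of the classic
    Count-Min minimum, i.e. an average of log-probabilities.

    output_sketch: (depth, width) network output, each row summing
                   to 1 across the width dimension.
    item_buckets:  the item's bucket index in each of the depth rows.
    """
    rows = np.arange(len(item_buckets))
    probs = output_sketch[rows, item_buckets]
    return np.exp(np.mean(np.log(probs)))
```

Compared to the minimum used in a classic Count-Min Sketch, the geometric mean lets every row's estimate contribute, which smooths out single-row hash collisions.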