
The Math Behind Coframe's Optimizers

Understand how Coframe's optimization methodology works under the hood, and how it stacks up against A/B testing and standard multi-armed bandits.
3 min read
September 17, 2024
Coframe Team

Coframe pushes the boundaries of optimization by going beyond traditional A/B testing.

A/B testing has long been hailed as a cornerstone of UX research, whether for website UI, copy, or SEO: it promises statistical significance as an assurance that one version is superior to another. But, as we’ve covered in previous posts, this approach comes with its own set of limitations: it takes time to produce meaningful results and risks losing traffic to bad variants. These drawbacks make multi-armed bandits (MABs) a viable alternative to A/B testing.

Coframe’s method for optimization goes beyond these limitations and is backed by some powerful probability and statistics. In this post, we’ll dive into the theory that forms the backbone of Coframe’s approach, particularly focusing on how we continually generate new content variants—or "arms"—to dynamically optimize your website.

Coframe utilizes Thompson sampling, a simple but effective MAB algorithm, to enhance your website's performance. Suppose we have \(K\) possible “arms” or variants. For Thompson sampling, we model the “reward” for each variant \(k \in \{1, \dots, K\}\) as a Bernoulli random variable, meaning each has a probability \(\theta_k\) of “success,” whether a success is defined as a conversion or some other indicator event. Each \(\theta_k\) is initially unknown, but we aim to estimate it over time, enabling us to determine which variants have the highest probability of success.

With a Bayesian approach, we use a beta distribution to model our prior and posterior for each \(\theta_k\), parameterized by two values \(a_k\) and \(b_k\). Every hour, Coframe updates its belief over \(\Theta = (\theta_1, \dots, \theta_K)\) as well as the corresponding traffic allocations to each variant. Specifically, at each step of the Thompson sampling algorithm (in our case, once per hour), we sample from our belief \(\hat{\theta}_k \sim \text{Beta}(a_k, b_k)\) for each arm. We then select the arm with the highest sampled value to show to the user:

\[\arg\max_k \hat{\theta}_k\]

Since the beta distribution has mean \( \frac{a_k}{a_k + b_k} \), the conjugate update is simple: we increase \( a_k \) by one when the user converts, and increase \( b_k \) by one when the user does not convert.
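To make this concrete, here is a minimal sketch of the Beta-Bernoulli Thompson sampling loop described above, written in Python with NumPy. It is an illustration under assumptions, not Coframe's production code: the arm count, the random seed, and the simulated conversion rates are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

K = 3            # number of variants (illustrative)
a = np.ones(K)   # a_k: starts at 1 under a uniform Beta(1, 1) prior
b = np.ones(K)   # b_k: starts at 1 under a uniform Beta(1, 1) prior

def choose_arm() -> int:
    """Sample theta_hat_k ~ Beta(a_k, b_k) for every arm and return
    the index of the largest sample (the argmax rule above)."""
    theta_hat = rng.beta(a, b)
    return int(np.argmax(theta_hat))

def record_outcome(k: int, converted: bool) -> None:
    """Conjugate update: a conversion increments a_k by one,
    a non-conversion increments b_k by one."""
    if converted:
        a[k] += 1
    else:
        b[k] += 1

# Simulate traffic against hidden true conversion rates
# (unknown in practice; shown here only to drive the demo).
true_theta = np.array([0.02, 0.05, 0.03])
for _ in range(10_000):
    k = choose_arm()
    record_outcome(k, rng.random() < true_theta[k])

print(a / (a + b))  # posterior means concentrate on the best arm
```

Because each arm's value is sampled from its posterior rather than taken as a point estimate, uncertain arms still win the argmax occasionally, which is how Thompson sampling balances exploration against exploitation without any extra machinery.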

However, Coframe takes this a step further by dynamically generating new arms based on performance data. If our estimate for a given arm’s \(\theta_k\) is consistently low, we don't just drop it—we replace it with a new, potentially better variant. This is where Coframe's AI-driven content generation comes into play.

Figure: Over time, new variants are generated, and performance increases as Coframe learns.

Continuous Optimization

Coframe continuously creates new content variants by learning from the performance of existing ones. When an underperforming variant is identified, our system analyzes the characteristics that led to its low performance and generates a new variant that aims to improve upon those aspects. This constant generation of new arms allows Coframe to explore a wider space of possibilities, increasing the chances of finding high-performing content that might have been overlooked in a traditional A/B test or even a standard MAB setup.

For example, suppose variant \(k\) consistently underperforms. Instead of persisting with a low-converting option or waiting for a lengthy A/B test to conclude, Coframe's algorithm replaces it with a new variant \(k'\) generated by our AI. This new variant is not a random guess; it's informed by the data collected from previous variants, taking into account what worked and what didn't. By doing so, Coframe effectively "learns" from past experiences to propose better-performing content.
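A hypothetical sketch of that replacement rule, continuing the NumPy example above: the `min_trials` and `floor` thresholds are invented for illustration, and `generate_variant` is a stand-in for the AI generator, whose internals this post doesn't describe.

```python
def generate_variant(informed_by: int) -> None:
    """Hypothetical stand-in for Coframe's AI content generator; it would
    produce new copy for slot `informed_by`, guided by performance data."""
    ...

def replace_weak_arms(a, b, min_trials=1_000, floor=0.01) -> None:
    """Swap out arms whose posterior mean is low after ample evidence."""
    for k in range(len(a)):
        n = a[k] + b[k] - 2.0            # observations, net of the Beta(1, 1) prior
        posterior_mean = a[k] / (a[k] + b[k])
        if n >= min_trials and posterior_mean < floor:
            generate_variant(informed_by=k)
            # The new arm k' restarts from the prior: old evidence
            # says nothing about the new content.
            a[k], b[k] = 1.0, 1.0
```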

This approach addresses one of the key limitations of both A/B testing and traditional MAB algorithms—the finite set of variants. By continuously introducing new arms, Coframe avoids stagnation and ensures that the optimization process doesn't get stuck in a local maximum. It allows the system to adapt to changing user behaviors and preferences over time, making the optimization process more robust and effective.

Balancing Exploration and Exploitation

Of course, this method involves subtle considerations. We need to ensure that new variants are given enough exposure to gather meaningful data before making judgments about their performance. This is achieved by implementing an incubation period for new arms, during which they are protected from being dropped prematurely. During this period, the algorithm balances exploration (testing new variants) and exploitation (favoring known high performers) to optimize overall results.
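Sketching that protection under assumed bookkeeping (a per-arm `impressions` counter and an illustrative `INCUBATION` threshold, neither of which the post specifies):

```python
K = 3                    # number of variants, as in the earlier sketch
INCUBATION = 1_000       # illustrative: minimum impressions before judging an arm
impressions = [0] * K    # assumed per-arm exposure counter

def eligible_for_replacement(k: int) -> bool:
    """Arms inside their incubation period are never dropped, however weak
    their early estimates look; they still compete for traffic through
    ordinary Thompson sampling in the meantime."""
    return impressions[k] >= INCUBATION
```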

We also carefully select appropriate priors for our Bayesian models. If we lack information about what works well, we might start with a uniform prior, \(\text{Beta}(1, 1)\): a safe choice that assumes nothing about which variants work but may converge more slowly. Alternatively, a prior informed by industry best practices can speed up convergence but risks overlooking innovative solutions. Coframe's system can adjust these priors based on the specific context, allowing for a more tailored optimization strategy.
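For intuition, the two choices look like this as beta parameters; the informed numbers below are purely illustrative:

```python
# Uniform prior: Beta(1, 1) is flat over [0, 1]. It assumes nothing,
# so the data alone drives the posterior and convergence can be slow.
uniform_prior = (1.0, 1.0)

# Informed prior (illustrative): if industry data suggests conversion
# rates near 3%, Beta(3, 97) has mean 3 / (3 + 97) = 0.03 and carries
# the weight of ~100 prior observations, speeding convergence at the
# risk of anchoring on past patterns.
informed_prior = (3.0, 97.0)
```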

Real-Time Adaptation and Optimization

By modifying the Thompson sampling MAB algorithm to include dynamic arm generation, Coframe continually adapts and optimizes your website in real time, avoiding many of the pitfalls of A/B testing. At the same time, it still meaningfully answers the same statistical questions, but with greater efficiency and adaptability.

Coframe's approach leverages advanced AI and statistical techniques to not only select the best-performing content variants but also to constantly adapt by generating new ones. This ensures that your website is always evolving to meet the needs and preferences of your users, leading to improved engagement and conversion rates.

Get started today

Transform your website with AI-driven optimization and personalization that boosts engagement and conversions.