
    Announcing our $21M Series A

    How We Got Here

    Variance did not begin as a financial crime company. It began with a simpler conviction: that many of the most important operational decisions inside large organizations are made through workflows that are both highly consequential and poorly served by existing software.

    Our first attempt at solving this problem was in content moderation.

    At the time, this seemed like a natural place to start. Content moderation sits at the intersection of policy, operations, and adversarial behavior. It requires organizations to make large numbers of decisions quickly, often under uncertainty, and across many different categories of risk. A single platform may need to determine whether an item is allowed to be sold, whether a user is engaged in fraud or abuse, whether a set of accounts is coordinated, or whether a specific pattern of behavior indicates manipulation. These are operational judgments made under pressure, usually with incomplete context and imperfect tools, rather than simple classification problems.

    Michael and I had seen versions of this problem before. We met in 2021 on Apple's Fraud Engineering team, before large language models had entered the mainstream. At the time, teams fighting abuse at scale relied on some combination of rules engines, narrow classifiers, and large analyst organizations. Those systems worked, in the sense that they existed and produced outputs. But they were also obviously insufficient. Fraud and abuse adapt faster than rigid systems do. Analysts become the layer that absorbs all the ambiguity. The result is a constant pattern of escalation, retraining, patching, and manual review.

    In many operational domains, detection is only the opening move, one that is often extremely noisy. The real work begins afterward: collecting evidence, interpreting context, deciding whether a case is part of a broader pattern, and producing a result that another human team can trust. This is why so many workflows remain stubbornly manual even after years of software investment. The bottleneck is not just identifying risk. It is carrying out the investigation around it.

    That conviction became the foundation of Variance.

    What We Learned

    As we built the company, our platform became increasingly effective at one particular class of task: conducting complex investigations in unstructured and adversarial environments. This meant working across posts, comments, listings, images, videos, metadata, behavioral patterns, and signals drawn from the open web.

    What we were building was no longer well described as workflow automation in the ordinary sense. It was better understood as infrastructure for investigative AI agents.

    Why Financial Crime

    Once we understood that, our move toward financial crime and compliance was a natural continuation of that work.

    The most important workflows inside financial institutions are often investigative by nature. They involve understanding counterparties, tracing ownership, reviewing documents, reconciling signals across internal and external systems, assessing adverse media, identifying inconsistencies, and building enough evidentiary support to justify action. These tasks are often lengthy, procedural, and high stakes. They are also exactly the kinds of workflows where traditional software has historically been weakest.

    We believe AI is now reaching the point where this process can be meaningfully encoded and scaled.

    We are building investigative AI agents for risk and compliance teams: systems that can gather context across many tools, reason over long-running workflows, interpret documents and web evidence, apply customer-specific procedures, and produce outputs that are usable in serious operational environments.

    The Stack We Believe Is Needed

    As our work shifted toward more complex investigations, it became increasingly clear that the underlying infrastructure had to change.

    An hour-long investigation is not simply a short decision repeated many times. It requires persistence, memory, tool orchestration, evidence handling, and the ability to maintain coherence across many steps. It requires moving across internal systems, external data sources, web intelligence, and document surfaces without losing the chain of reasoning. And it requires producing outputs that are traceable, explainable, and auditable, because these agents must operate under the constraints of regulated industries.

    This is why we do not think the future of risk and compliance will be defined by isolated models, point solutions, or fraud SaaS platforms. It will be defined by agentic systems capable of operating across the full investigative loop.

    That loop includes detection, but it also includes context gathering, document analysis, procedure execution, escalation, and automated, self-healing policy improvement. In many settings, the quality of the system will be determined less by whether it can flag something suspicious than by whether it can correctly assemble and explain the case around it.

    To do that well, the system needs to reason consistently over policies and standard operating procedures. It needs to incorporate both proprietary and external information, across established registries and open-source intelligence. It needs to be able to detect subtle forms of tampering or facial mismatches with over 98% precision. And it needs to produce a narrative or recommendation that a human investigator, auditor, or compliance leader can actually inspect. Without these building blocks, the system doesn't work, and it leaves open the very gaps that allow fraud to proliferate in the first place.
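    To make the shape of this loop concrete, here is a minimal sketch in Python. The stage names, the `CaseFile` structure, and the `run_investigation` function are purely illustrative assumptions, not Variance's actual system or API; the point is only that each stage appends to a single evidence trail, so the final recommendation stays traceable back through every step.

    ```python
    from dataclasses import dataclass, field

    # Hypothetical stage names for the investigative loop; a real agent
    # would call detection models, data registries, document parsers,
    # and escalation tooling at each stage.
    STAGES = [
        "detection",
        "context_gathering",
        "document_analysis",
        "procedure_execution",
        "escalation",
        "policy_improvement",
    ]

    @dataclass
    class CaseFile:
        """Accumulates findings so every conclusion remains auditable."""
        subject: str
        evidence: list = field(default_factory=list)

        def record(self, stage: str, finding: str) -> None:
            # Each entry pairs a stage with what was found there,
            # preserving the chain of reasoning end to end.
            self.evidence.append(f"{stage}: {finding}")

    def run_investigation(case: CaseFile) -> list:
        """Walk the full loop, recording one auditable entry per stage."""
        for stage in STAGES:
            case.record(stage, f"completed for {case.subject}")
        return case.evidence

    case = CaseFile(subject="acct-123")
    trail = run_investigation(case)
    print(len(trail))  # one entry per stage of the loop
    ```

    The design choice worth noting is that the evidence trail is the primary output, not a side effect: a flag with no assembled case behind it is exactly the failure mode described above.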

    Where We Are Headed

    Today, we are announcing our Series A, led by Ten Eleven Ventures, to accelerate that work.

    This funding will help us deepen our investment in the infrastructure required for investigative AI agents, expand our work with financial institutions, and continue learning from the analysts and operators whose workflows define this category. We do not see the future as replacing expertise with automation for its own sake. We see it as building systems that can extend the reach of high-quality investigative judgment.

    Over time, we believe this extends beyond workflow execution alone.

    The first step is building AI systems that can reason consistently over an institution's own policies, procedures, and internal data. The second step is for Variance to become the vehicle through which AI agents in regulated industries collaborate as a coordinated defense against fraudulent actors. In much the same way that the fraud industry already shares intelligence, there needs to be a common clearinghouse for the agentic reasoning layer to safeguard our institutions.

    We intend for Variance to help build that united front.

    To learn more, please visit variance.com. If you are interested in joining us, please see our careers page.


    Karine Mellata

    Co-Founder & CEO


    Michael Lin

    Co-Founder & CTO