Our mission
We learned a lot from how machine learning and artificial intelligence worked, and didn't work, on Web2. These technologies created some of the world's most powerful companies, but we'd also argue they created some of our thorniest problems.
The mission of Deep3 Labs is to reimagine how Web3 businesses create and deploy vital AI/ML technology to enhance blockchain dapps, products, and user experiences in a manner that:
Restores agency to online users (through decentralized governance)
Lowers the barriers to entry for builders (through turnkey implementation)
Recognizes the monetary value of online user data (through sustainable value sharing)
Our collective motivation has both professional and personal origins, because each of us is both a creator and a consumer of AI/ML technologies. The importance of this fact can't be overstated in terms of morale, engagement, and recruitment.
We view the growing prevalence of data- and technology-related legislation as a figurative “shot across the bow”. While well-intentioned, laws such as GDPR and CCPA, both of which primarily restore consumers’ data rights, would be extremely problematic if similar frameworks were applied to machine learning production processes, and we’re already starting to see moves in that direction. On the one hand, providing an individual the means to “opt out” of a machine learning process is technically far more complex than excluding them from data harvesting or warehousing. On the other, and more importantly, these laws tend toward prohibition rather than mutually beneficial collaboration between users and platform creators, which, given the nascency of the AI/ML space, could dramatically hinder future discovery. AI/ML technology will be central to solving some of the world’s most important problems, but only if we are free to continue exploring.
The above has served as a powerful “rallying cry” in Deep3 Labs’ early recruiting efforts.
Our personal motivations may in fact be far more compelling, as they generalize to any internet user regardless of their AI/ML knowledge. We live in a world where the largest corporations on Earth consume user data as their primary input to production and, for a variety of reasons, no mechanism exists to ensure a fair price is paid for this resource. Furthermore, the value-generating potential of user data only stands to accelerate as artificial intelligence gains commercial traction. And finally, growing disparities in wealth and influence have become undeniable in nearly every country and culture the world over, a phenomenon exacerbated by the growing use of AI/ML technology. Considered together, it’s reasonable to expect that existing negative externalities will only intensify: the social harms of screen-time-maximizing machine learning objectives, the racial biases observed in model outcomes, and a more general perpetuation of inequality in the markets these systems operate in.
These problems touch all of our personal lives in profound ways, thus cementing this as a powerful driving force behind our work.
If you're ever the unfortunate victim of being cornered by one of us at a cocktail party or a conference, you're sure to hear some ranting along the following lines.
If big tech ran on an ecosystem with the features and objectives of the Deep3 platform, Frances Haugen, the Facebook whistleblower, wouldn't be famous, and we suspect that'd be fine by her.
In short, Ms. Haugen revealed to the world, and to the United States Congress, that executives at Facebook and Instagram knew their algorithms were destroying young lives. But they chose to continue operations as-is and, in many ways, to double down on some of the most detrimental elements of their design.
The relentless drive for ad dollars has warped the way our digital platforms operate. Machine learning algorithms are fine-tuned not to enrich our online experiences, but to maximize clicks and engagement—even if that means promoting content that’s sensational or divisive. In this model, every interaction is distilled into data points, reducing your online life to metrics that serve profit margins, not your well-being.
Equally troubling is how personalization traps you in a bubble of your own making. By curating content that mirrors your past behavior, these algorithms limit your exposure to new ideas, deepening divisions and reinforcing biases. Over time, this self-reinforcing cycle turns our digital spaces into echo chambers, fragmenting public discourse and undermining the diversity of thought essential for a healthy society.