A dream

Back in 2016, I had an idealistic view of what the world needed—a time-efficient way to discover and learn. Picture a search engine mixed with rich data and summarized wiki content. A startup was born and later died. For us, the concept was simply too large for a small team to execute on; too many possible directions led to a slow, inevitable death. But the magic was there. I was consumed with figuring out how to present information so frictionlessly that a person’s inner curiosity is rekindled. I have never seen a site that gives me this feeling, but I remain convinced it is possible. Over the years Google has made small inroads here, but the opportunity remains wide open.

With the rise of decentralized organizations, might the timing finally be right?

Determining the truth

Machines can now categorize, summarize, and filter structured data very effectively. Not perfectly, but the rate of improvement is encouraging. What automated systems lack is the ability to understand what is true, and what that truth implies. This was a major problem for our startup that went unsolved, even after we spent countless hours scheming about how to version-control big data. Fact-checking is hard. Determining bias is hard. Removing bias is even harder. I guarantee you that canonical data and entailment will be referenced at an increasing rate in AI going forward.

Why? Because generating fiction is easy. GPT-3 lies, convincingly, with reckless abandon. This makes sense as it was trained on Common Crawl, a truly massive collection of web data. If there are lies on the web, the AIs will follow suit.

We need a system that represents facts at the lowest practical level. A set of data that does not let language introduce ambiguity. A system that modern language models can learn from. How do we create this? Up until this point, I have just shrugged. It is a hard problem that will likely require a massive amount of human effort. But now we have Web3® to fix it. Enter Project #1.

Golden (Protocol, DAO)

I have followed Golden since it first launched in spring 2019, seemingly as a technical Wikipedia alternative. Since then they have steadily shipped new features, including some pretty clever AI-generated content additions. Their core concept is that truth is best determined at the lowest level: the data triple. Side note: a triple is just a subject → predicate → object relationship, such as ‘Google → Founded in → 1998’. Aggregating and validating data at this granular level greatly benefits composability. Building up a dataset of immutable facts lets you deploy those triples almost anywhere. Plus, since they are immutable, the dataset’s value will increase monotonically over time. Validate once, sell forever—a dreamy business model. This potential has attracted well-known investors like a16z, Founders Fund, and many more to the cap table.
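To make the triple idea concrete, here is a minimal sketch in TypeScript. The Triple shape and the isWellFormed check are illustrative assumptions on my part, not Golden’s actual schema or validation logic:

```typescript
// Illustrative only: a hypothetical Triple type and a trivial
// well-formedness check, not Golden's real schema or validator.

interface Triple {
  subject: string;   // the entity the fact is about, e.g. "Google"
  predicate: string; // the relationship, e.g. "Founded in"
  object: string;    // the value, e.g. "1998"
}

// Each triple can be accepted or rejected on its own, which is
// what makes aggregating facts at this granularity so composable.
function isWellFormed(t: Triple): boolean {
  return [t.subject, t.predicate, t.object].every(
    (part) => part.trim().length > 0
  );
}

const fact: Triple = { subject: "Google", predicate: "Founded in", object: "1998" };
console.log(isWellFormed(fact)); // true
```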

Golden recently announced the Golden DAO, stating that they plan to build a decentralized version of Golden. This new protocol would incentivize agents (individuals, companies, AIs) to operate as both submitters and validators in a shared effort to build a massively multiplayer fact graph. Most people still compare this to Wikipedia, but at this point I believe that is very short-sighted. A collated set of atomic facts has SO MANY uses beyond a simple encyclopedia. I am cautiously hopeful that they pull this off and spawn an industry in the process. The team is currently finalizing the white paper, and seeing as Golden.com already exists as a working product, I give the decentralized protocol a high probability of success.

This is the closest project to my overall dream, and I hope to help along the way where I can. In-context facts will be essential for AR experiences, for combating malicious misinformation, and more generally for creating smarter NLP models. Watch this one.

Golden DAO launch page | Golden Discord (warning: it just experienced rapid growth and will be getting stronger moderation)


Ok, so we are getting a better general knowledge system. What else needs major improvement? Well, finance comes immediately to mind. Investing is a strange hodgepodge of ancient tech mixed with blistering algorithms. None of it is straightforward, and incentives are aligned to keep it confusing. Professional investing can often be a precarious zero-sum game. You never want to give away too much of your strategy, lest someone take advantage of that knowledge. I believe there are ways to break out of this mold. Enter Projects #2 and #3.

Invest Like the Rest

Following the same thought process laid out in the intro, Invest Like the Rest is a contextual information resource I am currently building. Where the vast majority of websites and apps deliver raw data, ILTR seeks to describe what that data means. This leads to a more approachable system where users are encouraged to compare and read about multiple asset classes. Instead of storing rapidly changing facts directly, ILTR curates, summarizes, and displays links to external resources. Whenever possible this is done in natural language to ensure that nuance is not lost to assumptions.

This platform has been my main work effort for most of the last year and it has been truly exciting to develop. While there is much left to do, I am confident it will be worth the wait. My goal here is to build the best jumping-off point to learn about a new company/asset. Where research doesn’t feel academic. A single link you could send to a friend without caveats or explainers.

This will be a service aimed at entry-level and prosumer users, with a simple SaaS model and a token-gated discussion community. Clean and easy. This one will get its own full newsletter soon enough. ;)

Twitter: @ResearchFaster

Primary DAO

On the professional side of investing, advanced tools are even more chaotic. The Bloomberg Terminal hasn’t materially changed in over a decade, yet it still costs five figures per seat, per year. Seasoned investors and traders operate in tight-knit circles, wary of letting plans slip out to wider audiences. This secrecy leads to highly asymmetric information between parties—where everyone has a different interpretation of the facts and no shared understanding. For an industry where incorrect assumptions and incomplete information can cost billions, it amazes me that there is no open ‘source of truth’ for baseline investment information.

This presents a direct opportunity to build a ‘disruptive’ professional workgroup that produces detailed, open information: financial models, raw on-chain data, price targets, market event planning, expert commentary, machine learning models. Today these data sources are at best fragmented, and at worst fully recreated by every firm.

This is where a decentralized solution shines. A token-gated DAO lets contributors remain anonymous while the group still enforces strict criteria for data acceptance. Contribute your specialist knowledge, then leverage everyone else’s!
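To sketch how the gating itself might work: a simple balance check is enough to admit a wallet. The code below uses ethers.js (v6); the RPC endpoint, token address, and one-token threshold are placeholder assumptions, not Primary DAO’s actual design:

```typescript
// A rough sketch of a token-gated membership check with ethers.js v6.
// Every concrete value below (RPC URL, token address, threshold) is a placeholder.
import { ethers } from "ethers";

// Minimal ERC-20 fragment: gating only needs balanceOf.
const ERC20_ABI = ["function balanceOf(address owner) view returns (uint256)"];

async function isMember(wallet: string): Promise<boolean> {
  const provider = new ethers.JsonRpcProvider("https://rpc.example.org"); // placeholder endpoint
  const token = new ethers.Contract(
    "0x0000000000000000000000000000000000000000", // placeholder membership token
    ERC20_ABI,
    provider
  );
  const balance: bigint = await token.balanceOf(wallet);
  // Hold at least one token (18 decimals assumed) and you are in.
  // The wallet address is all the DAO ever sees, so contributors stay pseudonymous.
  return balance >= ethers.parseUnits("1", 18);
}
```

Because the check depends only on an address’s balance, the same gate can sit in front of a Discord, a data API, or a submission queue.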

Maintaining a high signal-to-noise ratio within an investment group is paramount, as the myriad failed social-investing startups have shown. We avoid this altogether. Primary is a data workgroup. No feed to check, no egos to stroke. A small group of us (public market investors) have been working on Primary DAO for the last few months to make this happen. Reply to this email to be the first to see our white paper, coming very soon!

Twitter: @PrimaryDAO

With crypto, DAOs, meme stocks, and choppy markets, the amount of quantifiable data is only going up. Everyone wants to know more, faster. I think this year we finally see major strides towards accomplishing that at scale. Should be fun!

-Tyler