Having read the comments of a former business partner of mine, which I will address later in a dedicated blog post, I’ve decided to draft my thoughts on how IOHK approaches the design of Cardano and, by extension, all the cryptocurrencies it works on. As this space has become polarized by the politics of personal destruction, financial incentives to lie and a stunning lack of respect for critical analysis, I’ll try not to mention project names, just my opinion on what good design principles ought to look like. I freely admit I could be misguided or stuck in my ways.
First, we have to define the point of our labor. What goals are we trying to achieve, and who needs the solution? It’s stunning to me how this space is littered with solutions seeking problems, each connected to an actively trading token. Decentralized computation, storage and other services need an audience in order to be useful, and they need a real edge in order to survive beyond the hype.
For example, replicated computation that is Byzantine resistant is what Ethereum brought to the table. We freely admitted that market demand was unclear and that use cases would materialize after we launched (the field of dreams gamble). Whether those uses are economically viable or optimal was and still is an open question being explored and forcing enhancements.
What was undeniably valuable was the beginning of a conversation about outsourcing computation in a way where the server couldn’t be trusted: trusted neither to return a correct result nor to refrain from de-prioritizing a particular program due to some agenda. The net neutrality debate highlights this concern most directly.
There seems to be an audience that likes the problems Ethereum is trying to solve. This raises the question: what is the best way of solving them? What tools do we have in our bag, and who are the craftsmen who ought to wield them?
The point of the Cardano project has always been to build something from first principles using a functional programming approach, embracing formal methods, and checking our progress through peer review. We chose these three pillars because experience tells us that humans are good at self-deception, at forming personality cults and at making extremely subtle mistakes that eventually cascade (Heartbleed is a great example).
Functional programming is about getting code close to math. It says: let scientists draft a beautiful blueprint, then pull that blueprint directly into reality. There are some wonderful lectures from the Clojure community on the elegance of functional programming techniques (1)(2), but the broader point is that simplicity, modularity and conciseness matter more than performance.
Machines keep getting faster; legacy code is like a tattoo. You’re going to have to live with it, so make it pretty. We chose Haskell because it sits at the perfect intersection of practicality and theory. It gives us wonderful libraries like Cloud Haskell and a community that’s extremely smart and supportive of new techniques and ideas as they become necessary.
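To make the “code close to math” claim concrete, here’s a toy Haskell sketch of my own (not IOHK code; the function names are hypothetical): small pure functions behave like mathematical ones, so each piece can be tested in isolation and the behavior of the whole follows from the parts.

```haskell
-- A toy illustration, not production code: pure functions compose like
-- mathematical ones, so properties can be checked by equational reasoning.
import Data.List (foldl')

-- Total supply is just a fold; no hidden state, no side effects.
totalSupply :: [Integer] -> Integer
totalSupply = foldl' (+) 0

-- Applying a flat fee to every balance is a map; it composes with the fold.
applyFee :: Integer -> [Integer] -> [Integer]
applyFee fee = map (subtract fee)

main :: IO ()
main = print (totalSupply (applyFee 1 [10, 20, 30]))  -- prints 57
```

Because both pieces are pure, a fact like “applying a fee of f to n balances lowers the total supply by exactly f times n” can be verified by simple substitution, exactly the way one manipulates an equation on paper.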
Formal methods are an acknowledgement of the semantic gap. Humans and computers are fundamentally different animals, and until Ray Kurzweil delivers us to the Singularity, we will remain quite distinct. This divide extends down to the gap between the computer’s understanding of our intent and our own.
The DAO hack is a recent textbook example. The engineers who wrote the contract had a clear understanding of their intent, but the code diverged subtly from it, and as a consequence a hacker could wreak havoc. The point of formal methods is to close the gap between man and machine.
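As a miniature, hedged sketch of what closing that gap can look like (my own toy example, not the DAO contract and not IOHK’s actual specification work): the intent “a withdrawal never creates money and never leaves a negative balance” is written down once as a property, and QuickCheck then hammers the implementation with random inputs, hunting for any divergence between the two.

```haskell
-- A toy sketch: the intent is stated once, as a property, separate
-- from the code that is supposed to realize it.
import Test.QuickCheck

-- A deliberately tiny "ledger": withdraw an amount from a single balance,
-- returning the payout and the remaining balance, or Nothing on refusal.
withdraw :: Integer -> Integer -> Maybe (Integer, Integer)
withdraw amount balance
  | amount > 0 && amount <= balance = Just (amount, balance - amount)
  | otherwise                       = Nothing

-- The specification: a withdrawal never creates money and never
-- leaves a negative balance behind.
prop_withdrawIsSound :: Integer -> Integer -> Property
prop_withdrawIsSound amount balance =
  case withdraw amount balance of
    Nothing                -> property True
    Just (paid, remaining) ->
      paid + remaining === balance .&&. remaining >= 0

main :: IO ()
main = quickCheck prop_withdrawIsSound
```

Property-based testing is of course weaker than the machine-checked proofs a full formal specification allows, but the principle it illustrates is the one formal methods scale up: state the intent separately from the code and let a machine check the two against each other.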
Specification captures the intent of the scientists who spend countless hours of rigorous labor carefully writing mathematical proofs. These proofs are riddled with ideal functionalities and, from an implementation perspective, ambiguity. Basically, such papers are the inky equivalent of the spirit Billiken: the god of things as they ought to be rather than as they are.
A formal specification process is slow, uncomfortable, pedantic and requires exotic languages and skills. As a consequence, it’s also terribly expensive and not fun for most people. But such techniques save lives (think planes and trains), money (think Mars Rover) and dramatically enhance our understanding of the protocols we wish to deploy (lies melt).
We’ve written some blog posts on our techniques (1)(2)(3) and the philosophy we follow, and we’ve also done a whiteboard video. As most of our work is transparent, the specification of Ouroboros Praos is no different; the repo can be seen here. Like a fine painting requiring countless small brush strokes to gradually make the whole, we are paying that price of craftsmanship.
Finally, there is peer review. It is somehow conflated with elitism, misunderstood or, in some cases, discarded as an unnecessary formality to appease irrelevant ivory towers out of touch with the plight of the common man. I counter every single one of these attacks with a single question: can you understand the papers cryptographers write?
Humility will yield an answer of no for the majority of the populace. This statement isn’t self-serving arrogance. It’s respect for a language born from decades of careful study. Medicine has its research. Physics has its research.
Why is it so controversial to state that a paper like this is outside the ken of most people? It isn’t elitism; it’s an acknowledgement that the people who wrote it spent decades of their lives learning how to write that paper and to think the way they do.
Somehow in the cryptocurrency space, we have forgotten that our underlying technology is constructed upon foundations of cryptography, distributed systems, game theory and programming language theory, amongst other disciplines. The people who study these fields have invested tens of thousands of hours to become proficient, meaning they can read and understand the papers, and that’s before making any statement about meaningful and original contributions.
The question we ought to ask isn’t “can I understand the papers?” That’s like asking the public to understand the US federal budget. The question ought to be: what process should this work go through in order for it to be considered correct?
Peer review via IACR conferences is an excellent option. The conferences are managed by domain experts who don’t have a financial incentive to like or dislike any particular work. The review process is double-blind. The conferences hold high standards, and most submissions are rejected. And acceptance means you have to show up and discuss the work with your peers.
It isn’t a perfect process by any means, but it’s an objective standard of quality. It’s a benchmark to start a conversation with and to provide some assurance that the work meets basic standards: that someone who actually can read the paper has read the paper and thinks it’s sound.
Like all good science, one needs to continue evolving, continue pushing the boundaries and continue asking difficult and often uncomfortable questions. The ultimate point of peer review is to acknowledge you aren’t an island and you don’t want to go on that journey alone. It’s asking for help from fellow travelers who are just as capable as you, if not more so.
Outside observers, many of whom are directly investing their hard-earned money, should be actively asking about the processes that produce truth. We chose peer review because it’s the best tool in our box that we know how to use to check our claims. It has given us modern medicine. It has given us modern physics. There is no reason it can’t be used to help give us better money.
As a final point, both Algorand and Snow White carry structural properties similar to Ouroboros. The exact same criticisms that my former business partner naively applies to Ouroboros could be applied to them, meaning that, by that logic, a Turing Award winner and Cornell are both inferior as well.
There was also a lack of appreciation for the holistic nature of protocol design. Raw TPS isn’t an end in itself; it’s a prerequisite of large-scale use. Yet there are other considerations, such as network performance and the ability to store, with high availability, the eventual exabytes of data these systems will demand.
The Zen of protocol design is understanding that all things have to flow from a common source. That this source needs to be on bedrock, simple and secure. That this source needs to be perfectly balanced and grow naturally to meet the needs of its users. When a protocol achieves this state, like TCP/IP did, the results are magical. Others like PGP have failed despite their brilliance.
The point of how we have gone about designing Cardano is to seek this balance. Ouroboros was built very carefully and in the most general way we could understand. It can be tuned to operate like many conventional protocols or run in new modes.
It will eventually include modifications to dramatically scale performance when necessary. We’ve even broadened the discussion to include topics like RINA and Delta-Q because they are absolutely required for natural scaling.
Yet in all these things, we are proceeding with principles, craftsmanship and honesty. It’s a long and very hard journey, but it has been a fun one.
Thanks for reading
<{Axiom}>