This is a detailed interview with Andre Cronje, running 1 hour and 20 minutes in total. It covers his summary of his career so far, along with many experiences, pieces of guidance, and personal views. The article is sourced from an interview on the Lightspeed YouTube channel, transcribed, translated, and compiled by PANews.
(Table of Contents)
1. Introduction
2. ICO Era: Andre’s Cryptocurrency Journey
3. Establishing Yearn Finance
4. Mistakes and Testing in Production
5. Fantom L1: Making Software as Efficient as Possible
6. The Road to ETH Scalability
7. Fantom’s Market Plan
8. SVM is the Best Virtual Machine
9. Airdrops, Regulations, and Bull Market Predictions
Andre Cronje is an OG across multiple areas of the industry, and this is his most recent in-depth interview, well worth reading and referencing. The article runs about 20,000 words and is divided into nine sections.
Andre Cronje believes that considering any blockchain project as an “Ethereum killer” is foolish. Even if you add up the total value locked (TVL) of all blockchain networks, including Bitcoin and Ethereum, they are still insignificant in the entire financial world. If you think a blockchain project can solve global financial problems, that is simply crazy.
Host 1:
Hello everyone, welcome to the Lightspeed podcast! Today we're fortunate to have Andre Cronje as our guest. He is the founder of Yearn Finance, Fantom, and the Keeper Network, and a significant contributor to many DeFi projects. Andre, welcome to the show!
Andre Cronje:
Thank you. Well, that introduction was a bit exaggerated. I’m just someone who enjoys coding.
Host 1:
When I was listening to the "Extraordinary Core" program back in 2020, they described you as a builder. I'm not a developer myself, just an integrator, but I think you're underselling yourself. You have an interesting story, and we can learn a lot from it. Maybe we can start in 2017, when you first entered this field, which happened to be the ICO era. It would be great to hear how you got in, and to explain, for those who weren't around, how crazy that era was.
Andre Cronje:
Yes, what I mean is, before I got involved in cryptocurrencies, I was a fairly typical crypto skeptic. I come from a traditional finance background and was the architect and CTO of a small financial company. We did high-throughput work, using Kafka and Scala at the time. That's my background: high-throughput financial solutions.
That era in 2017 was, in many ways, very similar to the present, because there was too much noise, and many teams claimed to have solved problems that traditional finance and traditional distributed systems had been struggling with for decades. But these 18-to-20-year-olds, without any work experience, would launch an ICO, raise $20 million or $40 million, and claim to have solved distributed systems or some other hard problem.
So initially, I entered this field just to test my skepticism, to make sure I wasn't missing anything. You know, a disruptive technology replacing the incumbent technology is not a new thing; it has happened before and will happen again. My concern was that the blockchain field lacked research evidence and strong proof, while many people claimed to have achieved something. So I entered the field and started reading whitepapers, and the whitepapers theoretically proved many things that seemed reasonable. But there is another problem, one that still exists today: there are plenty of proofs that sound good, where you'd say, "Makes sense, it can work." But when you put them into practice, there are hard limitations that keep them from working as you expected.
Even if the theory is correct, even if the concept is correct, it may not be feasible in practice. So after reading many whitepapers, I started looking at a lot of code and doing my own code reviews. I didn't do these reviews from a value-creation or due-diligence perspective. It was purely a record: I would read a whitepaper that claimed to solve problem X, then look at the code and consider whether it actually solved X. It was more a record for myself.
So when I wrote these up on Medium, I was just noting things down: well, this piece of code doesn't match what they said here; this repository has nothing to do with their claims. I shared them for whatever reason, and in the ICO era they became very popular, because there weren't many naysayers at the time, not many people saying, "This won't work, because your code proves you don't have what you claim." Then a problem arose, and it was an important one. The reason I eventually stopped doing code reviews was that people started using them as investment signals rather than as code-based research. I had shared them so others could learn, and go through the same learning journey I was trying to go through.
So I did my own public reviews, and eventually I started working with a company called Crypto Briefing, with Hana and John and those folks. They're still great, and I still keep in touch with them today. I started doing some reviews for them, but then it grew, and mixed in was a shift I didn't like, because what I liked was reviewing open-source code. If it's on GitHub, I can see the code, and everyone else can too, so people can verify whether what I said is true, or tell me where I'm wrong.
But as the influence grew, more and more teams wanted us to review their private code and publish the results, which made me uncomfortable, because that was purely an investment signal. Anyway, that's a parallel track we can dig into another time. But going through all of this, you know, 99.9% of it is garbage, but it's that small fraction of real value that has always troubled me and attracted me.
Looking back, my focus shifted from trying to understand what was happening to catching up with a booming industry. I think that took me about two years; around 2019, maybe a bit earlier, maybe the end of 2018, I had successfully caught up. Catching up in this field is hard, because new things appear every day, and you have to read everything the other 98% release just to figure out what actually happened, while the things that actually matter are very few, only 1% to 2%.
At that time, I started focusing on one thing: PoW (proof of work) was obviously a bottleneck. Looking at blockchain systems, you'd think, well, speed is obviously limited. Under Bitcoin's longest-chain rule at the time, transactions took 10 to 30 minutes. Before that, I had been fascinated by cross-border payments, cross-border settlement, and instant online payments.
I am South African, and South Africa is not even a member of SWIFT or IBAN. We are subject to foreign-exchange controls and restrictions on online spending. Our banking system is very limited, and it has always been a challenge. Seeing this freedom not controlled by any single entity really attracted me, and it matched my background.
So I started focusing on consensus research. The research I did during that period, and the code reviews I had started, led me to Fantom and the team there, and I started getting more involved. They had been very successful in the fundraising market at the time, raising about $40 million in ETH. Notably, they held onto that ETH even through the bear market; I remember they sold when the price of Ether was around $300. But they had made a lot of promises that sounded good but couldn't actually be delivered. They seemed to realize this, but they didn't choose to just wind down, spend the money, or otherwise burn through the funding. Eventually they asked whether I could build on the research I had started to publish, and since I had been considering starting my own chain, it was a fit, because I had no experience dealing with venture capitalists, raising funds, or anything like that. That's not my expertise; it's a skill I don't have.
You know, that’s also why I have never launched anything, whether it’s Yearn, Keeper, or anything I have launched, without VC investment or any of these things. Many people think it’s some kind of statement I’m making about professional ethics, but that’s not the case. I’m just not good at it, so I came up with a way to avoid it, and that’s it.
So in the end, they had the funds and a branded team, and I brought my research in. The first piece was consensus. The original consensus was an ABFT (asynchronous Byzantine fault tolerant) protocol they called Lachesis, but it was actually based on a paper from the early 1990s on concurrent common knowledge. It was really just an ABFT point-to-point communication system. We launched it around the end of 2019 or the beginning of 2020. The consensus itself was great; it was one of the first ABFT solutions, and it jumped well past the roughly 7 TPS that was the ceiling at the time. We didn't have a virtual machine yet; we were just chaining raw transactions, because it was a pure payment network. We could easily reach 30,000 to 50,000 TPS for pure payments, depending on validator connectivity and participation.
But we wanted to support a virtual machine, because smart contracts are powerful, and at that time we chose the EVM, which was our only viable choice. We had considered WASM, we had considered a RISC-V-based compiler, and so on. But even then, you know, to make a blockchain truly viable and adoptable, it's hard to get people building on the base chain any other way, because everyone says, well, we're just doing EVM, people just fork the EVM, so we'll stick to the EVM and use our consensus as the base ordering layer. Consensus is just a sorting system; that's all it is. It accepts transactions, sorts them, and the ordered transactions can then easily be handed to a virtual machine and executed into a state. We noticed that our TPS would drop to between 180 and 200 at most. That is purely a limitation of the EVM. Over the next three years we focused purely on improving the EVM and made some progress, but I have to say that if I could go back and change that decision, I definitely would.
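The "consensus is just a sorting system" idea above can be sketched in a few lines. This is an illustrative Python sketch, not Fantom's actual Lachesis implementation: the consensus step only agrees on transaction order, and execution is a separate step that folds the ordered transactions into a state.

```python
# Illustrative sketch: consensus as a pure ordering layer, with
# execution (the "virtual machine" role) as a separate step.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tx:
    sender: str
    recipient: str
    amount: int
    nonce: int

def order_transactions(mempool):
    """Deterministic ordering: every honest node applying this rule to
    the same agreed-upon set produces the same sequence."""
    return sorted(mempool, key=lambda tx: (tx.sender, tx.nonce))

def execute(ordered_txs, balances):
    """Execution step: apply the ordered transactions to the state."""
    for tx in ordered_txs:
        if balances.get(tx.sender, 0) >= tx.amount:
            balances[tx.sender] -= tx.amount
            balances[tx.recipient] = balances.get(tx.recipient, 0) + tx.amount
    return balances

state = execute(
    order_transactions([Tx("bob", "carol", 3, 0), Tx("alice", "bob", 5, 0)]),
    {"alice": 10, "bob": 0},
)
```

The point of the split is that the ordering layer never needs to understand transaction semantics, which is why the same consensus could first carry raw payments and later feed an EVM.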
I believe we chose the easiest route at the time, which was the EVM route. It allowed us to integrate with all of those third-party vendors more easily. It was a positive choice, because we didn't have the capacity to build our own wallet, stand up our own RPC node providers, or do our own instant deployments, and so on. But regardless, that's a topic we can dig into later.
Andre Cronje:
Following up on the earlier topic: they raised $40 million and kept all the funds in ETH, but when they eventually converted it back to USD, only about $2.5 million remained. I want to talk about this because it was the operating capital for our entire team. To manage that capital, I started exploring the lending protocols available at the time, such as Compound, bZx, Fulcrum, and so on. Apart from Compound, all of the other protocols have since disappeared. I would review these protocols every day, and I remember gas fees on Ethereum were only three to six cents, so I could execute operations daily. Every morning I would check these sites to see which offered the highest APY (annual percentage yield), then manually move funds between the protocols. Over time, I realized it was annoying to check these websites every day; they should have had on-chain smart contracts exposing the interest rates, so I could gather all the data and display it.
The first smart contract I wrote and deployed on Ethereum was just an APY aggregator. It fetched data from all these different places and displayed it. I did this because, at the time, I couldn't figure out the RPC infrastructure, such as Web3.js or anything related, to fetch data from nodes and execute operations. So for me, the easier path was to deploy it on-chain and read from there.
That's how I began my journey in Solidity development. With this smart contract, at least I could check every morning which rate was highest and then move funds. Then I realized: hey, I can actually write a smart contract to do this for me. That's the origin of Yearn. It later became far more sophisticated; its current state is like rocket science compared to the code I wrote. But that's the foundation: I wanted to automate the manual operations I did every day, until it could manage the funds I was managing. Eventually I opened it up for others to use the same system. I no longer needed to click buttons every morning to reallocate funds between protocols, because whenever someone interacts with it, by depositing or withdrawing, it automatically reallocates. That automated the whole process, and that's the origin of Yearn.
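The daily routine described above (poll each protocol's APY, then move the position to whichever pays the most) can be sketched as follows. This is an illustrative sketch, not Yearn's code; the protocol names and rates are invented for the example:

```python
# Hypothetical sketch of manual yield rebalancing: pick the venue
# with the highest APY and move the whole position there.

def best_protocol(rates):
    """Return the venue offering the highest APY."""
    return max(rates, key=rates.get)

def rebalance(position, rates):
    """Move the position to the best venue (gas costs, slippage, and
    withdrawal limits are ignored for simplicity)."""
    target = best_protocol(rates)
    if position["protocol"] != target:
        position = {"protocol": target, "amount": position["amount"]}
    return position

# Invented example rates for three venues.
rates = {"compound": 0.031, "fulcrum": 0.045, "dydx": 0.028}
position = rebalance({"protocol": "compound", "amount": 1_000_000}, rates)
```

Yearn's insight was to run this reallocation as a side effect of every deposit and withdrawal, so no one has to click a button each morning.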
However, as Yearn developed, the token launch wasn't something I had planned; it became the "fair launch" thing. I was just mocking these worthless governance tokens, so I said I would simply give the tokens away for free to anyone who provided liquidity. That seemed like the dumbest thing in my mind, but apparently I was wrong. It attracted a lot of attention, people started joining, and things became more complex, involving strategies, investments, infrastructure, and so on.
As the strategies evolved, we put a lot of effort into harvesting; like any protocol, we were claiming and selling reward tokens. That became a thing in itself. I used to run these scripts manually. So I thought, there must be a way to do this in public, where anyone can call it and has an incentive to call it. That's where jobs and keepers came in. Eventually it evolved into the Keeper Network, which worked well for Yearn, so we decided to open it up so that anyone could register a job and keepers would execute it. I don't know who these keepers are, but they execute the jobs. The first job I launched on-chain was fascinating, because we didn't advertise, didn't announce anything; we just activated the job, and bots started calling it. Seeing these things happen on-chain was chaotic. That's probably why it used to be called the dark forest, though now I guess it's just the MEV forest.
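The job/keeper pattern described above can be sketched like this. It is a hypothetical sketch, not the actual Keeper Network contract API: a protocol registers jobs, and anonymous keeper bots poll for workable jobs, execute them, and collect a reward. All names and the reward model are illustrative.

```python
# Hypothetical sketch of the keeper pattern: jobs expose "is there
# work?" and "do the work", and any keeper may execute for a reward.

class Job:
    def __init__(self, name, workable, work, reward):
        self.name = name
        self.workable = workable  # () -> bool: is there work to do?
        self.work = work          # () -> None: perform the work
        self.reward = reward      # paid to whichever keeper executes

class KeeperRegistry:
    def __init__(self):
        self.jobs = []
        self.payouts = {}

    def add_job(self, job):
        self.jobs.append(job)

    def poll_and_work(self, keeper):
        """The loop a keeper bot runs: execute every workable job."""
        earned = 0
        for job in self.jobs:
            if job.workable():
                job.work()
                earned += job.reward
        self.payouts[keeper] = self.payouts.get(keeper, 0) + earned
        return earned

# Example job: "harvest" becomes workable once pending yield passes a threshold.
pending = {"yield": 120}
registry = KeeperRegistry()
registry.add_job(Job(
    name="harvest",
    workable=lambda: pending["yield"] >= 100,
    work=lambda: pending.update({"yield": 0}),
    reward=5,
))
earned = registry.poll_and_work("keeper-bot-1")
```

The design choice worth noting is that the protocol never needs to know who the keepers are; the reward alone makes bots show up, which is exactly what happened with the first unannounced on-chain job.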
Andre Cronje:
Then there were many... mistakes, for lack of a better word. Before Yearn, some people in this field had noticed me, but I had no public reputation, fame, or attention, so I had developed a lot of bad development habits. For example, I often tested in production, meaning I would put experimental things into the actually running system, and, you know, I did that. Another example is being completely disconnected from intent and direction. I knew mixing testing and production carried high risk. It's like telling someone, "Hey, I'm testing in production; you shouldn't interact with it, because the chances of things going wrong are very high." I said this as a warning: if you interact here, you need to understand the risk is high.
Testing in production eventually became seen as a somewhat reckless, arbitrary invitation to put funds in, though that was never my intention. Anyway, I continued with my old development practices, and I kept building Eminence. At the time I was very dissatisfied with NFT culture. I think it has improved now, but back then people were using NFTs in very foolish ways: turning a painting into an NFT and selling it for $100k. I liked the concept of NFTs because I'm an avid gamer, and I thought games were a perfect use case for them. So I obtained the Eminence IP license, which came from another game company. We planned to build some silly games to showcase how NFTs could work. I think NFT IP will always face the issue that it can't just live in one game; the whole idea was to build a series of different games all using the same underlying layer.
But anyway, I deployed a bunch of tests, people interacted with them, and there were serious vulnerabilities that resulted in a loss of about $60 million. I took a big step back, because that's when I realized how dangerous this field actually is, and how quickly things can go wrong if you don't have the right safeguards. At the same time, because of Yearn, I also faced significant pressure from regulators who classified it as a financial instrument. I think that's fair, but I also wanted to keep some distance from it. Ultimately I came back, because one thing had bothered me for a long time: how to improve AMM curves. At the time there was only one standard stable-swap curve, from Curve Finance, founded by Michael Egorov, who is an absolutely genius developer, founder, and architect; I still think he's one of the smartest people I know in this field. But I got obsessed with it, and I wanted to create something as simple as Uniswap's x·y = k. Eventually I designed the x³y + y³x curve, and it worked really well. You can define the curve, and it's simple. At the same time, I added a bunch of things. Back then you had TWAP (time-weighted average price), and I added RWAP (reserve-weighted average price). Because of how these pools work, I don't even need to explain it; you just need to know that a TWAP is a fixed price point that completely ignores the amount of liquidity. It says, hey, you can sell a billion of this thing at this fixed price, and that was a big problem for me.
Note: TWAP (time-weighted average price) and RWAP (reserve-weighted average price) are two different ways of calculating asset prices, and price oracles like these underpin many DeFi protocols.
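The x³y + y³x curve mentioned above can be illustrated numerically. The sketch below is simplified (no fees, no decimal normalization, not Solidly's production code): it enforces the invariant x³y + y³x = k with Newton's method and shows how flat the curve is near the balanced point, which is what makes it suitable for like-priced assets.

```python
# Numerical sketch of the stable-swap invariant x^3*y + y^3*x = k.
# Near balance it is far flatter than x*y = k, so like-priced assets
# swap with very little slippage.

def invariant(x, y):
    return x**3 * y + y**3 * x

def solve_y(x_new, k_target, y_guess):
    """Newton's method: find y such that invariant(x_new, y) == k_target."""
    y = y_guess
    for _ in range(64):
        f = invariant(x_new, y) - k_target
        df = x_new**3 + 3 * y**2 * x_new  # d(invariant)/dy
        y_next = y - f / df
        if abs(y_next - y) < 1e-12:
            break
        y = y_next
    return y

def swap_out(x_reserve, y_reserve, dx):
    """Amount of y received for selling dx of x into the pool."""
    k0 = invariant(x_reserve, y_reserve)
    y_new = solve_y(x_reserve + dx, k0, y_reserve)
    return y_reserve - y_new

# Selling 10 into a balanced 1000/1000 pool returns ~9.995 (about
# 0.05% slippage), versus ~9.901 on a constant-product x*y = k pool.
out = swap_out(1000.0, 1000.0, 10.0)
```

Production implementations solve the same invariant in fixed-point integer arithmetic on-chain, but the shape of the curve is the whole idea.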
Many liquidation bots, liquidation engines, lending protocols, and even fully decentralized stablecoins need to understand how slippage is calculated. Take a liquidation bot as an example. Its job is simple: check whether I can repay someone's debt, take their collateral (say, a million worth of ETH), sell it into a Uniswap pool, and still make a profit. If I use a TWAP, my bot says: no problem, the profit is good, execute. But if the sale actually incurs significant slippage, I take a loss. So what I needed was a method that takes liquidity into account, so I can check realistically, and it specifically needed to be time-weighted so you know no large flash loan is sitting in the liquidity right now. I can sell, but that's also an opportunity for other bots to front-run me. So I needed to look back in time to check; it's all there, and I built that method.

It worked well on Fantom, although it was a huge mess there for a week or two. But beyond Fantom, I've always thought this is what decentralized protocol founders should do: if your protocol is completely immutable, with no updates and no changes, you need to step away, because you can't be the figurehead associated with that thing. I think Yearn and Keeper did this well; they are managed in a very decentralized way, and you can't really pinpoint who those two protocols belong to. And although it was definitely a big mess on Fantom, the design has become one of the main AMMs on many newer EVM exchanges, such as Velodrome, Aerodrome, and many others I don't know. So it achieved what I wanted, though not in the iterations I built. After that, I decided my development days were over, my smart-contract days were over, and I didn't have the necessary infrastructure, so I went back full-time to Fantom. Sorry, this has been a very long history; I've been holding you up for a while.
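The liquidation-bot reasoning above can be made concrete with a small sketch. This is illustrative only (numbers invented, constant-product math standing in for real pool quoting): a naive bot prices the seized collateral at the oracle/TWAP spot price, while a reserve-aware check quotes the sale against actual pool depth, including slippage.

```python
# Hypothetical liquidation profitability check: spot-price (naive)
# versus reserve-aware (depth-sensitive) quoting.

def quote_spot(amount, spot_price):
    """Naive: assumes the whole position sells at the oracle price."""
    return amount * spot_price

def quote_pool(amount, reserve_in, reserve_out):
    """Reserve-aware: output of an x*y = k pool, slippage included."""
    return reserve_out - (reserve_in * reserve_out) / (reserve_in + amount)

debt = 900_000.0        # USD owed by the underwater account (invented)
collateral_eth = 500.0  # seized collateral
spot = 2_000.0          # oracle/TWAP price, USD per ETH

naive_profit = quote_spot(collateral_eth, spot) - debt
# Against a pool holding 5,000 ETH / 10,000,000 USD, the real proceeds
# are far lower than the spot quote suggests.
real_profit = quote_pool(collateral_eth, 5_000.0, 10_000_000.0) - debt
```

Here the naive check reports a $100k profit while the depth-aware quote leaves only about $9k, which is exactly the gap a reserve-weighted price is meant to expose.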
Andre Cronje:
I think databases definitely have their uses, and I believe the SVM is currently the best standard; I don't think there's a better virtual machine right now. From a data-structure perspective, here's the situation. With Carmen, the new database, we went through the usual process: initially we used Badger, then did a lot of research across different databases and switched to Pebble, which gave us a nice boost in throughput, but not a huge change. All these existing databases share one problem: they are designed for generic data and can store any content in any way. And if you put a structured query language (SQL) on top, a lot happens in the background: they build their own indexes, their own B-trees, and so on, which adds a lot of overhead.
So even when you switch to a key-value store, you still have to build your own indexes, which wastes space; and to query efficiently, you have to build your own indexing mechanism. With Carmen, we tried to change that. We wanted a specialized database built specifically for blockchain data, one that understands the blockchain natively. That's where Carmen comes in: it is designed to be the most efficient way to store blockchain data.
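The index overhead described above can be shown in a few lines. This is an illustrative sketch: a plain Python dict stands in for a key-value store such as Badger or Pebble, and the key scheme is invented. On a generic KV store, any secondary query (here, "all transactions from a given sender") needs a hand-built index stored as extra keys, duplicating space.

```python
# Illustrative sketch: hand-built secondary index on a generic KV store.
store = {}  # stand-in for a key-value store like Badger or Pebble

def put_tx(tx_hash, sender, payload):
    """Store the transaction, plus one manual index entry per sender."""
    store[f"tx:{tx_hash}"] = payload
    # Extra key per (sender, tx): this duplication is the overhead
    # a blockchain-native layout tries to eliminate.
    store[f"idx:sender:{sender}:{tx_hash}"] = tx_hash

def txs_by_sender(sender):
    """Query via the hand-built index by key prefix. A real KV store
    would range-scan the prefix; the dict scan here is illustrative."""
    prefix = f"idx:sender:{sender}:"
    return [v for k, v in store.items() if k.startswith(prefix)]

put_tx("0xaa", "alice", b"transfer ...")
put_tx("0xbb", "alice", b"swap ...")
put_tx("0xcc", "bob", b"mint ...")
```

A database that knows its payload is blocks, transactions, and accounts can make these access patterns primary instead of bolting them on as extra keys, which is the motivation given for Carmen.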