Since the Taproot upgrade in 2021, Bitcoin development has lacked direction, with each proposal solving only a narrow problem and failing to meaningfully improve scalability or self-sovereign use of the network. The "Great Script Restoration" may be the way forward for Bitcoin development. This article is sourced from SHINOBI and was compiled and translated by Block Unicorn.
Article Contents:
The Great Script Restoration
OPCODES
Why are we doing this?
Introspection
Forward Data Carrying
Public Key Modification
How to ensure security?
The driving force for progress
With such a wide range of proposals on the table, why might Rusty Russell's "Great Script Restoration" be the way forward for Bitcoin development?
Note: Rusty Russell is an active and highly respected developer in the Bitcoin community. He has done outstanding work on the Linux kernel and has participated in many Bitcoin development projects.
The Great Script Restoration
When Bitcoin was first designed, it had a fully featured scripting language intended to cover and support any potential security use case users might come up with in the future. As Satoshi Nakamoto's own comments before his disappearance made clear, his whole point was to give users a language generic enough that they could construct their own types of transactions as they saw fit: to give users room to design and experiment with how to program their own money.
Before disappearing, Satoshi Nakamoto removed 15 opcodes from the language, disabling them entirely, and added a hard limit of 520 bytes on the size of any data element the script engine can operate on in its stack.
This was because he had actually messed up: the language as released left open a way for complex scripts to be used in denial-of-service (DoS) attacks against the network, by creating huge, expensive-to-validate transactions that could crash nodes.
These opcodes were not removed because Satoshi thought the functionality they enabled was dangerous or should never be built. They were removed (at least ostensibly) because, without resource restrictions, they posed a risk to the whole network through the worst-case validation costs they could impose.
Since then, every Bitcoin upgrade has ultimately been about optimizing what remained, correcting the other, less severe flaws Satoshi left behind, and expanding the functionality of the subset of Script we were left with.
In the years since the activation of Taproot (an important Bitcoin upgrade aimed at improving privacy, security, and scalability), development has actually been somewhat aimless.
We all know that Bitcoin is not scalable enough to offer self-sovereign services to any meaningful portion of the world's population, nor even to offer them in a minimally trusted or custodial way that can scale beyond very large custodians and service providers, the kind that never truly escape the constraints of government control.
That much is a technical reality, not a matter of debate. What is debated, and hotly so, is how to address the shortcoming. Ever since Taproot, people have been putting forward very narrow proposals aimed at enabling only specific use cases.
For example, ANYPREVOUT (APO) is a proposal that would allow a signature to be reused across different transactions as long as the input script and amount are the same; it is designed specifically to optimize the Lightning Network and multiparty versions of it.
CHECKTEMPLATEVERIFY (CTV) is a proposal that would require coins to be spent only by a transaction exactly matching a predefined one; it is designed to extend what can be done with chains of pre-signed transactions by making them completely trustless. OP_VAULT is designed specifically to set a "timeout period" for cold-storage solutions, so that users can "cancel" a withdrawal from cold storage by sending the funds to an even colder multisig setup, guarding against key compromise.
There are many other proposals, but I think you get the point. Each proposal of the past few years has been about either a modest scalability gain or a single small piece of functionality that someone considered desirable. And that is exactly why these discussions have gone nowhere: nobody is satisfied with anyone else's proposal, because it doesn't serve the use cases they want to see.
Apart from its own proposers, nobody regards any of these proposals as comprehensive enough to be a reasonable next step.
This is the logic behind the "Great Script Restoration". By pushing for, and analyzing, a full restoration of Script as Satoshi originally designed it, we can actually try to explore the entire space of functionality we need, rather than bickering over which small extension of functionality is good enough for now.
OPCODES
OP_CAT: Retrieves two pieces of data from the stack and concatenates them to form one piece of data.
OP_SUBSTR: Takes a length argument (in bytes), takes an element off the stack, extracts a segment of that length from it, and returns the extracted segment to the stack.
OP_LEFT and OP_RIGHT: Take a length argument, take an element off the stack, and remove that number of bytes from one side or the other.
OP_INVERT, OP_AND, OP_OR, OP_XOR, OP_LSHIFT, and OP_RSHIFT: Take data elements and perform the corresponding bitwise operation on them.
OP_2MUL, OP_2DIV, OP_MUL, OP_DIV, and OP_MOD: Mathematical operators for multiplication, division, and modulo operation (returning the remainder of division).
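To make the stack semantics above concrete, here is a minimal sketch in Python of how an interpreter might handle a few of these restored opcodes, including the existing 520-byte element limit. The function names, error handling, and structure are illustrative assumptions, not code from Bitcoin Core or any actual proposal.

```python
# Minimal sketch of a script stack with a few of the restored opcodes.
# Names and structure are illustrative only, not taken from Bitcoin Core.

MAX_ELEMENT_SIZE = 520  # the existing per-element stack limit, in bytes


class ScriptError(Exception):
    pass


def push(stack, element: bytes) -> None:
    if len(element) > MAX_ELEMENT_SIZE:
        raise ScriptError("element exceeds the 520-byte limit")
    stack.append(element)


def op_cat(stack) -> None:
    # Take two elements off the stack and concatenate them into one.
    b, a = stack.pop(), stack.pop()
    push(stack, a + b)


def op_substr(stack, begin: int, size: int) -> None:
    # Extract `size` bytes starting at `begin` from the top element.
    a = stack.pop()
    if begin + size > len(a):
        raise ScriptError("substring out of range")
    push(stack, a[begin:begin + size])


def op_and(stack) -> None:
    # Bitwise AND of two equal-length elements.
    b, a = stack.pop(), stack.pop()
    if len(a) != len(b):
        raise ScriptError("operands must be the same length")
    push(stack, bytes(x & y for x, y in zip(a, b)))


# Usage: concatenate two pushes, then mask the result with all-ones bytes.
stack = []
push(stack, b"hello ")
push(stack, b"world")
op_cat(stack)                    # stack: [b"hello world"]
push(stack, bytes([0xFF] * 11))
op_and(stack)                    # stack: [b"hello world"]
print(stack)
```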
In addition to the opcodes mentioned above that are proposed to be recovered, Rusty Russell also proposed three additional opcodes aimed at simplifying the combination of different opcodes:
OP_CTV (or TXHASH, or some equivalent opcode): Allows fine-grained enforcement that specific parts of a transaction exactly match predefined content.
CSFS: Allows a signature to be verified not only against the entire transaction, but also against arbitrary data placed on the stack. This makes it possible to require that specific pieces of data, or specific parts of a transaction, be signed before the transaction can be executed.
OP_TWEAKVERIFY: Performs verification of Schnorr-based operations on public keys, such as adding individual public keys to, or subtracting them from, an aggregated public key. This can be used to ensure that when a shared unspent transaction output (UTXO) is spent unilaterally by one participant, the funds of all other participants are sent to an aggregated public key they can spend cooperatively, without requiring the signature of the departing participant.
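As a rough illustration of the CTV idea, the sketch below builds a simplified "template hash" of a spending transaction and checks it against a hash committed to when the coin was created. This is not the actual BIP-119 serialization; the data structures, field choices, and names here are assumptions made purely for illustration.

```python
import hashlib
from dataclasses import dataclass
from typing import List


@dataclass
class TxOutput:
    amount_sats: int
    script_pubkey: bytes


@dataclass
class Tx:
    version: int
    locktime: int
    outputs: List[TxOutput]


def template_hash(tx: Tx) -> bytes:
    # Simplified stand-in for a CTV-style template hash: commit to the
    # version, locktime, and the exact set of outputs. The real BIP-119
    # hash commits to more fields and uses a specific serialization.
    h = hashlib.sha256()
    h.update(tx.version.to_bytes(4, "little"))
    h.update(tx.locktime.to_bytes(4, "little"))
    h.update(len(tx.outputs).to_bytes(4, "little"))
    for out in tx.outputs:
        h.update(out.amount_sats.to_bytes(8, "little"))
        h.update(len(out.script_pubkey).to_bytes(1, "little"))
        h.update(out.script_pubkey)
    return h.digest()


def op_ctv_check(committed_hash: bytes, spending_tx: Tx) -> bool:
    # The coin can only be spent by a transaction whose template hash
    # matches the hash committed to when the coin was created.
    return template_hash(spending_tx) == committed_hash


# Usage: commit to a payout at coin-creation time, verify it at spend time.
payout = Tx(version=2, locktime=0,
            outputs=[TxOutput(50_000, bytes.fromhex("0014") + b"\x11" * 20)])
commitment = template_hash(payout)
print(op_ctv_check(commitment, payout))  # True: matches the predefined spend
```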
Why are we doing this?
Second layer networks are essentially extensions of the Bitcoin base layer, constrained by what the base layer's functionality allows. The Lightning Network required three separate soft forks before it could be built: CHECKLOCKTIMEVERIFY (CLTV), CHECKSEQUENCEVERIFY (CSV), and Segregated Witness (SegWit).
Without a more flexible base layer, we cannot build more flexible second layer networks. The only shortcut around that is trusting third parties, plain and simple, and I hope we all aspire to remove trust in third parties from every aspect of Bitcoin scalability as far as practically possible.
To safely combine two or more users into a single unspent transaction output (UTXO) while letting any of them enforce their rights on the base layer without trust, we need to be able to do things that are currently impossible. Bitcoin Script is not flexible enough today. At the most basic level we need covenants: scripts that can actually enforce finer-grained details about the spending transaction, so that one user safely exiting with their own funds cannot put other users' funds at risk.
At a higher level, this is the functionality we need:
Introspection
We need to be able to actually check, on the stack, specific details of the spending transaction itself, such as "this much money goes to an output paying this public key." That lets me withdraw my funds using my specific Taproot branch while ensuring that I cannot take anyone else's: the script being executed ensures that everyone else's funds are sent back to addresses composed of their individual public keys, protecting them from any loss caused by another participant's exit.
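Here is a minimal sketch of that kind of check, modeled as an ordinary Python function over a simplified output structure rather than as real script opcodes; every type and function name is a hypothetical stand-in.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class TxOutput:
    amount_sats: int
    script_pubkey: bytes


def check_exit_is_safe(outputs: List[TxOutput], my_index: int,
                       others: List[TxOutput]) -> bool:
    """Allow my exit only if every other participant's funds are paid
    back to exactly the output (amount and key script) they are owed."""
    remaining = [o for i, o in enumerate(outputs) if i != my_index]
    return remaining == others


# Usage: I take output 0 for myself; outputs 1 and 2 must repay the others.
alice = TxOutput(30_000, b"\x51\x20" + b"\xaa" * 32)  # P2TR-style script: OP_1 + 32-byte key
bob = TxOutput(20_000, b"\x51\x20" + b"\xbb" * 32)
spend_outputs = [TxOutput(10_000, b"\x51\x20" + b"\xcc" * 32), alice, bob]
print(check_exit_is_safe(spend_outputs, my_index=0, others=[alice, bob]))  # True
```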
Forward Data Carrying
If we take the idea of a single UTXO that large numbers of people freely enter and exit a step further, we need a way to keep track of who is owed how much, typically with a Merkle tree and its root. That means when someone exits, we must ensure that the "record" of who is entitled to what is carried forward as part of the change UTXO holding everyone else's funds. This is essentially introspection for a specific purpose.
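Below is a minimal sketch, using only the standard library, of how such a balance record could be tracked as a Merkle root and updated when a participant exits. The leaf format and helper names are assumptions for illustration, not part of any specific proposal.

```python
import hashlib
from typing import Dict, List


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def leaf(pubkey: bytes, amount_sats: int) -> bytes:
    # Each leaf commits to one participant's key and balance.
    return h(pubkey + amount_sats.to_bytes(8, "big"))


def merkle_root(leaves: List[bytes]) -> bytes:
    if not leaves:
        return h(b"")
    level = leaves
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]


def balances_root(balances: Dict[bytes, int]) -> bytes:
    # Deterministic ordering so every participant computes the same root.
    return merkle_root([leaf(k, v) for k, v in sorted(balances.items())])


# Usage: three participants share one UTXO; Bob exits unilaterally.
balances = {b"alice-key": 40_000, b"bob-key": 25_000, b"carol-key": 35_000}
root_before = balances_root(balances)

del balances[b"bob-key"]               # Bob's exit output pays him directly
root_after = balances_root(balances)   # new record carried in the change UTXO

print(root_before.hex() != root_after.hex())  # True: the record was updated
```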
Public Key Modification
We need to be able to verify modifications to an aggregated public key on the stack. In a shared UTXO scheme, the goal is to facilitate cooperation and efficient use of funds through an aggregated key that includes every participant. When someone exits the shared UTXO unilaterally, we need to remove their individual public key from the aggregate. Unless every possible combination has been computed ahead of time, the only option is to verify that subtracting one public key from the aggregated key yields a valid key composed of the remaining individual keys.
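The sketch below shows the check being described, using toy secp256k1 point arithmetic: removing one participant's key from the aggregate should leave exactly the aggregate of the remaining keys. Real schemes would use MuSig2-style aggregation with per-key coefficients; this simplified version uses plain key addition purely to illustrate the kind of relationship an opcode like OP_TWEAKVERIFY could verify.

```python
# Toy secp256k1 point arithmetic (not constant-time, illustration only).
P = 2**256 - 2**32 - 977                   # field prime
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)


def point_add(p, q):
    if p is None:
        return q
    if q is None:
        return p
    if p[0] == q[0] and (p[1] + q[1]) % P == 0:
        return None                        # p + (-p) = point at infinity
    if p == q:
        lam = (3 * p[0] * p[0]) * pow(2 * p[1], -1, P) % P
    else:
        lam = (q[1] - p[1]) * pow(q[0] - p[0], -1, P) % P
    x = (lam * lam - p[0] - q[0]) % P
    return (x, (lam * (p[0] - x) - p[1]) % P)


def point_mul(k, p):
    result, addend = None, p
    while k:
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result


def negate(p):
    return (p[0], (-p[1]) % P)


# Three participants with individual keys; the aggregate is their sum.
keys = {name: point_mul(secret, G)
        for name, secret in [("alice", 1111), ("bob", 2222), ("carol", 3333)]}
aggregate = None
for k in keys.values():
    aggregate = point_add(aggregate, k)

# Bob exits: check that aggregate - bob_key equals the aggregate of the rest,
# which is the relationship a tweak-verification opcode would enforce on the stack.
remaining = point_add(keys["alice"], keys["carol"])
print(point_add(aggregate, negate(keys["bob"])) == remaining)  # True
```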
How to ensure security?
VAROPS
As I mentioned above, the reason all of these opcodes were disabled was the risk of DoS attacks (causing network collapse by flooding it with junk), which could crash the nodes that make up the network. There is a way to solve that problem: limit the amount of resources any of these opcodes can consume.
For signature verification, the most expensive part of Bitcoin Script, we already have a solution: the signature operations (sigops) limit. Each use of a signature-checking opcode consumes part of a budget, the number of signature operations allowed per block, which puts a hard ceiling on the validation cost transactions can impose on a block.
Taproot changed this: instead of a single global per-block limit, each transaction gets its own sigops budget, proportional to the size of the transaction. This works out to essentially the same global limit, but it makes it easier to reason about how many sigops are available to any individual transaction.
Taproot's shift to a per-transaction sigops budget points toward a generalized approach, which is exactly what Rusty Russell proposes with the varops limit. The idea is to assign a cost to every re-enabled opcode that accounts for the worst-case computational load it can create during validation. Each opcode would then draw from its own "sigops-like" budget limiting the resources it can consume during validation. That budget would likewise scale with the size of the transaction using these opcodes, keeping it easy to reason about while still adding up to an implicit global limit per block.
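A small sketch of that budgeting idea, with entirely made-up cost numbers and scaling factor: each opcode draws from a per-transaction budget proportional to the transaction's size, and validation fails once the budget is exhausted.

```python
# Hypothetical varops accounting; the cost table and scaling factor are
# made-up numbers purely to illustrate the budgeting mechanism.

VAROPS_PER_BYTE = 50                  # budget granted per byte of transaction
OPCODE_COST = {                       # assumed worst-case cost per opcode
    "OP_CAT": 10,
    "OP_MUL": 300,
    "OP_CHECKSIG": 50_000,
}


class BudgetExceeded(Exception):
    pass


class VaropsBudget:
    def __init__(self, tx_size_bytes: int):
        self.remaining = tx_size_bytes * VAROPS_PER_BYTE

    def charge(self, opcode: str) -> None:
        self.remaining -= OPCODE_COST[opcode]
        if self.remaining < 0:
            raise BudgetExceeded(f"{opcode} exhausted the varops budget")


# Usage: a 300-byte transaction gets 15,000 units of budget.
budget = VaropsBudget(tx_size_bytes=300)
for op in ["OP_CAT", "OP_MUL", "OP_CHECKSIG"]:
    try:
        budget.charge(op)
        print(op, "ok, remaining:", budget.remaining)
    except BudgetExceeded as e:
        print(e)
```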
This would address the DoS attacks, maliciously expensive junk transactions built to crash nodes, that led Satoshi to disable all of these opcodes in the first place.
The driving force for progress
I'm sure many of you are thinking, "this is too big a change." I understand that reaction, but the important thing to understand about the proposal is that we don't have to do all of it at once. The value of the proposal is not necessarily in restoring every one of these capabilities in full, but in prompting us to examine a large, foundational set of building blocks thoroughly and ask ourselves what we actually want in terms of functionality.
That would be a complete shift from the past three years of argument, in which we have bickered over small, narrow changes that enable only particular functionality. Think of it as a town square where everyone can gather to examine our future direction together. Maybe we end up restoring everything, or maybe we enable only some of it, depending on which pieces of functionality we can all agree should be enabled.
Regardless of the final outcome, this can be a transformative change that positively impacts the entire conversation about our future direction. We can actually map out and fully understand the situation instead of groping in the dark when arguing about the next steps.
This is by no means the only path we must take, but I believe it is the best opportunity for us to decide which path to take. It’s time to start collaborating again in a practical and effective manner.