Is Satoshi Nakamoto Proven Right as Scalability Appears Uncrackable?

Ethereum’s ratio against bitcoin has fallen because it has failed to scale in a reasonable time, according to the co-founder of a crypto hedge fund, while bitcoin hasn’t been affected as much because of its narrative as a settlement layer, he says.

That narrative was created by Hal Finney on the 30th of December 2010, just weeks after Satoshi Nakamoto’s last public statement on the 12th of December 2010, and it was then consistently promoted during the blocksize debate.

Harold Thomas Finney II imagined banks would transact in paper-bitcoin, with actual bitcoin used only to settle between banks. The blockchain would thus become something like SWIFT, accessible only to banks.

His two “students,” Gregory Maxwell and Peter Todd, argued for this settlement vision and prevailed in the great scalability debate.

The argument here is that bitcoin doesn’t really need capacity, as its blockchain shouldn’t be used at all by ordinary people, who should instead be using Lightning, or maybe Coinbase “paper” coins, or perhaps Liquid “paper” bitcoin.

Bitcoin instead is a store of value, like gold, and just as gold is used by central banks to transact with each other, so too bitcoin, although here commercial “banks” can use it as well.

That’s based on the premise of the 21 million bitcoin limit, a limit that they argue needs on-chain fees to be very high to reward miners once the subsidy, as Todd would call it, comes to an end through the halvings.

In this view, if the blocksize limit is not kept, then the 21 million limit would have to go, as otherwise, they argue, there would be no incentive for miners to secure the network.

That argument is based on the premise that if there is sufficient capacity, some miners would rather get some fees instead of none, so there would be a race to the bottom, with fees going to zero if it was left to miners themselves to decide the cost of inclusion.
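
To make that mechanic concrete, here is a minimal toy sketch of the fee auction being described, where miners fill blocks highest-fee-first; the capacity and bid figures are hypothetical, purely to illustrate why abundant capacity is argued to push the marginal fee toward zero.

```python
# Toy model of the "race to the bottom" argument, illustrative only.
# Miners fill blocks highest-fee-first; the clearing fee is the lowest
# bid that still makes it into a block. All numbers are hypothetical.
import random

def clearing_fee(bids, capacity):
    """Lowest fee included when the highest bids are taken first."""
    included = sorted(bids, reverse=True)[:capacity]
    return included[-1]

random.seed(1)
bids = [round(random.uniform(0.01, 5.00), 2) for _ in range(1000)]

# Scarce capacity: only half the pending transactions fit, so users
# must outbid each other and the marginal fee stays meaningfully high.
print(clearing_fee(bids, capacity=500))

# Abundant capacity: everything fits, so the marginal fee collapses to
# the lowest bid, i.e. whatever minimum miners themselves would accept.
print(clearing_fee(bids, capacity=2000))
```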

The obvious problem here is that it assumes demand: that there would be enough demand for “paper” bitcoin to allow “banks” to pay high on-chain fees, and that such demand would come solely because of the 21 million limit.

The other side of the argument assumes demand too, which would also come from the 21 million limit, but in addition there would be demand due to the bitcoin blockchain being more competitive than fiat or settlement systems by not needing these “bank” intermediaries.

This use as a means of exchange would provide sufficient fees for miners, as obviously miners need to eat, so there can’t be a race to the bottom. Instead, a level would be reached where only the necessary amount is paid for security, with supply and demand both elastic.

Thus we get to the proxy argument that blockchains can’t scale, so this settlement layer is not an ideological view but a technical reality. Is it true?

When ethereum miners refused to scale their version of the blocksize limit, the gas limit, some wondered whether it was miners, rather than the Todds, who kept that 1MB limit.

Todd himself of course was “working” for F2Pool, which at the time was one of the biggest bitcoin mining pools. Blockstream has now revealed a semi-pivot of sorts into mining.

If you control supply, then you can extract rent, presuming there is demand for whatever you are supplying. Thus, from a miner’s perspective, limiting capacity is a no-brainer, presuming there remains demand.

Todd’s favorite saying is that the restaurant is so crowded, no one goes there anymore. This forgets, of course, why it is crowded, and does not ask whether it would still be crowded if the food served became rubbish, or if its price hiked 1,000x.

The evidence so far has been that if there is a change either in price or in service, as in high transaction fees or degraded use as a means of exchange, then the crowded restaurant does indeed become far less crowded.

That brief period in December 2017, when fees reached $70 to Maxwell’s delight, gave way to nearly two years of fees in the cents.

That is probably because actual use cases, the ones that do not necessarily care about the coin’s price and are therefore consistent, are considerably sensitive to transaction costs.

Suppose, for example, you have $1,000 worth in Venezuela and want to leave the country without it being seized. If you can access bitcoin, you can probably access any other crypto, so why would you pay $100 to use bitcoin instead of even dogecoin?

Liquidity and so on might be one reason, but there are plenty of crowded restaurants that went bust. Usually it doesn’t happen in a day. Put otherwise, that $1,000 is liquidity that has just gone somewhere else.

As bitcoin’s capacity is inelastic, unresponsive to demand, its ability to scale depends on the Lightning Network (LN) substituting for on-chain transactions. LN, however, has its own demand requirements.

So far, and for the foreseeable future, because it does not solve the double spending problem, demand for LN seems very unlikely, as ultimately no one is really desperate to use bitcoin for payments.

It would be nice, sovereignty and all that, but not if one has to pull a tooth to go through it.

Bitcoin needs to compete. It needs to answer: why not gold? Easier to move, you can say, but obviously not with backlogs. It needs to answer: why not fiat? Faster, cheaper, limited in supply, but the first two no longer apply, and the third goes back to why not gold, or diamonds, or art, or even stocks.

Better said, there has to be a reason for there to be demand, and if such reason is limited to grey or illegal use cases, then those too have their own cost basis, because obviously they too have other options after some price point.

So has this layered scaling idea failed, and if it has, can something like bitcoin actually scale?

Solving the double spending problem is an extremely difficult task and in fact no one has managed to achieve it except for Satoshi Nakamoto.

The design is set in stone, he said, insofar as there are only certain ways you can combine certain things to solve the double spending problem.

You need a set of nodes that share history with some sort of “identity” that creates a 51% rule to enforce the rules.
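
As a loose sketch of that idea, assuming proof-of-work difficulty as the “identity”: honest nodes converge on the history carrying the most cumulative work, which is what lets a 51% majority enforce the rules. The names and numbers below are hypothetical.

```python
# Toy illustration of the 51% rule: among competing histories, nodes
# adopt the chain with the most cumulative proof-of-work behind it.
from dataclasses import dataclass

@dataclass
class Block:
    parent: str   # hash of the previous block: the shared-history link
    work: float   # difficulty contributed by this block's proof-of-work

def total_work(chain: list[Block]) -> float:
    return sum(b.work for b in chain)

def select_chain(candidates: list[list[Block]]) -> list[Block]:
    """Honest nodes converge on the heaviest (most-work) valid chain."""
    return max(candidates, key=total_work)

honest   = [Block("a", 1.0), Block("b", 1.0), Block("c", 1.0)]
attacker = [Block("a", 1.0), Block("x", 1.0)]  # a fork with less work

assert select_chain([honest, attacker]) is honest
```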

The Lightning Network has no shared history and no “identity,” thus it relies on trusting that the other party doesn’t double spend you. Because of that, LN should have been a lot more centralized, with massive hubs you can trust, in Todd’s view. Instead, what LN has is what would be massive watchtowers, which you need to trust.
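
A conceptual sketch of why, with all names hypothetical: a channel is only the latest mutually signed balance, and without a shared history nothing but constant observation stops a party from broadcasting an old, more favorable state.

```python
# Conceptual sketch: a payment channel as a sequence of signed states.
# Signatures are omitted; the point is that superseded states remain
# broadcastable, so someone must watch for them.

class Channel:
    def __init__(self, balance_a: int, balance_b: int):
        self.seq = 0
        self.state = (balance_a, balance_b)
        self.revoked = []          # every superseded state is "old"

    def pay(self, amount: int):
        """A -> B payment: both parties sign a new state off-chain."""
        a, b = self.state
        self.revoked.append((self.seq, self.state))
        self.seq += 1
        self.state = (a - amount, b + amount)

def watchtower_check(channel, broadcast_seq, broadcast_state):
    """A watchtower's job: flag any broadcast of a superseded state."""
    if (broadcast_seq, broadcast_state) in channel.revoked:
        return "fraud: old state broadcast, penalize the cheater"
    return "ok"

ch = Channel(100, 0)
ch.pay(40)                                 # latest state is (60, 40)
print(watchtower_check(ch, 0, (100, 0)))   # the old, richer state for A
```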

Another potential second layer solution may be snarks and starks, where verification requirements, and thus data storage, can be reduced considerably, though they have not quite attracted sufficient scrutiny so far in regards to how exactly you keep accounts in this “blackhole” smart contract without a shared history to ensure there is no double spending.

Conceptually, if you can, then the question becomes: why have the first layer at all? If you can solve the double spending problem that way, then why do we need all these nodes and miners or stakers, etc.?

The expected answer is obviously that you need it for the if/then rules, but the base protocol can itself be those if/then rules.

Without some alternative solution to the double spending problem, there is no second layer scalability. How do you keep account while preventing double spending?

If we rule out second layers as far as general scalability is concerned, rather than niche use cases, then as far as we know you’re only left with sharding: chopping up the blockchain into mini-networks that somehow talk to each other and so manage to have a universally shared history.

Getting blockchains to talk is of course a very difficult problem, because you require shared history. Take Google, for example. You can read from Google, but you can’t write to it. A transaction, of course, requires writing. You can write on Facebook, but Facebook can just change whatever you wrote.

To be able to both write and read, with the record unchangeable, you need a blockchain whose code-produced “content” is hosted on the computers of many participants, with anyone free to leave or join as they please.

So if you host it on only shard A, then how does shard B know what you are hosting, or tell you what it is hosting? Well, you host both shards, so now we have a shard AB… or CDE, the whole alphabet. Meaning you need to run the nodes of all shards to know the rules are being followed.
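
Some back-of-the-envelope arithmetic, with hypothetical throughput figures, shows why that amounts to a blocksize increase: validating the rules of every shard costs roughly the same as validating one proportionally bigger chain.

```python
# Back-of-the-envelope sketch; the shard count and per-shard throughput
# are assumed figures, purely for illustration.
SHARDS = 64
TX_PER_SHARD_PER_SEC = 25       # hypothetical per-shard throughput
BYTES_PER_TX = 400              # Nakamoto's "nicely compact" estimate

total_tps = SHARDS * TX_PER_SHARD_PER_SEC
daily_bytes = total_tps * BYTES_PER_TX * 86_400

print(f"{total_tps} tx/s across all shards")
print(f"{daily_bytes / 1e9:.0f} GB/day to validate every shard")
# ~55 GB/day here: the same data a single chain with a 64x bigger
# blocksize would carry, just arriving through 64 smaller pipes.
```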

That makes it a blocksize increase in a roundabout way, but with perhaps some benefits, in that you can care only about your own shard if you want, although if what you want is people’s money, then you have to care about all of them.

That caring is done at the Beacon chain, the central “director” that orchestrates all the shards and so probably has its own bottlenecks, it being quite difficult to get out of this double spending problem if you do not have a shared history.

Here, however, there could be trust in people in general, that someone would validate shard AB and someone else shard CD, just as one can see it as a roundabout way of basically “cheatingly” increasing the blocksize by adding some bells and whistles.

So it is far too early to dismiss ethereum or to declare bitcoin has won in regards to scalability, because ultimately history may well show Nakamoto was right.

For it is doubtful anyone would say the internet is centralized, just as it is doubtful anyone save for specialists knows how it works.

Its decentralization is not maintained by everyone running an internet node, but by everyone having the ability to do so if they have the resources or the desire.

It is that ability to freely enter and exit the network that gives it its decentralized, peer-to-peer quality.

They can not collude, because everyone can join and because of the 51% rule. Ultimately, if honesty has fallen so low and they do collude, you can always have a revolution. Not all things are code. Man is always the maestro.

For 51% of the people will never be dishonest, because if they were, there would no longer be people for much longer.

That touch of objectivity presented by Nakamoto, where reality is seen above subjective speculation, may well be why in the end all scalability attempts will fail save for the one he himself presented:

“Long before the network gets anywhere near as large as that, it would be safe for users to use Simplified Payment Verification (section 8) to check for double spending, which only requires having the chain of block headers, or about 12KB per day. Only people trying to create new coins would need to run network nodes.

At first, most users would run network nodes, but as the network grows beyond a certain point, it would be left more and more to specialists with server farms of specialized hardware. A server farm would only need to have one node on the network and the rest of the LAN connects with that one node.

The bandwidth might not be as prohibitive as you think. A typical transaction would be about 400 bytes (ECC is nicely compact). Each transaction has to be broadcast twice, so lets say 1KB per transaction. Visa processed 37 billion transactions in FY2008, or an average of 100 million transactions per day. That many transactions would take 100GB of bandwidth, or the size of 12 DVD or 2 HD quality movies, or about $18 worth of bandwidth at current prices.

If the network were to get that big, it would take several years, and by then, sending 2 HD movies over the Internet would probably not seem like a big deal.”
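
The arithmetic in that quote holds up, assuming the standard 80-byte block header and one block roughly every ten minutes:

```python
# Checking Nakamoto's figures from the quote above.

# SPV cost: 80-byte headers, one block roughly every ten minutes.
headers_per_day = 24 * 60 // 10                  # 144 blocks per day
spv_bytes = headers_per_day * 80
print(f"SPV: {spv_bytes / 1024:.1f} KB/day")     # ~11 KB, "about 12KB"

# Full-node bandwidth at Visa scale: ~400-byte transactions,
# broadcast twice, so roughly 1KB each, per the quote.
tx_per_day = 100_000_000                         # Visa FY2008 average
print(f"Visa scale: {tx_per_day * 1_000 / 1e9:.0f} GB/day")  # 100 GB
```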

Two HD movies are probably less than one hour on Twitter. Yet the internet still manages to function in a very decentralized way, and quite peer to peer, with it uncontrollable globally by any one country.

Were bitcoin to reach that same scale, it is very difficult to see why what worked for the internet can not work for it too. Something history may well show, as the question of scalability continues to become more clear.

Editorial Copyrights Trustnodes.com
