Is Vitalik Buterin Proving Satoshi Nakamoto Right?

Vitalik Buterin, ethereum’s co-founder, has been on a roll recently, publishing a number of proposals on ethereum 2.0.

How to simplify it, how to smooth sharding “connections,” how to transition from eth 1.0, and how the system will look overall.

The simplified specification is now out, with the design looking quite different: from shards as universes that somehow had to be connected together, to shards as neighborhoods that share the same road or bus service.

“Every shard A maintains in its state, for every other shard B, two values: (i) the nonce of the next receipt that will be sent from A to B, and (ii) the nonce of the next receipt that will be received from B to A,” Buterin said.

Translated, that means more resources are now required to run a node, because that shared data aspect is now a bit less sharded.
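To make the bookkeeping concrete, here is a minimal sketch of what such per-shard receipt counters could look like. The names (ShardState, next_sent, next_received) are illustrative, not taken from the spec:

```python
# Minimal sketch of the receipt-nonce bookkeeping quoted above.
# All names here are illustrative, not from the ethereum 2.0 spec.

class ShardState:
    def __init__(self, shard_id: int, num_shards: int):
        self.shard_id = shard_id
        # For every other shard B, two counters:
        # next_sent[B]: nonce of the next receipt this shard sends to B
        # next_received[B]: nonce of the next receipt expected from B
        self.next_sent = {b: 0 for b in range(num_shards) if b != shard_id}
        self.next_received = {b: 0 for b in range(num_shards) if b != shard_id}

    def send_receipt(self, to_shard: int) -> int:
        nonce = self.next_sent[to_shard]
        self.next_sent[to_shard] += 1
        return nonce

    def accept_receipt(self, from_shard: int, nonce: int) -> bool:
        # Receipts must arrive strictly in order, or they are rejected.
        if nonce != self.next_received[from_shard]:
            return False
        self.next_received[from_shard] += 1
        return True
```

Every shard thus carries a pair of counters for every other shard, and that cross-shard bookkeeping is exactly the part that is no longer fully sharded.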

That’s presumably because they couldn’t quite figure out how to connect all these different shards in a way that keeps it one network, as it is currently, without considerable tradeoffs.

These changes were made after an article argued there were some problems with ethereum 2.0. That it abruptly led to these, in many ways drastic, changes suggests either that the article was right or that devs had a knee-jerk reaction.

It may also suggest they have no clue how to do this sharding stuff in a way that really parallelizes the blockchain while keeping it as one “talking” system.

“I have been talking about how we need to revamp the blockchain using sharding to scale since 2014,” Buterin said in response to a “gotcha” mentioned in the article where Joseph Lubin of ConsenSys says they always knew ethereum couldn’t scale in the current design.

What he didn’t say was that he had no clue how to do it as late as 2018, with work on it not beginning until the middle of last year. Now, not far from two years on, its design has been completely changed at the whims of an article written by a non-coder.

Bitcoiners would say no. Gavin Andresen would say yes. The difference between the two is the same as why bitcoiners would say 0-confirmed transactions don’t work, or that Simple Payment Verification (SPV) wallets are not secure.

They’re wrong on both, and they’re right on both too. Not-fully-confirmed transactions would not be secure if you’re receiving one bitcoin for a car which the buyer then drives away with. Even then, in the west at least, there’s the car plate, which can lead to the thief’s arrest.

SPV wallets would not be secure in circumstances where your specific wallet is targeted with huge resources through a conspiracy of sorts.

For everything else, both work fine. Under the same analysis as above, bitcoin itself doesn’t work or is flawed, because you can imagine circumstances where miners attack, even though that is quite unrealistic and there are defenses.
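For context, the “simple” in SPV means checking only that a transaction is committed to by a block header through a Merkle branch, rather than validating the whole chain. A bare-bones sketch of that check, with bitcoin’s byte-order details glossed over:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    # Bitcoin hashes with double SHA-256.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_proof(txid: bytes, merkle_root: bytes,
                        branch: list[tuple[bytes, str]]) -> bool:
    # Walk the Merkle branch from the transaction up to the root
    # committed in the block header. `branch` is a list of
    # (sibling_hash, side) pairs, side being 'left' or 'right'.
    h = txid
    for sibling, side in branch:
        h = sha256d(sibling + h) if side == 'left' else sha256d(h + sibling)
    return h == merkle_root
```

An SPV wallet runs this against headers it has checked for proof of work, which is why it is cheap, and why it is only as honest as the hashpower behind those headers.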

So if Mike Hearn were still around in this space, or if Andresen decided to stop being in self-exile, they’d probably say you can have sharding through an SPV-like method, where you verify some parts fully and other parts through simpler verification backed by fraud proofs.
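In code terms, that hybrid might look something like the sketch below. It is a hypothetical illustration of the idea, with every name invented rather than taken from any actual client:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: fully validate the home shard, SPV-check the
# others, and reject any foreign block a valid fraud proof targets.

@dataclass
class Block:
    shard: int
    header_valid: bool   # stands in for header and signature checks
    txs_valid: bool      # stands in for full re-execution

@dataclass
class Node:
    home_shard: int
    # ids of blocks that watchers have proven invalid via fraud proofs
    fraud_proofs: set = field(default_factory=set)

    def validate(self, block_id: int, block: Block) -> bool:
        if block.shard == self.home_shard:
            # Full validation: re-execute every transaction.
            return block.header_valid and block.txs_valid
        # SPV validation: headers only, deferring to fraud proofs.
        return block.header_valid and block_id not in self.fraud_proofs
```

The design choice is the same bet SPV wallets make: you don’t re-check everything yourself, you rely on someone, somewhere, fully checking and shouting if something is wrong.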

They both spoke to Satoshi Nakamoto, the bitcoin inventor, in writing. Whether he told them how exactly it can be done is not too clear, but coincidentally Hearn happens to be the one that wrote the first widely used SPV implementation, a method Nakamoto himself outlined in the whitepaper.

Whether Buterin had the pleasure of taking the advice of either of the two is not too clear, but his design is beginning to look more and more like just an increase of the blocksize.

There are some efficiency gains at the edges, fitting more data in less space, but it could also end up in what bitcoiners would call a masternode or supernode and a slave node arrangement.

If we call a full node a supernode, then the SPV wallet would be a slave node. Slave because it obeys the master. It has no way to reject the full node (or so it is claimed, because in fact it does), to say no, or to verify whether it is lying. Instead it has to fully accept it.

The theory goes that if we have full nodes for one shard and SPV connections to the other shards, whoever runs full nodes for all the shards is the only one actually running the network.

The previous ethereum shard design tried to address that by having other-shard connections be not quite SPV, but leveled through a somewhat complex process, at the expense of such connections being “primitive.”

The new design does away with that and merges the shards a lot closer, at the expense of more resources and with the added bottleneck of needing certain data from all shards. So it goes back to basically an increase of the blocksize, but with some compression.

The crucial tension in scaling open blockchains is the difficulty of aligning the thinking model with a holistic system where all parts have a symbiotic relationship with the commons, rather than just feeding on it.

From Nakamoto’s perspective it is probable he thought all actors would have no choice but to act a certain way. Something that may well have been the case had he not been worn down to the point he just added that 1MB limit.

That spanner in the machine’s workings changed the incentives quite decisively, not least because the “students” of Hal Finney, who after tirelessly trying had finally succeeded in clogging the wheels with that 1MB limit, were, four years after bitcoin launched, to exploit the opportunity.

Peter Todd now works for banks. Whom he worked for in 2012-2013 isn’t too clear. Gregory Maxwell has just left.

They very successfully, and through careful planning starting almost at the beginning, managed to do everything they could to sabotage actual scaling, kicking out option after option to the point that none was left.

Pruning has been dismissed by both out of hand. Yet both claim the main reason bitcoin can’t scale is that its history is unbounded.

If history is unbounded then at some point it will reach its limit, whether at 1MB or whatever else. That’s the full limit, where it’s not possible to run a node at all. Its “datacenter” limit has arguably been reached even now, with a node taking weeks to sync. Their solution?

Ethereum devs have made it quite clear that sharding or no, there has to be pruning. If there is pruning, then scaling can move a lot faster to the point sharding is almost irrelevant.
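Pruning here simply means folding old blocks into the current state and discarding the raw history beyond some depth. A toy sketch of the idea, not any client’s actual logic:

```python
from collections import deque

# Toy sketch of history pruning: apply each validated block to current
# state, keep only the last `keep` raw blocks, and drop everything older.

class PrunedChain:
    def __init__(self, keep: int = 128):
        self.state = {}                   # current account/UTXO state
        self.recent = deque(maxlen=keep)  # recent blocks, for reorgs/peers
        self.height = 0

    def apply_block(self, block: dict) -> None:
        # A block is modeled as a dict of state updates; once applied,
        # the bounded deque silently evicts the oldest raw block.
        self.state.update(block)
        self.recent.append(block)
        self.height += 1
```

Storage then grows with the size of the state, not with the full history, which is the whole argument.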

However, since many options have so successfully been taken out, and since many of the coders are just volunteers (or rich) who perhaps don’t even care to think too hard, or are under too much pressure to have the luxury of engaging in architecting, they’re all kind of tilting at windmills, with little astuteness, perhaps with complacency, and with the sort of confidence that cowers to an article.

Editorial Copyrights Trustnodes.com
