Bitcoin Considering Proposal to Reduce the Blocksize to 300KB, Crazy or an Actual Solution?

A proposal to temporarily reduce bitcoin’s blocksize to 300kb from the current 1MB has been put forward with some apparent support.

Luke-Jr, a Bitcoin Core developer and Blockstream co-founder, has written the soft-fork code, with the ostensible aim of decreasing the resources needed to run a node.

The bigger hurdle is how to get 7.5B people to run their own full node. Until someone is using their own full node, they shouldn't be transacting. — Luke Dashjr (@LukeDashjr) February 11, 2019

Unlike in other systems, history in bitcoin will only increase. To run a full node, you need to go back all the way to the genesis block and download all the transactions that have occurred in the past decade.

That means you have to download 238.32 GB. Then you have to relay new blocks to all the nodes connected to you, with studies showing there are considerable inefficiencies in how this data is transferred, especially where “fake” requests are concerned.

Any given public node probably “talks” to 200 others at any given time. You can limit the number, but all this “talking” requires bandwidth. The more data in a block, the more bandwidth required.
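
To put rough numbers on that, here is a minimal back-of-the-envelope sketch in Python; the peer count, block interval, and the assumption that every block is forwarded once in full to every peer are illustrative, not measured:

```python
# Rough upper bound on a public node's block-relay bandwidth.
# All inputs are illustrative assumptions for this sketch.
BLOCK_SIZE_BYTES = 1_000_000      # 1MB blocks
BLOCK_INTERVAL_S = 600            # roughly one block every 10 minutes
PEERS = 200                       # connections an open public node might hold

# Naive bound: every block forwarded once, in full, to every peer.
upload_per_block = BLOCK_SIZE_BYTES * PEERS
avg_upload_bps = upload_per_block * 8 / BLOCK_INTERVAL_S

print(f"~{avg_upload_bps / 1e6:.1f} Mbit/s average upload")  # ~2.7 Mbit/s
```

Real nodes use compact block relay, so actual traffic is far lower, but the point stands: the requirement scales linearly with block size and peer count.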

There's a movement stirring to lower block weight limits. A soft fork to lower the block weight would be a possible testing ground for a dynamic block size scheme. Successfully testing a dynamically adjusting block size could actually help pave the way to a hard forking increase. — Alex Bosworth ☇ (@alexbosworth) February 5, 2019

As history will only increase, the system in its current design is unsustainable. The “crazy” suggestion by Luke-Jr, therefore, might not be that crazy, just impractical.

Putting aside considerations of how this would affect the usability of the network, a 3x decrease of the blocksize would only slow down the growth by about 3x.

That can be a temporary reprieve, but it is not a solution, because history will still keep growing, just a bit more slowly.
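
A quick worked comparison, assuming every block is completely full (an assumption for the sketch; real blocks vary):

```python
# Yearly chain growth if every block is full, for two block size limits.
BLOCKS_PER_YEAR = 6 * 24 * 365   # ~52,560 blocks at one per 10 minutes

for size_kb in (1000, 300):
    growth_gb = size_kb * 1000 * BLOCKS_PER_YEAR / 1e9
    print(f"{size_kb}KB blocks: ~{growth_gb:.0f} GB of new history per year")

# 1000KB blocks: ~53 GB per year
# 300KB blocks:  ~16 GB per year -- slower, but still unbounded growth
```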

At some point, the growth would reach a stage where you can’t really sync. Over a long enough time frame, it may be the case that not even one node in the entire world can keep up.

The solutions to this problem have tradeoffs. One way to deal with it is to delete previous history. That can be kept somewhere, perhaps distributed through BitTorrent, with nodes no longer needing it.

Such pruning has been rejected in bitcoin because it requires some trust regarding what happened during that deleted history. You can go and download it from BitTorrent, but then how do you know what you downloaded is real?
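
One partial answer is that downloaded blocks can be checked against the header chain, which embeds proof of work. A minimal sketch of that check, using bitcoin’s double SHA-256 header hash (the 80-byte header and the target are assumed inputs):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's standard hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def header_is_valid(header: bytes, prev_hash: bytes, target: int) -> bool:
    """Check an 80-byte header links to its parent and meets the PoW target."""
    assert len(header) == 80
    links = header[4:36] == prev_hash          # bytes 4-35 hold the parent hash
    h = double_sha256(header)
    meets_pow = int.from_bytes(h, "little") <= target
    return links and meets_pow
```

Proof of work tells you the history was expensive to produce, not that it is the history everyone else accepts, which is where the trust question comes back in.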

Another potential solution is to have checkpoints. That basically means that a certain block sort of becomes the genesis block. Whoever imposes such a checkpoint has power, and of course power can be abused.

It is possible, however, to implement checkpoints in a decentralized way. Ethereum’s now ditched Hybrid Casper was that very system. Cryptonians stake their crypto and, according to the protocol rules, decide which blockchain is canonical. Your node can then take it from there, instead of going back all the way to the genesis block.
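
As a toy illustration of the idea, loosely inspired by Casper-style finality voting, a block could become a checkpoint once validators controlling two thirds of the staked value vote for it. The threshold and data shapes below are assumptions of this sketch, not Ethereum’s actual implementation:

```python
# Toy checkpoint selection: a block hash becomes a checkpoint once
# validators controlling at least 2/3 of staked value vote for it.
# Purely illustrative; real finality gadgets are far more involved.

stakes = {"alice": 40, "bob": 35, "carol": 25}            # staked amounts
votes = {"alice": "0xabc...", "bob": "0xabc...", "carol": "0xdef..."}

def finalized_checkpoint(stakes: dict, votes: dict) -> str | None:
    total = sum(stakes.values())
    weight: dict[str, int] = {}
    for validator, block_hash in votes.items():
        weight[block_hash] = weight.get(block_hash, 0) + stakes[validator]
    for block_hash, w in weight.items():
        if w * 3 >= total * 2:                            # >= 2/3 of stake
            return block_hash
    return None

print(finalized_checkpoint(stakes, votes))                # "0xabc..."
```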

In some edge cases there may be problems for a node that hasn’t been online for, say, four months. It could now be targeted specifically and fed fake data in what is called a long range attack.

The third potential solution is what Luke-Jr is proposing: keeping blocks as small as possible in the hope that technology improves faster than history grows.

That would come at the cost of the system having very limited uses, as only few people could transact. At 300KB, that would probably be around 100,000 transactions a day, less than 10% of eth’s December 2017 all-time high of 1.4 million, which itself was nowhere near enough.
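
That figure depends heavily on the average transaction size; a rough sketch, where the 400-byte average is an assumption chosen for illustration:

```python
# Rough daily throughput at a 300KB block size limit.
BLOCK_SIZE_BYTES = 300_000
BLOCKS_PER_DAY = 6 * 24          # one block per ~10 minutes
AVG_TX_BYTES = 400               # assumed average transaction size

tx_per_day = BLOCK_SIZE_BYTES // AVG_TX_BYTES * BLOCKS_PER_DAY
print(f"~{tx_per_day:,} transactions per day")  # ~108,000
```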

In addition, this would only be a temporary solution, but whether temporary means three decades or a century is a more difficult question to answer.

This proposal, in addition, might have something to do with paying for bitcoin’s security. There’s a halving coming up next year. That will reduce block rewards, or the subsidy as some call it, to 6.25 BTC per block. In 2024 it goes down to 3.125, and a decade from now to just over 1.5 bitcoins per block.
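
The schedule itself is simple to compute. This sketch mirrors bitcoin’s actual consensus rule of halving the satoshi-denominated subsidy, via a right shift, every 210,000 blocks:

```python
def block_subsidy_btc(height: int) -> float:
    """Block subsidy in BTC at a given block height.

    Bitcoin halves the subsidy (in satoshis, via a right shift)
    every 210,000 blocks, starting from 50 BTC.
    """
    halvings = height // 210_000
    if halvings >= 64:               # the shift would zero out the subsidy
        return 0.0
    return (50 * 100_000_000 >> halvings) / 100_000_000

# Roughly one halving every four years:
for height in (0, 210_000, 420_000, 630_000, 840_000, 1_050_000):
    print(height, block_subsidy_btc(height))
# 0 -> 50.0, 630,000 -> 6.25 (the 2020 halving), 840,000 -> 3.125 ...
```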

Whether that matters would depend on whether bitcoin’s fiat price compensates for it, but at some point there won’t be any noticeable new issuance. The network will have to pay for security through fees or inflation.

For more than a year now, fees in bitcoin have been negligible. To Luke-Jr and others at Blockstream that’s a problem because if people don’t pay high fees in the current system design, then in a decade or so there won’t be enough incentive to secure the network.

They may, therefore, be thinking that perhaps the supply of blockspace is already too high. Reducing supply increases price according to econ 101, but only if demand remains the same or increases.

Just because something is rare doesn’t mean it has demand or value. Some say value is subjective, but realistically value is based on usefulness. A house, or electricity, or a car, etc, have value because they are useful.

To make bitcoin useful, while increasing fees and keeping node resource requirements minimal, they have come up with the Lightning Network (LN), which could in theory deliver all three.

Bitcoin is programmable money, although to a limited extent. So you can sort of freeze your coins in a way that the network thinks nothing is happening. During that period, you tell others: here are the coins, I’ll send you 1 BTC, but not now, in about a week. You can trust me because here is a key that proves the transaction will happen. You can look at the code and see there’s nothing I can do about it. If we make this deal, it will go through; nothing can stop it.

Now during this one week period, you can have these arrangements with anyone however many times you like. The accounts are kept in the cloud so to speak, on the Lightning Network. Then on Sunday, or whenever, ownership is distributed in a settlement.

Because there have been so many transactions on LN, the fees add up even if each one is small. By the time of the settlement transaction, a big fee can be afforded. That is meant to be the demand part.
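
For illustration, here is a conceptual toy of a payment channel, much simplified from the real protocol: no signatures, commitment transactions, or HTLCs, and the names and fee figures are assumptions of this sketch:

```python
# Toy two-party payment channel: many off-chain updates, one settlement.
# Omits everything that makes LN secure (commitment txs, penalties, HTLCs).

class Channel:
    def __init__(self, alice_sats: int, bob_sats: int):
        self.balances = {"alice": alice_sats, "bob": bob_sats}
        self.state = 0                      # bumped on every off-chain update

    def pay(self, sender: str, receiver: str, amount: int) -> None:
        assert self.balances[sender] >= amount, "insufficient channel balance"
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.state += 1                     # newest state supersedes old ones

    def settle(self, onchain_fee: int) -> dict:
        """One on-chain transaction distributes final ownership."""
        out = dict(self.balances)
        out["alice"] -= onchain_fee         # someone pays the miner fee
        return out

ch = Channel(alice_sats=100_000, bob_sats=100_000)
for _ in range(50):                         # fifty payments, zero on-chain txs
    ch.pay("alice", "bob", 1_000)
print(ch.settle(onchain_fee=5_000))         # {'alice': 45000, 'bob': 150000}
```

Fifty off-chain payments collapse into a single on-chain transaction, which is why that one transaction can afford a big fee.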

In this design you have a base layer which is decentralized but not very accessible, and a layer on top which is centralized, because LN doesn’t and can’t quite solve the double spending problem in a decentralized way.

You know the promise of 1 bitcoin made to you? He or she can now promise that same bitcoin to someone else. You can see this has happened, and you punish them by taking all their funds. Well then, why do we need LN? Why not do this watching on the base layer itself?

Because perhaps you haven’t seen it. Maybe you’re on holiday, offline, you lost your data, or maybe the service provider never gave you the data. So you appoint someone to watch for you, and the problem repeats.
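
The watching itself is mechanical, which is why it can be delegated. Here is a toy sketch of a watchtower’s core check, with state numbers and the penalty rule heavily simplified (real watchtowers work with encrypted penalty transactions):

```python
# Toy watchtower: if an old channel state appears on-chain, punish it.

def watch(broadcast_state: int, latest_known_state: int) -> str:
    if broadcast_state < latest_known_state:
        # An outdated state was broadcast: a revoked promise.
        return "publish penalty tx: cheater forfeits all channel funds"
    return "state is current, nothing to do"

print(watch(broadcast_state=7, latest_known_state=42))   # penalty
print(watch(broadcast_state=42, latest_known_state=42))  # fine
```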

Instead of bundling transactions through a second network, sharding bundles nodes on the same network. In bitcoin, they called it sidechains, but they couldn’t quite figure out how to do it in a trustless and decentralized manner.

Research now suggests that one way of achieving this bundling of nodes is to have a central chain designed to manage coordination between the other chains, which thus connects them all together into one network.
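
A toy sketch of that idea, with a central chain recording only a small commitment per shard chain; the structure and names here are illustrative assumptions, not any specific protocol:

```python
# Toy coordination chain: shards produce blocks, the central chain
# records only a small commitment (a hash) per shard block.
import hashlib

def commit(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()[:16]

shard_blocks = {
    "shard-0": "txs: a->b 1, c->d 2",
    "shard-1": "txs: e->f 5",
    "shard-2": "txs: g->h 3, i->j 1",
}

# The central chain's block references every shard via a tiny commitment,
# so a node can follow all shards without storing their full history.
central_block = {shard: commit(contents) for shard, contents in shard_blocks.items()}
print(central_block)
```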

All this is a bit of rocket science, quite in flux and very much at the bleeding edge with numerous current and proposed public blockchains in a race to launch first and do it properly.

Many promises are made, but we’ve learned now that what devs say and what turns out in practice can be very different things. In addition, it doesn’t quite solve the problem of ever growing history. It just multiplies the same blockchain a thousand times or more.

All of that means public blockchains are at a stage of rapid development, with this being the part where devs throw spaghetti at the wall to see what sticks.

In that context, a trial of 300KB blocks might perhaps not be such a crazy suggestion, but you’d expect a bit more from rocket science than a baby tweaking of a parameter.
