Recently I’ve been working on a project code-named Matreon. It’s like Patreon, but for the matriarchy. In a world of increasing online censorship, being able to host your own website and process your own payments really helps.
Matreon is self-hosted, which means you’re no longer dependent on the whims of one CEO or some untransparent content policy. It uses Bitcoin and the Lightning Network, so you no longer have to worry about demonetization policies. I’m working on making it as easy as possible to deploy on AWS; the steps are described below.
I started to contribute code and reviews to Bitcoin Core a few months ago and gradually accumulated lists of stuff I’d like to see. Some of these I can do myself, others are still beyond my level of expertise, still others might be terrible ideas and never happen.
A note on process
Just to make it abundantly clear: this is my personal wish list, not anyone’s roadmap. There is no roadmap, there’s just stuff in progress. Every contributor has their own things they’re working on, their own set of priorities.
A pull request generally gets merged if and only if: 1. it has received enough review by relevant experts; and 2. there are no unresolved objections, e.g. that a) it isn’t of sufficient quality; b) it breaks things; or c) it causes more problems than it solves.
Criterion (2) is easiest to deal with as a developer: if someone reviews your code and they find a problem, either fix it or explain why it’s not actually a problem. There are also automated tests and other (e.g. formatting) checks that save reviewers time. It’s not uncommon to revise a pull request a dozen times until it’s good enough.
Criterion (1) can be a bottleneck. It helps to work on stuff you know other developers care about. Hence this list; if you work on anything on this list, or on stuff that indirectly makes this list more likely to move forward, your chances of me reviewing your code increase.
There are other ways to increase the chance of getting your code reviewed. Keep changes small and focussed. The more stuff you change in a single pull request, the more difficult it becomes to review and the fewer developers even have the prerequisite knowledge to understand all of it. Use a clear title and description, add a screenshot if it’s a UI change, and keep these up to date when you change things.
In order to keep things moving after the first feedback you receive, it helps to actually address that feedback and to do so quickly, as this motivates reviewers. Even the most experienced Core developers sometimes have to wait up to a year to get stuff merged, so patience is useful.
People may have forgotten the days of high fees, but those days will probably return some day. If you’re not in a hurry to send a transaction, it’s a good idea to set a low fee. However, that may cause it to get stuck, especially if fees rise after you send it.
If a transaction gets stuck, you can replace it with a transaction that pays a higher fee. Just right-click on it and select Increase Transaction Fee, et voilà:
Unfortunately this UI is rather inflexible. It doesn’t let the user specify the new fee, nor does it provide any hint of what a good amount would be. Due to implementation details the fee can’t be more than the change amount of the original transaction (and the feature doesn’t work at all if the original transaction doesn’t have a change output), a restriction that’s hard to explain to the user.
The solution probably consists of automatically adding new inputs to the transaction as needed, as well as reusing the existing fee recommendation UI.
We could go beyond that, e.g.
pre-generate a series of transactions with escalating fees, so that a user can specify a deadline and maximum fee. The wallet would then broadcast these as the deadline approaches
append new transactions to existing unconfirmed transactions
peer2peer mixing, i.e. mempool compression (kidding… or am I?)
Needless to say, all of the above is surprisingly non-trivial and there are some privacy issues and safety gotchas as well.
An additional complexity here is that the GUI and RPC have different ways of composing a transaction, which could lead to duplicate work.
Hardware wallets like the Ledger Nano S keep your private keys very safe, but they rely on a web based backend to show your balance and compose transactions. Only signing happens on the device. This is not ideal for privacy and if their servers disappear you most likely have to fall back to paper backups.
At the same time, Bitcoin Core saves your private keys on your machine.
So I’d like to be able to use a hardware wallet directly with Bitcoin Core.
User friendly backup and recovery
Most consumer wallets nowadays tell the user to write down a 12–24 word phrase and keep it somewhere safe. They also tend to strongly remind users to do so. Bitcoin Core does have backup functionality and it’s present in the UI, but I’m not sold on it. I haven’t had a chance to really dig into this topic, so I’ll keep this section vague.
More broadly, I have a separate wish list of stuff I’d like to see improved around recovery phrases and hierarchical deterministic wallets.
I suspect more people would run a full node if they knew it didn’t eat 180 GB of their precious hard disk space. It’s trivial to enable pruning, i.e. just open bitcoin.conf, add prune=10000 and restart to prune to ~10 GB. However I’d like to lower that barrier even further, e.g. by having the wallet check the user’s disk space on first launch. If it’s not huge, propose some reasonable number.
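As a rough sketch of what such a first-launch check could look like (the quarter-of-free-space heuristic is my own made-up policy; 550 MiB is Bitcoin Core’s minimum prune target):

```python
import shutil

def suggest_prune_mib(free_mib, full_chain_mib=180 * 1024, floor_mib=550):
    """Propose a prune= value (in MiB) from available disk space: use a
    quarter of free space, capped at the full chain size and never below
    Bitcoin Core's 550 MiB minimum for pruned nodes."""
    return max(floor_mib, min(free_mib // 4, full_chain_mib))

# On first launch, check the drive that holds the data directory:
free_mib = shutil.disk_usage(".").free // (1024 * 1024)
print(f"prune={suggest_prune_mib(free_mib)}")
```

With 40 GB free this proposes `prune=10240` (~10 GB); with barely any free space it falls back to the 550 MiB floor.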
A related issue is that pruned nodes are currently quite slow to sync due to implementation details, but that should be easy to solve.
Somewhat related, performance is still not great for non-SSD drives: #12058.
Whether or not it’s useful remains to be determined, but I bought an iPhone X with 256 GB of storage, which means it fits the entire blockchain. That’s enough reason for me to want to run a full node on it, but I need some help figuring out build pipeline magic and getting it to interact with an iOS app.
This is probably a bit controversial, but as long as other people in the real world use things like euros, I’d like to be able to see how much I’m about to send in fiat terms. However you can’t just fetch a price feed from an external website. For example, that would reveal the user’s IP address, which combined with the timing of requests could be enough to reveal a user’s addresses. Then there’s the issue of which price to trust. But now I find myself googling conversion rates for the exact amount I’m about to send, which can’t possibly be good for my privacy. 🙂 Any creative solutions out there?
Easier deterministic (Gitian) builds and verification
How do you know the program you downloaded is actually based on the Bitcoin Core source code? Every release, a number of developers make a deterministic build and publicly attest to it. It’s getting easier for more developers to do so. There are probably enough eyes on this to ensure that any funny business with the public release would be caught, but perhaps not enough for individuals to tell whether they’re being individually targeted.
This is already a huge improvement over the pervasive App Store model where you just blindly trust Apple, Google or Microsoft to not mess with the software, which they automatically update, while you’re logged into an account with them that has all your personal details. However I think this process — of developers publicly committing to specific source code for every update, and each computer verifying this — should be the norm for all software, and for that it needs to be made much easier.
I was trying to understand BIP-70 Payment Requests a bit better, mainly because I am confused by BitPay’s claim that they can somehow block “mistaken” transactions:
We can also analyze transactions to make sure an adequate bitcoin miner fee is included. If the fee isn’t sufficient to allow the transaction to confirm on the bitcoin network on time, BitPay can return a helpful message back to the wallet to let the user know. Mistaken payments will never reach the Bitcoin network.
The wording suggests that a wallet sends the transaction to BitPay for approval and they forward it, but afaik that’s not what BIP-70 does. According to the specification wallets broadcast the transaction via the P2P network like any other transaction. Of course BitPay can always choose to not honor a transaction they receive, but I don’t see how BIP-70 changes that.
In fact, it would be quite unsafe if it did work that way. Other than reputation, what’s to prevent BitPay from rejecting a transaction, telling the customer to submit a new one and then broadcasting both to the network? The wallet would have to be very careful to prevent this; it would have to reuse at least one input in each attempt. Or it could propose unsigned transactions. Or, combined with Replace-By-Fee, perhaps users could send BitPay a series of overlapping transactions with escalating fees, and they would then broadcast the higher fee ones if the confirmation deadline is approached. However that’s totally beyond what BIP-70 is about.
Perhaps they meant that BIP-70 makes it less likely for a user to pay the wrong fee? However the specification doesn’t have a field for a (suggested) fee. Neither does the simpler and more commonly used BIP-21.
Even if there was an ad-hoc method to suggest a fee, at least Bitcoin Core doesn’t honor that. Maybe other wallets do?
Time to look under the hood.
Whenever I get confused, I prefer to just look at what actual software does, rather than speculate based on what people write on blogs or even what a spec says.
BitPay has a demo page where you can generate a payment request. You can use software like QR Journal on macOS to see what’s in the QR code:
The URI starts with bitcoin: which is defined in BIP-21. Both browsers and mobile apps use URIs like this to determine which application to open. In the case of bitcoin: that’s usually whichever Bitcoin wallet you installed last. This is similar to how mailto:email@example.com?subject=Hello opens your mail app (even if it’s web based) and creates a draft email with the right address and subject.
A typical BIP-21 URI would contain the destination and amount, e.g. bitcoin:3AcqBykYEos8EHREr7oEzSxg7DxxjH6DCf?amount=0.0001
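Parsing such a URI is straightforward; here’s a minimal sketch using only the standard library (the function name is mine):

```python
from urllib.parse import parse_qs

def parse_bip21(uri):
    """Minimal BIP-21 parser: split off the scheme, then the address,
    then decode the query parameters. Real wallets also validate the
    address and handle req-* parameters, which this sketch skips."""
    scheme, _, rest = uri.partition(':')
    if scheme != 'bitcoin':
        raise ValueError('not a BIP-21 URI')
    address, _, query = rest.partition('?')
    params = {key: values[0] for key, values in parse_qs(query).items()}
    return address, params
```

For the example above, `parse_bip21("bitcoin:3AcqBykYEos8EHREr7oEzSxg7DxxjH6DCf?amount=0.0001")` yields the address plus `{'amount': '0.0001'}`.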
In this case, the first and only argument is r= which is defined in BIP-72 as an extension of BIP-21 to indicate a URL where further details can be fetched. This saves space compared to putting all the details directly in the QR code, although I wonder if the more QR friendly bech32 could mitigate that.
If you open that URL in a browser, it will show you the invoice page. But a wallet will pass a special HTTP header to tell the server it wants the actual payment request:
The result is a protocol buffer; you can tell Mike Hearn, one of the BIP-70 authors, worked at Google before. 🙂 There are various ways to decode these, though none are properly documented in the BIP.
The first step is to download the Payment Request protocol buffer description file. The paymentrequest.proto file linked to in the BIP doesn’t specify the protocol buffer version, so I’m using paymentrequest.proto from Bitcoin Core.
You can recognise the above high level structure with a simple command (I replaced long binary stuff with…):
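If you’d rather not install the protobuf tooling, the wire format is simple enough to walk by hand. Here’s a minimal stand-in for protoc’s --decode_raw, handling only the varint and length-delimited wire types (which is all a top-level PaymentRequest uses):

```python
def read_varint(buf, i):
    """Decode a base-128 varint starting at offset i; return (value, new offset)."""
    shift = value = 0
    while True:
        byte = buf[i]; i += 1
        value |= (byte & 0x7f) << shift
        if not byte & 0x80:
            return value, i
        shift += 7

def decode_raw(buf):
    """List (field_number, value) pairs from a protobuf message. Only
    wire type 0 (varint) and 2 (length-delimited bytes/string/submessage)
    are handled; nested messages come back as raw bytes."""
    fields, i = [], 0
    while i < len(buf):
        key, i = read_varint(buf, i)
        field_number, wire_type = key >> 3, key & 7
        if wire_type == 0:
            value, i = read_varint(buf, i)
        elif wire_type == 2:
            length, i = read_varint(buf, i)
            value = buf[i:i + length]; i += length
        else:
            raise ValueError(f"unhandled wire type {wire_type}")
        fields.append((field_number, value))
    return fields
```

Feeding it the downloaded payment request shows the field numbers, which you can then match against the names in paymentrequest.proto.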
Notice that indeed no fee is specified. However they do increase the amount to offset fees they need to pay to sweep it. This raises some other questions, such as why they don’t just give you the address that they ultimately want to forward it to (and would that address have the same policy, and so forth, meaning fees should be infinite?). Also, what time horizon do they have in mind for that sweep? If they’re not in a hurry, or wait for signature aggregation techniques to become available, those fees might be much lower. Anyway, this has no bearing on BIP-70.
I think the confusion arises around the Payment message:
This is sent by the wallet to the merchant before the wallet broadcasts the payment via the P2P protocol, but there’s nothing in the spec that says the wallet needs to wait for approval (and see above for why this would be risky with signed transactions).
Note that in the description these steps are reversed, but it doesn’t really matter:
BitPay can inspect the Payment message and refuse to send a PaymentACK, but that’s too late. That said, perhaps it’s not actually implemented that way in some wallets; I haven’t checked. For that I’d need to figure out how to intercept the message Bitcoin Core sends (or study the code). Maybe I’ll update this post later.
Sidechains — assuming they work — peg the exchange rate. Similar to Lightning, a sidechain locks up X bitcoin on the main chain and allows redeeming that X bitcoin. Who gets to redeem what part of X depends on the rules of the sidechain. As long as those rules don’t have bugs, sidechain coins should retain the same value as mainchain coins (maybe slightly less because it takes time to redeem and you never really, really know there’s no bug, or maybe more because it’s more convenient, or because there’s overhead cost as you suggest).
That said, there’s probably plenty of use cases where altcoins are fine, especially if you don’t hold them over long periods of time.
When sending less than €1,000 of bitcoin it’s worth paying attention to fees, but keep in mind that your payment is competing on equal terms with transactions that move €100,000. Transactions are charged per byte, not as a percentage of the amount. But willingness to pay is obviously a percentage of the amount.
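To make the per-byte point concrete, here’s a back-of-the-envelope calculation (the 50 sat/byte rate, the 226-byte size of a typical 1-input, 2-output transaction, and the €10,000/BTC price are illustrative assumptions):

```python
def fee_as_percentage(amount_btc, fee_rate_sat_per_byte=50, tx_size_bytes=226):
    """The absolute fee depends only on transaction size, so expressed as a
    percentage of the amount it shrinks as the amount grows."""
    fee_btc = fee_rate_sat_per_byte * tx_size_bytes / 1e8  # 11,300 sat here
    return 100 * fee_btc / amount_btc

# At €10,000 per BTC: a €100 payment (0.01 BTC) pays ~1.13% in fees,
# while a €100,000 payment (10 BTC) pays ~0.00113% for the same service.
```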
It’s interesting to note that although the price has increased more than 10x over the past year, transaction amounts in BTC terms haven’t changed much:
Neither have fees as a percentage, they still hover around 0.75%:
Why is this? Perhaps it’s because many hodlers became rich and got comfortable moving 10x the amounts around that they were used to. Or perhaps higher value activity entered the ecosystem, pushing lower value out, or users became more efficient.
The use of SegWit addresses allows for 50% cost savings, yet adoption is stalling at around 12%. This is likely because major services haven’t upgraded yet, and that may be partially due to them being distracted by massive user growth due to the price rise. But then, why aren’t people switching over to competitors that already offer SegWit support?
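The savings come from BIP-141’s weight formula, under which witness bytes count only a quarter as much as other bytes; how much you actually save depends on what fraction of a transaction is witness data (multisig saves more than simple payments):

```python
def virtual_size(base_size, witness_size):
    """BIP-141: weight = 4 * non-witness bytes + 1 * witness bytes;
    virtual size (what fees are charged on) is weight / 4, rounded up.
    Witness data is effectively 75% off."""
    weight = 4 * base_size + witness_size
    return (weight + 3) // 4
```

For example, a transaction with 100 bytes of witness data and 100 bytes of everything else pays for 125 virtual bytes instead of 200 actual bytes.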
Is the market for wallets so inefficient that users don’t churn in order to save 50%? It’s understandable that free wallets don’t have advertisement budget to get this message across, but not all wallets are free. If we were to take Erik Voorhees’ $40 fees claim at face value (which you shouldn’t), then Ledger could advertise their wallet by saying you can earn it back after using it thrice.
In that case, the following affiliate link should make me rich:
Why aren’t they sold out? Why aren’t there $25 e-courses on saving transaction fees? Why isn’t Youtube full of advice to save on fees, targeted at non-technical folks? Most, but not all, of the content around this topic serves to promote investment offers, an obviously very lucrative market. But at present levels of 500 BTC per day, fees are becoming a $2.5 billion per year market. You’d expect that to attract some entrepreneurs as well. Where are they?
Why are people not flocking to coins with low fees, despite good wallet support and decent liquidity? BCash has about 29,000 transactions per day and Litecoin 77,000, vs Bitcoin’s 220,000. Because those coins are not at full capacity, there’s no fee mechanism to discourage spam, so those numbers are probably optimistic. Pareto’s principle suggests that there should be far more low value transactions than high value transactions, so if there really was huge demand for low-fee currencies, one would expect these altcoins to have far more, not fewer, transactions.
There’s much room for improved efficiency to take better advantage of the existing block space, but a more pessimistic interpretation of what’s going on is that the majority of existing Bitcoin users don’t care about fees. And if that’s true those fees won’t go down no matter what we try in terms of better coin selection, SegWit, Replace-By-Fee, transaction batching, etc. The people moving these large amounts of money won’t bother using these techniques, thus freeing up space for others, unless perhaps we make it trivial, default and beg or guilt-trip them into using it.
Which brings me to Drivechain, which offers an interesting advantage in this situation that Lightning doesn’t have (yet?). In order to use Lightning you need to open a channel first and close it later. In addition, you first need to actually receive those coins from somewhere else. Drivechain, on the other hand, is more similar to altcoins in the sense that any user can just generate an address and buy the sidechain coin directly, very much like how they buy bitcoin. It decouples the usage of layer 2 from the task of moving between layers 1 and 2. That allows economies of scale for moving between layers, i.e. far fewer transactions. That is, if I understand Drivechain correctly; it’s hard to wrap my head around. This interview is a good place to start:
Of course these technologies don’t contradict each other, and Lightning is much closer to actually being deployed. It also benefits high value transactors due to its orders-of-magnitude improvement in speed of settlement, giving them a stronger incentive to move off-chain than SegWit did. It’s also possible to unilaterally fund a channel, which exchanges could do for every new customer, allowing them to receive their first coins directly on a Lightning channel (someone still needs to pay fees though, even if the channel is never closed in practice).
I was trying to improve the functional tests for bumpfee, a Bitcoin Core wallet feature that lets you increase the fee of a transaction that’s unconfirmed and stuck. Unfortunately I introduced a bug in the test, which I’m still in the process of tracking down. Every disadvantage has its advantage, so I took the opportunity to better understand the functional test framework and its powerful debugging tools.
Thanks to everyone who pointed me in the right direction on IRC (as well as for tips on how to use IRC without going insane).
You can view the changes I made, including the bug, in this pull request. Caveat: this is a PR to my own fork: don’t make pull requests like this to github.com/bitcoin/bitcoin. It’s generally a bad idea to change so many things at the same time, if only because it’s too much burden for code reviewers.
To reproduce the error yourself (assuming macOS or Linux; see here for Windows build instructions at the time this post was written):
The test will start running and you’ll see a log message “Running tests”.
There’s not much information between the “Running tests” and “Assertion failed” log messages. To see more, switch the log level from the default INFO to DEBUG:
As I was investigating, I added several self.log.debug statements to the test file, to get a better sense of what was going on before the error. Now my log looked something like this:
I used self.log.debug(rbftx) to print information about the RBF transaction that the test generated, and self.log.debug(rbf_node.getrawmempool()) to show that the transaction made it into the mempool of the node that created it.
self.log.debug(peer_node.getrawmempool()) shows that it didn’t propagate to the other node. At least not immediately, which makes sense: synchronisation of both mempools is not expected to happen until the next statement, sync_mempools((rbf_node, peer_node)). This test helper function waits for both mempools to be identical and fails otherwise.
A good way to learn more about what’s going on is to intentionally break things. For example if I modify the spend_one_input() helper function to pay a 1 BTC miner fee, the test predictably fails in a different way:
Print RPC commands
The functional tests work by sending commands to the test nodes via RPC. You can log these commands and their responses using ./bumpfee.py --tracerpc. Now you can clearly see sync_mempools in action:
sync_mempools works by sending both nodes the getrawmempool command and comparing the result. After a while it gives up and throws an error.
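A sketch of that polling behaviour (a simplification of the real helper in the test framework; the timeout and interval values are made up):

```python
import time

def sync_mempools(nodes, timeout=60, interval=0.25):
    """Poll getrawmempool on every node until all mempools contain the
    same set of txids, or give up after `timeout` seconds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        pools = [set(node.getrawmempool()) for node in nodes]
        if all(pool == pools[0] for pool in pools):
            return
        time.sleep(interval)
    raise AssertionError("Mempool sync failed")
```

Note that the helper can only observe *that* the mempools differ, not *why* — which is exactly the situation the rest of this post digs into.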
We still don’t know why the mempools aren’t synchronizing. So let’s dig deeper.
View node log files
So far we’ve been looking at the test logs. But there’s more: each test node has its own directory which includes a log file that you can inspect.
Note the path given after “Initializing test directory”, in this example /var/…/test4ivt2s9y. The logs for the node that created the test transaction are in /var/…/test4ivt2s9y/node1/regtest/debug.log.
You can change the level of detail in these logs by adding "-debug=all" to self.extra_args in set_test_params(). More fine-grained log options can be found via ../../src/bitcoind --help:
-debug=<category> Output debugging information (default: 0, supplying <category> is optional). If <category> is not supplied or if <category> = 1, output all debugging information. <category> can be: net, tor, mempool, http, bench, zmq, db, rpc, estimatefee, addrman, selectcoins, reindex, cmpctblock, rand, prune, proxy, mempoolrej, libevent, coindb, qt, leveldb.
You can even combine the log files of all test nodes, in order to get a chronological picture of what happened:
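Since every debug.log line starts with an ISO-8601 timestamp, which sorts correctly as a plain string, a hand-rolled merge is short (assuming each input file is itself in chronological order):

```python
import heapq

def merge_debug_logs(paths):
    """Yield lines from several node debug.log files in chronological
    order, merging on the leading ISO-8601 timestamp of each line."""
    streams = [open(path) for path in paths]
    try:
        yield from heapq.merge(
            *streams, key=lambda line: line.split(' ', 1)[0])
    finally:
        for stream in streams:
            stream.close()
```

Prefixing each line with its node directory before merging makes it easy to see which node said what.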
It’s clear that node1 sent the transaction to node0 and there’s no obvious error message. One interesting observation is that node1 broadcast the transaction twice, even though the test was only run once.
View other node artifacts
Both node directories contain a file mempool.dat. Although you need other tools to really inspect their contents, in my case it was trivial to see that this file was empty for node0 and not empty for node1, consistent with the --tracerpc output above.
Use Python debugger
So we still don’t know what went wrong. Perhaps by manually interacting with the test nodes we can find out more. One way to do that is to use ./bumpfee.py --pdbonfailure.
This gives you a Python console, where you can run things like self.nodes[0].getrawmempool() and notice it’s still empty.
Let’s try a manual broadcast:
Aha, that’s interesting! After this manual broadcast, we find that our transaction finally made it to node0. This still doesn’t solve our mystery, but at least provides another clue.
Inspect running nodes
If things get really desperate, you can leave the nodes running after the test using --noshutdown. That way you can poke at them using some other tool.
No, that’s not a tool, it’s a process. Just go do other stuff. Eventually you might run into a solution. It turned out the test nodes thought they were still in IBD (Initial Block Download), during which they don’t synchronize their mempools. To tell the test nodes IBD is over, you need to mine an additional block using peer_node.generate(1). So I broke the tests by removing peer_node.generate(110).
More tests welcome
There’s still plenty of tests to write and improve in Bitcoin Core. Some integration tests, like the one in this article, are written in Python. Those could be a good place to start, until you’re a bit more familiar with the codebase. Please follow recommended practices. There are also C++ integration tests, as well as unit tests.
This article is based on the slides I used for a presentation at the Hong Kong Bitcoin Developer meetup on November 1st, plus some feedback I received on the chainspl.it Slack. This was before SegWit2x was called off, but in the interest of (my) time, I haven’t adjusted this article to reflect that. I’m sure something similar will happen again anyway and it’s a good mental exercise to think through what could have happened.
For non-technical readers a useful perspective — even if technically not accurate — is to distinguish between airdrops and contentious hard forks. This assumes you are in possession of your private keys, as you should.
“free” coins based on BTC balance at date X
safe to ignore, risky to use
Free money?! Bitcoin Cash, Bitcoin Gold, etc.
1 BTC on Aug 1 means 1 BCH
same private key controls both
distrust “official” wallets; assume malware. Better safe than sorry. Sooner or later one of these airdropped coins will contain malware. Even without malware, simple incompetence of developers can lead to loss of your bitcoin. Most Bitcoin developers have better things to do than inspect this code. They will write gloating articles explaining what went wrong after you lost your bitcoin. Wait for well established wallets to add support; but they can make mistakes too. Remember Cryptsy.
move BTC to fresh wallet first (just in case)
privacy (traces on two blockchains)
It’s safe to ignore due to replay protection, risky to use due to the above concerns.
Contentious Hard Fork:
disagreement on what Bitcoin is
not safe to ignore, unless you HODL
SegWit2x might have gotten messy:
1 BTC on Nov ~15 -> 1 BT1 + 1 BT2
some companies claim BT1 is Bitcoin
other companies claim BT2 is Bitcoin
several companies will go back and forth
no or little replay protection
never assume companies know what they’re doing
It’s not safe to ignore due to the lack of replay protection, unless you don’t use it (HODL). It’s risky to use due to the above concerns, though unlike airdrops at least the official wallets are unlikely to contain malware.
Remember The DAO?
Code is Law!
$60M ETH stolen from smart contract
Most developers, holders and miners agreed on need to fork
Soft-fork wasn’t possible (halting problem)
Deadline for hard fork was not self imposed
Ethereum Classic is born
Not everyone agreed with this hard-fork. Initially many people didn’t think ETC had a chance to survive, as the theory up to then was that majority hash power would simply crush a minority chain into oblivion.
The First Replay Attacks
Don’t assume companies in this space know what they’re doing under all circumstances.
Note that the SigHash field is four bytes when you sign it, but it gets truncated to the last byte when you serialise the signature (same in SegWit).
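In Python terms, using SIGHASH_ALL as an example:

```python
import struct

SIGHASH_ALL = 0x01

# In the signature-hash pre-image, the sighash type is serialised as a
# 4-byte little-endian field:
preimage_suffix = struct.pack('<I', SIGHASH_ALL)  # b'\x01\x00\x00\x00'

# ...but the serialised signature only carries the low byte:
sig_suffix = bytes([SIGHASH_ALL & 0xff])          # b'\x01'
```

The upper three bytes therefore influence the digest being signed without ever appearing in the transaction itself, which is what the replay protection schemes below exploit.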
Ledger hardware signing
Sometimes it’s really hard to figure out what’s going on based on just (lack of) specs and blog posts. So just read the source code! The changes made on the Ledger hardware side seem pretty simple:
The Chrome extension which generates the unsigned transaction is a bit more complicated, but the magic seems to happen here. I think that when it uses BIP143 on a chain without SegWit, it assumes it must be Bitcoin Cash.
I find the following diagram somewhat helpful to visualize what’s going on:
Replay protection mechanism TBD… (YOLO)
They did commit to making addresses start with a G (A for SegWit), which is nice.
SegWit2x imposed a number of constraints on any potential replay mechanism. I don’t think these were terribly well thought out, but they make some sense.
minimal changes to software of participants (most participants are adding non-protocol level replay protection)
PR 131; note that this approach is quite different from BCH
Sets bit 8 in pre-image (BCH used bit 6)
Bit 8 isn’t appended to signature
Core nodes consider the signature invalid
hard-fork relative to BU
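The arithmetic behind this, with the bit positions as listed above (recall that the sighash type is a 4-byte field in the pre-image but a single byte in the signature):

```python
import struct

SIGHASH_ALL = 0x01
BCH_FORKID_BIT = 1 << 6   # 0x40: fits in the single serialised byte
SEGWIT2X_BIT = 1 << 8     # 0x100: only representable in the 4-byte field

sighash_2x = SIGHASH_ALL | SEGWIT2X_BIT

# The pre-image — and therefore the digest being signed — differs from
# what a Core node computes, so Core rejects the signature:
assert struct.pack('<I', sighash_2x) != struct.pack('<I', SIGHASH_ALL)

# ...yet the byte appended to the serialised signature is identical,
# so bit 8 leaves no trace in the transaction itself:
assert sighash_2x & 0xff == SIGHASH_ALL
```

BCH’s bit 6 does survive serialisation, which is why its signatures are visibly different; the 2x scheme hides the distinction entirely inside the digest.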
The Future — Spoonnet and other proposals
Spoonnet is a series of proposals of things that can be improved in a hard-fork. It also contains a proposal for replay protection, which is somewhat similar to the 2x-only SIGHASH magic above.
uses nVersion (“A tx is invalid if the highest nVersion byte is not zero, and the network version bit is not set”)
hardfork network version bit is 0x02000000
0x02000000 is added to the nHashType
leaves serialized `SIGHASH_TYPE` alone
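Translating those bullets into code (a sketch of the quoted rule, not Spoonnet’s actual implementation):

```python
HARDFORK_BIT = 0x02000000  # the network version bit

def tx_valid_on_fork(n_version):
    """Spoonnet's rule as quoted above: a transaction is invalid if the
    highest nVersion byte is non-zero while the network version bit is
    not set. Equivalently: valid if the high byte is zero OR the bit is set."""
    return (n_version & 0xff000000) == 0 or (n_version & HARDFORK_BIT) != 0

def fork_hashtype(n_hashtype):
    """0x02000000 is mixed into the digest pre-image only; the serialized
    SIGHASH_TYPE byte (the low byte) is left alone."""
    return n_hashtype | HARDFORK_BIT
```

So pre-fork transactions (high byte zero) remain valid, while post-fork signatures commit to a digest that pre-fork nodes can’t reproduce.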
Another proposal is being discussed on the bitcoin developer mailinglist, which also includes an address change.
SegWit2x — Unprotected?
HODL! The easiest thing to do during a fork is to not use Bitcoin for a while, but not everyone has that luxury.
>1 MB transaction (actually slightly less than 1 MB)
Or just use a custodial service 🙁
Custodial wallets and exchanges can take care of the splitting. They can split customer funds in batches, saving money. Unless something goes wrong and they become insolvent.
UTXO Fairy Dust
Update: chainspl.it has thought about these proposal more than I have.
Ask miner: coinbase tx unique for each side (natural, organic replay protection, but can’t be done until 100 blocks after the fork)
Service can split using other method
(paid) API with anyone-can-spend UTXOs?
Wallet coin selection must include these inputs (they would need some sort of proof-of-replay-protection…)
nLockTime — 4 easy steps
nLockTime: not mined (consensus rule) or relayed (IsStandard() rule) before block N.
H1: block height of 1x chain, H2: block height of 2x chain
generate two addresses (A1, A2)
check which chain moves faster (e.g. H2 > H1)
sign tx to A2 with H1 < nLockTime < H2
send to A1 w/o nLockTime (wait until confirmed, try again if needed)
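Step 3 is the crux: the chosen nLockTime must sit strictly between the two chain heights. A sketch of that choice (the function name and the mid-gap pick are mine):

```python
def choose_split_locktime(h1, h2):
    """Pick an nLockTime that is already valid on the faster 2x chain
    (height h2) but not yet on the slower 1x chain (height h1), per step 3
    above. Needs a gap of at least two blocks so an in-between height exists."""
    if h2 <= h1 + 1:
        raise ValueError("no usable gap between the chains yet")
    return (h1 + h2) // 2  # any height with h1 < nLockTime < h2 works
```

Picking the middle of the gap gives some safety margin against the slower chain catching up before the transaction confirms.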
This is hard to do manually, but also hard to automate for non-custodial wallets. User needs to come back several times, lots of edge cases to handle in UI.
wallet must monitor both chains
need to wait for a gap in block height; only works while one side of the fork has a big enough lead. Can’t be used immediately after the fork
sweep is bad for privacy
must wait for step 4, risks: * reorg (e.g. intentional wipeouts) * fees in BTC terms > balance
receiving new unsplit funds. When receiving new funds, the wallet must determine whether those funds are already replay protected, or its coin selection must always include coins that are known to be protected.
We’ll learn about all sorts of new problems as people start losing their money.
> 1MB block
Actually the transaction needs to be slightly smaller than 1 MB (excluding witness), such that it wouldn’t fit into a 1 MB block due to the space needed for the block header and coinbase transaction, while still fitting under the 1,000,000 byte transaction limit on the 2x chain.
It’s non-standard, so it requires coordination with a miner. Expensive, so easier for a service.
Maybe use CoinJoin (if there’s a way to guarantee the tx will be big enough)?
Opt-in hard-fork without alternate transaction history?
IETF’s RFC 7282 is an eloquent document which describes important aspects of consensus, and is worthwhile if you want a more nuanced interpretation than “widespread agreement and disagreements addressed (even if not accommodated)”.
Once we have a concrete technical proposal, and it seems to have some traction, we need to figure out if we really have consensus before it gets deployed.
Moving from RFC 7282 style technical rough consensus to economical and political (rough) consensus is quite problematic. If you want to stay in the spirit of RFC 7282 then you should only use polls to see if there is any opposition. You then need to actively go out and figure out what people’s concerns are and make sure those are reasonably addressed. You have to go through all that before you accept anything below 100% support.
This seems impossible in many cases, as non-technical objections can go all over the place; you may end up having to refute all of Nietzsche to adequately address some convoluted philosophical objection and finally reach rough consensus among all users. It gets even worse if you need to consider potential future users.
The most pragmatic way out of this problem seems to be to make changes opt-in, hence a preference for soft-forks (though not all kinds).
This doesn’t work when it comes to hard forks; you can’t guarantee they won’t be controversial. Once a hard fork is controversial, exchanges start trading it and users will get confused. Replay protection doesn’t solve this problem, because users still need to choose which chain they believe in which is an enormous burden. They might not have agreed to the code changes had they known this outcome.
You can certainly hold off on any hard fork while it’s controversial, but you can’t predict if it suddenly becomes controversial after the point of no return.
The easiest solution would be to never risk a hard fork. One problem with that solution is that you can’t stop others from doing a hard fork and persuading a large economic and hash power majority to join. There’s always someone willing to take more risk. When the scope of this fork is far outside technical rough consensus, perhaps ignoring it and informing users about the risks is the best approach. However when it’s close to rough consensus, pre-empting may be better than ignoring. Thus it may be prudent to have one or more well tested hard-fork candidates ready to go at any moment, even if the preference is to never deploy them.
A second solution, something I think is worth (re)considering, is to kill off the original chain during a hard fork. Perhaps through some sort of merged mining, where the old chain only gets empty blocks or through a soft fork which makes the entire UTXO set unspendable on the original chain. This requires being even more certain about non-technical consensus, which I’ve argued above is near impossible.
We may need to look for a third solution. Something that is opt-in but doesn’t create two alternate transaction histories.