Last year around this time, during the height of the crypto boom, Mack Flavelle brought the Ethereum network, then valued at fifty billion US dollars, to a standstill. With just one dapp, the network slammed into the upper limit of its theoretically possible transactions per second. Gas prices surged, ICOs were forced to change their fundraising strategies, and it became nearly impossible to complete any transaction on Ethereum.
Mack’s creation was CryptoKitties, a blockchain-based video game where users buy, breed, and trade tokenized cats. Its effect on the network was so damaging that it shifted ICO marketing strategies and left a sour taste in fervent supporters’ mouths as they realized how quickly their "world-changing" decentralized network hit the upper bound of its transaction throughput. Maximum transactions per second (TPS) was not even a discussion point before CryptoKitties. Afterwards, it became the primary selling point for many blockchain projects, including Credits, Sparkster and Nano.
The knee-jerk reaction to focus on TPS post-CryptoKitties was ultimately short-sighted, as other issues such as network latency and data throughput matter just as much. Regardless, a realization had taken place, pulling the wool from people’s eyes and presenting hard truths about network capacity. Pre-CryptoKitties, the narrative for Ethereum stumbled from “Bitcoin 2.0” to “world’s computer” to “dapp provider.” Most developers knew about the network’s limits, but crypto holders and traders were oblivious, right up until a digital cat put their crypto addiction on hold.
I became a bit jaded at the time. Fond memories remain of 100+ gwei gas fees, hour-plus wait times for transaction confirmations, and constantly checking the number of unconfirmed transactions on Etherscan to see if the network was usable.
In their current form, most blockchains are not designed to compute large amounts of data on chain. They suffer from what’s known as the scalability trilemma: every network must choose how much weight to give decentralization, security, and scalability. It’s not that one property is inherently more valuable than the others; rather, each blockchain must optimize its architecture to meet its technology stack’s strategic goals.
Ethereum optimized for maximum decentralization. Millions of computers are currently hashing away, trying to solve a puzzle and earn Ether as a reward. The Proof of Work (PoW) system the network uses prevents all but the largest attackers from ever conducting a 51% attack. The trade-off is that vast amounts of energy are required to solve the hash puzzle, and compromises are made on speed and scalability. In its current form, there is no way Ethereum could scale for enterprise use.
This is why Vitalik and his band of developers have been working towards the release of Ethereum 2.0, which will switch the network from Proof of Work to Proof of Stake (PoS). The current network’s reliance on PoW renders it useless to larger businesses entertaining the idea of transacting on the public chain. The switch to PoS increases transactions per second, lowers the cost of operating on the network, and reduces the resources needed to maintain it.
Ethereum’s scaling problems have been tackled by numerous other projects, such as EOS, Zilliqa, and IOTA. However, each of these projects has issues derived from the scalability trilemma that reduce its security or decentralization. EOS relies on 21 block producers, who were exposed for colluding to aggregate votes. Zilliqa’s reliance on PoW opens it to targeted attacks. IOTA isn’t even a viable option yet. There are no good solutions so far.
Recently, I had a chance to sit down with Nick White, Co-Founder of Harmony. He and his team believe they have solved the scalability trilemma with their soon-to-be-released blockchain. Harmony’s motto is “Open Consensus for 10 billion people,” and their goal is to create a system that could scale for use by the entire future global population in 2040.
One of the biggest problems for blockchains is how to scale TPS and data throughput while keeping security and decentralization robust. Many competitors have tried and failed, but Nick and the Harmony team think they have the answer.
Harmony’s solution is twofold. First, the network implements Proof of Stake (PoS), paired with a modified Practical Byzantine Fault Tolerance (PBFT) consensus algorithm called Fast BFT (FBFT). Second, they use sharding, which distributes “not only network communication and transaction validation like Zilliqa, but also shards the blockchain state,” according to the whitepaper.
Sharding distributes transaction validation across groups of nodes instead of having the entire network complete the task, which greatly increases data throughput and TPS. Zilliqa, a blockchain launched in 2018, also implemented sharding. However, its network relies on PoW for key system operations and does not split blockchain data into different shards. As a result, running a Zilliqa node is expensive, and its reliance on PoW makes the network susceptible to “single-shard takeover attacks.”
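The core idea behind sharded validation can be sketched in a few lines of Python. This is purely my illustration, not Harmony’s actual assignment logic: it maps each sender’s address to a shard by hashing, so each shard validates only its own slice of the transaction load.

```python
import hashlib

def shard_for(address: str, num_shards: int) -> int:
    """Deterministically map an account address to a shard by hashing it."""
    digest = hashlib.sha256(address.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# Each shard validates only its own slice of the transactions,
# so aggregate throughput grows roughly with the number of shards.
txs = ["0xalice", "0xbob", "0xcarol", "0xdave"]
by_shard: dict[int, list[str]] = {}
for sender in txs:
    by_shard.setdefault(shard_for(sender, 4), []).append(sender)
```

Because the mapping is deterministic, every honest node agrees on which shard owns which transaction without any extra coordination.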
Harmony has developed several clever blockchain solutions. Let’s take a closer look at some of Harmony’s tech stack:
As mentioned before, Harmony’s network is secured through PoS, and consensus is reached with a modified PBFT model called FBFT. PBFT makes the network tolerant of Byzantine faults, that is, malicious or failed nodes, on the assumption that some nodes will crash and others will have their information maliciously manipulated. It’s designed for high performance and operates in asynchronous networks.
It works as follows: there is one leader node, and the rest are validators. Every node communicates with every other node to come to an agreement on the network state. The leader node broadcasts a transaction, which the validator nodes verify and respond to en masse. In PBFT, agreement from 2f+1 nodes (two-thirds of the network plus one) must be reached for the new system state to be committed. Once completed, a new leader is chosen, and the process begins again.
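The 2f+1 threshold follows from the standard BFT sizing of n = 3f + 1 replicas. A minimal sketch of the arithmetic (my own illustration, not Harmony’s code):

```python
def max_faulty(n: int) -> int:
    """Maximum Byzantine nodes f tolerable with n = 3f + 1 replicas."""
    return (n - 1) // 3

def quorum(n: int) -> int:
    """Matching votes needed to commit a new state: 2f + 1."""
    return 2 * max_faulty(n) + 1

# With 10 validators, the network tolerates 3 malicious nodes
# and needs 7 matching votes before a new state is committed.
print(max_faulty(10), quorum(10))
```

Note that 2f+1 out of 3f+1 is just over two-thirds of the network, which is where the “two-thirds plus one” shorthand comes from.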
“The way PBFT works is the leader proposes a block and everyone then votes on that block saying 'yes' or 'no.' They have to send their votes to everyone else and everyone else has to say 'Okay I see these guys are voting,' and then they have to talk to each other again to verify ‘okay this guy said the same thing to you.’ There's a huge amount of communication overhead and what's great about PBFT is that it reduces that to just everyone telling their votes to the leader who aggregates it and sends it back out,” Nick White explained.
The problem with this consensus mechanism is that communication grows quadratically with the network: each new node must rebroadcast its validation to all other nodes, which is not scalable. Harmony solves this with FBFT, an approach first utilized by ByzCoin. Instead of asking all of the validators to rebroadcast verification of the system state, the leader node collects votes using a Boneh–Lynn–Shacham (BLS) multi-signature signing process. Individual nodes only have to communicate with the leader node, reducing communication complexity from quadratic to linear.
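The difference is easy to see by counting messages per round. A toy comparison (my own back-of-the-envelope model, not figures from the whitepaper):

```python
def pbft_messages(n: int) -> int:
    # All-to-all rebroadcast: every node sends its vote to every other node.
    return n * (n - 1)

def fbft_messages(n: int) -> int:
    # One round trip per validator with the leader, plus the leader's
    # aggregated broadcast counted as one message per validator.
    return 2 * n

for n in (10, 100, 1000):
    print(n, pbft_messages(n), fbft_messages(n))
```

At 1,000 validators, the all-to-all pattern needs nearly a million messages per round, while the leader-aggregated pattern needs a couple of thousand.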
“So essentially the leader has a block. He proposes it to everyone. He broadcasts it and then he collects signatures from everyone. And once he has 2f+1 (2/3rds + 1) signatures then he combines it into one multi-signature then sends it out to everyone saying 'hey this block has been approved.' One of the advantages of FBFT is that if we were to use two signatures as normal, you would need to do two rounds of communication to actually set up the multi signature. But with FBFT it only takes a one round trip.”
The multi-signature addition is what makes their consensus method stand out. Unlike Zilliqa’s, the data and time required to broadcast and commit a system state are fixed: no matter how many nodes join the network, the multi-signature’s data size remains constant.
“So the advantage of a multisig is that you can aggregate signatures together and the signatures can become a constant size. Imagine if there were 600 people and we had to pass around 600 full signatures of the block. It can start to add a lot of communication overhead; but with a multi signature all of a sudden you just take everyone's signatures and combine them into one,” Nick told me in the interview.
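Real BLS aggregation relies on elliptic-curve pairings, but the constant-size property Nick describes can be mimicked with a toy scheme that XOR-combines fixed-size hashes. This sketch is purely illustrative and has none of BLS’s actual security properties:

```python
import hashlib

SIG_BYTES = 32  # every toy "signature" is a fixed-size digest

def toy_sign(block: bytes, signer_key: str) -> bytes:
    # Stand-in for a real BLS signature over the block.
    return hashlib.sha256(block + signer_key.encode()).digest()

def aggregate(sigs: list[bytes]) -> bytes:
    # XOR-combine all signatures; the output size never grows.
    combined = bytes(SIG_BYTES)
    for sig in sigs:
        combined = bytes(a ^ b for a, b in zip(combined, sig))
    return combined

block = b"block #1337"
sigs = [toy_sign(block, f"validator-{i}") for i in range(600)]
multi_sig = aggregate(sigs)
# 600 individual signatures collapse into one 32-byte value.
```

The point of the example is the data-size arithmetic: 600 validators would otherwise mean 600 full signatures on the wire, while the aggregate stays a single fixed-size value.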
For a blockchain that intends to eventually grow to serve billions, linear growth is essential. It allows businesses and miners to plan capital expenditures and budget for network fees.
“Let's say there is a killer app, or let's say we started to gain meaningful adoption and we start hitting that upper boundary and the network starts to get congested, then we spin up a new shard and the congestion is gone,” Nick said.
The current number of active Bitcoin users/hodlers is estimated at 4–5 million. A network like WhatsApp has 1.5 billion users and sees 60 billion messages sent per day. Bitcoin can’t handle this kind of traffic, which is why layer 2 solutions such as the Lightning Network are being developed. Layer 2 will open the door for faster applications and transactions, with value settlement occurring on the main chain. Sharding is likewise key for blockchains to handle massive network growth and adoption.
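The gap is worth putting in numbers, using the figures above:

```python
# WhatsApp-scale load: 60 billion messages per day (figure cited above).
messages_per_day = 60_000_000_000
seconds_per_day = 24 * 60 * 60

avg_tps = messages_per_day / seconds_per_day
print(round(avg_tps))  # roughly 694,444 messages per second on average

# Bitcoin handles on the order of 7 TPS on-chain, so the shortfall
# is about five orders of magnitude before traffic peaks are considered.
```

And that is only the daily average; peak load would be substantially higher.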
Harmony “draws inspiration” from Zilliqa, Omniledger and Rapidchain.
“Omniledger employs a multi-party computation scheme called RandHound to generate a secure random number, which is used to randomly assign nodes into shards. Omniledger assumes a slowly adaptive corruption model where attackers can corrupt a growing portion of the nodes in a shard over time. Under such security model, a single shard can be corrupted eventually. Omniledger prevents the corruption of shards by reshuffling all nodes in the shards at a fixed time interval called epoch. RapidChain builds on top of Omniledger and proposes the use of the Bounded Cuckoo Rule to reshuffle nodes without interruptions.” - Harmony whitepaper
All of this together creates a “PoS-based full sharding scheme that’s linearly scalable and provably secure.” At the beginning of every epoch (each 24-hour period), a new leader is elected along with a beacon chain, and nodes are grouped into shards. “The beacon chain serves as the randomness beacon and identity register, while the shard chains store separate blockchain states and process transactions concurrently,” they write in their whitepaper.
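Epoch-based reshuffling can be sketched as follows. This is my own illustration under the assumption that each epoch’s random seed drives a fresh shard assignment; the real protocol’s assignment rules are more involved:

```python
import random

def assign_shards(nodes: list[str], num_shards: int, epoch_seed: int) -> dict[int, list[str]]:
    """Reshuffle all nodes into shards using the epoch's random seed."""
    rng = random.Random(epoch_seed)
    shuffled = nodes[:]
    rng.shuffle(shuffled)
    return {s: shuffled[s::num_shards] for s in range(num_shards)}

nodes = [f"node-{i}" for i in range(12)]
epoch_1 = assign_shards(nodes, 4, epoch_seed=101)
epoch_2 = assign_shards(nodes, 4, epoch_seed=202)
# A new seed each epoch yields a new grouping, so a slowly adaptive
# attacker cannot gradually concentrate corrupted nodes in one shard.
```

Because every node derives the same assignment from the shared seed, no extra coordination round is needed to agree on shard membership.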
Random Number Generation
Random number generation is an essential function of their blockchain. Every new leader, beacon, and shard is chosen randomly, so it’s an important matter of network security. According to their whitepaper, true random numbers need to be “unpredictable, unbiaseable, verifiable, and scalable.” A fusion of Omniledger’s RandHound, “a leader-driven distributed randomness generation” protocol, and RapidChain’s Verifiable Secret Sharing (VSS) forms the basis for generating random numbers, combined with Algorand’s VRF-based (Verifiable Random Function) cryptographic sortition.
The VRF is a process where validators generate an encrypted random number based on the hash of the last block. Each validator produces a random number, a proof, and a secret key, and sends the first two items back to the leader. The leader combines all of the numbers to generate a final random number and proceeds with consensus and commit. After the leader’s number is committed, it is passed through a Verifiable Delay Function (VDF), which prevents the final randomness from being known for a certain number of blocks afterward.
“The VRF is kind of similar to doing a signature. There is a defined message the leader signs and then everyone sends in their little signature to the leader. In a normal VRF protocol the problem is that the leader can basically cherry pick the signatures that he wants to keep and that way he can bias the randomness. What we do is we force the leader to choose a subset and commit that to a block and then that subset is then put into the VDF and the output of that is the final randomness. The leader has to commit to the pre-image of the VDF, so he can't know what that randomness will be in the end,” Nick described.
Paired together, the VRF and VDF secure the network against attackers who seek to uncover the random number before it is fully revealed.
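The flow above can be sketched end to end with simple hashing standing in for the real VRF and sequential hashing standing in for the VDF. Neither toy primitive is cryptographically equivalent to the real thing; this only shows the shape of the protocol:

```python
import hashlib

def toy_vrf(last_block_hash: bytes, secret_key: bytes) -> bytes:
    # Stand-in for a real VRF: deterministic per validator, hard to
    # predict without the secret key. (No proof in this toy version.)
    return hashlib.sha256(last_block_hash + secret_key).digest()

def toy_vdf(seed: bytes, iterations: int = 10_000) -> bytes:
    # Sequential hashing as a stand-in for a VDF: the chain cannot be
    # parallelized, so the output stays unknown until the delay elapses.
    out = seed
    for _ in range(iterations):
        out = hashlib.sha256(out).digest()
    return out

last_block = hashlib.sha256(b"block 41").digest()
shares = [toy_vrf(last_block, f"sk-{i}".encode()) for i in range(5)]

# The leader combines the committed shares into one seed...
seed = bytes(32)
for share in shares:
    seed = bytes(a ^ b for a, b in zip(seed, share))

# ...and the VDF delays the final randomness past the commit point.
randomness = toy_vdf(seed)
```

The key property is the ordering: the leader commits to the VDF’s input before anyone, including the leader, can compute its output, which is what removes the opportunity to bias the result.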
Harmony's tech looks very promising, if it can scale as they claim while remaining secure. If you're interested in learning more about Harmony, check out their whitepaper. Nick and his team are poised to launch, grow, and build on Harmony over the next year. This is going to be a usable product, not just a decentralized computer science experiment.
If you want to hear more about Harmony and Nick, listen to the full podcast interview.