Compromised Keys Hacks

We present an overview of recent attacks where keys have been compromised, along with possible prevention and mitigation.

  • December 02, 2021 – Badger DAO: a front-end attack, down to a basic OPSEC error, tricked users into granting unlimited approvals to an EOA; ~$120 million taken in various forms of wBTC and ERC-20 tokens.
  • December 05, 2021 – BitMart: the self-proclaimed “Most Trusted Crypto Trading Platform” lost ~$196M from two of its hot wallets on Ethereum and BSC.
  • December 07, 2021 – 8ight Finance: keys were compromised, no further details shared; $1.75 million taken.
  • December 11, 2021 – AscendEX: lost $77.7M, again from a “compromised” hot wallet.
  • December 13, 2021 – Vulcan Forged: $140M wiped from users’ wallets due to a compromised managed-wallet provider.

All of the above protocols were unaudited. The total loss from poor SECOPS for the month of December alone amounts to $535M, without counting losses from the value of the stolen tokens plummeting after each hack.

There isn’t much to write about: the self-proclaimed “Most Trusted Crypto Trading Platform” lost $100M worth of several ERC-20 tokens from one of its Ethereum hot wallets, and a further $96M from a BSC wallet.

Nothing else was disclosed in the post-mortem of the incident to indicate whether it was a security breach, poor devops, or an internal employee.

An unknown party inserted additional approvals to send users’ tokens to their own address.

As the news of users’ addresses being drained reached Badger, the team announced they had paused the project’s smart contracts, and the malicious transactions began to fail around 2 hours 20 mins after they had begun.

Rumours that the project’s Cloudflare account was compromised have been circulating, as have other security vulnerabilities.

The approvals presented themselves when users attempted to make legitimate deposit and reward-claim transactions, building up a base of unlimited wallet approvals that allowed the attacker to transfer BTC-related tokens directly from users’ addresses.

The first instance of approvals for the hacker’s address was almost two weeks earlier, according to PeckShield. Anyone interacting with the platform since then may have inadvertently approved the attacker to drain their funds.

Over 500 addresses approved the hacker’s address: 0x1fcdb04d0c5364fbd92c73ca8af9baa72c269107

A user flagged the suspicious increaseAllowance() approval in Discord.

Sadly, the team brushed it aside.

Their OPSEC might have been bad, though not as bad as their English in the Discord announcement below.

Was this really their first project though? Apparently not.

They even lied about having a multi-sig in place, which clearly was not the case.

Given the evidence, this looks more like an inside job than a hack; best of all, they are still using Facebook.

Ironically, AscendEX made the following tweet the day before the “hack”. They were quick to delete it afterwards; however, somebody managed to immortalize it first:

PeckShield puts the losses at $60M on Ethereum, $9.2M on BSC, and $8.5M on Polygon, all drained from the same compromised hot wallet.

In order to facilitate these use cases, user accounts are linked to an integrated wallet — a service provided by Venly.

The private keys of 96 addresses were compromised, allowing the attacker to drain their contents. As well as $PYR, users also lost substantial amounts of other tokens including ETH and MATIC.

Subsequent sales of the stolen PYR had a large impact on the token price, which dropped ~30% initially, from around $31 to a low of $21.47.

According to the latest update, the majority of affected wallets have already been reimbursed from the treasury, and the team aims to pursue a 100% decentralised system going forward.

Who is to blame for this incident? Is it Vulcan Forged, for a lack of due diligence, or their wallet provider, Venly?

Preventative Techniques

Check the approved address yourself. Don’t trust the site’s UI.

Take the address manually from the MetaMask data and look at the contract on Etherscan.

Things to pay attention to at this point:

  • Is the contract brand new?
  • Who deployed it?
  • Where did the deployer’s funds come from?
  • Is it a proxy?
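If you want to go a step further, you can decode the transaction’s raw data field yourself rather than trusting any UI. A minimal sketch in plain TypeScript, assuming only the standard ABI encoding of approve(address,uint256):

```typescript
// Decode the `data` field of an ERC-20 approve() transaction as copied from
// MetaMask's hex tab. The selector for approve(address,uint256) is
// 0x095ea7b3; the two arguments follow as 32-byte ABI-encoded words.
function decodeApprove(data: string): { spender: string; amount: bigint } {
  const hex = data.toLowerCase().replace(/^0x/, "");
  if (!hex.startsWith("095ea7b3")) {
    throw new Error("not an approve(address,uint256) call");
  }
  const args = hex.slice(8);
  // First word: spender address, left-padded to 32 bytes.
  const spender = "0x" + args.slice(24, 64);
  // Second word: the approval amount as a uint256.
  const amount = BigInt("0x" + args.slice(64, 128));
  return { spender, amount };
}
```

Compare the decoded spender against the contract you actually intend to use on Etherscan, and check whether the amount is what you expect.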

Never approve more than you plan to use. You can always approve more in the future. Yes, it costs a few dollars more in gas, but it limits the damage if you ever get rugged.

So if you approved WETH on some shady contract, only your WETH is at risk from that approval.

If the contract is an upgradeable proxy, you are not only approving the current implementation, you are also approving the next implementation, and the next implementation, and the next implementation…

Use https://revoke.cash to go over each approval and verify that it makes sense. See what is less of a hassle: revoking the odd approval, or migrating all tokens to a fresh address.

Protect your ERC20 token balances by revoking access to the apps you used in the past.

No need to revoke tokens you don’t have anymore and don’t plan to use in the future.
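Under the hood, a revocation is nothing special: tools like revoke.cash simply send approve(spender, 0) to the token contract. A sketch of the calldata they build:

```typescript
// Build the calldata for approve(spender, 0), which resets an ERC-20
// allowance to zero. 0x095ea7b3 is the approve(address,uint256) selector.
function encodeRevoke(spender: string): string {
  const addr = spender.toLowerCase().replace(/^0x/, "").padStart(64, "0");
  const zeroAmount = "0".padStart(64, "0"); // uint256 zero, 32 bytes
  return "0x095ea7b3" + addr + zeroAmount;
}
```

Sending this calldata to the token contract (not to the spender) is all a revocation does.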

Not sure what an infinite approval looks like? It looks like this:
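In raw terms, the “unlimited” amount that wallets warn about is simply the maximum value a uint256 can hold. A quick sketch:

```typescript
// An "infinite" ERC-20 approval is just the maximum uint256:
// 2^256 - 1, i.e. 64 f's in hex.
const MAX_UINT256 = 2n ** 256n - 1n;
const MAX_UINT256_HEX = "0x" + MAX_UINT256.toString(16);
// 115792089237316195423570985008687907853269984665640564039457584007913129639935
```

If you see that number (or a wall of f’s) in the approval amount, you are granting the spender access to your entire balance, forever, until revoked.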

We’ve seen this many times in the past: a project fails to implement access controls and something unexpected happens, e.g. the Parity “hack” where a user accidentally destroyed the contract.

Smart contracts are public, anyone can see and call the code. So it becomes fundamental that contracts implement secure and reliable access controls.

The OpenZeppelin Ownable pattern assigns ownership of a contract to an EOA, multiple EOAs, multisig accounts, or a governance module. The private keys for these accounts can be held in wallets such as MetaMask, hardware wallets, or more advanced setups using secure vaults with Defender Relay. If multisigs are used, they can be managed through a user-friendly UI with Defender Admin or Gnosis Safe.

There are other similar implementations of the ownership pattern, for example the DSAuth contract by Dapphub.

Here is an example of an ERC721 ownable contract with a mint function that auto increments token ids.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.2;

import "@openzeppelin/contracts/token/ERC721/ERC721.sol";
import "@openzeppelin/contracts/access/Ownable.sol";
import "@openzeppelin/contracts/utils/Counters.sol";

contract MyToken is ERC721, Ownable {
    using Counters for Counters.Counter;

    Counters.Counter private _tokenIdCounter;

    constructor() ERC721("MyToken", "MTK") {}

    function safeMint(address to) public onlyOwner {
        uint256 tokenId = _tokenIdCounter.current();
        _tokenIdCounter.increment();
        _safeMint(to, tokenId);
    }
}

The number and privileges of roles will depend heavily on each particular application and the team behind it. Too many roles can be difficult to manage, but splitting privileges limits the risk of granting all control to a single account.

The following example is the same ERC721 contract as the one shown in the preceding paragraph, but with roles implemented using AccessControl rather than Ownable. It is more complex but supports multiple roles, in this case a MINTER_ROLE and a DEFAULT_ADMIN_ROLE, which is the equivalent of the owner role. It is possible to add any number of roles, typically one for burning tokens or one for pausing critical functions.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.2;

import "@openzeppelin/contracts/token/ERC721/ERC721.sol";
import "@openzeppelin/contracts/access/AccessControl.sol";
import "@openzeppelin/contracts/utils/Counters.sol";

contract MyToken is ERC721, AccessControl {
    using Counters for Counters.Counter;

    bytes32 public constant MINTER_ROLE = keccak256("MINTER_ROLE");
    Counters.Counter private _tokenIdCounter;

    constructor() ERC721("MyToken", "MTK") {
        _grantRole(DEFAULT_ADMIN_ROLE, msg.sender);
        _grantRole(MINTER_ROLE, msg.sender);
    }

    function safeMint(address to) public onlyRole(MINTER_ROLE) {
        uint256 tokenId = _tokenIdCounter.current();
        _tokenIdCounter.increment();
        _safeMint(to, tokenId);
    }

    // The following functions are overrides required by Solidity.

    function supportsInterface(bytes4 interfaceId)
        public
        view
        override(ERC721, AccessControl)
        returns (bool)
    {
        return super.supportsInterface(interfaceId);
    }
}

Timelocks are a common mechanism for allowing time-delayed opt-out changes in a system.

The OpenZeppelin TimelockController contract can be used to implement role-based access control with an adjustable time delay.

Other popular timelocks are the Timelock contract used in the Compound protocol, the Timelock contract used in Uniswap, and the DSPause contract used in the [Maker protocol](https://docs.makerdao.com/smart-contract-modules/governance-module/pause-detailed-documentation).

Smart-contract-based governance modules are becoming increasingly common in decentralized applications. In simple terms, they can be understood as:

a set of contracts given access powers to execute sensitive actions on others.

Users can propose and vote for different actions, which can only be executed if enough votes in favour are submitted. Usually there’s some kind of governance token involved to represent the voting power of participants. Examples of this kind of token are UNI, COMP, MKR, and many others.
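The core tally rule these governance systems share can be sketched as follows (the names are illustrative, not Compound’s actual API): a proposal only succeeds if it reaches quorum and has more for-votes than against-votes.

```typescript
// Hedged sketch of a Governor-style vote tally. A proposal succeeds only
// when for-votes meet the quorum AND outnumber against-votes.
interface Tally {
  forVotes: bigint;
  againstVotes: bigint;
}

function succeeded(t: Tally, quorum: bigint): boolean {
  return t.forVotes >= quorum && t.forVotes > t.againstVotes;
}
```

Real governance contracts add proposal states, voting periods, and timelocked execution around this rule, but the pass/fail decision reduces to a check like this.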

Compound’s GovernorAlpha is one of the most well-known examples of governance contracts; a simplified version is implemented in Uniswap. It can easily be wired to a timelock contract. The original version includes a “guardian” role with powers to cancel proposals, transfer rights, and abdicate.

Recently, the GovernorAlpha contract evolved into a more sophisticated and upgradable version: the GovernorBravo. Upgradability operations performed by the admin also have a timelock.

Some additional considerations when implementing governance mechanisms in access controls schemes:

  • Guardians (privileged accounts with special powers) can help in at least two ways:
      • Push for proposals if there’s no active community participation.
      • Stop proposals if there’s malicious intent.
  • Delegation mechanisms for voting power may also allow interested actors to participate on behalf of others who may not always be interested or available.
  • If governance has full control over a system, and there are no privileged accounts involved, setting up off-chain validations, checklists and documentation on procedures becomes crucial to ensure the system is always modified to follow standard procedures.

Finally, as mentioned before, governance tokens may be involved to represent voting power. Therefore, all other attack vectors apply, particularly those related to DeFi such as price oracle manipulation, which can be used to grant the attacker more voting power, i.e. increase the amount of tokens the attacker owns relative to the total amount in circulation.

One way to resist these attacks is by not measuring voting power at the current block. OpenZeppelin’s ERC20Votes extension contract supports Compound-like voting and delegation. This version is more generic than Compound’s, and supports a token supply of up to 2^224 − 1, while COMP is limited to 2^96 − 1.
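The reason snapshot-based voting resists flash-loan-style attacks can be sketched with a checkpoint lookup, loosely mirroring how ERC20Votes stores past balances (the structure and names below are illustrative, not OpenZeppelin’s actual code):

```typescript
// Voting power is read from the latest balance checkpoint at or before the
// proposal's snapshot block, so tokens acquired (e.g. flash-borrowed) after
// that block carry no voting weight.
interface Checkpoint {
  fromBlock: number;
  votes: bigint;
}

function getPastVotes(ckpts: Checkpoint[], blockNumber: number): bigint {
  // Binary search for the last checkpoint with fromBlock <= blockNumber.
  let low = 0;
  let high = ckpts.length;
  while (low < high) {
    const mid = (low + high) >> 1;
    if (ckpts[mid].fromBlock > blockNumber) high = mid;
    else low = mid + 1;
  }
  return low === 0 ? 0n : ckpts[low - 1].votes;
}
```

A balance change after the snapshot block appends a new checkpoint, but the lookup for the proposal’s block still returns the old value.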

Many of these questions are well beyond the scope of a code review. Still, they can serve as triggers for development teams to build a more comprehensive threat model that considers other components aside from smart contracts.

  • Who controls the privileged account? A single private key? Multiple keys with a multisig? A governance module?
  • Does the privileged account hold any other role in the system? If so, could that introduce any risk worth analyzing?
  • How are the private keys stored? Are there backups? Who’s got access to them? Are access operations logged, monitored and audited?
  • How are the private keys generated? Do they sign transactions in standard ways or is the system using some custom implementation?

All private keys that are used by individuals to sign administrative transactions to smart contracts should be stored in hardware wallets that are offline except when used to sign transactions, with the mnemonic for account recovery stored on paper, also offline. For further protection, never share private keys across devices and never use the same internet-connected device to sign transactions with multiple administrative keys. In other words, try to maintain a “one internet-connected device, one administrative key” policy at all times.

Some applications also require the use of administrative keys stored on servers and used as “hot wallets” to sign transactions submitted to smart contracts. This may be required for cron jobs that call administrative functions that need to be activated at specific intervals, or at specific times, or in response to identified events. Server side signing keys may also be used for server-to-server integrations or meta-transactions.

Any administrative keys stored on servers to sign transactions should be stored within a secure key vault. Once created, private keys should never be exported from the key vault and should only be invoked (via secure connection to the vault) for transaction signing. Examples of secure key vaults you might use are the AWS Key Management Service or HashiCorp Vault. Also note that Defender Relayers and Autotasks provide secure key vaults automatically.

As further protection, rotate all administrative keys every six months, including the keys used for signing transactions and any keys required to obtain access rights to signing (for example keys required to invoke signing when using a secure key vault). Rotation further reduces the chance of the administrative keys being leaked, and practicing key rotation is also useful in case a key leak is detected and you are required to quickly rotate the keys to stop potential attackers. If you are using Defender Relayers, rotate the API keys every 6 months.

Utilizing the above best practices along with requiring multiple signatures for every administrator-signed transaction (using a multi-sig wallet such as Gnosis Safe) provides adequate security protections against stolen administrative keys.

The following is example code that can be used to securely sign a transaction on a server without extracting the private key from the vault, using AWS KMS. Note that this would also require the implementation of Ethereum signature details. Defender Relayers provide this functionality automatically.

import KMS from 'aws-sdk/clients/kms';

// Sign a 32-byte digest with a key held in AWS KMS; the private key never
// leaves the vault, KMS performs the ECDSA operation internally.
async function sign(kms: KMS, keyIdOrAlias: string, payload: Buffer): Promise<Buffer> {
    const params: KMS.SignRequest = {
        KeyId: keyIdOrAlias,
        SigningAlgorithm: 'ECDSA_SHA_256',
        Message: payload,
        MessageType: 'DIGEST',
    };

    const response = await kms.sign(params).promise();
    if (Buffer.isBuffer(response.Signature)) {
        return response.Signature;
    }
    throw new Error(`Error using key ${keyIdOrAlias} signing: ${payload.toString('hex')}`);
}

Critical administrative tasks should not be controlled by a single signer. Requiring multiple signatures to execute these sensitive tasks reduces the impact of private keys being lost or compromised.

Transactions that require multiple signatures can be implemented with a Multisig Wallet or a Threshold Signature.

As with many parts of this young ecosystem, there are not many documented best practices or experiences for using multiple signatures. Take into account that while they can increase the security of your system when carefully implemented, they are also an extra layer of complexity that can introduce new risks and points of failure.

Following are a set of guidelines and recommendations to take into account when implementing your multisignatures.

  • Assign different roles for different types of tasks. OpenZeppelin Contracts provides the Roles library for implementing role-based access control.
  • Define a hierarchy of permission levels linked to the roles.
  • Require a higher number of signatures for roles higher in the hierarchy.
  • Use independent multisignatures for every role.
  • Give ownership of every private key to a different individual.
  • Train every key-holder on the secure handling of their devices and communications. The Electronic Frontier Foundation published good resources for safe online communications.
  • Distribute the keys to individuals that guarantee diversity of geographic locations, hardware manufacturers, and software applications.
  • Do not use a 1 of N multisignature because losing control of a single private key means the system is immediately compromised.
  • Do not use an N of N multisignature because it does not provide any redundancy in case some of the private keys are lost.
  • Consider using an M of N multisignature, with M = (N/2) + 1 to balance concerns. A lower M allows for quicker response. A higher M requires a stronger majority support.
  • For configuration tasks, like changing parameters, a 2 of 3 multisignature might be enough. Consider using other kinds of fail-safes, like allowing the parameters to vary only by a small delta and with some time window before they are applied, so the functioning of the system does not completely change with a single transaction. Consider separating the tasks that are adjustments from the ones that disable or enable functionality. For example, do not allow a parameter to be adjusted to 0 if it means a functionality will stop working; define a separate function with a separate role to disable it.
  • For more critical and less common tasks, like upgrading the implementation of a contract, a 3 of 5 multisignature might be better. If the tasks are time-sensitive, the key-holders must agree to be on-call to react immediately after being notified.
  • For tasks that require representation from different stakeholders, consider giving ownership of private keys to two representatives of each stakeholder category.
  • Document the roles, corresponding tasks, and key-holder contacts in a private repository.
  • Document the threat model for every role and task.
  • Automate the monitoring, administrative triggers, and notification of key-holders.
  • Test and run simulations of all the situations that will require multiple signatures.
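The majority rule suggested above, M = (N/2) + 1, is a one-liner; a sketch:

```typescript
// Smallest number of signers that still constitutes a strict majority:
// M = floor(N / 2) + 1.
function majorityThreshold(n: number): number {
  return Math.floor(n / 2) + 1;
}
```

For example, majorityThreshold(3) gives the 2-of-3 configuration suggested for parameter changes, and majorityThreshold(5) gives the 3-of-5 configuration suggested for upgrades.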

Note that multisignatures are part of permissioned systems. While they can be implemented with a degree of decentralization, they are far from the ideal of permissionless systems kept stable by the interest of all their participants. Consider them a fail-safe during the early stages of your system, or a temporary solution until safer governance mechanisms evolve.

See the Defender Advisor article “Secure All Administrative Keys” for best practices related to the individual private keys.

Gnosis Safe provides a graphical user interface to set up multisignature wallets for the Ethereum blockchain.

The Defender Admin service acts as an interface to manage your smart contract project through one or more secure multi-signature contracts.

The following are just a few examples of events to monitor for, specifically related to (privileged) accounts activity.

If approved administrators did not trigger the related transaction / set of transactions, private keys controlling the privileged account may have been compromised. The entire system, including users’ funds, might be at risk. Also, mistakes in executing administrative transactions could lead to unintended and potentially widespread effects. Instituting a post-transaction audit and review process for administrative changes is considered a security best practice.

Therefore it is recommended to monitor and confirm all administrative transactions. This can be done by monitoring emitted events (if implemented) or specific contract functions, or by monitoring any use at all of privileged accounts, especially if those accounts should only be used for specific administrative transactions.

For example, set up monitoring for:

  • Changes in sensitive parameters of core contracts
  • Calls to functions which add, renounce, or transfer ownership
  • Withdrawals of central funds
  • Calls to functions which allow or block access to assets or capabilities for specific accounts
  • Calls using privileged accounts not related to expected administrator operations, detected via inspection of mined transactions
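As a sketch of that last point, a monitor can flag any privileged-account transaction whose function selector is on a watchlist of sensitive calls. The selectors below are the standard ones for OpenZeppelin’s Ownable, but verify them against your own contract’s ABI before relying on this:

```typescript
// Flag transactions whose calldata starts with a watchlisted selector.
// Selectors shown are the standard ones for OpenZeppelin Ownable; extend
// the set with your own privileged functions.
const SENSITIVE_SELECTORS = new Set([
  "0xf2fde38b", // transferOwnership(address)
  "0x715018a6", // renounceOwnership()
]);

function isSensitiveCall(txData: string): boolean {
  // The selector is the first 4 bytes (8 hex chars) after the 0x prefix.
  return SENSITIVE_SELECTORS.has(txData.slice(0, 10).toLowerCase());
}
```

A monitoring job would run this over every mined transaction originating from privileged accounts and page an operator on a match.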

Extremely frequent use of the protocol by a single account may reflect malicious operations, either an attempt to exploit a zero-day vulnerability or transaction spam on the network. This is particularly relevant for accounts that have never interacted with the protocol before.

When funds stored in contracts drop significantly, it may indicate either users exiting the system or a vulnerability being exploited to steal funds at scale. Set up monitoring to detect funds dropping below specified thresholds, and notify administrators or stakeholders when such drops occur.
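Such a threshold check can be sketched as follows (the percentage parameter is an assumption you would tune per protocol):

```typescript
// Alert when funds fall by more than maxDropPct percent between two
// observations (e.g. successive blocks or polling intervals).
function shouldAlert(prev: bigint, curr: bigint, maxDropPct: number): boolean {
  if (prev === 0n) return false; // nothing to compare against
  const dropPct = Number(((prev - curr) * 100n) / prev);
  return dropPct > maxDropPct;
}
```

In practice the `prev` and `curr` values would come from periodic balance reads of the contract’s addresses, and a match would trigger the administrator notifications described above.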

Extropy.IO

Oxford-based blockchain and zero knowledge consultancy and auditing firm