📜 Finding Severity Criteria

Severity Matrix

The severity matrix serves as the baseline for assigning a severity to a bug.

It's often easier to think about an issue along two separate axes: its impact and the likelihood of its occurrence.

  • A High impact situation would be one where funds can be lost.

  • A High likelihood situation would be one in which any participant can trigger such a bug in the protocol.

Severity              Impact: High    Impact: Medium    Impact: Low

Likelihood: High      High            High              Medium

Likelihood: Medium    High            Medium            Low

Likelihood: Low       Medium          Low               Low
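The matrix can be sketched as a simple lookup table. The cell values below follow common audit practice and are an assumption on our part; always defer to the matrix published by the platform you are competing on.

```python
# Baseline severity as a lookup over (impact, likelihood) pairs.
# Cell values are an assumption based on common audit practice.
SEVERITY_MATRIX = {
    ("high", "high"): "High",
    ("high", "medium"): "High",
    ("high", "low"): "Medium",
    ("medium", "high"): "High",
    ("medium", "medium"): "Medium",
    ("medium", "low"): "Low",
    ("low", "high"): "Medium",
    ("low", "medium"): "Low",
    ("low", "low"): "Low",
}

def baseline_severity(impact: str, likelihood: str) -> str:
    """Return the baseline severity for an (impact, likelihood) pair."""
    return SEVERITY_MATRIX[(impact.lower(), likelihood.lower())]
```

For example, a bug where funds can be lost (High impact) but that only occurs under a rare, hard-to-trigger condition (Low likelihood) would start from a Medium baseline before the subjective adjustments discussed below.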
However, this is just the first step in assigning severity and has a lot of subjectivity in practice. When in doubt, consider what would happen to the protocol when such a bug is not fixed. If it leads to a catastrophic scenario that can be triggered by anyone or occur naturally, then it is very likely to be a High severity bug.

If the protocol can function without the bug getting fixed, it's likely to be a low severity bug.

The last consideration to make is to put yourself in the shoes of the protocol designer. The best way to do this would be to think about how to mitigate the bug.

We highly recommend that everyone submit a recommended mitigation for every finding. This exercise can help you understand the tradeoffs that protocol designers have to make.

For example, building a permissionless protocol often means making tradeoffs on certain properties. The Uniswap protocol allows anyone to deploy a pool, and in most cases these are perfectly legitimate pools, but pools can also be set up with malicious tokens that behave in adversarial ways.

There are two ways to mitigate such issues: adding a whitelist of all 'genuine' pools, or relying on off-chain trust to determine which pools are legitimate.

If your suggested fix goes against the design philosophy of the protocol, it's very likely to be at most an informational issue.

Important Considerations

  • If you submit a High or Medium severity issue, we strongly encourage you to submit a proof of concept. This is a piece of code, often an addition to the protocol's test suite, that can be used to confirm the validity of the finding.

  • Issues that are ultimately user errors and can be easily managed in the front-end should at most be informational.

  • Issues that require admin access (or equivalent) to perform should at most be low severity, unless the protocol was designed to be resilient against such actions in the first place.

  • AI-generated findings: submitting AI-generated findings without validating them can lead to disqualification or, worse, a permanent ban.

  • The goal of a security review is for protocols to make meaningful changes that improve their security. Make sure the findings you submit contribute to this cause.

  • In public competitions, a judge acts as an independent arbiter for any disagreements. After judging is done, there is an escalation phase where judgements can be contested. Anyone can currently contest a judgement, but there will be a penalty for escalations that are found to be invalid.

  • Be mindful of the judges' and the protocol-team's time.
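To illustrate the shape of a proof of concept, here is a minimal sketch written as a unit test. `BuggyVault` is an invented toy model, not a real contract; in practice a PoC would be written against the protocol's actual test suite. The pattern is the same: set up state, trigger the bug, and assert the impact.

```python
class BuggyVault:
    """Toy vault whose withdraw() forgets to reduce the balance.

    Purely hypothetical -- used only to show what a PoC looks like.
    """

    def __init__(self):
        self.balances = {}

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user, amount):
        assert self.balances.get(user, 0) >= amount, "insufficient balance"
        # BUG: the balance is never decremented, so the same funds
        # can be withdrawn repeatedly.
        return amount


def test_double_withdraw():
    vault = BuggyVault()
    vault.deposit("attacker", 100)
    # The attacker withdraws the same deposit twice.
    stolen = vault.withdraw("attacker", 100) + vault.withdraw("attacker", 100)
    # Funds are lost (High impact) and any participant can do this
    # (High likelihood) -- a High severity finding per the matrix.
    assert stolen == 200


test_double_withdraw()
```

A failing-then-fixed test like this gives the judge and the protocol team a concrete, reproducible demonstration, which is far more persuasive than prose alone.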

Protocol Behavior

  • The competition README, not other sources, must be used as the main reference for protocol behavior.


  • Approval race conditions for ERC20 tokens will be considered invalid.

  • Assume by default that a protocol will be using only standard ERC20 tokens. Any findings that rely on weird token properties should at most be a low severity finding.

  • Losing dust amounts, say, due to rounding is at most a low severity finding.

  • Any finding that has been acknowledged in a previous report will be considered invalid.
