Welcome our AI Audit Overlords

Extropy.IO
Jul 2, 2023


AI Audit

Introduction

The recent introduction of GPT-4 has prompted many claims about its value. In this article I want to look at how we can use such tools to assist in auditing smart contracts. I will focus on the Solidity language and on smart contracts written for DeFi applications, as these make up the majority of the audits we see.

Current Process and Tools

Imagine we are about to start an audit. We have some code, maybe some documentation, and perhaps some details from the developers. How do we go about finding the vulnerabilities?

Audit companies vary in the approach they take to auditing, but commonly they will use:

1. Code inspection
2. Static analysis tools
3. Enhanced testing
4. Formal verification

For our audits we have concentrated on code inspection, which relies on the experience of the auditor: experience both of Solidity and of vulnerabilities in DeFi. The reason we approach the audit at this level is tied to the way that contracts are exploited.

The majority of exploits operate at a quasi-economic level, more complex than simple logic errors or misunderstandings of language features. For example, an attacker may take advantage of a flaw in the business logic of the project to move a price in a favourable direction and then exploit it. For this reason it is important that auditors have a clear understanding of a project’s business logic.
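
To make this concrete, here is a minimal, hypothetical sketch of the kind of pattern behind many of these exploits: a lending contract that prices collateral straight from an AMM pool’s spot reserves. The contract and interface names are mine, not taken from any audited project.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Illustrative interface, modelled loosely on a constant-product AMM pair.
interface IPair {
    function getReserves()
        external
        view
        returns (uint112 reserve0, uint112 reserve1, uint32 blockTimestampLast);
}

// Hypothetical lender that prices collateral from spot reserves.
contract NaiveLender {
    IPair public pair;

    constructor(IPair _pair) {
        pair = _pair;
    }

    // Business-logic flaw rather than a language-level bug: the spot reserve
    // ratio can be moved within a single transaction (e.g. with a flash loan),
    // so an attacker can inflate the reported collateral value and borrow
    // against collateral that is worth far less.
    function collateralPrice() public view returns (uint256) {
        (uint112 r0, uint112 r1, ) = pair.getReserves();
        return (uint256(r1) * 1e18) / uint256(r0);
    }
}
```

Nothing here would trip a purely syntactic check; the vulnerability only becomes visible once you understand how the price feed is meant to be used.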

Some of this logic is common across projects: underlying DeFi principles are reused across projects and across versions of a project, hence the advantage of an auditor’s experience.

Static analysis tools can be useful for finding vulnerable patterns in code, but they rely on those patterns being known, and vulnerabilities that require complex interactions will be difficult to detect.
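
As an illustration, the sketch below shows the sort of known pattern these tools are good at: a classic reentrancy bug (external call before the state update) that analysers such as Slither can report straight from the code, with no knowledge of the project’s wider logic. The contract is hypothetical.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical vault with a textbook reentrancy flaw: the external call is
// made before the balance is zeroed, so a malicious receiver can re-enter
// withdraw() and drain the contract. Static analysers flag this
// "state written after external call" pattern without needing any context.
contract VulnerableVault {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() external {
        uint256 amount = balances[msg.sender];
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
        balances[msg.sender] = 0; // state update should come before the call
    }
}
```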

Enhanced testing techniques such as fuzz testing can help expose niche use cases that may have been missed by the development team; however, the search space of inputs is so large that fuzzing alone is unlikely to provide sufficient coverage.
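
For example, in a Foundry project a fuzz test is simply a test function with parameters: forge generates many input values and checks the property for each. The sketch below reuses the hypothetical VulnerableVault from the previous example, and the import path is an assumption about project layout.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "forge-std/Test.sol";
import {VulnerableVault} from "../src/VulnerableVault.sol"; // hypothetical path

// Foundry fuzzes the `amount` parameter: the property "a deposit is credited
// in full" is checked against many generated values rather than a handful of
// hand-picked ones.
contract VaultFuzzTest is Test {
    VulnerableVault vault;

    function setUp() public {
        vault = new VulnerableVault();
    }

    function testFuzz_depositCreditsFullAmount(uint96 amount) public {
        vm.deal(address(this), uint256(amount));
        vault.deposit{value: amount}();
        assertEq(vault.balances(address(this)), uint256(amount));
    }
}
```

Even so, the input space of a realistic protocol, with multiple actors, sequences of calls and price movements, is far larger than a single function’s arguments, which is why coverage is hard to achieve.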

Formal verification (and I include symbolic execution tools here) takes a different approach: these tools can examine all possible paths through the code and/or compare it to an abstract specification of how the project should work. This rigour comes at a cost, however: building a formal specification is time consuming, and the end result is only as good as the specification.
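
One lightweight way to see the difference is a property written directly into the code as an assert, which a symbolic tool tries to prove on every path rather than on sampled inputs; solc’s built-in SMTChecker, enabled through compiler settings, works this way. The token below is a hypothetical example, not a real project.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// The assert states an invariant: a transfer must conserve the combined
// balance of sender and receiver. A symbolic / formal tool attempts to prove
// this holds on all paths (including the self-transfer case), rather than
// checking a sample of inputs.
contract Token {
    mapping(address => uint256) public balanceOf;
    uint256 public totalSupply;

    constructor(uint256 supply) {
        totalSupply = supply;
        balanceOf[msg.sender] = supply;
    }

    function transfer(address to, uint256 amount) external {
        require(balanceOf[msg.sender] >= amount, "insufficient balance");
        uint256 sumBefore = balanceOf[msg.sender] + balanceOf[to];
        balanceOf[msg.sender] -= amount;
        balanceOf[to] += amount;
        // Invariant checked by the verifier.
        assert(balanceOf[msg.sender] + balanceOf[to] == sumBefore);
    }
}
```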

Example audit

The deliverable of an audit is a report that lists the issues found, along with details of the code audited and the methodology used. As an example, you can see our audit of the Float protocol.

Use of ChatGPT

ChatGPT is a large language model (LLM) that has some knowledge of smart contracts, though this may not be current; it is wholly dependent on the quality and scope of its training data. Its conversational abilities mean that it can provide some understanding of context, and possibly link together pieces of code.

ChatGPT (and other automatic tools) can process large volumes of code quickly and will be consistent in their performance. However, these tools currently have a limited understanding of context: AI models, including large language models, are trained on vast datasets but may lack specific domain expertise (though see below for the work we are doing).

As a result, they might not always understand the broader context and implications of the smart contract’s functionality, leading to incomplete or inaccurate evaluations.

A further problem with AI tools is explainability: the tool may produce a potentially useful result, but it may not be obvious how that result was produced, and the auditor may therefore be reluctant to use it without time-consuming further investigation.

At Extropy we have written a ChatGPT-based tool, Soltron, intended as a first step when checking Solidity contracts; you can try it out here. We are also working on a model trained on existing audit reports; if you would like to know more, contact us via our website.

Comparison with static analysis tools

I feel these tools are most similar to static analysis tools, in that they will be able to spot common patterns in code that have led to vulnerabilities in the past, but will be unable to perform the complex analysis that code inspection or formal verification can provide.

AI models and static analysis tools are at a disadvantage when it comes to evolving threats: AI models are trained on historical data, and their knowledge is limited to the information they were exposed to during training. Consequently, they may struggle to identify novel or emerging threats that have not been encountered before, which could leave smart contracts vulnerable to new attack vectors.

In essence, I am comparing how humans and automatic tools can update their understanding of the threat landscape.

Conclusion

We are still at an early stage, and no doubt tools based on LLMs will improve and become more useful to auditors. As noted above, I place such tools alongside static analysis tools for the moment: they are useful, and may highlight some otherwise overlooked issues, but I will still rely primarily on code inspection when auditing.

You may be wondering if this article was written by ChatGPT… It wasn’t, though I did use GPT-4 to explore how it thought of itself as an auditor, not that I assume it has a self to see with.

If you are a developer wishing to learn how to audit code, please take a look at our online auditing course.

If you would like a human to audit your project, or to learn more about our services, visit our security website.

Written by Extropy.IO

Oxford-based blockchain and zero knowledge consultancy and auditing firm
