TY - GEN
T1 - Towards a Certified Proof Checker for Deep Neural Network Verification
AU - Desmartin, Rémi
AU - Isac, Omri
AU - Passmore, Grant
AU - Stark, Kathrin
AU - Komendantskaya, Ekaterina
AU - Katz, Guy
N1 - Funding Information:
R. Desmartin and O. Isac—Both authors contributed equally. R. Desmartin—Funded by Imandra Inc. E. Komendantskaya—Funded by EPSRC grant AISEC (EP/T026952/1) and NCSC grant “Neural Network Verification: in search of the missing spec”.
Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.
PY - 2023/10/16
Y1 - 2023/10/16
N2 - Recent developments in deep neural networks (DNNs) have led to their adoption in safety-critical systems, which in turn has heightened the need for guaranteeing their safety. These safety properties of DNNs can be proven using tools developed by the verification community. However, these tools are themselves prone to implementation bugs and numerical stability problems, which make their reliability questionable. To overcome this, some verifiers produce proofs of their results which can be checked by a trusted checker. In this work, we present a novel implementation of a proof checker for DNN verification. It improves on existing implementations by offering numerical stability and greater verifiability. To achieve this, we leverage two key capabilities of Imandra, an industrial theorem prover: its support for exact real arithmetic and its formal verification infrastructure. So far, we have implemented a proof checker in Imandra, specified its correctness properties and started to verify the checker’s compliance with them. Our ongoing work focuses on completing the formal verification of the checker and further optimising its performance.
AB - Recent developments in deep neural networks (DNNs) have led to their adoption in safety-critical systems, which in turn has heightened the need for guaranteeing their safety. These safety properties of DNNs can be proven using tools developed by the verification community. However, these tools are themselves prone to implementation bugs and numerical stability problems, which make their reliability questionable. To overcome this, some verifiers produce proofs of their results which can be checked by a trusted checker. In this work, we present a novel implementation of a proof checker for DNN verification. It improves on existing implementations by offering numerical stability and greater verifiability. To achieve this, we leverage two key capabilities of Imandra, an industrial theorem prover: its support for exact real arithmetic and its formal verification infrastructure. So far, we have implemented a proof checker in Imandra, specified its correctness properties and started to verify the checker’s compliance with them. Our ongoing work focuses on completing the formal verification of the checker and further optimising its performance.
KW - AI Safety
KW - Deep Neural Network
KW - Formal Verification
UR - http://www.scopus.com/inward/record.url?scp=85175806265&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-45784-5_13
DO - 10.1007/978-3-031-45784-5_13
M3 - Conference contribution
AN - SCOPUS:85175806265
SN - 9783031457838
T3 - Lecture Notes in Computer Science
SP - 198
EP - 209
BT - Logic-Based Program Synthesis and Transformation. LOPSTR 2023
A2 - Glück, Robert
A2 - Kafle, Bishoksan
PB - Springer
T2 - 33rd International Symposium on Logic-Based Program Synthesis and Transformation 2023
Y2 - 23 October 2023 through 24 October 2023
ER -