London [UK]: Amid growing worldwide interest in generative artificial intelligence (AI) systems, researchers at the University of Surrey have created software that can verify how much information an AI has harvested from an organisation's digital database. Surrey's verification software can be used as part of a company's online security protocol, helping an organisation understand whether an AI has learned too much or has accessed sensitive data.
The software can also determine whether an AI has identified, and could exploit, flaws in software code. In an online gaming context, for example, it could detect whether an AI has learned to always win at online poker by exploiting a coding fault. Dr Solofomampionona Fortunat Rajaona is a Research Fellow in formal verification of privacy at the University of Surrey and the lead author of the paper.
He said: "In many applications, AI systems interact with each other or with humans, such as self-driving cars on a highway or hospital robots. Working out what an intelligent AI data system knows is an ongoing problem that has taken us years to solve."