Artificial intelligence is becoming increasingly capable, but early releases may not be fully prepared for the real world. (Image: Microsoft)
“Real-world examples are limited, and the number of non-benign conditions out there is large,” Dr. Lisa Dolev, CEO of AI specialists Qylur Intelligent Systems, told Forbes. “That combination can cause problems.”
The US Air Force will need to update and retrain AI software when it encounters adverse conditions, and AFWERX, the directorate tasked with accelerating technology adoption in the Air Force, has awarded a Phase I Small Business Innovation Research contract to Qylur. The contract will provide a means to update AI systems in the field rapidly and efficiently using the Social Network of Intelligent Machines, or SNIM AI®, which manages updates for AI-based systems such as drones and ground robots.
SNIM AI identifies problems within AI models and helps solve them with data collected by all the machines connected to the system.
No machine-learning system can be trained on every possible situation, and when it encounters something new it may not be able to cope. Dolev says this could be something as simple as an object-recognition system seeing snow for the first time and finding that everything looks different. She says such issues are inevitable in the type of systems the Air Force deploys because of the limited data available.
“It’s not like learning to recognize cats, where you have endless data from the Internet. You are working with small, noisy data sets,” says Dolev. “It’s a messy environment.”
The problem of trained systems losing accuracy when real-world conditions shift away from their training data is known as AI model drift, and it is well known in the business world. But while commercial operations can tolerate drift problems over the course of a few days, anything in the defence world needs to be rectified as soon as possible.
“In our system we have drift monitors embedded inside the loop so we can see if something is incorrect. If it’s incorrect, we need to go back and retrain,” says Dolev.
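Qylur has not published the internals of SNIM AI, but the drift-monitor idea can be sketched in a few lines of Python. The example below is purely illustrative, with assumed names, a confidence-based test and made-up thresholds: it flags drift when the model's average prediction confidence in the field sags well below its lab baseline.

```python
# Minimal sketch of an in-the-loop drift monitor. Qylur has not published
# SNIM AI's internals; the names and the confidence-based test here are
# illustrative assumptions, not the actual product.
from collections import deque
from statistics import mean

class DriftMonitor:
    """Flags drift when the model's average prediction confidence over a
    sliding window falls well below the baseline measured at training time."""

    def __init__(self, baseline_confidence: float, window: int = 200,
                 tolerance: float = 0.15):
        self.baseline = baseline_confidence  # mean confidence on validation data
        self.recent = deque(maxlen=window)   # confidences seen in the field
        self.tolerance = tolerance           # allowed drop before we flag

    def observe(self, confidence: float) -> bool:
        """Record one inference; return True if the window now signals drift."""
        self.recent.append(confidence)
        if len(self.recent) < self.recent.maxlen:
            return False                     # not enough field data yet
        return mean(self.recent) < self.baseline - self.tolerance

# Example: a recognizer that scored ~0.92 in the lab starts seeing snow
# and its confidence sags, so the monitor trips and triggers retraining.
monitor = DriftMonitor(baseline_confidence=0.92)
for score in [0.91, 0.90] * 60 + [0.55, 0.60] * 120:
    if monitor.observe(score):
        print("Drift detected: flag for retraining")
        break
```

A production monitor would watch more signals, such as input statistics and per-class accuracy, but the shape is the same: compare field behaviour against a lab baseline and raise a retraining flag when the two diverge.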
SNIM AI allows data from all of the connected machines to be pooled and used for retraining, so in the snow example it could draw on every image of snow-covered objects captured across the fleet. The updated recognition system would then be tested and verified by a human operator before being pushed back out to some of the machines in the field.
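SNIM AI's actual interfaces are not public, so the pool-retrain-verify-push cycle can only be illustrated with assumed names. The sketch below stubs out the training job and the operator's approval, but follows the sequence Dolev describes:

```python
# Hypothetical sketch of the pool/retrain/verify/push cycle; every name
# here is an assumption, not SNIM AI's real API.
from dataclasses import dataclass, field

@dataclass
class Machine:
    name: str
    collected_samples: list = field(default_factory=list)  # labeled field imagery
    model_version: int = 1

def retrain(version: int, samples: list) -> int:
    """Stand-in for the real training job; returns a new model version."""
    print(f"retraining v{version} on {len(samples)} pooled samples")
    return version + 1

def human_approves(version: int) -> bool:
    """Stand-in for the operator's verification before any rollout."""
    return True  # in practice: lab metrics plus a physical sanity test

def update_cycle(fleet: list, condition: str) -> None:
    # Pool the relevant data (e.g. snow imagery) from every connected machine.
    pooled = [s for m in fleet for s in m.collected_samples if condition in s]
    new_version = retrain(fleet[0].model_version, pooled)
    if human_approves(new_version):      # nothing ships without human sign-off
        for m in fleet:                  # mission-adaptive targeting (below)
            m.model_version = new_version  # would narrow this list

fleet = [Machine("drone-1", ["snow/crate", "snow/truck"]),
         Machine("ugv-2", ["snow/building"])]
update_cycle(fleet, "snow")
```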
Dolev says they do not deploy everything to everyone. The sparse computing resources on board drones and other mobile systems mean they cannot be burdened with all the data. SNIM AI is ‘mission adaptive’, so the snow update, for example, would not be applied to machines operating in the tropics.
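The mission-adaptive filtering could be as simple as matching an update's training conditions against each machine's declared operating environment. The snippet below is a hypothetical illustration of that matching, not Qylur's implementation:

```python
# Hypothetical mission-adaptive targeting: assume each machine advertises
# an environment profile the update server can match against.
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    environments: frozenset  # conditions this machine's mission can encounter

def select_targets(fleet: list, update_conditions: frozenset) -> list:
    # Only machines whose mission overlaps the update's conditions receive it,
    # sparing the rest the bandwidth and scarce onboard compute.
    return [m.name for m in fleet if update_conditions & m.environments]

fleet = [Machine("drone-arctic", frozenset({"snow", "ice"})),
         Machine("drone-tropics", frozenset({"jungle", "rain"}))]
print(select_targets(fleet, frozenset({"snow"})))  # ['drone-arctic']
```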
Dolev has plenty of experience in this field, having started 17 years ago, when the very idea of embedding artificial intelligence in drones and other systems seemed like magic to some people. Qylur’s products are used in the commercial security field for the automated detection of explosives and other threats at public venues. The need for rapid, responsive updates led Dolev to develop SNIM AI.
While in theory SNIM AI could be completely automated, Dolev has a strict policy of keeping human oversight in the process.
“I insist not only on looking at the math and results, but also on having a forced physical ‘sanity test’ to verify that what we see in the lab matches up with what we see in the field,” says Dolev.
Putting human common sense in the update loop removes the risk that an AI will retrain itself in a way that makes the problem worse. AI may be smart, but it is notoriously brittle and prone to bizarre errors, so Dolev’s approach gives a degree of security. This is going to be increasingly important as AI gains momentum. Businesses – and militaries – are scrambling to get systems deployed so they are not left behind. These autonomous machines need to be safe and reliable.
As the example of Microsoft’s Tay chatbot shows, any system may encounter adversaries who deliberately try to trip it up, feed it misleading data or confuse it with situations not included in its training data. This is especially true in defence, where there is already research into camouflage designed specifically to fool automated recognition algorithms. An entire specialist field of ‘counter-AI warfare’ may emerge, with the goal of finding the brittle points of AI systems that cause them to fail in ways no human ever would.
Being able to exploit the weaknesses of AI could give a decisive advantage, but not if those exploits are immediately identified and corrected. Rapid update processes like SNIM AI will help keep the Air Force’s autonomous machines one step ahead of their adversaries.