AI Obsolescence: Is your algorithm as accurate today as it was yesterday?

Nadia Hutchings
Marketing Manager, Zircon Software

In a series of blogs on the Zircon Software website covering software obsolescence, we’ve looked at the lifespan disparity between digital and physical systems in critical infrastructure, and the inescapable link between cybersecurity and obsolescence.

But there’s another area with its own unique form of obsolescence: artificial intelligence.

If we were to ask you, could you tell us how accurate your AI algorithm is? Not at the time it was first purchased or developed, but right here, right now? This week compared to last week?

To help get a grasp on what this question really means, I spoke with Dr. Peter Overbury, a foremost expert in the field of ML and AI – and Head of AI at Zircon Software.  We discussed obsolescence management in AI, and what the organisations dependent on these algorithms should be doing to safeguard their futures.

A form of obsolescence unique to AI

As history has shown, the factors that typically cause obsolescence in software include the end of vendor support, changing user requirements, and hardware reaching end of life or becoming incompatible. These are all relatively easy to spot and address.

AI, on the other hand, has to contend with the “traditional” forms listed above, but it also has its own unique flavour of obsolescence. Black box AI systems can degrade or even fail – sometimes rapidly – despite no change in the hardware or the models employed.

This is because, while progress in AI is often stop-start with occasional seismic shifts – as evidenced by the seemingly overnight one-upping of ChatGPT by DeepSeek – obsolescence in AI doesn’t necessarily arise from quantum leaps in hardware or modelling. It arises from data.

Sudden environmental changes, temporary pattern shifts, biases built into the data, and outlier events can all cause an AI algorithm’s accuracy to degrade. Much of this comes down to how AI systems are trained.

Peter explains that AI training comes in two main forms: supervised and unsupervised.

“Supervised [training] is like teaching someone French in a classroom, and unsupervised is like dropping them in France.

“Some unsupervised systems adapt to new data over time, but they can make the wrong assumptions if they learn too fast; like nobody ever using trains again because of COVID.”

The COVID pandemic presented a sudden, striking change to long-established behaviours – and to AI training. During the lockdowns, people being forced to stay at home and wear face masks in public led to widespread issues in everything from passenger flow prediction algorithms to the facial recognition algorithms behind the iPhone’s FaceID. In Montevideo, Uruguay, for example, public transport usage decreased by 71.4% during the pandemic.

As Peter shared: “COVID wrecked a lot of these systems because suddenly it went from ‘I can predict how many people are going to be using your trains based on unsupervised learning’, to ‘oh, no one’s using the trains – well, no one will ever use trains. No one will ever use trains ever again…’”

Peter shared another example: pre-pandemic AI models for urban traffic flow prediction struggled to handle how remote work and hybrid schedules altered commuting patterns. One post-pandemic study developed an artificial neural network model to correlate the impact of COVID-19 response measures with urban traffic flows.

As anyone who owned an iPhone at the time can tell you, Apple recognised the sudden obsolescence within its software and rushed out adjustments that let users enable mask-compatible FaceID recognition. Even now, several years on from the pandemic, that adjustment is still in place and has become an optional part of the set-up process for new devices.

These examples highlight how a sudden, rapid change brought on by external factors can cause system-wide prediction failures.

And if change is the only constant, then AI will remain perpetually vulnerable to obsolescence.

The difference a pixel can make

COVID is an extreme example of a sudden, large shift in input data – but it doesn’t always take such a significant change. In fact, changing the value of a single pixel can fool a neural network into thinking it’s seeing something totally different.

When is a horse not a horse? When it’s a frog. A “one pixel attack” can make an algorithm declare, with 99.9% certainty, that it’s looking at an image of a frog when it has in fact been fed an image of a horse. By modifying just a single pixel, horses can become frogs and turtles can become rifles.

But AI is susceptible to drops in accuracy over time even without any malicious intent or coordinated attack. Changing conditions, from new street lighting installations to changing fashions, can cause algorithm accuracy to drop – because the system is acting on “obsolete” data. And that drop in accuracy can be anywhere from tolerable to unusable.
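Returning to the one-pixel attack: the Python sketch below is a minimal illustration of the idea, not the differential-evolution search used in the published research, and its toy classifier and image are invented stand-ins. It simply rewrites one pixel at a time until the classifier’s decision flips.

```python
import numpy as np

def one_pixel_search(image, predict, true_label, n_trials=500, rng=None):
    """Randomly rewrite one pixel at a time and return the first change
    that flips the classifier's decision. `predict` is any callable that
    maps an HxWx3 image (values in [0, 1]) to a vector of class scores."""
    rng = rng or np.random.default_rng(0)
    height, width, _ = image.shape
    for _ in range(n_trials):
        candidate = image.copy()
        y, x = rng.integers(height), rng.integers(width)
        candidate[y, x] = rng.random(3)              # overwrite a single pixel
        if int(np.argmax(predict(candidate))) != true_label:
            return (y, x), candidate                 # decision flipped
    return None, image                               # no flip found in budget

# Toy stand-in classifier, purely for illustration: "horse" if the image is
# dark on average, "frog" otherwise. Real attacks target trained networks.
def toy_predict(img):
    return np.array([1.0 - img.mean(), img.mean()])  # scores for [horse, frog]

horse_image = np.full((8, 8, 3), 0.499)              # a dull grey "horse"
flipped_pixel, _ = one_pixel_search(horse_image, toy_predict, true_label=0)
print("pixel that turned the horse into a frog:", flipped_pixel)
```

The point isn’t the toy model – it’s how small the change to the input can be relative to the change in the output.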

Peter puts it this way: “The challenge for businesses is, ‘What’s an acceptable level of accuracy?’ If your system is identifying trespassers on railway tracks, how often is a false alarm acceptable?

“There’s a famous example in computer vision about a person wearing a shirt with a full-body image of another person on it – would [the system] count that as another person? A lot of the time, we rely on logical rules to separate that out.

“But if conditions change, like new lighting, new clothing, the accuracy might start to degrade – and eventually it becomes unusable. That’s a big part of managing AI obsolescence.”

Do you know if you’ve got a problem?

As we asked at the beginning of this article: could you tell us how accurate your AI algorithm is? For some organisations, the honest answer is probably no.

Peter notes: “Many businesses never check their systems again after deployment. Others set up ways to detect performance drops and retrain.”

For organisations that don’t check in, or that lack self-monitoring systems, the only warning sign they’ll get is failure. For those that do check, the red flags will be clear: increased interventions and a drop in accuracy. But the root causes may be less obvious, and the AI might not be able to adapt.

Which brings us to whether an AI system could simply train itself out of obsolescence.

Largely, the answer is no – self-evolving AI carries practical risks. As we briefly touched on when we mentioned unsupervised learning, errors in detection are a major problem for algorithms designed to continuously improve. Once horses start becoming frogs and turtles are being registered as rifles, the system reinforces its own mistakes: each wrong detection is treated as confirmation, and accuracy spirals in an ever-decreasing circle.

It’s best practice to use a hybrid of the two – semi-supervised approaches that allow the AI to make discoveries and forge pathways for itself while keeping people involved. In practice this means partial automation plus human checks, or periodically retraining the system manually on new data.
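As a rough sketch of what that hybrid might look like in code – with the interface names (predict, fit) and the confidence threshold assumed purely for illustration – low-confidence predictions are routed to a human review queue rather than fed back into the model, and retraining happens periodically on audited data:

```python
from dataclasses import dataclass, field

# All names and values here (predict, fit, the 0.9 threshold) are assumptions
# made for this illustration, not a prescribed design.
CONFIDENCE_THRESHOLD = 0.9

@dataclass
class HumanInTheLoopPipeline:
    model: object                    # anything with predict(x) -> (label, confidence) and fit(data)
    review_queue: list = field(default_factory=list)
    accepted: list = field(default_factory=list)

    def handle(self, sample):
        label, confidence = self.model.predict(sample)
        if confidence >= CONFIDENCE_THRESHOLD:
            # High confidence: accept automatically, but keep it for audit and retraining.
            self.accepted.append((sample, label))
        else:
            # Low confidence: send to a person rather than letting the system
            # "learn" from its own uncertain guesses.
            self.review_queue.append(sample)
        return label

    def periodic_retrain(self, human_labelled):
        # Retrain on a mix of audited automatic labels and human-corrected examples.
        self.model.fit(self.accepted + human_labelled)
        self.accepted.clear()
        self.review_queue.clear()
```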

It’s not a one-size-fits-all situation, but organisations like Zircon can help swap out modules and retrain a system that has fallen foul of faulty data.

Overcoming obsolescence

First, businesses and organisations need to ask themselves:

  1. How accurate is our system today, versus when it was implemented?
  2. Would we be able to notice a problem before it becomes too big?
  3. If there are problems, can we change individual components of our system?
  4. Is our system self-monitoring – and is it training itself unsupervised?
  5. Is our training data clean, reliable and unbiased?
  6. Is our system open source or is it closed, with vendor tie-in?

Organisations need to adopt a future-thinking mindset rather than being swept up in the new things AI technology can do today. They need to remember that an AI system will only ever be as good as the data it’s trained on, and that unsupervised training can lead to unwanted results if completely unmonitored.

Seek a modular design

Central to managing obsolescence in AI is the adoption of a modular system architecture. Peter stresses the importance of designing AI systems with interchangeable components:

“Businesses that sank money into older models now realise they can be outdated quickly… Businesses want modular solutions, so they can replace parts without throwing everything away. Data pipeline, predictor module, retraining… Each part can be swapped out – you’re not stuck.”

A modular approach has several benefits, and it’s emerging as current best practice in AI. It allows for easier updates and maintenance, and it diminishes your reliance on a single vendor or proprietary technology.

Comprehensive documentation and standardised interfaces also make it possible to seamlessly integrate newer, more efficient algorithms as they become available.

This is really at the heart of overcoming obsolescence in AI.
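To illustrate what “modular” can mean in practice, here is a hedged Python sketch in which the module boundaries (data pipeline, predictor, monitor) are our own assumptions rather than a prescribed architecture; the point is that each part sits behind a small, documented interface and can be replaced on its own:

```python
from typing import Any, Protocol, Sequence

# Illustrative module boundaries only – the names below are assumptions for
# this sketch, not the architecture of any particular system.

class DataPipeline(Protocol):
    def load(self) -> Sequence[tuple]: ...        # yields (features, actual) pairs

class Predictor(Protocol):
    def predict(self, features: Any) -> Any: ...
    def retrain(self, data: Sequence[tuple]) -> None: ...

class Monitor(Protocol):
    def record(self, prediction: Any, actual: Any) -> None: ...
    def accuracy(self) -> float: ...

class ModularAISystem:
    """Composes independently replaceable parts behind small, documented
    interfaces, so any one module can be swapped without rewriting the rest."""

    def __init__(self, pipeline: DataPipeline, predictor: Predictor, monitor: Monitor):
        self.pipeline = pipeline
        self.predictor = predictor
        self.monitor = monitor

    def run_once(self) -> float:
        for features, actual in self.pipeline.load():
            self.monitor.record(self.predictor.predict(features), actual)
        return self.monitor.accuracy()

    def swap_predictor(self, new_predictor: Predictor) -> None:
        # Replacing the model leaves the data pipeline and monitoring untouched.
        self.predictor = new_predictor
```

Swapping in a newer predictor is then a one-line change, provided the replacement honours the same interface.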

Maintain algorithm accuracy with continuous monitoring and retraining

How often should businesses be checking in on their accuracy? Peter recommends continuous, automated self-monitoring: “Build a self-monitoring system. If accuracy dips, retrain or raise an alert.”

These systems can provide early warnings if accuracy falls below a predetermined threshold. With regular performance evaluations and automated alerts, companies can respond to emerging issues more quickly.
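As a rough illustration of “if accuracy dips, retrain or raise an alert”, the sketch below keeps a rolling window of recent labelled outcomes and flags when accuracy falls below a threshold. The 0.92 threshold and 500-prediction window are placeholder values, not recommendations:

```python
from collections import deque

ACCURACY_THRESHOLD = 0.92   # placeholder – the acceptable level is a business decision
WINDOW = 500                # placeholder – how many recent labelled outcomes to evaluate over

class AccuracyMonitor:
    """Tracks rolling accuracy over the most recent labelled outcomes and
    flags when it drops below the agreed threshold."""

    def __init__(self):
        self.recent = deque(maxlen=WINDOW)

    def record(self, predicted, actual):
        self.recent.append(predicted == actual)

    def check(self):
        if len(self.recent) < WINDOW:
            return None                             # not enough evidence yet
        accuracy = sum(self.recent) / len(self.recent)
        if accuracy < ACCURACY_THRESHOLD:
            # In a live system this would raise an alert or trigger retraining.
            return f"ALERT: rolling accuracy {accuracy:.1%} is below threshold"
        return None
```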

Uphold documentation religiously – and be vigilant of biased data

The secret to easier obsolescence management – in AI or otherwise – is accurate, complete documentation. Without it, you won’t know what material you’re working with, and will likely have to reverse-engineer a system to pick out its various components and diagnose problems.

Documentation of training methods (and the data supplied for training) must also be extremely thorough – beyond the normal level of documentation that other forms of software engineering would require.

“You’re almost having to keep documentation of the experiments you run, rather than just the end product”, adds Peter.

In systems like facial recognition for security and traffic prediction for signalling, faulty data can lead to misidentification – and to lives being upended. Facial recognition technologies have repeatedly failed in ways that disproportionately affect minority groups and women.

This is a bigger risk than it seems; obsolescence can be baked-in, thanks to skewed, incomplete or biased training data. Once a system rolls out, the real world can quickly expose its flaws.

And in ML and AI, a lot of these systems are black boxes: self-contained, proprietary systems whose internal workings are unknown. This makes reverse-engineering virtually impossible, and it leaves organisations beholden to one supplier.

So – if we were to ask you right now, could you tell us how accurate your AI algorithm is?

Hopefully, you’ll now see why this question is so important.

With hardware, obsolescence is obvious: chips go out of production, usually with plenty of prior warning. It’s much the same for conventional software – libraries are decommissioned, again with warning.

But AI can be a black box: its inner workings can be totally hidden, masking its obsolescence. If you need to peer into that black box and understand what needs to change, we’re here to help.


About Zircon Software

At Zircon, we specialise in analysing, updating, and securing obsolete systems.

Our obsolescence management solutions extend to AI – helping you identify losses in accuracy and implement self-monitoring systems, as well as giving our partners better documentation and an actionable plan.

We provide ongoing support, too, but we’ll always strive to leave you with a modular system that can be updated and augmented by anyone.

Interested in learning more?

Get in touch – call 01225 764 444, or send your message to enquiries@zirconsoftware.co.uk.

About the interviewee: Dr. Peter Overbury, Head of AI at Zircon Software

Peter’s education and career history are unique, and he’s been deep in the world of AI for at least 10 years. He earned his undergraduate degree in neuroscience and his PhD in the use of genetic algorithms before beginning his career. He knows first-hand the intricate details of ML, and he’s overseen its evolution (quite literally) from fledgling machine vision systems learning to drive to the hyper-accurate models and LLMs we have today.