Legacy Modernization: How AI Lets Us Accelerate and De-Risk Transformation
I woke up a couple of weeks ago to our Fujitsu Japan announcing an AI-driven software development platform that chains multiple agents across every SDLC stage. It sounded like a miracle cure, until I mapped it against what I hear every day from customers about their bespoke enterprise apps. Modernization is rarely blocked by architecture. It is blocked by confidence. If agents can document, refactor, test, and deploy faster than teams can coordinate a steering committee, then AI becomes the safety net that finally lets us touch the systems we keep tiptoeing around.
My remit in Fujitsu Oceania is to keep enhancing our approach to app development and legacy modernization. That means continuously assessing current and past engagements - successes and failures - and new and emerging tools and approaches. The Japan announcement matters because it validates an approach we have been exploring in smaller slices. Pair human leads with specialized agents per lifecycle phase, keep telemetry flowing, and you can take bigger modernization bites without betting the farm.
Microsoft’s Azure team shared a similar pattern a few weeks ago when they demoed an AI-led SDLC on GitHub. Agents scoped user stories, wrote code, reviewed pull requests, and produced deployment artifacts with humans acting as orchestrators. For bespoke enterprise software, that orchestration role is the missing link. Most of our legacy systems are a patchwork of custom logic, regional compliance rules, and decade-old infrastructure decisions. No single vendor tool understands that nuance. But agents tuned to each stage can, provided we feed them the right context.
Here is how I am applying it. First, catalog every modernization candidate and tag it by risk: data sensitivity, regulatory obligations, dependency depth. Second, assign an agent stack to each phase. Planning agents read old BRDs and generate user story templates. Code agents refactor COBOL or .NET modules into documented APIs. Test agents generate regression scenarios directly from production logs. Release agents build deployment runbooks that devs can approve without rewriting them. Humans stay in the loop, but their time shifts from repetitive work to auditing what the agents produced.
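The catalog-and-tag step above can be sketched in code. This is a hypothetical illustration, not a Fujitsu tool: the class name, the scoring weights, and the example regulation are all my own assumptions, and any real scoring would be tuned to your portfolio.

```python
from dataclasses import dataclass, field

@dataclass
class ModernizationCandidate:
    """One legacy app or module in the modernization catalog."""
    name: str
    data_sensitivity: str                       # e.g. "public", "internal", "regulated"
    regulatory_obligations: list = field(default_factory=list)
    dependency_depth: int = 0                   # how many downstream systems it touches

    def risk_score(self) -> int:
        # Illustrative weighting only: sensitivity dominates, each
        # regulatory obligation and each dependency adds risk.
        sensitivity_weight = {"public": 1, "internal": 2, "regulated": 4}
        return (sensitivity_weight.get(self.data_sensitivity, 2)
                + 2 * len(self.regulatory_obligations)
                + self.dependency_depth)

# One agent stack per lifecycle phase, as described in the text.
AGENT_STACK = {
    "planning": "reads old BRDs, drafts user story templates",
    "code": "refactors COBOL/.NET modules into documented APIs",
    "test": "generates regression scenarios from production logs",
    "release": "builds deployment runbooks for human approval",
}

claims = ModernizationCandidate(
    name="claims-engine",
    data_sensitivity="regulated",
    regulatory_obligations=["APRA CPS 234"],    # example obligation, not from the engagement
    dependency_depth=5,
)
```

High scorers get the most conservative agent stack and the heaviest human auditing; low scorers are where you take the bigger bites.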

This matters for application owners because it changes the math on velocity. Instead of requesting a full rewrite budget, you start by showing a working slice: an AI-generated service wrapper, the automated tests that prove it does not break anything, and the telemetry Microsoft recommends for watching the agents. Suddenly your stakeholders can approve the next slice because you have evidence, not just a slide deck. The CIO piece on composable AI calls this building a “portfolio of reusable agents.” I translate that into a reusable modernization kit that any app owner or organization can borrow.
Governance is where most teams stumble, so I borrowed a trick from Fujitsu Japan’s solution and apply it here in Australia and NZ. They stress collaborative agents with shared context. We can enforce that by storing prompts, training data references, and agent output logs in the same repo as the application code. When audit asks how we modernized a claims engine, we can show the exact prompts, checkpoints, and approvals. No mystery box, no compliance surprises.
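The audit trail described above can be as simple as a JSON record committed next to the code. A minimal sketch follows, assuming a helper of my own invention (`record_agent_run` and the `agent-audit/` directory layout are hypothetical, not part of any Fujitsu or Microsoft product):

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def record_agent_run(repo_root: str, phase: str, prompt: str, output: str,
                     approver: str = None) -> Path:
    """Write one audit record into the application repo, so an auditor can
    trace the exact prompt, a hash of the agent output, and who approved it."""
    log_dir = Path(repo_root) / "agent-audit" / phase
    log_dir.mkdir(parents=True, exist_ok=True)
    timestamp = datetime.now(timezone.utc).isoformat().replace(":", "-")
    record = {
        "timestamp": timestamp,
        "phase": phase,
        "prompt": prompt,
        # Hash rather than inline the full output; the output itself lives
        # in the commit the agent produced.
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "approved_by": approver,  # stays None until a human signs off
    }
    path = log_dir / f"{timestamp}.json"
    path.write_text(json.dumps(record, indent=2))
    return path
```

Because the records live in the same repo, they travel through the same pull requests and branch protections as the code they describe.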
The takeaway for ANZ organizations is simple: treat AI like scaffolding for modernization. Start with one application, pick the ugliest workflow step, and surround it with agents that can shoulder the grunt work. Measure the time you get back, reinvest it into the next legacy module, and keep repeating until the old stack is no longer a blocker.
If you are an application owner, what part of your system would you let an agent tackle first if you had audit-ready guardrails baked in?
--Layne