Four times a year, Nike's order entry platform became the most important digital asset in the company. Thousands of product merchants of every size and type lined up to place orders for the next season. One week, one deadline, $13 billion in inventory commitments.
The platform and the quarterly pressure it couldn't escape
The scale was impossible to ignore. Nike processed over $13 billion in planned inventory orders during each quarterly ordering cycle. These orders came from everywhere in Nike's retail ecosystem, from major national retailers down to independent mom-and-pop sports stores. All of them had a deadline. All of them converged within a single week, four times per year.
The platform that handled this was built on legacy infrastructure: expensive dedicated hardware designed for predictable, steady-state workloads. It was a 20th-century behemoth. It could not scale dynamically, and it could not handle the quarterly surge without creaking at the seams. After each ordering deadline passed, 75% of that expensive infrastructure sat idle for three months until the next deadline approached.
The structural problem was clear. A platform built to handle an average workload was being asked to handle peak workloads that were orders of magnitude larger. Traditional hardware simply could not absorb that kind of variability without massive waste.
Nike had been using this system for years. It worked, technically. But it worked expensively. And it worked in a way that guaranteed constraints on how the business could evolve.
Why lift and shift was the right call
TechSparq's existing partnership with Nike was the accelerant. Years of understanding Nike's business logic, the software systems, the operational rhythms. When the order entry migration was green-lit, TechSparq engineers were brought on immediately. There was no ramp-up period spent asking "what does this system do." The team already knew.
The strategic choice was a lift-and-shift approach. Each component of the legacy application was duplicated within the Amazon environment using as close to the original physical hardware footprint as possible. This was not an attempt to optimize or rebuild from scratch. This was a deliberate decision to preserve the known-good logic, to prove the migration worked, and to defer optimization to Phase 2.
The scale of what was being moved was substantial: over a hundred nodes comprising Java application servers, Oracle 11i databases, MongoDB clusters, Apache web servers, Apache Solr search infrastructure, RabbitMQ messaging, and various other interconnected technologies. All of it needed to work the first time, or Nike's ordering cycle would suffer.
The team adopted Agile Scrum methodologies. This choice proved critical to what happened next.
How a 12-month timeline became 5
The project kicked off with a 12-month delivery window. That was the baseline estimate based on the scope and complexity. By month two, something had shifted.
Agile Scrum created visibility into what was actually possible. Velocity metrics showed the team was moving faster than projected. The sprints revealed dependencies that could be eliminated. The retrospectives showed what was working and what wasn't. Instead of waiting 12 months to discover problems, they found them every two weeks.
After two months of development, the timeline estimate compressed from 12 months to 9. The team was simply moving faster than the original estimate accounted for. As velocity continued to compound, the estimate compressed again. Nine months became seven. Seven months became five.
Every one of those revisions happened within the first two months of development. The team had moved so fast and learned so much that they knew the project could land in five months, not because they worked longer hours, but because waterfall planning had been pessimistic about what was achievable with iterative delivery.
By the time the five-month mark arrived, the entire order entry platform was running on AWS. The project was a resounding success.
What AWS unlocked beyond cost savings
The immediate financial impact was impossible to miss. Nike is saving millions of dollars a year in hosting costs from the migration alone. The massive infrastructure that sat idle 75% of the year was gone. In its place was a platform that scaled only as much as it needed, when it needed to.
But the bigger unlock was capability. The entire eCommerce development pipeline was moved into AWS. The continuous integration pipeline that builds, tests, and deploys code now runs inside AWS itself. Development teams can stand up environments on demand. They can tear them down when finished. The friction of hardware provisioning has been eliminated.
Phase 2 has begun. The next evolution of the platform will add dynamic scaling. Computing resources will ramp up as quarterly ordering deadlines approach, then scale back down after the deadline passes. The platform will match its resource footprint to actual demand. Nike discovered that a system built for the burst is often the best way to serve the average as well.
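The ramp-up-and-down pattern described above can be sketched as a schedule-driven capacity function. Everything here is illustrative: the deadline dates, node counts, and ramp window are made-up values rather than Nike's actual configuration, and a production system would express this as scheduled scaling actions in the cloud provider's autoscaling service rather than application code.

```python
from datetime import date

# Hypothetical values, not Nike's real configuration.
QUARTER_DEADLINES = [date(2024, 3, 15), date(2024, 6, 14),
                     date(2024, 9, 13), date(2024, 12, 13)]
BASELINE_NODES = 10   # steady-state footprint between ordering cycles
PEAK_NODES = 100      # footprint needed during an ordering week
RAMP_DAYS = 7         # begin scaling up one week before each deadline

def desired_capacity(today: date) -> int:
    """Return the node count the platform should run on a given day."""
    for deadline in QUARTER_DEADLINES:
        days_out = (deadline - today).days
        if 0 <= days_out <= RAMP_DAYS:
            # Linear ramp from baseline to peak as the deadline approaches.
            fraction = (RAMP_DAYS - days_out) / RAMP_DAYS
            return BASELINE_NODES + round(fraction * (PEAK_NODES - BASELINE_NODES))
    return BASELINE_NODES

print(desired_capacity(date(2024, 3, 1)))   # far from a deadline: baseline
print(desired_capacity(date(2024, 3, 15)))  # deadline day: full peak
```

A linear ramp is the simplest possible policy; a real schedule might step capacity up in tiers, or key off observed order queue depth instead of the calendar.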
Hosting costs are estimated to be 20% of the original yearly expense. An 80% reduction. But that number alone doesn't capture the real transformation. What mattered more was that the business became nimble. The eCommerce team could release software faster. They could experiment with features that would have been impossible to test in the old hardware model.
What this means for enterprise platforms with burst traffic problems
Nike's quarterly ordering cycle is not unique. Seasonal retail, product launches, flash sales, time-bound promotional events, quarterly earnings releases. Commerce is full of predictable bursts. Any brand with a defined selling season understands the problem.
The legacy hardware model creates a structural trap. Provision for the peak, and you waste money 75% of the time. Provision for the average, and you fail when the peak arrives. There is no good answer in the old model. Cloud dynamic scaling solves that. Resources match demand. The bill matches consumption.
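The trap can be made concrete with back-of-the-envelope arithmetic. The unit cost and node counts below are invented round numbers; only the shape of the comparison, four peak weeks a year against a much smaller steady-state footprint, mirrors the 75%-idle and roughly 80%-savings figures from this story.

```python
# Illustrative cost model for the peak-vs-average trap.
COST_PER_NODE_WEEK = 500   # hypothetical fully loaded cost per node-week
PEAK_NODES = 100           # capacity needed during each ordering week
BASELINE_NODES = 10        # capacity needed the rest of the year
PEAK_WEEKS = 4             # one ordering week per quarter
WEEKS_PER_YEAR = 52

# Old model: provision for the peak, all year round.
fixed_cost = PEAK_NODES * WEEKS_PER_YEAR * COST_PER_NODE_WEEK

# Cloud model: pay for peak capacity only during the ordering weeks.
elastic_cost = (PEAK_NODES * PEAK_WEEKS +
                BASELINE_NODES * (WEEKS_PER_YEAR - PEAK_WEEKS)) * COST_PER_NODE_WEEK

savings = 1 - elastic_cost / fixed_cost
print(f"fixed: ${fixed_cost:,}  elastic: ${elastic_cost:,}  savings: {savings:.0%}")
```

Even with these toy numbers, the demand-matched bill lands in the neighborhood of one fifth of the fixed-hardware bill, which is the same order of savings the migration reported.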
But the timeline compression reveals something equally important. Having a partner with pre-existing knowledge of the system mattered enormously. TechSparq did not need to learn Nike's business logic from scratch. That context was already there. The ramp-up was measured in weeks, not months. The team jumped straight to delivery.
That partnership depth is what made the Agile Scrum process so effective. Velocity compounds when the team already knows what it's building and why it matters.
- Partnership depth reduces ramp time. A partner who already understands your systems can move faster than a team learning from zero.
- Lift-and-shift contains risk. Proving the migration works before optimizing means far fewer surprises, and a stable baseline for Phase 2.
- Agile velocity compounds. Iterative delivery reveals what's actually possible faster than waterfall planning ever could.
- Cloud unlocks dynamic scaling in a way fixed hardware never could. Burst traffic is not going away; matching resources to demand is the answer.
Is your platform built for the moments that matter?
Burst traffic, seasonal demand, and event-driven workloads are no longer unsolvable problems. Let's talk about how to migrate your infrastructure to match the moments that matter most to your business.
Book a Consultation