How a Morning Network Outage Became a Never-Ending Loop of Tech Support Headaches

There’s something magical about a network outage that only happens in the mornings. Like an alarm clock for IT professionals, it’s a guaranteed way to ruin your coffee and test your sanity before the day even starts. But what if the cause of the outage isn’t a faulty cable or a rogue switch, but a perfectly ordinary conference room? Buckle up for a tale of mesh networks, power breakers, and the eternal struggle to just check the basics.
It began like so many tech support horror stories: a persistent problem, a customer who takes their sweet time to report it, and a team of troubleshooters haunted by red herrings. When CNC machines at a client’s factory started losing internet every morning, it triggered a tech support investigation that would make Sherlock Holmes weep. Let’s dive into the chaos, courtesy of a Redditor who lived to tell the tale.
The Mystery of the Vanishing Network
Our story opens with a classic setup: after weeks of intermittent morning outages, the customer finally strolls in to report that their industrial CNC machines lose connectivity every morning, sometimes for just 15 minutes, sometimes for up to five hours. The kicker? Every other device at the site works just fine.
The initial suspects are the usual culprits. A desktop-grade switch is swapped out for a new one, but the problem persists. The vendor is contacted, but they’re only willing to troubleshoot if they can remote in during the outage—tricky, since the machines are offline at the time.
Sensing that the game is afoot, our protagonist steps in, assuming (foolishly, as it turns out) that all the basic troubleshooting has been done. Maybe the internal network card is fried? A USB NIC is tested, with zero improvement. Next, they break out the big guns: network diagnostic tools like iperf and PingPlotter.
The results are… strange. The network comes alive for six seconds every minute, like clockwork—just long enough to taunt Windows (and the support team), but not long enough to actually restore functionality.
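If you want to catch a pattern like that yourself, a tiny probe script is often all it takes. The sketch below is illustrative only and is not from the original post: the target address is a placeholder, and all it does is attempt a TCP connection once per second and log a timestamp whenever reachability flips, which makes a "six seconds up, fifty-four seconds down" cycle jump out of the log.

```python
#!/usr/bin/env python3
"""Minimal connectivity logger: records when a target flips between
reachable and unreachable, so periodic patterns (e.g. "up for six
seconds every minute") show up in the timestamps.

The target host and port are placeholders, not details from the
original post. Stop it with Ctrl-C.
"""
import socket
import time
from datetime import datetime

TARGET = ("192.0.2.10", 443)   # hypothetical machine IP and port
INTERVAL = 1.0                 # seconds between probes


def reachable(addr, timeout=1.0):
    """Return True if a TCP connection to addr succeeds within timeout."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False


last_state = None
while True:
    state = reachable(TARGET)
    if state != last_state:
        stamp = datetime.now().isoformat(timespec="seconds")
        print(f"{stamp}  link {'UP' if state else 'DOWN'}")
        last_state = state
    time.sleep(INTERVAL)
```

Pointing a probe like this at one of the affected machines (and a second copy at a known-good device for comparison) turns a vague "it drops sometimes" into a timeline you can line up against whatever else happens on a schedule.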
When in Doubt, Call the Contractor
Time to dig deeper. Our hero recalls that a contractor had visited the site a couple of months earlier to install a switch and two wireless access points near the conference room. Nothing unusual, right? Well, as it turns out, sometimes the devil is in the details.
After a few rounds of “he said, she said,” the truth comes out: every day, the company’s maintenance routine involves flipping the main breaker that feeds the CNC machines. And here’s where things get spicy: the contractor, in a moment of cable-saving genius (or laziness), had run the uplink for the new conference room gear from one of the CNC machine switches instead of the core switch. When the breaker flipped, the newly installed conference room switch and APs lost their wired connection and did what any self-respecting wireless device would do: they established a new mesh connection.
But there was a problem. The switches and APs weren’t “smart” enough to revert to the wired connection once power was restored. Instead, they stayed in mesh mode, and with the wired uplink back online the wireless link closed a loop that brought the network to its knees, every single morning.
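For readers who have never watched a layer-2 loop from the outside, one telltale sign is duplicate ICMP echo replies: when frames get flooded along redundant paths (here, the restored wired uplink plus the lingering mesh link), Linux and macOS ping mark the extras as DUP. The sketch below is a rough, hypothetical check along those lines, not the tooling used in the original post, and the target address is a placeholder.

```python
#!/usr/bin/env python3
"""Rough loop check: count duplicate ICMP replies reported by ping.

Duplicate replies ("DUP!" in Linux/macOS ping output) often appear
when frames circulate over redundant layer-2 paths. Their absence
does not rule a loop out, but their presence is a strong hint.

The target address is a placeholder; point it at a host on the
affected segment. Assumes a Unix-style ping with the -c flag.
"""
import subprocess

TARGET = "192.0.2.1"   # hypothetical gateway on the affected segment
COUNT = 20

result = subprocess.run(
    ["ping", "-c", str(COUNT), TARGET],
    capture_output=True, text=True,
)

dups = sum("DUP" in line for line in result.stdout.splitlines())
if dups:
    print(f"{dups} duplicate replies out of {COUNT} probes: "
          "suspect a layer-2 loop (look for redundant wired/wireless paths).")
else:
    print("No duplicate replies seen; no obvious loop on this path.")
```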
The Solution: Disable the Mesh, Save the Sanity
The fix, when it finally arrived, was laughably simple: disable the mesh connection. Instantly, the network came back to life. Sure, the conference room lost connectivity for a bit in the afternoons, but the production line was back in business.
Time spent chasing ghosts and arguing with management? Over 32 hours. Time spent actually fixing the problem? About 30 minutes. Time spent trying to convince oneself (and one’s colleagues) to always check the basics first? Infinity, and counting.
Lessons Learned (The Hard Way)
What’s the moral of this tech support saga? Sometimes, the most maddening problems have the simplest solutions—if only we remember to check the basics before diving into the weeds. Assume nothing, especially when contractors are involved. And always, always ask why something changed right before the trouble started.
Have you ever fallen down a troubleshooting rabbit hole, only to discover an embarrassingly simple culprit? Share your stories in the comments—misery (and laughter) loves company!
If you enjoyed this tale of tech support woe, let us know below, or share your own “red herring” moments. And remember: in IT, the only thing more persistent than a network loop is our refusal to check the obvious (until it’s far too late).
Original Reddit Post: Network outage in the mornings