Maybe swap out some production hardware or change some DNS settings.
Also, it's a great time for updates.
WHAT?? No changes on Friday! I revert all snapshots every Friday so no one gets any big ideas while we try to figure out why we keep getting weekly data loss.
I feel like this could be real. Imagine some guy writing a script that checks a condition and then restores a backup. This guy then leaves and somehow that condition becomes true 3 years later.
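Something like this minimal sketch, maybe. Everything here is made up for illustration: the paths, the free-space threshold, and the restore step are all hypothetical.

```python
#!/usr/bin/env python3
"""Hypothetical 'helpful' script: check a condition, quietly restore a backup."""
import shutil
import subprocess
import sys
from pathlib import Path

DATA_DIR = Path("/var/lib/app")      # hypothetical data directory
BACKUP_DIR = Path("/backups/app")    # hypothetical backup location
MIN_FREE_BYTES = 10 * 1024**3        # arbitrary threshold: 10 GiB free

def main() -> int:
    free = shutil.disk_usage(DATA_DIR).free
    if free >= MIN_FREE_BYTES:
        return 0  # condition stays false for years... until it doesn't

    # Condition finally true: roll the data directory back to the newest
    # backup, long after the author has left the company.
    latest = max(BACKUP_DIR.glob("*.tar.gz"), default=None)
    if latest is None:
        return 1
    subprocess.run(["tar", "-xzf", str(latest), "-C", str(DATA_DIR)], check=True)
    return 0

if __name__ == "__main__":
    sys.exit(main())
```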
I love configuring seemingly small things on Kubernetes (totally not an overblown, obtuse platform to host apps on) that cause my applications to raise strange errors in production. It's great because the rest of the company is in some weird trance (omg the weekend!) and I get to resolve it completely on my own.
This is why I like docker compose
One VM runs everything
Just a little switching loop roulette: plug any cable ends you find into a switch port.
I like to light my network up like a Christmas tree. It is the season, after all.
I was told the password policy wasn’t strong enough, so I changed it, told no one, and expired all passwords just before leaving for the holiday.
“Sorry, I am leaving to go on a trip to the bottom of the ocean”
DNS is perfect. It takes 48h to propagate anyway.
It normally propagates much faster.
You can wipe out your network in 20 minutes or so.
Build a new conditional access policy in prod and turn it on. Report Only mode is for noobs.