Derived: Backup Maintenance and Recovery Time Actuals
One of the biggest things IT teams overlook is the state and care of their backup system and the backups themselves. That neglect can vastly affect how long it takes to recover during any sort of Disaster Recovery activity, especially a cybersecurity event. I’ve recently worked with several customers through such events, and those engagements helped me derive a few lessons. These derivatives should help you get a proactive jump on these events and significantly lower your Recovery Time Actual (RTA).
The size of your backups matters:
The difference between restoring a 15 TB backup and a 40 TB backup is measured in days. Backups often contain data that should be categorized out of the active backup rotation. Examples include legacy data or systems no longer in production, data that is already backed up in another location, and testing or lab systems that can either be rebuilt manually or don’t matter at all.
When a DR event occurs, establishing command and control, usually through backup restoration, is typically one of the first tasks taken, because the business or its services are down. Understanding what you can exclude from your active backup system and taking steps to shrink your backups will lower restoration time (your RTA) and allow your company to get back to work more quickly.
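To make the size argument concrete, here is a minimal back-of-the-envelope sketch of how backup size translates into restore time. The throughput figure and the helper function are illustrative assumptions, not measured values from any real environment; your storage, network, and backup software will determine the actual rate.

```python
# Hypothetical sketch: estimate restore time from backup size and a
# sustained restore throughput. All numbers here are illustrative
# assumptions, not measurements.

def restore_hours(backup_tb: float, throughput_mb_s: float) -> float:
    """Rough restore-time estimate in hours for a given backup size."""
    total_mb = backup_tb * 1_000_000  # TB -> MB (decimal units)
    return total_mb / throughput_mb_s / 3600

# Compare a trimmed 15 TB backup to a 40 TB backup at an assumed 200 MB/s.
for size_tb in (15, 40):
    print(f"{size_tb} TB: ~{restore_hours(size_tb, 200):.1f} hours")
```

Even with generous throughput, the untrimmed backup takes more than twice as long to restore, which is exactly the gap the categorization work above is meant to close.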
Legacy Data:
Some data is stagnant or legacy. It is usually kept for archiving, legal, or compliance purposes. Examples include email older than some number of years (usually three or five), dated files and documents, or former production systems: the previous CRM, the old company website, and so on. Stagnant data from 10+ years ago may be important, but it is usually inflating your backup sizes unnecessarily. There is often a need to keep this type of data available for access or reference. That can be done, but if the data and systems are in a read-only state, they do not need to be in the active backup rotation. You can back them up with some manner of archive, out-of-band, or off-site backup system. I typically recommend placing redundant backups of this data on an out-of-band system.
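A simple way to start the conversation about what qualifies as stagnant is to inventory files that haven't been touched in years. The sketch below, a hypothetical illustration using only modification timestamps, flags archive candidates; the root path and the five-year threshold are assumptions you would tune to your own retention policy.

```python
# Hypothetical sketch: flag files untouched for N+ years as candidates
# to move out of the active backup rotation and into an archive tier.
# The threshold is an assumption; match it to your retention policy.
import os
import time

def archive_candidates(root: str, years: int = 5):
    """Yield (path, size_bytes) for files not modified in `years` years."""
    cutoff = time.time() - years * 365 * 24 * 3600
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip unreadable or vanished files
            if st.st_mtime < cutoff:
                yield path, st.st_size
```

Summing the sizes this yields gives you a rough figure for how much the active backup could shrink, which is a useful number to bring to the retention discussion.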
Redundant Data:
Check that you don’t have more copies of your data than you need. Some systems replicate the same data to more than one place for different reasons: websites, documents, or databases may be duplicated for high availability or speed of access. In those cases, careful inspection of the backup data and removal of duplicates may be viable.
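For file-level data, that inspection can be partially automated. The sketch below is one hypothetical approach: hash file contents and group byte-identical copies so a human can review them before anything is pruned from the backup set. It is a starting point, not a substitute for understanding why each replica exists.

```python
# Hypothetical sketch: find byte-identical duplicate files by content
# hash so redundant copies can be reviewed before pruning them from
# the backup set. Review before deleting: some replicas exist for HA.
import hashlib
import os
from collections import defaultdict

def find_duplicates(root: str) -> dict:
    """Map SHA-256 digest -> list of paths sharing identical content."""
    by_hash = defaultdict(list)
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            by_hash[h.hexdigest()].append(path)
    # Keep only digests that have more than one copy.
    return {k: v for k, v in by_hash.items() if len(v) > 1}
```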
Verify Your Backups:
Two major rules were derived for this topic. First, make sure your vital systems are actually backed up. This can be verified by running a test restoration once or twice a year. Restoration of business-critical systems should be tested to confirm you are backing them up correctly. Systems are also constantly being added to production networks, which creates gaps in backup coverage that should be reviewed.
This brings us to the second derivative: time. Testing your backups also gives you an idea of how long the process takes. The number one question executives and management ask of IT teams in a DR event is “How long…?”. Having a reliable estimate of that timeframe lets your team spend its time on recovery tasks instead of guessing at answers.
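The coverage-gap review can be reduced to a set comparison. The sketch below is hypothetical: the inventory and job names are made-up placeholders, and in practice the two lists would come from a CMDB or asset export on one side and your backup software’s job report on the other.

```python
# Hypothetical sketch: compare an inventory of production systems
# against the backup job list to surface coverage gaps. The system
# names below are illustrative placeholders, not real hosts.

def coverage_gaps(production: set, backed_up: set) -> set:
    """Return production systems missing from the backup rotation."""
    return production - backed_up

# Newly added systems are the usual culprits for gaps.
prod = {"crm-db", "mail", "fileserver", "new-erp"}
jobs = {"crm-db", "mail", "fileserver"}
print(coverage_gaps(prod, jobs))
```

Running this check on a schedule, alongside the annual test restore, catches the systems that were stood up after the backup jobs were last configured.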
Backup Security and Out-of-Band Backups
Threat actors today are very accustomed to standard backup practices. The steps most teams use to tier and secure their backups are well known and are among the highest-priority targets for threat actors, especially in ransomware cases. TAs will take every precaution to devastate not only your backup systems but any redundancies of those as well. Utilizing an out-of-band or archive backup system can help secure the availability of your backups in nearly any event. Adding additional security around your backups can also make the difference between having and not having backups. If your backup or storage systems offer additional AAA or MFA, it is highly recommended to implement it.
Derivative:
In the end, recovery time isn’t about heroics—it’s about decisions made long before the crisis begins. Reducing backup size by archiving legacy and redundant data can cut restoration from days to hours. Regular test restores provide accurate timelines and confirm coverage of critical systems. Isolated, out-of-band backups and strong authentication (MFA/AAA) protect availability when primary systems and standard tiers are compromised. These practices aren’t theoretical. They’re observed outcomes from real-world ransomware and disaster-recovery events: organizations that implemented them consistently achieved materially lower downtime.
The data is clear: preparation determines speed.
Implement what fits your environment, measure the results, and adjust as needed. Your recovery time actuals will reflect the choices you make today.
About Me

Dustin Fremin
Author/Writer
My name is Dustin, and this is my space to share the lessons I’ve derived from my career in technology.