This guide covers the basics of data server backup and disaster recovery, along with seven critical backup strategies for data safety and recovery.

Despite improvements in production storage reliability, regularly creating and maintaining a high-quality, independent copy of production data remains crucial. Especially with cloud backup in the mix, today's users and application owners expect no data loss, even in the event of a system or facility outage. This puts enormous pressure on IT and storage administrators to create an effective backup solution.

To help, here are seven best practices that can make creating that backup and disaster recovery strategy easier.

1. Increase backup frequency

Because of ransomware, data centers must increase the frequency of backups; once a night is no longer enough. All data sets should be protected more than once a day. One technology that makes it possible to back up almost any data set in a few minutes is the block-level incremental (BLI) backup, which copies only the changed blocks, rather than whole files, to a target on backup storage. Organizations should protect data using intelligent backup solutions that enable rapid and frequent backups.
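
As an illustration of the mechanism (not any particular product's implementation), here is a minimal Python sketch of a BLI pass: it hashes fixed-size blocks, compares them against the index recorded by the previous run, and copies only the blocks that changed to backup storage. The block size, paths and index format are all assumptions for the example.

```python
import hashlib
import json
import os

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks; real products tune this

def bli_backup(source_path, backup_dir, index_path):
    """Copy only the blocks of source_path that changed since the last run."""
    # Load the block-hash index left behind by the previous backup, if any.
    old_index = {}
    if os.path.exists(index_path):
        with open(index_path) as f:
            old_index = json.load(f)

    new_index, changed = {}, 0
    os.makedirs(backup_dir, exist_ok=True)
    with open(source_path, "rb") as src:
        block_no = 0
        while chunk := src.read(BLOCK_SIZE):
            digest = hashlib.sha256(chunk).hexdigest()
            new_index[str(block_no)] = digest
            if old_index.get(str(block_no)) != digest:
                # Only changed blocks are written to backup storage.
                dest = os.path.join(backup_dir, f"block_{block_no:08d}")
                with open(dest, "wb") as out:
                    out.write(chunk)
                changed += 1
            block_no += 1

    # Persist the new index so the next run can diff against it.
    with open(index_path, "w") as f:
        json.dump(new_index, f)
    return changed
```

Because only changed blocks cross the wire, a pass over a mostly static volume finishes in minutes, which is what makes sub-hourly protection feasible.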

Block-level incremental backups pair well with in-place recovery, sometimes known as "instant recovery." Technically not quite instant, in-place recovery is nonetheless very fast because it brings the virtual machine's data online directly on the backup storage. This allows an application to be back online in minutes, rather than the hours spent waiting for data to transfer across the network to production storage. To be successful, in-place recovery needs high-performing disk backup storage to act as the temporary datastore.

Streaming recovery, on the other hand, creates an almost-instant copy of the volume used by the virtual machine on production storage rather than on backup storage. Data streams into the production storage system, which prioritizes the data accessed most often. Because data is sent directly to production storage, the performance of the backup storage is no longer a concern.
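
The prioritization step can be pictured as nothing more than ordering the restore queue by recent access counts, hottest blocks first. Below is a minimal Python sketch with invented access telemetry; real systems would track these counts on the production array.

```python
import heapq

def streaming_restore_order(block_ids, access_counts):
    """Yield block IDs hottest-first, so frequently read data restores earliest.

    access_counts: mapping of block_id -> recent read count (hypothetical
    telemetry for this example).
    """
    # Max-heap via negated counts; blocks with no telemetry default to cold.
    heap = [(-access_counts.get(b, 0), b) for b in block_ids]
    heapq.heapify(heap)
    while heap:
        _, block = heapq.heappop(heap)
        yield block

# Example: blocks 7 and 2 were read most recently, so they stream back first.
order = list(streaming_restore_order(range(10), {7: 120, 2: 45, 5: 3}))
```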

2. Align strategy to service-level demands

Since the beginning of the data center, a best practice has been to set priorities for each application in the environment. This made sense when an organization might have two or three critical applications and maybe four to five "important" applications. Today, however, even small organizations have more than a dozen applications, and larger organizations can have more than 50. The time required to audit these applications and determine backup priorities simply doesn't exist. Also, the reality is that most application owners will insist on having the fastest recovery times possible. Chargeback and showback techniques can help application owners reconsider more practical recovery times.

Rapid recovery and BLI backups ease IT's burden of prioritizing data and applications. They allow all data and applications to fit within a 30-minute to one-hour recovery window while still enabling prioritization based on user needs. Modern technology makes an aggressive default recovery window affordable and more practical than detailed audits, especially in rapidly growing data centers.

The recovery service level, though, means that the organization needs to back up as frequently as the service level demands. Again, with BLI backups, a 15-minute backup window is reasonable. The only drawback to a high number of BLI backups is that most backup applications limit how many incrementals can exist before they begin to impact backup and recovery performance. The organization might have to run twice-a-day consolidation jobs to reduce the number of incrementals. Because consolidation jobs occur off-production, they won't affect production performance.
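
A consolidation job of the kind just described might look like the following sketch. The container format is hypothetical; each backup is represented as a simple mapping of block offsets to data, and the chain cap is an arbitrary example.

```python
MAX_CHAIN_LENGTH = 24  # e.g., cap one day of 15-minute incrementals

def consolidate(full, incrementals, max_chain=MAX_CHAIN_LENGTH):
    """Merge the oldest incrementals into the full copy until the chain is short.

    full and each incremental are dicts mapping block offset -> block data
    (a stand-in for whatever container format the backup product uses).
    This runs entirely against backup storage, so production is untouched.
    """
    while len(incrementals) > max_chain:
        oldest = incrementals.pop(0)  # incrementals are ordered oldest-first
        full.update(oldest)           # the incremental's blocks are newer
    return full, incrementals
```

Shorter chains mean a recovery never has to replay dozens of incrementals on top of the last full copy.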

The cost of BLI backups and in-place recovery is well within the reach of most IT budgets today. Many vendors offer free or community versions, which work well for very small organizations. The combination of BLI and rapid recovery, typically included in the base price of the backup application, costs far less than a typical high availability system while delivering recovery times that are almost as good.

3. Continue to follow the 3-2-1 backup rule

The 3-2-1 backup rule requires organizations to keep at least three copies of their data, stored on two different media types, with one copy off-site. Organizations should back up data to a local on-premises storage system, copy it to another on-premises system, and replicate it to a separate location. In modern data centers, storage snapshots on primary systems can count as one of the three copies. Alternatively, replicating to a second location and then again to a third can fulfill the requirement.

The requirement of two copies on two separate media types is more difficult for the modern data center to meet. In its purest form, the 3-2-1 rule means storing data on two dissimilar media types, such as disk and tape. While this remains ideal, organizations can use cloud storage as the second media type, even though both copies are on hard disk drives. The cloud counts as a different media type if it is immutable and deletable only after the retention policy expires. In other words, a malicious attack cannot erase it.
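
A plan can be sanity-checked against the rule with a small validation script. This sketch is illustrative; the copy descriptions and the `BackupCopy` fields are invented for the example, and it treats immutable cloud object storage as its own media type, per the discussion above.

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    location: str      # e.g., "on-prem-primary", "cloud-bucket"
    media: str         # e.g., "disk", "tape", "cloud-object"
    off_site: bool
    immutable: bool = False

def satisfies_3_2_1(copies):
    """Check: >= 3 copies, >= 2 media types, >= 1 off-site copy."""
    media_types = {c.media for c in copies}
    return (
        len(copies) >= 3
        and len(media_types) >= 2
        and any(c.off_site for c in copies)
    )

# Two on-premises disk copies plus an immutable off-site cloud copy passes.
plan = [
    BackupCopy("on-prem-primary", "disk", off_site=False),
    BackupCopy("on-prem-secondary", "disk", off_site=False),
    BackupCopy("cloud-bucket", "cloud-object", off_site=True, immutable=True),
]
assert satisfies_3_2_1(plan)
```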

4. Use cloud backup with intelligence

IT professionals should continue to exercise caution when moving data to the cloud. The need for caution is especially true of backup data, since the organization is essentially renting storage that sits idle most of the time. Although cloud backup offers an attractive upfront price point, long-term cloud costs can add up: repeatedly paying to store the same 100 TB of data eventually becomes more expensive than owning 100 TB of storage. Most cloud providers also charge egress fees for data moved back on-premises during a recovery. These are just a few reasons why taking a strategic approach to choosing a cloud backup provider is so important.
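
To make the "repeatedly paying for the same 100 TB" point concrete, here is a back-of-the-envelope break-even sketch in Python. The rates are placeholders, not any provider's actual pricing, and the model deliberately ignores egress fees, which only shorten the break-even point.

```python
def months_to_break_even(capacity_tb, cloud_per_tb_month, on_prem_per_tb,
                         on_prem_monthly_overhead=0.0):
    """Return the month in which cumulative cloud rent exceeds owning storage.

    cloud_per_tb_month: recurring $/TB/month (placeholder rate).
    on_prem_per_tb: one-time $/TB purchase cost (placeholder rate).
    """
    capex = capacity_tb * on_prem_per_tb
    month, cloud_total, on_prem_total = 0, 0.0, capex
    while cloud_total <= on_prem_total:
        month += 1
        cloud_total += capacity_tb * cloud_per_tb_month
        on_prem_total += on_prem_monthly_overhead
    return month

# 100 TB at a hypothetical $10/TB/month rent vs. $300/TB to buy:
# rent passes the purchase cost in month 31.
print(months_to_break_even(100, 10, 300))
```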

Smaller organizations rarely have the capacity demands that would make owning on-premises storage less expensive than cloud backup, so storing all their data in the cloud is probably the best course of action. Medium to large organizations might find that owning their storage is more cost-effective, but those organizations should still use the cloud to store the most recent copies of data and use cloud computing services for tasks such as disaster recovery, reporting, testing and development.

Cloud backup is also a key consideration for organizations looking to revamp their data protection and backup strategy. IT planners, though, should be careful not to assume that all backup vendors support the cloud equally. Many legacy on-premises backup systems use the cloud as a tape replacement, copying all on-premises data to the cloud. While this can cut on-premises infrastructure costs, it also doubles the storage capacity IT must manage.

A better approach is cloud tiering: keep only the most recent backup copies on-premises for fast restores, and move older backups to cloud storage. Using the cloud in this way enables the organization to both meet rapid recovery requirements and lower on-premises infrastructure costs.
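
A tiering policy like the one just described can be as simple as an age cutoff. The sketch below is a minimal illustration; `move_to_cloud` is a stand-in for whatever the storage layer actually provides, and the two-week window is an arbitrary example.

```python
from datetime import datetime, timedelta, timezone

RECENT_WINDOW = timedelta(days=14)  # hypothetical: keep two weeks local

def tier_backups(backups, move_to_cloud, now=None):
    """Move backups older than RECENT_WINDOW off on-premises storage.

    backups: iterable of (backup_id, created_at) tuples with tz-aware datetimes.
    move_to_cloud: callable supplied by the storage layer (assumed here).
    """
    now = now or datetime.now(timezone.utc)
    kept, tiered = [], []
    for backup_id, created_at in backups:
        if now - created_at > RECENT_WINDOW:
            move_to_cloud(backup_id)   # older copies leave on-prem disk
            tiered.append(backup_id)
        else:
            kept.append(backup_id)     # recent copies stay for fast restores
    return kept, tiered
```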

Many vendors also use the cloud for disaster recovery as a service, known as DRaaS, which offers cloud storage and computing for virtual application images. DRaaS can significantly cut IT costs compared to managing a secondary site and allows more frequent testing of recovery plans. It's a highly practical use of the cloud and a great starting point for organizations. DRaaS is not magic, however. IT planners should ask vendors about the exact time from the DR declaration to when the application becomes usable. Many vendors offer "push button" DR, but it's not "instant" DR. Vendors that store backups in a proprietary format on cloud storage must extract and convert the data from their format to the cloud provider's format. Although IT or vendors can automate these steps, they still take time.

5. Automate disaster recovery runbooks

The most common recoveries are not disaster recoveries; they are recoveries of a single file or single application.

IT rarely faces full data center disasters but must plan for them. Recovering dozens of interdependent applications requires careful timing and a specific server start-up order.

Given the rare nature of full disasters and the need for precise server recovery, IT must meticulously document and implement disaster recovery processes. Many data centers lack updated documentation due to limited resources. Some backup vendors now offer runbook automation to streamline recovery with a single click. Organizations with complex applications should consider these features to ensure effective recovery.
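
At its core, runbook automation encodes the start-up order so recovery never depends on someone remembering it under pressure. One way to derive that order is a topological sort over declared dependencies, as in this sketch; the application names and dependencies are invented for the example.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def recovery_order(dependencies):
    """Return a start-up order that respects application dependencies.

    dependencies: mapping of app -> set of apps that must be up first.
    Raises graphlib.CycleError on circular dependencies, which is a
    useful check to run long before a disaster rather than during one.
    """
    return list(TopologicalSorter(dependencies).static_order())

# Hypothetical three-tier stack: database first, then services, then web.
runbook = recovery_order({
    "web-frontend": {"app-server"},
    "app-server": {"database", "auth-service"},
    "auth-service": {"database"},
    "database": set(),
})
print(runbook)  # e.g., ['database', 'auth-service', 'app-server', 'web-frontend']
```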

6. Don’t use backup for data retention

Most organizations retain data within their backups for far too long. Most recoveries use the most recent backup, not ones from six months or six years ago. More data in the backup system complicates management and increases costs.

Most backup applications store data in proprietary formats and separate containers for each job, which adds complexity. The inability to delete individual files from these containers poses a problem. GDPR regulations require organizations to retain and segregate specific data types. Additionally, "right to be forgotten" policies mandate that organizations delete only certain components of customer data while continuing to store others. These deletions must also happen on demand. Since data cannot be selectively deleted from backups, organizations must take special steps to prevent accidentally restoring "forgotten" data.
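
One such special step is a restore-time exclusion list: because the forgotten records cannot be deleted from old backup containers, every restore is filtered against the list before data lands anywhere. A minimal sketch, with hypothetical record IDs, follows.

```python
def filtered_restore(records, forgotten_ids):
    """Drop 'right to be forgotten' records during a restore.

    records: iterable of (record_id, payload) recovered from a backup set.
    forgotten_ids: IDs the organization is obligated not to reinstate;
    this list must live outside the backup system, since the backup
    containers themselves cannot be edited.
    """
    for record_id, payload in records:
        if record_id in forgotten_ids:
            continue  # never reinstate forgotten data
        yield record_id, payload

restored = list(filtered_restore(
    [("cust-001", "..."), ("cust-002", "..."), ("cust-003", "...")],
    forgotten_ids={"cust-002"},
))
```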

The easiest way to meet these regulations is not to store data long-term in the backup system. Using an archive product helps organizations meet data protection regulations and simplifies the backup architecture. It allows backup jobs to be restored into the archive, which manages data off-production and offers file-by-file granularity.

7. Protect endpoints and SaaS applications

Laptops, desktops, tablets and smartphones all store valuable data that may be unique to them. Data created on these devices will likely never reach a data center storage device unless specifically backed up, and it will be lost if the device fails, gets lost or is stolen. The good news is that endpoint protection is more practical than ever, thanks to the cloud. Modern endpoint backup systems enable endpoints to back up to a cloud repository managed by core IT.

SaaS applications such as Office 365, Google G-Suite and Salesforce.com are even more frequently overlooked. Their user agreements clearly state that data protection is the organization's responsibility. IT planners should find a data protection application that covers their SaaS offerings. Ideally, IT should integrate SaaS protection into its existing backup system, but SaaS-specific systems are worth considering if they offer more capability or value.

The backup process is under more pressure than ever: expectations are for no downtime and no data loss. Fortunately, backup software can provide capabilities such as BLI backups, in-place recovery, cloud tiering, DRaaS and disaster recovery automation. These systems enable the organization to offer rapid recovery for a high number of applications without breaking the IT budget.