July 21, 2021 | Eric Harless

3-2-1 Works for Backup, But It’s Not Enough

Storing multiple copies of backups is a great start, but no longer sufficient to keep your clients protected.

Before cloud services became mainstream, backup and recovery processes required a lot of time and energy. Many MSPs and backup admins implemented multipart strategies that combined local image backups with offsite storage. Many accomplished the latter by making a second local copy on tape media and manually transporting it to a remote location, also known as the MSP’s home.

Fast-forward to the present, and many of the cloud-based apps we’re using are updated and backed up in real time with little effort from the user. If the computer or a local server crashes, users just log back into their Microsoft 365 suite from another computer and pick up where they left off.

Because the cloud has made our computing experience so much easier, data backup has become an afterthought for some companies—and this is a dangerous mindset. The SaaS experience may work flawlessly 99% of the time, but it is still susceptible to many of the same problems as on-premises data, including:

  • Accidental deletion—Many SaaS providers keep only 30 days of backup history, so if the deletion isn’t caught before then, the file’s gone forever.
  • Employees leaving—If a user’s email subscription gets turned off before their data is copied, it can lead to data loss.
  • Sabotage—If a rogue employee deletes critical data that’s not discovered during the standard 30-day retention period, it’s gone for good.
  • Hacking—Cybercriminals are increasingly targeting cloud services for ransomware attacks.

What do all the above scenarios have in common? In each, a person is the cause of the lost data. As workloads move to the cloud, data loss is caused less by technology glitches and crashes and more by accidental—and intentional—human-related problems.

Before we can discuss best practices for backup and recovery, it’s vital to address and minimize the potential damage any individual can cause. Here is a simple checklist you can apply to every employee and authorized network user accessing either SaaS or on-premises backups for every customer:

  1. Create named users only. This avoids accidentally giving the billing department access to network user logs, for instance.
  2. Remove shared accounts. Everyone should have their own login credentials, so you know who is on the network at any time.
  3. Limit the number of root/administrator roles. This minimizes the number of individuals with access to the organization’s entire data set.
  4. Assign least-privilege roles. This ensures users have only the access necessary to do their jobs. If someone needs special one-time access to something, they should be required to get permission from an administrator.
  5. Monitor and enforce two-factor authentication (2FA) or multifactor authentication (MFA). Bad actors want to purge your backups and tamper with files. Employing 2FA or MFA, along with the other tips here, mitigates the damage from a breach and buys the IT department time to fix the problem.
  6. Don’t log in to multitenant backup consoles from an untrusted/customer device. Enough said.
  7. Use strong passwords and a password manager. One of the most common ways hackers break into computers and networks is by guessing passwords. A password manager generates long, random passwords and keeps them safe and easy to access.
  8. Zero password reuse. Again, enough said.
  9. Remember to log out. You’d be surprised how often a cat or small child walking across an unattended keyboard can delete something important if you stay logged in. Log out, always.
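Several of the account-hygiene items above (shared accounts, admin sprawl, missing MFA) lend themselves to automated checks. Here is a minimal Python sketch; the record format and field names are invented for illustration and don’t correspond to any particular product’s API:

```python
# Minimal account-hygiene audit sketch. The account records and field
# names are hypothetical, not any vendor's actual API.
def audit_accounts(accounts, max_admins=2):
    findings = []
    admins = [a for a in accounts if a["role"] == "admin"]
    if len(admins) > max_admins:
        findings.append(f"too many admin accounts: {len(admins)}")
    for a in accounts:
        if a.get("shared"):
            findings.append(f"shared account: {a['name']}")
        if not a.get("mfa_enabled"):
            findings.append(f"MFA disabled: {a['name']}")
    return findings

accounts = [
    {"name": "jdoe", "role": "admin", "shared": False, "mfa_enabled": True},
    {"name": "backup-ops", "role": "user", "shared": True, "mfa_enabled": False},
]
print(audit_accounts(accounts))
# → ['shared account: backup-ops', 'MFA disabled: backup-ops']
```

A report like this, run against each customer’s user directory, turns the checklist from a one-time exercise into something you can enforce continuously.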

With the human risk factors minimized, let’s tackle the last line of defense—the technical side, also known as the data backup and recovery solution.

The good news is that creating a comprehensive backup and recovery strategy is easier than it was a decade ago. First, the 3-2-1 backup rule still applies. This catchy phrase means you should:

  • Keep three separate copies of your data (one of which you can use as your production copy)
  • Store them on at least two different types of media (e.g., cloud, disk, NAS, tape, etc.)
  • Store at least one of the three copies offsite, away from your production center
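The 3-2-1 rule is simple enough to verify programmatically against a backup inventory. A minimal Python sketch, where the inventory format is an assumption made up for illustration:

```python
def meets_3_2_1(copies):
    """Check a backup inventory against the 3-2-1 rule.

    `copies` is a list of dicts with "media" (e.g., "disk", "nas",
    "cloud", "tape") and "offsite" (bool) keys -- a made-up format
    for illustration only.
    """
    return (
        len(copies) >= 3                            # three copies
        and len({c["media"] for c in copies}) >= 2  # two media types
        and any(c["offsite"] for c in copies)       # one copy offsite
    )

inventory = [
    {"media": "disk", "offsite": False},   # production copy
    {"media": "nas", "offsite": False},    # local backup
    {"media": "cloud", "offsite": True},   # offsite backup
]
print(meets_3_2_1(inventory))  # → True
```

Dropping the cloud copy from the inventory would fail both the “three copies” and the “one offsite” tests, which is exactly the situation the rule is designed to catch.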

You must also test your backups regularly to ensure you can recover a compromised data set within an adequate timeframe (your recovery time objective, or RTO) and with an acceptable level of loss (your recovery point objective, or RPO). This step used to be a massive pain with image-based backups and bare-metal restores: you had to build another server with identical hardware components and drivers before even starting the time-consuming restore process. In the modern era of virtual machines, a good solution can reduce an eight-hour recovery window to less than 15 minutes (see the sidebar below for what to look for in a backup and disaster recovery solution), which makes it much more feasible to include this step in your backup and recovery protocol.
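RPO and RTO boil down to simple time arithmetic: data loss is the gap between the last good backup and the failure, and downtime is the gap between the failure and restored service. A minimal Python sketch, with a hypothetical test-restore timeline:

```python
from datetime import datetime, timedelta

def check_objectives(last_backup, failure, restored, rpo, rto):
    """Return (rpo_met, rto_met) for one recovery-test run.

    data loss = time between the last good backup and the failure;
    downtime  = time between the failure and service restoration.
    """
    data_loss = failure - last_backup
    downtime = restored - failure
    return data_loss <= rpo, downtime <= rto

# Hypothetical timeline from a scheduled recovery test
last_backup = datetime(2021, 7, 1, 2, 0)   # nightly backup at 2:00 a.m.
failure     = datetime(2021, 7, 1, 9, 30)  # server fails mid-morning
restored    = datetime(2021, 7, 1, 9, 42)  # VM failover completes

print(check_objectives(last_backup, failure, restored,
                       rpo=timedelta(hours=24), rto=timedelta(minutes=15)))
# → (True, True)
```

Recording these two numbers for every test restore gives you hard evidence that a customer’s plan actually meets the objectives it promises.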

Closing Thoughts

The cloud has made a lot of IT processes easier for users and MSPs alike. However, MSPs must ensure their customers aren’t making incorrect assumptions about their cloud products—especially when it comes to privacy, protection, and data backups. Even if a data set is recoverable by Microsoft, for instance, will their timescales be acceptable? Customers should consider what downtime will cost them and, with their MSP’s help, make sure their backup and recovery plans align with their business continuity needs.

Officially named as head backup nerd for SolarWinds MSP, Eric Harless has over 25 years of data protection experience and has held senior-level product management, marketing, system engineering, sales, and customer support roles with several data protection and disaster recovery vendors, including SolarWinds, FalconStor Software, Symantec, CA Technologies, CommVault Systems, Yosemite Technologies, and Veritas Software.

Must-Have BDR Features

It’s estimated that in 2021 a ransomware attack will affect a business every 11 seconds. Cybercriminals know that companies with good backup solutions can roll back their systems to a pre-infected state and get back to work without paying the ransom. And that’s why many of these bad actors are taking a long-game approach that entails attacking backup files so when they unleash their ransomware, the recovery files don’t work.

To stay a step ahead of these attacks and others, it’s imperative to select a backup and disaster recovery (BDR) solution where you can control/restrict changes to backup client settings through:

  • Lockable selections and schedules set by profiles
  • Remote access in the local backup GUI
  • Assignable GUI passwords to restrict all GUI access

It’s also essential to ensure your BDR solution supports the following data protection standards and best practices:

  • AES-256 encryption and TLS 1.2 or higher communication tunnels
  • Data encryption in motion and at rest
  • System-managed and private encryption keys
  • Increased backup frequency (daily to hourly)
  • Archiving for extended retention
  • Secure NAS-attached local recovery caches/LocalSpeedVaults
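You can hold your own tooling to the same “TLS 1.2 or higher” floor. For example, Python’s standard ssl module lets a client context refuse anything older than TLS 1.2; this illustrates the standard itself, not any particular BDR product’s configuration:

```python
import ssl

# Build a client context that refuses anything older than TLS 1.2,
# matching the "TLS 1.2 or higher" requirement above.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # → True
```

Any connection attempt through this context with an older protocol will fail the handshake rather than silently downgrade.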
