Quarter after quarter, each new report of a successful ransomware attack underscores the growing importance of backup systems. The threat to the business continuity of companies and public institutions fuels demand for investment in cybersecurity and disaster recovery. Backup is the last bastion in all of this – the one we turn to when, despite all safeguards, an attack succeeds. Practice shows that it is easier to recover from backup than to fully secure an IT environment against cyberattack. The question remains: how much can we trust our backup system and our backup copies – in other words, how certain are we that we will have something to recover from? After all, backups are just files, and like any other files they can be deleted, overwritten, or encrypted.
Key Takeaways
- The erosion of IT defenses is measured in hours, and most ransomware attacks target backup copies – making backup both the last line of defense and a priority attack target.
- Backup immutability is not a single feature but a strategy combining appropriate practices, technology, and services, tailored to three scenarios: data destruction, media loss, and the human factor.
- The fastest first step toward data security is off-site backup – even a simple server in a branch office dedicated to backup, or renting a remote repository, radically increases resilience against most known threats.
Definition
Backup IMMUTABILITY is the ability of a backup system to guarantee that created copies remain integral (unchanged), undeletable, and available for recovery throughout the entire required retention period – regardless of hardware failures, human errors, or deliberate attacks.
Rapid erosion of defenses
When profit is the goal, even the most sophisticated defenses will be broken or bypassed. Cybersecurity is a constant arms race. The erosion of security systems has long ceased to be a matter of months – it is now a matter of hours. A single zero-day vulnerability can open the door for criminals to the best-protected systems. On top of that, there is ransomware, empowered by the "as a Service" business model, wreaking havoc on corporate data repositories, aided by phishing and employee carelessness.
This is why last-resort solutions are gaining importance – solutions that can save organizations from massive losses caused by the inability to continue operations. Backup policy and the backup system itself become the last line of defense when corporate data loses its consistency or becomes impossible to read.
Attacking backup increases ransom chances
In the pursuit of profit, cybercriminals are perfecting their attack tactics. And what is the most effective way to strip a victim of any illusions and thereby accelerate ransom payment? Destroy the backup copies – which is relatively easy when they are not additionally protected. It turns out that the majority of current ransomware attacks are aimed at backup repositories. Various studies show that nearly 70% of attacks targeting backup copies are successful. If we add to this that in most cases, compromising administrative credentials to the backup system opens the door to all of the organization's IT systems, the situation does not look very optimistic.
So, is there an effective method of data protection? As always, the devil is in the details. A properly designed backup system architecture – "secure by design" – incorporating best practices in security, along with a well-thought-out backup policy, gives a very high chance that our backup copies will be immutable and, when needed, data can be recovered from them.
How to understand backup IMMUTABILITY?
In short, the goal is to ensure that a created backup copy is integral, unchangeable, and undeletable. Skeptics are probably wondering right now whether this is even possible. After all, backup copies need to be protected against several scenarios, including:
- Destruction – meaning overwriting, deletion, encryption, but also modification, which can be the hardest to detect and signifies a loss of data integrity.
- Physical or logical damage to the storage media where backup copies are kept, due to hardware failure, or for example after an unsuccessful array expansion when a RAID "falls apart," or after a firmware update that ends in error.
- Unintentional, but also intentional deletion of backup by a person with appropriate privileges – meaning human error, "administrator's revenge," or a cybercriminal's attack after compromising the right credentials.
Immutability is not a single feature you toggle on in a console. It is a combination of good practices, appropriate technology, and services, and their selection depends on what you are defending against. Let's examine each scenario separately.
Protection against backup destruction and modification
The primary target of a ransomware attack is not production data but backup copies, because eliminating backup increases the chances of ransom payment.
Silent modification of the backup chain is also very dangerous – it is difficult to detect and undermines the integrity of the backup copy. Moreover, cyclical backup copies are often already infected with malware, which means that after recovery, malicious software will spread again. Worse still, malware can remain dormant for weeks, meaning even older backup copies may be contaminated, and finding a "clean" restore point becomes a serious challenge.
How can such threats be mitigated? There are several methods, which work most effectively when applied in combination.
Immutable Repositories
The idea is simple: storage supported by the operating system and/or application should honor the retention policy, blocking the ability to delete or overwrite backup copies for a specified period. Such backups – the files – are stored in a WORM (Write Once Read Many) mode, and no one, including the administrator, can modify or delete backup copies before the protection period expires. Example solutions:
- Linux Hardened Repository – immutability flag (chattr +i) at the XFS file system level.
- Object Storage – mass object storage; the protection mechanism is S3-compatible Object Lock.
- Veeam Cloud Connect (cloud) – Insider Protection mechanism from Veeam partners (VCSP program).
Important! Make sure your service provider has enabled "Insider Protection" on your account, and agree on the length of the protection period you expect. Typically, deleted copies remain protected for an additional 7–14 days. The provider may also charge an extra fee, since this represents real storage occupancy: after deletion, copies free up space in the customer's repository quota but still occupy the provider's storage.
- Veeam Data Cloud Vault (cloud) – protection via Object Lock.
- Object Storage (AWS S3, Azure Blob, etc.) – protection against deletion, likewise via Object Lock.
- Storage Appliance Lock (Dell DataDomain, HPE StoreOnce, NetApp SnapLock, etc.) – enterprise-class storage solutions with hardware/firmware immutability.
Important! When using cloud object storage, setting excessively long retention periods for backup jobs can result in a painful storage bill. Such backups cannot be deleted before the time specified in the GFS scheme! You may then face a choice: delete the account along with all backup copies, or pay the hefty bill.
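As a back-of-the-envelope illustration of that warning, a minimal stdlib sketch (function names and the figures are hypothetical, not from any provider's pricing) can show how long a WORM-locked copy must be paid for:

```python
from datetime import date, timedelta

def earliest_deletion_date(created: date, retention_days: int) -> date:
    """Earliest date a WORM-locked backup can legally be removed.

    Under S3-style Object Lock in compliance mode, the object cannot be
    deleted before created + retention_days, regardless of who asks.
    """
    return created + timedelta(days=retention_days)

def locked_storage_gb_days(size_gb: float, retention_days: int) -> float:
    """Rough lower bound on billable storage: the copy occupies size_gb
    for the full retention window, even if the data is already obsolete."""
    return size_gb * retention_days

# Example: a 500 GB full backup kept under a 1-year GFS lock
unlock = earliest_deletion_date(date(2025, 1, 6), 365)
cost_basis = locked_storage_gb_days(500, 365)  # GB-days you will be billed for
```

Multiplying the result by a provider's per-GB-month rate before enabling long GFS retention is a cheap way to avoid the bill shock described above.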
Backup encryption
By encrypting backup copies, we protect their confidentiality and integrity. In some cases, we also gain partial protection against ransomware: older cryptolockers (e.g., CryptoWall, Petya/NotPetya, WannaCry, TeslaCrypt) skip data that is already encrypted – a consequence of attack automation and the sloppiness of early malicious code. Unfortunately, modern ransomware actively attacks backup repositories and encrypts everything.
Most backup systems support encryption at rest (AES-256) and in transit (TLS/SSL). Enabling encryption should be mandatory, and encryption keys should be stored and managed with the same care as credentials to the organization's critical systems, preferably using a dedicated KMS solution.
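Python's standard library has no AES-256 implementation, so the sketch below illustrates only the integrity half of the story, using HMAC-SHA256 from the stdlib `hmac` module; in a real deployment the backup software and a KMS handle both encryption and key custody. The key and data here are purely illustrative:

```python
import hashlib
import hmac

def tag_backup(data: bytes, key: bytes) -> str:
    """Compute an HMAC-SHA256 tag over a backup blob.
    The tag is stored separately from the data it protects."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_backup(data: bytes, key: bytes, expected_tag: str) -> bool:
    """Constant-time check that the backup was not silently modified."""
    return hmac.compare_digest(tag_backup(data, key), expected_tag)

key = b"example-key-from-kms"   # in practice: fetched from a KMS, never hard-coded
blob = b"backup-chain-block-0001"
tag = tag_backup(blob, key)

assert verify_backup(blob, key, tag)                    # untouched copy verifies
assert not verify_backup(blob + b"tampered", key, tag)  # silent modification is caught
```

The same principle – a keyed checksum verified before every restore – is what lets a backup system detect the "silent modification of the backup chain" mentioned earlier.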
Backup integrity verification
Automating the backup testing process increases confidence that after a security incident, data and systems can be recovered from backup copies. During test recovery, scripts can also be invoked to scan backup copies with an antivirus engine and identify "clean" restore points.
A signature-based scanning engine should be used for malware detection. For example, the Veeam Threat Hunter scanner integrated with Veeam Backup & Replication combines AV signatures, YARA rules, and entropy analysis. This allows us to identify IoC (Indicators of Compromise) and detect encrypted or compressed data blocks characteristic of ransomware activity.
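The entropy analysis mentioned above can be sketched in a few lines of stdlib Python. Encrypted or compressed blocks look statistically random, approaching 8 bits of Shannon entropy per byte; the 7.5 threshold below is an illustrative assumption, not a value from any particular product:

```python
import math
import os
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Shannon entropy in bits per byte (0.0-8.0).
    Encrypted or compressed data approaches the 8.0 maximum."""
    if not block:
        return 0.0
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in Counter(block).values())

def looks_encrypted(block: bytes, threshold: float = 7.5) -> bool:
    """Heuristic used by ransomware detectors: flag near-random blocks.
    The threshold is an illustrative assumption."""
    return shannon_entropy(block) >= threshold

assert looks_encrypted(os.urandom(64 * 1024))        # random data ~ encrypted
assert not looks_encrypted(b"INVOICE 2024 " * 4096)  # repetitive plaintext
```

Real scanners combine this signal with signatures and YARA rules precisely because high entropy alone also matches legitimate compressed archives.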
Protection against media loss
Even an immutable backup will not help if the media on which the data is stored is physically destroyed – and the list of risks is long. Let me start with force majeure events: fires, flooding, lightning strikes, or even vandalism. Even in a professional data center, something can happen that threatens our data. Consider disk damage caused by the sound wave of an activated gas fire suppression system (a real event in a data center of our national operator). Such a system, without dampeners, acts like a large-caliber cannon shot. For spinning disks with rotating platters and a read/write head, this means permanent and irreversible damage.
Off-site Backup
Redundancy is one of the methods of data protection in case of media loss. Its implementation is aided by the 3-2-1 rule created by Peter Krogh – one of the best-known best practices in data protection. The rule states that our data should exist in three copies, on two different media types, and one copy should be stored off-site.
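The 3-2-1 rule is mechanical enough to check automatically. A minimal sketch (the types and site names are hypothetical) might look like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Copy:
    media: str   # e.g. "disk", "tape", "object-storage"
    site: str    # e.g. "hq", "branch", "cloud"

def satisfies_3_2_1(copies: list[Copy], primary_site: str = "hq") -> bool:
    """Peter Krogh's 3-2-1 rule: at least 3 copies, on at least
    2 different media types, with at least 1 copy off-site."""
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.site != primary_site for c in copies)
    )

inventory = [Copy("disk", "hq"), Copy("disk", "hq"), Copy("tape", "branch")]
assert satisfies_3_2_1(inventory)          # 3 copies, 2 media, 1 off-site
assert not satisfies_3_2_1(inventory[:2])  # too few copies, nothing off-site
```

A periodic job that runs a check like this against the real backup inventory turns the rule from a slideware slogan into an enforced policy.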
Off-site backup is usually an impassable barrier for ransomware, or even for an active attack by a hostile actor. Implementation involves adding a "backup copy job" to each backup job and, depending on policy, continuously copying backups to a second location, which can be Veeam Data Cloud Vault or Veeam Cloud Connect.
Physical separation or Internet isolation
Another measure is logical segmentation of the server environment into zones dedicated to specific resources, with communication restrictions between zones. The backup repository, for example, should sit in a zone that no local network user can reach. This zone should also be disconnected from the Internet (air gap).
An extremely effective protection is storing backups on offline media, e.g., on magnetic tapes in a tape library, which by definition physically separates backup copies from any network.
Monitoring and early warning
Continuous monitoring of production and backup environments with anomaly analysis usually provides much earlier warnings of an approaching disaster. Even a simple alarm about running out of repository space makes the administrator's life more bearable. Additionally, alerts about sudden increases in storage occupancy or spikes in I/O process load on weekends can be a signal that we are dealing with a cryptolocker. If we also integrate the monitoring system with a SIEM, we gain a very effective guardian.
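A crude version of that early-warning signal – flagging days whose repository growth far exceeds the recent baseline – can be sketched as follows (the spike factor and figures are illustrative assumptions):

```python
def growth_alerts(daily_usage_gb: list[float], spike_factor: float = 2.0) -> list[int]:
    """Flag days whose repository growth exceeds spike_factor times the
    average of the preceding daily increments -- a crude cryptolocker
    early-warning signal. Returns indices into daily_usage_gb."""
    alerts = []
    deltas = [b - a for a, b in zip(daily_usage_gb, daily_usage_gb[1:])]
    for i in range(1, len(deltas)):
        baseline = sum(deltas[:i]) / i
        if baseline > 0 and deltas[i] > spike_factor * baseline:
            alerts.append(i + 1)
    return alerts

# Steady ~10 GB/day growth, then a 120 GB jump: typical of mass re-encryption,
# which destroys deduplication ratios and inflates incremental backups.
usage = [1000, 1010, 1021, 1030, 1150]
assert growth_alerts(usage) == [4]
```

Production monitoring tools use richer statistics, but the underlying idea – alert when storage occupancy or I/O deviates sharply from its baseline – is the same one that feeds a SIEM integration.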
Protection against the human factor
The insider threat category is not just about the so-called "administrator's revenge" – it also includes a tired IT worker who accidentally overwrites a configuration, an employee who clicks on a phishing link, or a cybercriminal who has just compromised someone's credentials and, from the system's perspective, is an authorized administrator.
Protecting data and backup against threats in this category requires implementing best practices, which can be drawn from frameworks such as NIST CSF 2.0, CIS Controls v8, or DORA/NIS2. Implementing even a few of them will protect your backup system against most known threats.
Role separation and least privilege
We start by properly designing logical and physical access permissions to systems and machine-to-machine accounts. Applying the Principle of Least Privilege is fundamental. It would also be beneficial for the backup system to support Role-Based Access Control (RBAC), which is not as common as one might think.
Additionally, critical operations, such as deleting backup copies, should be authorized by at least two people (Four-Eyes Authorization) – an administrator and a security officer, or two administrators.
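A minimal sketch of such a four-eyes gate could look like this (the class and principal names are hypothetical; real backup products implement this in the management console):

```python
class FourEyesDeletion:
    """Deleting a backup requires approval from two distinct principals,
    e.g. an administrator and a security officer."""

    def __init__(self) -> None:
        self._approvals: dict[str, set[str]] = {}

    def approve(self, backup_id: str, principal: str) -> None:
        # A set deduplicates: the same person approving twice still counts once.
        self._approvals.setdefault(backup_id, set()).add(principal)

    def can_delete(self, backup_id: str) -> bool:
        return len(self._approvals.get(backup_id, set())) >= 2

gate = FourEyesDeletion()
gate.approve("chain-042", "admin.kowalski")
assert not gate.can_delete("chain-042")      # one approval is not enough
gate.approve("chain-042", "admin.kowalski")  # same person again: still one vote
assert not gate.can_delete("chain-042")
gate.approve("chain-042", "secoff.nowak")
assert gate.can_delete("chain-042")          # two distinct principals approved
```

The key design point is deduplication by principal: a compromised single account cannot supply both approvals.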
Early-stage threat detection
One method is predicting attacks based on TTPs (Tactics, Techniques, and Procedures), for example with the Recon Scanner from Coveware (a Veeam company). The goal is to learn early that dual-use tools have appeared on servers, or that commands typical of environment enumeration are being executed. The scanner detects and reports the presence of such tools and the use of PowerShell/bash commands specific to attacks.
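As an illustration only – not the Recon Scanner itself – a toy log filter matching a few well-known enumeration commands might look like this (the pattern list is a hypothetical, heavily abridged stand-in for a real TTP rule set):

```python
import re

# Illustrative patterns only -- real TTP scanners ship far richer rule sets.
ENUMERATION_PATTERNS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"\bGet-ADComputer\b",               # AD computer enumeration (PowerShell)
        r"\bnltest\s+/dclist\b",             # domain controller discovery
        r"\bnet\s+group\s+\"?domain admins\"?",  # privileged group enumeration
        r"\bnmap\b",                         # network scanning
    )
]

def flag_suspicious(command_log: list[str]) -> list[str]:
    """Return logged commands matching known environment-enumeration TTPs."""
    return [cmd for cmd in command_log
            if any(p.search(cmd) for p in ENUMERATION_PATTERNS)]

log = [
    "Get-ChildItem C:\\Backups",
    "nltest /dclist:corp.local",
    'net group "Domain Admins" /domain',
]
hits = flag_suspicious(log)
assert hits == log[1:]  # the two enumeration commands are flagged
```

The value of this kind of detection is timing: enumeration happens days or weeks before encryption, leaving a window to act.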
Employee training
Seemingly obvious, yet most frequently neglected. And still the most popular attack vector targeting data is phishing. Specially crafted messages, practically indistinguishable from originals (LLMs now do that job), are the cause of most ransomware infections. Preparing teams through training and exercises increases sensitivity to threats, reducing the success rate of typical attacks through this channel.
Where to start?
Immutability is not a product – it is a strategy combining good practices, technologies, and services. It needs to be approached methodically, as each area requires a different response, and together they form a layered protection system (defense in depth).
If I had to point to just one thing in this arsenal, I would choose off-site backup. Let me emphasize right away that this is not synonymous with copying backups to the cloud, although that is the fastest option. To store backups outside the primary site, all you need is a server packed with disks (sized to your storage requirements) and a secure location in a branch office, or as a last resort, a 1RU colocation in a data center. The cheapest approach is the ready-made Veeam Hardened Repository ISO, while Object Storage solutions – installable on your own hardware or rented as a complete service – are another interesting and affordable option.
Then it is worth reviewing your existing backup system and perhaps considering an upgrade or migration to a modern platform. The point is for the system to meet current requirements for active backup protection and testing, malware detection, and anomaly monitoring. Furthermore, it should support most existing virtualization platforms, as this tremendously facilitates Disaster Recovery scenarios. It would also be good for the backup system to keep pace with new threats brought by increasingly accessible LLMs.
If after all that you still have an appetite for more, I invite you to explore our Four Rings of Cyber Resilience methodology and arm yourself sufficiently to meet all the recommendations of NIS2, GDPR, and whatever else may appear in the future.