Dirty Bitmaps are objects that track which data needs to be backed up for the next incremental backup.
Dirty bitmaps can be created at any time and attached to any node (not just complete drives).
A dirty bitmap's name is unique to the node, but bitmaps attached to different nodes can share the same name.
Dirty bitmaps created for internal use by QEMU may be anonymous and have no name, but user-created bitmaps must be named. There can be any number of anonymous bitmaps per node.
The name of a user-created bitmap must not be empty ("").
A bitmap can be “frozen,” which means that it is currently in use by a backup operation and cannot be deleted, renamed, written to, reset, etc.
The normal operating mode for a bitmap is “active.”
To create a new bitmap, named “bitmap0,” attached to “drive0”:
{ "execute": "block-dirty-bitmap-add", "arguments": { "node": "drive0", "name": "bitmap0" } }
This bitmap will have a default granularity that matches the cluster size of its associated drive, if available, clamped to the range [4KiB, 64KiB]. The current default for qcow2 is 64KiB.
To create a new bitmap that tracks changes in 32KiB segments:
{ "execute": "block-dirty-bitmap-add", "arguments": { "node": "drive0", "name": "bitmap0", "granularity": 32768 } }
Bitmaps that are frozen cannot be deleted.
Deleting the bitmap does not impact any other bitmaps attached to the same node, nor does it affect any backups already created from this node.
Because bitmaps are only unique to the node to which they are attached, you must specify the node/drive name here, too.
{ "execute": "block-dirty-bitmap-remove", "arguments": { "node": "drive0", "name": "bitmap0" } }
Resetting a bitmap will clear all information it holds.
An incremental backup created from an empty bitmap will copy no data, as if nothing has changed.
{ "execute": "block-dirty-bitmap-clear", "arguments": { "node": "drive0", "name": "bitmap0" } }
Bitmaps can be safely modified when the VM is paused or halted by using the basic QMP commands. For instance, you might boot the VM in a paused state, create a full drive backup of drive0, create a new bitmap attached to drive0, and then resume execution; incremental backups are then ready to be created.
At this point, the bitmap and drive backup would be correctly in sync, and incremental backups made from this point forward would be correctly aligned to the full drive backup.
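A minimal sketch of that halted-VM sequence as individual QMP commands (the backup path is a placeholder):
{ "execute": "stop" }
{ "execute": "drive-backup", "arguments": { "device": "drive0", "sync": "full", "target": "/path/to/full_backup.img", "format": "qcow2" } }
{ "execute": "block-dirty-bitmap-add", "arguments": { "node": "drive0", "name": "bitmap0" } }
{ "execute": "cont" }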
This is not particularly useful if we decide we want to start incremental backups after the VM has been running for a while; in that case, the bitmap and the full drive backup must be created at the same point in time while the guest keeps running, which is what QMP transactions provide.
The supported bitmap transaction actions are block-dirty-bitmap-add and block-dirty-bitmap-clear. Their usages are identical to their respective QMP commands, but see below for examples.
As outlined in the justification, perhaps we want to create a new incremental backup chain attached to a drive.
{ "execute": "transaction", "arguments": { "actions": [ {"type": "block-dirty-bitmap-add", "data": {"node": "drive0", "name": "bitmap0"} }, {"type": "drive-backup", "data": {"device": "drive0", "target": "/path/to/full_backup.img", "sync": "full", "format": "qcow2"} } ] } }
Maybe we just want to create a new full backup with an existing bitmap and want to reset the bitmap to track the new chain.
{ "execute": "transaction", "arguments": { "actions": [ {"type": "block-dirty-bitmap-clear", "data": {"node": "drive0", "name": "bitmap0"} }, {"type": "drive-backup", "data": {"device": "drive0", "target": "/path/to/new_full_backup.img", "sync": "full", "format": "qcow2"} } ] } }
Now for the star of the show: the incremental backup itself.
Nota Bene! Only incremental backups of entire drives are supported for now. So despite the fact that you can attach a bitmap to any arbitrary node, they are only currently useful when attached to the root node. This is because drive-backup only supports drives/devices instead of arbitrary nodes.
Create a full backup and sync it to the dirty bitmap, as in the transactional examples above; or with the VM offline, manually create a full copy and then create a new bitmap before the VM begins execution.
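For the offline variant, a minimal sketch of the manual full copy (assuming the guest image is a qcow2 file named “drive0.qcow2”; any faithful copy method works):
# qemu-img convert -O qcow2 drive0.qcow2 full_backup.img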
Create a destination image for the incremental backup that utilizes the full backup as a backing image.
# qemu-img create -f qcow2 incremental.0.img -b full_backup.img -F qcow2
Issue the incremental backup command:
{ "execute": "drive-backup", "arguments": { "device": "drive0", "bitmap": "bitmap0", "target": "incremental.0.img", "format": "qcow2", "sync": "incremental", "mode": "existing" } }
Create a new destination image for the incremental backup that points to the previous one, e.g.: ‘incremental.1.img’
# qemu-img create -f qcow2 incremental.1.img -b incremental.0.img -F qcow2
Issue a new incremental backup command. The only difference here is that we have changed the target image below.
{ "execute": "drive-backup", "arguments": { "device": "drive0", "bitmap": "bitmap0", "target": "incremental.1.img", "format": "qcow2", "sync": "incremental", "mode": "existing" } }
In the event of an error that occurs after a backup job is successfully launched, either by a direct QMP command or a QMP transaction, the user will receive a BLOCK_JOB_COMPLETED event with a failure message, accompanied by a BLOCK_JOB_ERROR event.
In the case of the job being cancelled, the user will receive a BLOCK_JOB_CANCELLED event instead of the COMPLETED and ERROR event pair.
In either case, the incremental backup data contained within the bitmap is safely rolled back, and the data within the bitmap is not lost. The image file created for the failed attempt can be safely deleted.
Once the underlying problem is fixed (e.g. more storage space is freed up), you can simply retry the incremental backup command with the same bitmap.
Create a target image:
# qemu-img create -f qcow2 incremental.0.img -b full_backup.img -F qcow2
Attempt to create an incremental backup via QMP:
{ "execute": "drive-backup", "arguments": { "device": "drive0", "bitmap": "bitmap0", "target": "incremental.0.img", "format": "qcow2", "sync": "incremental", "mode": "existing" } }
Receive an event notifying us of failure:
{ "timestamp": { "seconds": 1424709442, "microseconds": 844524 }, "data": { "speed": 0, "offset": 0, "len": 67108864, "error": "No space left on device", "device": "drive1", "type": "backup" }, "event": "BLOCK_JOB_COMPLETED" }
Delete the failed incremental, and re-create the image.
# rm incremental.0.img
# qemu-img create -f qcow2 incremental.0.img -b full_backup.img -F qcow2
Retry the command after fixing the underlying problem, such as freeing up space on the backup volume:
{ "execute": "drive-backup", "arguments": { "device": "drive0", "bitmap": "bitmap0", "target": "incremental.0.img", "format": "qcow2", "sync": "incremental", "mode": "existing" } }
Receive confirmation that the job completed successfully:
{ "timestamp": { "seconds": 1424709668, "microseconds": 526525 }, "data": { "device": "drive1", "type": "backup", "speed": 0, "len": 67108864, "offset": 67108864}, "event": "BLOCK_JOB_COMPLETED" }
Sometimes, a transaction will succeed in launching and return success, but then later the backup jobs themselves may fail. It is possible that a management application may have to deal with a partial backup failure after a successful transaction.
If multiple backup jobs are specified in a single transaction and one of them fails, the failure will not affect the other backup jobs in any way (by default).
The job(s) that succeeded will clear the dirty bitmap associated with the operation, but the job(s) that failed will not. Because the successful jobs have already cleared their bitmaps, it is not “safe” to delete any incremental backups that were created successfully in this scenario, even though others failed.
QMP example highlighting two backup jobs:
{ "execute": "transaction", "arguments": { "actions": [ { "type": "drive-backup", "data": { "device": "drive0", "bitmap": "bitmap0", "format": "qcow2", "mode": "existing", "sync": "incremental", "target": "d0-incr-1.qcow2" } }, { "type": "drive-backup", "data": { "device": "drive1", "bitmap": "bitmap1", "format": "qcow2", "mode": "existing", "sync": "incremental", "target": "d1-incr-1.qcow2" } }, ] } }
QMP example response, highlighting one success and one failure:
Acknowledgement that the Transaction was accepted and jobs were launched:
{ "return": {} }
Later, QEMU sends notice that the first job was completed:
{ "timestamp": { "seconds": 1447192343, "microseconds": 615698 }, "data": { "device": "drive0", "type": "backup", "speed": 0, "len": 67108864, "offset": 67108864 }, "event": "BLOCK_JOB_COMPLETED" }
Later yet, QEMU sends notice that the second job has failed:
{ "timestamp": { "seconds": 1447192399, "microseconds": 683015 }, "data": { "device": "drive1", "action": "report", "operation": "read" }, "event": "BLOCK_JOB_ERROR" }
{ "timestamp": { "seconds": 1447192399, "microseconds": 685853 }, "data": { "speed": 0, "offset": 0, "len": 67108864, "error": "Input/output error", "device": "drive1", "type": "backup" }, "event": "BLOCK_JOB_COMPLETED" }
In the above example, “d0-incr-1.qcow2” is valid and must be kept, but “d1-incr-1.qcow2” is invalid and should be deleted. If a VM-wide incremental backup of all drives at a point-in-time is to be made, new backups for both drives will need to be made, taking into account that a new incremental backup for drive0 needs to be based on top of “d0-incr-1.qcow2.”
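As an illustration, the follow-up images might be created like this; the file names “d0-incr-2.qcow2” and “d1-full.qcow2” (drive1's original full backup) are assumptions for the sake of the example:
# rm d1-incr-1.qcow2
# qemu-img create -f qcow2 d1-incr-1.qcow2 -b d1-full.qcow2 -F qcow2
# qemu-img create -f qcow2 d0-incr-2.qcow2 -b d0-incr-1.qcow2 -F qcow2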
While jobs launched by transactions normally complete or fail on their own, it is possible to instruct them to complete or fail together as a group.
QMP transactions take an optional properties structure that can affect the semantics of the transaction.
The “completion-mode” transaction property can be either “individual” which is the default, legacy behavior described above, or “grouped,” a new behavior detailed below.
Delayed Completion: In grouped completion mode, no jobs will report success until all jobs are ready to report success.
Grouped failure: If any job fails in grouped completion mode, all remaining jobs will be cancelled. Any incremental backups will restore their dirty bitmap objects as if no backup command was ever issued.
Here's the same example scenario from above with the new property:
{ "execute": "transaction", "arguments": { "actions": [ { "type": "drive-backup", "data": { "device": "drive0", "bitmap": "bitmap0", "format": "qcow2", "mode": "existing", "sync": "incremental", "target": "d0-incr-1.qcow2" } }, { "type": "drive-backup", "data": { "device": "drive1", "bitmap": "bitmap1", "format": "qcow2", "mode": "existing", "sync": "incremental", "target": "d1-incr-1.qcow2" } }, ], "properties": { "completion-mode": "grouped" } } }
QMP example response, highlighting a failure for drive1:
Acknowledgement that the Transaction was accepted and jobs were launched:
{ "return": {} }
Later, QEMU sends notice that the second job has errored out, and that the first job was cancelled as a result:
{ "timestamp": { "seconds": 1447193702, "microseconds": 632377 }, "data": { "device": "drive1", "action": "report", "operation": "read" }, "event": "BLOCK_JOB_ERROR" }
{ "timestamp": { "seconds": 1447193702, "microseconds": 640074 }, "data": { "speed": 0, "offset": 0, "len": 67108864, "error": "Input/output error", "device": "drive1", "type": "backup" }, "event": "BLOCK_JOB_COMPLETED" }
{ "timestamp": { "seconds": 1447193702, "microseconds": 640163 }, "data": { "device": "drive0", "type": "backup", "speed": 0, "len": 67108864, "offset": 16777216 }, "event": "BLOCK_JOB_CANCELLED" }