Subject: Re: Restore LVM snapshot without creating a full dump to an "external" device?
Date: Sunday 9th March 2008 23:31:23 UTC
On Sun, Mar 09, 2008 at 11:05:45PM +0100, Bas van Schaik wrote:
> Hi all,
>
> When I started to use LVM snapshots, I presumed that it was easy to
> restore a system to such a snapshot. As far as I can see now, this
> presumption was incorrect... People on the internet write that I should
> dump the whole snapshot using dd and then write it over the original
> volume. This actually implies that I need another device with at least
> the size of the original volume available to dump to. In my situation,
> this means that I need about 2 TB of free space to recover this snapshot!
>
> Isn't there a more sophisticated way to restore the snapshot than just
> dumping it?
>
> 1) create snapshot of /dev/myvolumegroup/myvolume to
>    /dev/myvolumegroup/mysnapshot
> 2) dd if=/dev/myvolumegroup/mysnapshot of=/tmp/mysnapshot.dd
> 3) lvremove /dev/myvolumegroup/mysnapshot
> 4) dd if=/tmp/mysnapshot.dd of=/dev/myvolumegroup/myvolume

you've got (size-of-your-volume) free space in /tmp?
pretty large /tmp, or pretty small volume, I guess.

> Something like:
> 1) lvrevert /dev/myvolumegroup/mysnapshot /dev/myvolumegroup/myvolume
>
> I'd like to hear your thoughts on this, because I think it should be
> fairly easy to restore a COW snapshot. Or am I wrong and missing something?

you may want to investigate the status of
http://fedoraproject.org/wiki/StatelessLinux/CachedClient
where it says "The LVM and device-mapper code to allow merging is
awaiting upstream review."

or you can try, at your own risk, the hack below. As I'm not too deep
into device-mapper snapshot code and development, please correct me if
I'm wrong, and don't shoot yourself too readily; apply your own mental
effort as appropriate :->

The current dm-snap and dm-exception-store are implemented in a way
that, for a single snapshot, you get:

  (mapping only)  snapshot-origin
  (real storage)  origin-real
  (mapping only)  snapshot
  (real storage)  COW (or exception store)

The COW on-disk format is pretty simple (as of now):
it's all fixed-size chunks.
The COW store starts with a 4x32-bit header:

  [SnAp][valid][version][chunk_size in sectors]

so any valid snapshot looks like: "SnAp" 1 1 [power of two].
chunk_size is what you set with the lvcreate "-c" option. The rest of
the (likewise chunk_size'd) header block is unused.

Expressed in chunks, the COW storage looks like:

  [header chunk][exception area 0][data chunks][....][exception area 1][...]

where each exception area is one chunk itself. Each exception area
holds a mapping table of "logical chunk number" to "in-COW-storage
chunk number", both 64-bit. The logical number is called "old", the
in-COW address is called "new":

  entry                 content
  1                     [old][new]
  2                     [old][new]
  3                     ...
  (chunk_size*512/16)   [old][new]

Following that are as many data chunks. This whole thing is
append-only.

As a side note: since the "new" address is completely implicit in this
scheme, I wonder why it is recorded at all. Maybe the entries are not
listed in creation/submit order, but in completion order.

I attached a perl script that opens its argument _read-only_, with
O_DIRECT, reads these mappings, and spits out "dd" command lines.
C code would look pretty much the same, I guess. You could replace the
"print dd" stuff with a real pread/pwrite, and whoops, there is your
"lvrevert", sort of.

Usage would be:

 * Make sure nothing will concurrently access any of the involved
   devices. Neither origin nor snapshot may be mounted! Neither should
   be accessed, either.

 * Now, to get $origin into the state that is recorded on $cow, do:

   # cow=/dev/mapper/vgXY-somedev-snap-cow
   # origin=/dev/mapper/vgXY-somedev

   ## optionally create a new snapshot of the $origin,
   ## so you can change your mind later :->
   # lvcreate -s -n snap_before_revert -L $enough_room vgXY/somedev

   ## then run
   # list_exception_chunks $cow | tee tmp.out | less +F

   ## check for plausibility...
   ## ...and chicken out, or execute those dd lines:
   # sed -ne 's/^#d //p' < tmp.out > tmp.sh
   # source tmp.sh

   ## verify outcome,
   ## throw away your snapshot(s),
   ## and create new ones.
This can even be done if the snapshot is (still valid but almost) full,
because we are only dd'ing chunks onto the origin that already exist in
the COW storage, so nothing triggers a new COW exception.

Again: use at your own risk.

The reason I wrote it and used it once (literally): the origin in
question was 1.7 TB (iirc) and there was simply no room left in the
available VGs for a clone. And it worked. YMMV.

--
: commercial DRBD/HA support and consulting: sales at linbit.com      :
: Lars Ellenberg                          Tel +43-1-8178292-0         :
: LINBIT Information Technologies GmbH    Fax +43-1-8178292-82        :
: Vivenotgasse 48, A-1120 Vienna/Europe   http://www.linbit.com       :