I restored my Linode from backup and now it can't find my disk and won't boot?
After restoring from backup, my Linode won't boot. After booting into Rescue Mode to review my logs, I see an error that looks like:
ALERT! UUID=93d04481-b1d5-4fc2-b563-f3cfe7a86fbb does not exist. Dropping to a shell!
What could be happening and how do I fix it?
There are two common scenarios where this can occur:
1) Linode disk with new UUID
When you restore your disks from Backup, the new disks have different UUIDs than the originals. If your Linode's /etc/fstab file refers to one of your disks by UUID (a Universally Unique Identifier) like
93d04481-b1d5-4fc2-b563-f3cfe7a86fbb instead of by device name, e.g.
/dev/sda, then the newly created Linode will not be able to locate the restored disk, because that disk now has a new UUID.
Here is an example of an /etc/fstab file referring to a device by UUID (the mount options shown are the usual Debian/Ubuntu defaults; yours may differ):

```
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
UUID="93d04481-b1d5-4fc2-b563-f3cfe7a86fbb" / ext4 errors=remount-ro 0 1
/dev/sdb none swap defaults 0 0
```
To correct this in Rescue Mode, complete the steps for "Change Root", then either replace the UUID in that line with the correct block device, or use the blkid command to find the new disk's UUID. For example:

```
# blkid
/dev/sdb: UUID="11c0722d-fcff-4be5-bb2d-4aa6f43107ce" TYPE="swap"
/dev/sda: UUID="26afbc0c-20e5-4da3-967a-eab394e5d5b6" TYPE="ext4"
```
In this case you would now use
26afbc0c-20e5-4da3-967a-eab394e5d5b6 as the new UUID. Alternatively, you can use the block device instead, in which case your fstab file will look like:

```
# /etc/fstab: static file system information.
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/sda / ext4 errors=remount-ro 0 1
/dev/sdb none swap defaults 0 0
```
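The fstab edit above can be sketched in a couple of shell commands. This is a minimal sketch run against a sample copy of the file; in Rescue Mode, after changing root, you would target /etc/fstab itself. The UUIDs are the example values from this thread — substitute your own from blkid.

```shell
# Example UUIDs from this thread; substitute your own values from blkid.
OLD_UUID="93d04481-b1d5-4fc2-b563-f3cfe7a86fbb"
NEW_UUID="26afbc0c-20e5-4da3-967a-eab394e5d5b6"

# Demonstrated on a sample copy; in Rescue Mode you would edit /etc/fstab.
FSTAB=/tmp/fstab.sample
cat > "$FSTAB" <<EOF
UUID="$OLD_UUID" / ext4 errors=remount-ro 0 1
/dev/sdb none swap defaults 0 0
EOF

cp "$FSTAB" "$FSTAB.bak"                  # keep a backup before editing
sed -i "s/$OLD_UUID/$NEW_UUID/" "$FSTAB"  # swap in the new UUID
grep UUID "$FSTAB"                        # confirm the change took
```

Keeping the `.bak` copy means you can diff the two files afterwards and roll back if the edit goes wrong.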
2) Block Storage Volume Missing
If you had a Block Storage Volume attached to your old Linode and you restore its backup to a new Linode, the new Linode will not be able to find the Volume until you detach it from the old Linode and attach it to the new one.
I have replicated the steps given above, and have verified that they were done correctly by subsequently logging back into Rescue Mode, changing root again, and checking the contents of /etc/fstab. I am still unable to connect to the Linode after reboot.
The latest and greatest:
I have changed both lines in the /etc/fstab file to reflect the UUIDs of my boot and swap disks, as given by blkid. I then rebooted my Linode and hit the same problem over plain SSH (the connection eventually just times out). Accessing it via Lish gives me the following error:
does not exist. Dropping to a shell!
This is, as best as I can tell, wholly illogical. Neither UUID I have specified in the fstab file appears in this error. So my question now is: where is this UUID coming from? There is apparently another file somewhere that refers to it, and I am totally clueless.
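One place worth checking beyond /etc/fstab: the "ALERT! UUID=… does not exist" message is printed by the initramfs, which learns the root filesystem from the kernel command line (root=UUID=…), and that is set by the bootloader configuration, not by fstab. So a stale UUID can survive in /boot/grub/grub.cfg or /etc/default/grub even after fstab is fixed. Here is a sketch of the hunt, demonstrated on a sample grub.cfg (the kernel version and paths are illustrative; on the real system, change root and search the real files):

```shell
STALE="93d04481-b1d5-4fc2-b563-f3cfe7a86fbb"  # example UUID from this thread

# Sample file standing in for /boot/grub/grub.cfg on the real system.
mkdir -p /tmp/boot-demo
cat > /tmp/boot-demo/grub.cfg <<EOF
linux /boot/vmlinuz-4.4.0 root=UUID=$STALE ro quiet
EOF

# Search for the stale UUID; on the real system, point this at
# /boot/grub/grub.cfg and /etc/default/grub inside the chroot.
grep -Hn "root=UUID=" /tmp/boot-demo/grub.cfg
```

If the stale UUID does turn up there, on Debian/Ubuntu systems fixing /etc/default/grub and re-running update-grub from inside the chroot regenerates grub.cfg.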
If your Linode's /etc/fstab file is referring to one of your disks by UUID (a Universally Unique Identifier) like 93d04481-b1d5-4fc2-b563-f3cfe7a86fbb instead of the device name, e.g. /dev/sda,
Does reading between the lines of hphillips' comment mean that you could troubleshoot whether the disk still exists and is healthy by replacing the UUID-style entry with a device-name-style entry in your fstab, and at least gain some information, or even eliminate a possible cause of the problem?
IIRC, Knoppix (a German distro, but with English language support) can be, and has been, used for system recovery. It has been many years (Ubuntu 12.04 era) since I needed anything like that, but, IIRC, what they had you do was create a chroot environment (in Knoppix, in this example), mount the entire OS you need to recover on that chrooted mountpoint, and also run some command to get bash working once cd'd into that root environment (I don't recall the exact command). Then you can use the host OS (Knoppix in this example) as your live environment to do work on the mounted OS (the one you want to recover).
It has also been many years since I built a Gentoo Linux system, but (IIRC) that also has you perform steps similar to the Knoppix example (e.g. mount to a chroot environment, then get bash working there). So the thing is, you should be able to do this sort of thing with any distribution of Linux (i.e. any Linode image available), since the commands involved are common to any Linux. The reason I mention this is that, even though it's not specific to system recovery, the Gentoo documentation may be of some use in finding the steps to perform.
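The Knoppix/Gentoo procedure described above boils down to a handful of commands. A sketch, assuming the system to recover lives on /dev/sda and you are root in a live environment; it is guarded behind an IN_RESCUE variable (my own convention, not anything standard) so it does nothing if pasted somewhere else by mistake:

```shell
# Guard (an assumption for this sketch): only run inside the rescue
# environment, where /dev/sda is the disk to recover and you are root.
if [ "${IN_RESCUE:-no}" = yes ]; then
    mount /dev/sda /mnt              # mount the OS to recover
    mount --bind /dev  /mnt/dev      # expose device nodes inside the chroot
    mount --bind /proc /mnt/proc
    mount --bind /sys  /mnt/sys
    chroot /mnt /bin/bash            # "get bash working" in the mounted OS
else
    echo "not in a rescue environment; set IN_RESCUE=yes to run"
fi
```

Once inside the chroot, tools like blkid, grep, and (on Debian/Ubuntu) update-grub operate on the mounted system rather than the live environment, which is what makes this useful for repair, or just for copying data off a broken install.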
Pretty involved but if all else fails you may want to look into this way of recovering (if it's important enough to you that is).
PS: In case it didn't stand out: even if you couldn't repair the borked install, you could at least get access to the data and copy it off of there.