All backups are failing to work in new Linodes

Had a problem and tried to restore my existing system from a backup. I couldn't even log in at that point.

I've since tried to restore backups (yes from a previously working system) to new Linodes, and none of them are working. I have to assume this is a failure in the Linode backup system.

When I log in using Weblish, I invariably get the following:

Gave up waiting for root file system device. Common problems:

  • Boot args (cat /proc/cmdline)
  • Check rootdelay= (did the system wait long enough?)
  • Missing modules (cat /proc/modules; ls /dev)
    ALERT! UUID=b7ea8dc6-3710-46cd-b634-60cea0efd02b does not exist. Dropping to a shell!

The same thing is happening with all of the Linodes running a backup.

Years ago I did a restore and had no problems. Why would this backup system fail like this?

14 Replies

Based on the last issue that you posted, it looks like either the fstab file or the /boot/grub/grub.cfg of your original Linode is referencing a specific UUID, which would be unique to a specific disk. Because this restore does not have that specific disk, it's throwing the error you're seeing.
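If it helps to see the mismatch concretely, here's a rough sketch. It's a demo on a sample fstab line; the /media/sda paths in the comments are just an assumption about where you'd mount the restored disk in rescue mode, as in the steps later in this thread:

```shell
# Demo with a sample fstab line; on the restored disk the real files would be
# /media/sda/etc/fstab and /media/sda/boot/grub/grub.cfg, and the second list
# would come from: blkid -s UUID -o value /dev/sda /dev/sdb
printf 'UUID=b7ea8dc6-3710-46cd-b634-60cea0efd02b / ext4 defaults 0 1\n' > /tmp/fstab.demo

# UUIDs the config still references (after a restore these may belong to
# the original, now-deleted disk)
grep -ho 'UUID=[0-9a-f-]*' /tmp/fstab.demo | sort -u
```

If a UUID shows up in the config but not in the blkid output, that's the UUID the "does not exist" alert is complaining about.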

There are a couple of other Community Questions posts that will help you with these errors:

I restored from my Linode from backup and now my Linode won't find my disk and can't boot?

I can't boot from my restored backup! Seeing 'UUID does not exist'

If those do not resolve the issue for you, feel free to follow up here, or open a Support ticket, as Support might have some additional insight into what is causing the issues for you.

But would this apply when restoring a backup to the existing Linode, on the same disk?

The first thing I tried was a restore overwriting existing installation. That would be on the same disk.

When you overwrite previous disks, you're deleting the old disk first, so it would be a new disk.

OK, yeah. That was my bad.

What a mess.

Unfortunately, I'd already seen the pages you linked (thank you for the links) before I posted this comment, and I can't get any of the solutions to work.

Yeah, I've opened up a support ticket. Thanks.

I know this may not seem helpful for your current use case, but if you need to restore from a backup in the future, there's a workflow that can help eliminate extended downtime.

You can restore to a new Linode, check to make sure things are in working order, and then transfer the IP addresses between the old and new Linodes. This helps with any DNS configs that point to the old Linode. The only downtime you'll have is the equivalent of a reboot, which is needed to ensure proper network routing after the IP change.

I did restore backups to new Linodes. I actually tried two different backups to two new Linodes. Same thing, same problems. Nothing works.

I've done this in the past (backup to a new Linode), and had no problems. I just can't figure out what the heck is going on now.

Trying to restore to a new Linode again. At this point, will have to wait for support, because I'm fresh out of things to try.

Watrick, when I tried the Change Root step

mount -o exec,barrier=0 /dev/sda

I get

mount: /dev/sda: can't find in /etc/fstab.

When I tried your solution

mkdir /media/sda



It still does not work. I get the exact same error. I can't even start to fix anything if I can't even get beyond this step.

I am beginning to wonder if the problems I was having with my site originally were really due to my installations, or something wrong with the Linode instance.

Hey @shelleyp, these commands should allow you to mount the disk:

mkdir /media/sda
mount -o barrier=0 /dev/sda /media/sda

From there, you should be able to proceed through the Change Root guide and the steps outlined in this post.

Evidently, I needed to use the following

mount -o barrier=0 /dev/sda /media/sda

With this, I could get into the fstab file. And I found that it did NOT have a hardcoded UUID. The relevant bits:

/dev/sda / ext4 noatime,errors=remount-ro 0 1
/dev/sdb none swap sw 0 0

This should not be causing any errors.

But I put in the UUID and also modified the mount options. It does boot up now; not cleanly, but I can get into it.
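For the record, that edit can be sketched like this. This is a demo on a copy; on the real disk the file is /media/sda/etc/fstab, and the UUID would come from blkid rather than being hardcoded:

```shell
# Demo on a copy; on the real disk the file is /media/sda/etc/fstab and the
# UUID would come from: uuid=$(blkid -s UUID -o value /dev/sda)
uuid=b7ea8dc6-3710-46cd-b634-60cea0efd02b   # the UUID from the boot error
printf '/dev/sda / ext4 noatime,errors=remount-ro 0 1\n' > /tmp/fstab.demo

# Swap the device name for the UUID on the root line
sed -i "s|^/dev/sda |UUID=$uuid |" /tmp/fstab.demo
cat /tmp/fstab.demo
```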

And then I received a second suggestion to update to latest kernel from GRUB2, and my system is toast, and I see no way to recover from a backup at this point.

I should be able to deploy a backup to a clean Linode and have it work. I can start and stop the Linode; this shouldn't be any different.

I am looking at losing years of writing and work because the backups are not clean.

@shelleyp I know this is probably too late, but I believe it's a change in the grub2 config that is causing your problem. On my Fedora-based Linode instances, once I log into the emergency shell (via Lish), I need to do the following to reset the grub2.cfg UUIDs:

mkdir /mnt
mount /dev/sda /mnt
mount -t proc none /mnt/proc
mount --rbind /dev /mnt/dev
mount --rbind /sys /mnt/sys
chroot /mnt /bin/bash
dracut --force --no-hostonly
grub2-mkconfig -o /etc/grub2.cfg

Then exit and reboot the Linode. Once it reboots, run as root:

dracut --force

which fixes any other differences in initramfs. Then reboot again.
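In case it saves someone a failed boot, here's a quick check before that final reboot. This is my own suggestion, shown as a demo on a sample kernel line; on the real system the file is /etc/grub2.cfg:

```shell
# Demo with a sample kernel line; on the real system the file is /etc/grub2.cfg
printf 'linux /vmlinuz root=UUID=b7ea8dc6 ro quiet\n' > /tmp/grub.demo

# Pull out the root= argument grub will hand to the kernel, then compare it
# by eye against what the disk reports: blkid -s UUID -o value /dev/sda
root_arg=$(grep -o 'root=[^ ]*' /tmp/grub.demo | head -n1)
echo "grub will boot with: $root_arg"
```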

The key here for a nice restore experience is to stop GRUB from using UUIDs completely… which is harder than it looks.

I have discovered how to disable uuid in grub2.cfg. Add the line

GRUB_DISABLE_LINUX_UUID=true

to /etc/default/grub, then run (after backing up /etc/grub2.cfg, maybe?)

grub2-mkconfig -o /etc/grub2.cfg

This will create a new grub2.cfg file. Now reboot.
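Before that reboot, it may be worth confirming the regenerated config really dropped the UUIDs. My own check, shown here as a demo on a sample kernel line; on the real system run the grep against /etc/grub2.cfg:

```shell
# Demo with a sample kernel line; on the real system run the grep against
# /etc/grub2.cfg after grub2-mkconfig
printf 'linux /vmlinuz root=/dev/sda ro\n' > /tmp/grub.demo

# Count kernel lines still using root=UUID=...; 0 means the change took
grep -c 'root=UUID=' /tmp/grub.demo || true
```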

