Post-Migration Drive Issue

Last night I migrated my system from FMT1 to FMT2. When the system powered back on, I got an fsck error while booting my Linode. I'm given the option to press "I" to ignore the error (see the message below), and when I select that option, my Linode boots up without any issues.

fsck from util-linux 2.20.1
fsck.ext3: No such file or directory while trying to open /dev/disk/by-uuid/8e0cc405-efcc-4dcb-bf88-c1d628259680
Possibly non-existent device?
mountall: fsck / [2135] terminated with status 8
mountall: Unrecoverable fsck error: /
Serious errors were found while checking the disk drive for /.
Press I to ignore, S to skip mounting, or M for manual recovery

The UUID in the error message points to my root filesystem in /etc/fstab:

> cat /etc/fstab
/proc        /proc   proc    defaults   0 0
/sys        /sys    sysfs   defaults   0 0
dev             /dev    tmpfs   rw         0 0 
# /dev/xvda
UUID=8e0cc405-efcc-4dcb-bf88-c1d628259680 / ext3 defaults,relatime 0 1
UUID=cae393fd-f525-48cb-87af-c44972692571 none swap defaults 0 0
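If anyone else hits this, one quick sanity check is to compare the UUID that fstab expects with what's actually on the disk. A sketch, assuming the usual Linode root device /dev/xvda (blkid ships with util-linux):

```shell
# UUIDs that /etc/fstab expects to find:
awk '/^UUID=/ { sub(/^UUID=/, "", $1); print $1 }' /etc/fstab

# Symlinks udev has actually created:
ls -l /dev/disk/by-uuid/

# UUID stored in the filesystem superblock itself:
blkid -s UUID -o value /dev/xvda
```

If the superblock UUID matches fstab but the symlink is missing, the filesystem itself is fine and it's the udev by-uuid machinery that's out of sync.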

(Oh, yeah…I booted the system from the recovery image, and running fsck on the partition doesn't find any issues.)

One thing I found interesting is that the symbolic link for this drive is missing from /dev/disk/by-uuid/.

However, running the "mount" command shows that it's mounted using that symbolic link:

> mount -l
/dev/disk/by-uuid/8e0cc405-efcc-4dcb-bf88-c1d628259680 on / type ext3 (rw,relatime)
/proc on /proc type proc (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
/sys on /sys type sysfs (rw)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
none on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)

8 Replies


You should ask Linode Support about this. If they don't have an answer for you (which they probably will) I'm sure they'd want to know about this just in case others encounter the same problem.

I don't know if this is the same problem that you're encountering, but there have been a few issues posted in this forum about 32 bit kernels running into some issues after migrating. Booting with a 64 bit kernel seemed to correct the issue.

Which distro/version are you running? It probably shouldn't be referring to filesystems by UUID, but rather by actual device (/dev/xvd*)…


> Which distro/version are you running? It probably shouldn't be referring to filesystems by UUID, but rather by actual device (/dev/xvd*)…

Heh…forgot the important things :-)

I'm running Ubuntu 12.04. I'm booting off of the "Latest 64-bit Kernel", but the userland is 32 bit.

This might be a bit of a newbie question, but why wouldn't I want to refer to the filesystems by their UUID? The whole point of using the UUID is that even if the device names change, the filesystem that gets mounted is always the same. Is something different when running on a Linode (Xen)?

I'll submit a ticket with Linode (just in case it isn't something I did :-) )

On a Linode, the UUID will change under various circumstances (e.g. restoring from a backup creates a new filesystem); however, the actual device name will not change from what is set in the configuration profile. It is also possible for the UUID to not be unique, if you clone a filesystem and then attach both the clone and the original to the same Linode. (I don't believe the cloning operation generates a new UUID, but I may be wrong.)
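Worth noting: if a clone does leave you with two filesystems sharing a UUID, e2fsprogs can assign a fresh one. A sketch, with /dev/xvdb standing in for the cloned disk (run it while the clone is unmounted, and update fstab afterwards if it references the old UUID):

```shell
# Give the cloned filesystem a brand-new random UUID
# (/dev/xvdb here is a hypothetical device name for the clone)
tune2fs -U random /dev/xvdb

# Confirm the change took
blkid -s UUID -o value /dev/xvdb
```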

I have a Ubuntu 12.04 deploy from a month or two back, and it does use device names (/dev/xvd*) instead of UUIDs… it is a 64-bit userland, tho, so it isn't the exact same image you're running.

I don't think I've ever had a Linode configured by UUID either (mine are all straight 32-bit). fstab always uses the xvd* device names.

While UUID mounts can make sense on a physical machine, where they let the configuration stay independent of which port a particular disk is plugged into, I think they're less useful (and can even be problematic) on a Linode, where the xvd* devices are themselves logical and easily configured.

That is, using xvda (for example) for the root device delegates control over what that device is to the Linode Manager configuration, so you can easily pick and choose (or change temporarily) which disk image it maps to. I know I switch out various images for testing - sometimes just user partitions, but sometimes also the boot disk to try out a different distribution. Using UUIDs would break in such cases, at least when changing non-boot devices.

– David

Thanks for the advice. I switched the fstab to use the hard-coded /dev/xvd* paths, and the system boots without any issues. I appreciate the help!
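For anyone else making the same change, the device-name version of the fstab above would look roughly like this (the swap device name is an assumption; use whatever your configuration profile shows, commonly /dev/xvdb):

```
/proc      /proc   proc    defaults            0 0
/sys       /sys    sysfs   defaults            0 0
dev        /dev    tmpfs   rw                  0 0
/dev/xvda  /       ext3    defaults,relatime   0 1
/dev/xvdb  none    swap    sw                  0 0
```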

For the good of the cause, I also had drive migration issues, but they were different:

While booting, the kernel started printing the following messages over and over on my root partition xvda:

xvda: rw=0, want=2585907792, limit=8396800
attempt to access beyond end of device 
Buffer I/O error on device xvda, logical block 1292953895
attempt to access beyond end of device 

The solution, after a few quick rounds with Linode support, was to boot into Finnix and run fsck on my root partition. fsck fixed a bunch of inode issues, and then the machine booted cleanly. I also moved to an older 32-bit kernel, but I don't believe that made a difference.
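For reference, the check itself is just e2fsck run against the unmounted root device from Finnix. A sketch, assuming the root disk is /dev/xvda (the name depends on your configuration profile):

```shell
# With the filesystem NOT mounted (i.e. booted into Finnix):
fsck.ext3 -f /dev/xvda       # force a full check, prompt before each fix

# or non-interactively, applying only the safe automatic repairs:
fsck.ext3 -f -p /dev/xvda
```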

This was the first time (in my memory) running fsck on this machine image in over 5 years with Linode. Impressive.

This happened to mine as well; I had them manually copy the disk image from the old host.

