Quota management of LXC containers
I'm setting up some LXC containers on a Linode, all running Ubuntu 14.04. I'm very keen on limiting the disk space each container can consume. I'm also keen on limiting RAM, CPU, and network use, but those can come later; disk space is the priority for now.
According to the lxc-create man pages, I could try using lvm as a backing store. This would apparently allow the host to limit disk space by using logical volumes. However, when I tried that on the host Linode, I got the following errors:
root@mylinode:~# lxc-create -B lvm -n test --template download -- -d ubuntu -r trusty -a amd64
File descriptor 3 (/var/lib/lxc/test/partial) leaked on lvcreate invocation. Parent PID 31749: lxc-create
Volume group "lxc" not found
lxc-create: bdev.c: lvm_create: 1145 Error creating new lvm blockdev /dev/lxc/test size 1073741824 bytes
lxc-create: lxccontainer.c: do_bdev_create: 932 Failed to create backing store type lvm
lxc-create: lxccontainer.c: do_lxcapi_create: 1408 Error creating backing store type lvm for test
lxc-create: lxc_create.c: main: 274 Error creating container test
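Reading the man page again, it looks like lxc-create defaults to a volume group named "lxc" when -B lvm is used (unless --vgname is given), which would explain the Volume group "lxc" not found error. Presumably something like this is needed on the host first; this is just my untested reading of the man pages, and /dev/sdX1 is a placeholder for a real spare partition:

pvcreate /dev/sdX1    # mark the partition as an LVM physical volume
vgcreate lxc /dev/sdX1    # create the volume group lxc-create looks for by default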
After reading the lvm and vgcreate man pages I realised I didn't know enough about setting up logical volumes, so I tried implementing quotas within the containers themselves. Unfortunately I haven't got very far with that either. In a container with the plain old default directory backing store, and with the quota and quotatool packages installed, I edited /etc/fstab so that the entire file now contains only:
/dev/root / ext4 errors=remount-ro,usrquota,grpquota 0 1
After running mount -o remount /, the output from mount begins:
/dev/root on / type ext4 (rw,errors=remount-ro,usrquota,grpquota)
There are also a bunch of lxcfs entries mounted in /proc and /sys.
But the output from quotacheck doesn't look great:
root@test:~# quotacheck -cug /
quotacheck: Cannot stat() mounted device /dev/root: No such file or directory
quotacheck: Mountpoint (or device) / not found or has no quota enabled.
quotacheck: Cannot find filesystem to check or filesystem not mounted with quota option.
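For reference, my understanding is that on an ordinary host (with the filesystem mounted usrquota,grpquota) the normal sequence would be roughly the following, and it's the very first step that fails here:

quotacheck -cug /    # scan the filesystem and create aquota.user / aquota.group at its root
quotaon -v /    # switch quota enforcement on
edquota -u someuser    # set per-user soft/hard limits (someuser is just an example)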
df -H reveals:
Filesystem Size Used Avail Use% Mounted on
/dev/root 26G 1.7G 24G 7% /
cgroup 13k 0 13k 0% /sys/fs/cgroup
none 103k 4.1k 99k 4% /dev
none 4.1k 0 4.1k 0% /sys/fs/cgroup/cgmanager
none 104M 66k 104M 1% /run
none 5.3M 0 5.3M 0% /run/lock
none 520M 0 520M 0% /run/shm
none 105M 0 105M 0% /run/user
However, there is no /dev/root, /dev/sda, or anything similar in the container:
root@test:~# ls /dev
console kmsg loop3 lxc ppp ram1 ram14 ram4 ram9 stdout tty4
core log loop4 mem ptmx ram10 ram15 ram5 random tty urandom
fd loop0 loop5 net pts ram11 ram16 ram6 shm tty1 zero
full loop1 loop6 null ram ram12 ram2 ram7 stderr tty2
kmem loop2 loop7 port ram0 ram13 ram3 ram8 stdin tty3
To be frank, I'm unsure what's going on here.
Should I go back to trying LVM on the host? If so, I've found what looks like a good tutorial, and I'll work through it until I understand things a bit better.
Or should I keep trying with quotas within the containers? If so… why, and how? I'm struggling to find much more about this.
Trawling around the web reveals several people grappling with similar issues, and I can't see many solutions out there. A few pages pop up over and over in my Google searches, but they haven't helped me much: they recommend symlinking /dev/xvda, but that device doesn't exist on my host Linode or in the containers.
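For what it's worth, my understanding of that suggestion is something like the sketch below, which obviously can't work here without the device; the device name comes from those pages, not from my system:

ln -s /dev/xvda /dev/root    # point /dev/root at the real backing device
quotacheck -cug /    # quotacheck can then stat the device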
Any words of wisdom would be gratefully accepted. Many thanks.
1 Reply
1. Provision a new Linode running Ubuntu 14.04.
2. Create a small deployment disk; I've used 2048MB. (A smaller disk may mean downloads fail during container creation, which shows up as LXC errors.)
3. Create a swap disk; I've used 128MB.
4. Create another disk from all the remaining available disk space. Call it something like "LVM disk".
5. Edit the "configuration profile" and assign the disks; I put the "LVM disk" on /dev/sdc.
6. Boot and log in.
7. add-apt-repository ppa:ubuntu-lxc/stable
8. apt-get update
9. apt-get install -y lxc lvm2 quota
10. Create the partition table on /dev/sdc: cfdisk /dev/sdc (mostly just accept the defaults).
11. You should now have a new device, /dev/sdc1.
12. pvcreate -v /dev/sdc1
13. vgcreate lxc-volume-group /dev/sdc1 (choose a different name from "lxc-volume-group" if you prefer; a quick sanity check follows this list).
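Before creating any containers, it's worth checking that LVM sees everything (assuming the names used above):

pvs    # should list /dev/sdc1 as a physical volume
vgs    # should show lxc-volume-group and its free space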
Now, the moment of truth:
lxc-create --vgname=lxc-volume-group -B lvm -t ubuntu -n CONTAINER-NAME --fssize 3733M
The --fssize argument sets the maximum disk space the container's root filesystem will be able to consume.
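If a container turns out to need more space later, my understanding is the logical volume can be grown afterwards. An untested sketch, assuming the default ext4 filesystem and a stopped container:

lvextend -L +1G /dev/lxc-volume-group/CONTAINER-NAME    # grow the LV by 1GB
e2fsck -f /dev/lxc-volume-group/CONTAINER-NAME    # check the filesystem first
resize2fs /dev/lxc-volume-group/CONTAINER-NAME    # grow ext4 to fill the LV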
Check it out:
root@host:~# lxc-start -n CONTAINER-NAME
root@host:~# lxc-attach -n CONTAINER-NAME
root@CONTAINER-NAME:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/lxc-volume-group/CONTAINER-NAME 3.6G 346M 3.1G 11% /
cgroup 12K 0 12K 0% /sys/fs/cgroup
none 100K 4.0K 96K 4% /dev
none 4.0K 0 4.0K 0% /sys/fs/cgroup/cgmanager
none 100M 60K 100M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 496M 0 496M 0% /run/shm
none 100M 0 100M 0% /run/user
Looks like it's worked! I haven't stress-tested this yet, but that output is exactly what I was hoping to see from df.
Next steps: see what happens when the disk fills up, and implement capping on network traffic, RAM usage, etc.
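For the RAM side, I believe LXC 1.x exposes cgroup limits through the container config, so something like this untested one-liner should be a starting point (512M is an arbitrary example):

echo "lxc.cgroup.memory.limit_in_bytes = 512M" >> /var/lib/lxc/CONTAINER-NAME/config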