Docker on Linode

We're happy to announce that Docker is now supported on Linode out of the box.

Docker lets you create lightweight containers for your applications, or use images that others have built.

The latest Docker release, 0.7, focused on supporting a wider range of standard kernel configuration options, and we've released a new kernel (3.12.6) tuned to match. From this release onward you can use Docker with the default Linode kernels; there's no longer any need to run a custom kernel via pv-grub.

Installing Docker on your Linode

  1. Make sure you're running the latest kernel (a quick check is sketched just below this list). You may need to reboot to pick it up.
  2. Install Docker by following their excellent documentation: Start Using Docker
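
For example, on a Debian or Ubuntu Linode the whole thing boils down to roughly the following. The install one-liner here is only illustrative; the authoritative steps are whatever their documentation says:

    # 1. Confirm you're on the new Linode kernel (reboot from the Linode Manager if not).
    uname -r

    # 2. Install Docker. At the time of writing, their Ubuntu instructions amount to a
    #    script along these lines (illustrative only -- follow the official docs).
    curl -s https://get.docker.io/ubuntu/ | sudo sh

    # Sanity check: the client and daemon should both report a version.
    sudo docker version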

Try it out by running Hello World, or go all in and set up a Redis service! Check out all of the Docker Examples, or search the Docker Image Index to learn more.
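
A minimal first run might look like this, assuming the daemon is up. The Redis image name below is a placeholder for whatever image you build or pull from the index:

    # Hello World: run a one-off command inside an ubuntu container.
    sudo docker run ubuntu /bin/echo "hello world"

    # A throwaway Redis service: run it detached and publish port 6379 to the host.
    # <your-redis-image> is a placeholder, e.g. an image built from the Redis example's Dockerfile.
    sudo docker run -d -p 6379:6379 <your-redis-image> redis-server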

Enjoy!


Comments (17)

  1. Author Photo

    Awesome! Note that Docker is 64-bit only, I believe. Can someone else confirm?

  2. Author Photo

    That’s correct. From my understanding, they’re currently imposing the 64 bit restriction because it keeps their code cleaner and the benefits of having a 32 bit “host” are minimal.

    Our 32 bit kernel has all the right bits flipped as well, so if they do start supporting 32 bit down the line, it’ll work with our kernel too.
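
    For reference, a quick way to see which architecture a given Linode kernel is running (x86_64 means 64-bit, i686 means 32-bit):

    $ uname -m
    x86_64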

  3. Author Photo

    I’ll be switching my linode instances today.

    Thanks for the awesome work, guys.

  4. Author Photo

    I’m trying to understand why people are pushing to use Docker in the cloud instead of on a dedicated server. In the cloud it adds an additional, unnecessary level of abstraction.

  5. Author Photo

    @Nikola

    Ease of deployment.

  6. Author Photo

    Ease of deployment, manageability, and portability, and it reduces the security surface while letting you use less hardware to support larger VMs. Dedicated VMs are in the past.

  7. Author Photo

    I was asked if there is a way to autoscale Docker containers?

  8. Author Photo

    @nikola, here is my guess:

    If you have a docker instance on your local dev machine, and a docker instance on your dedicated server, you can develop locally & have more confidence it will Just Work ™ when you push to the cloud. Much easier to configure a local docker & a remote docker identically than to configure your dev machine & cloud identically. Can anyone confirm: is that the general idea?

  9. Author Photo

    This is very interesting. With a little glue, you can use this to scale linode(s) up/down the way you can scale aws, except that linode’s VMs don’t suck 🙂

    Maybe that’s for Q2, after the SSD hybrid rolls out in Q1? Pretty exciting stuff, thanks!

  10. Author Photo

    Great stuff, keep it coming!

  11. Author Photo

    @NathanielInKS Check out the Flynn project (https://flynn.io) … As far as I understand, it aims to create an auto-scaling, Heroku-like wrapper around Docker. Still in developer preview at the moment, though, I think.

  12. Author Photo

    If I have a Linode set up “the old way” (Ubuntu 12.04) with pv-grub, can I just switch to the new kernel, check xenify, and reboot?

    Also, does this new kernel have the kernel options for limiting memory on Docker containers (“cgroup_enable=memory swapaccount=1”) enabled?
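
    A quick diagnostic for whether the memory controller is actually available on a running Linode is to look at /proc/cgroups; this is only a sketch:

    # The last column shows whether each controller is enabled. If "memory" is absent
    # it isn't compiled into the kernel at all; if it shows 0 it is compiled in but
    # disabled at boot (e.g. missing cgroup_enable=memory).
    cat /proc/cgroups

    # With cgroup-lite on Ubuntu 12.04, mounted controllers also show up here.
    ls /sys/fs/cgroup/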

  13. Author Photo

    It seems that linux-image-extra-3.12.6-* is not included in the default repos.

  14. Author Photo

    I just checked this with my Ubuntu box. I switched off pv-grub, then proceeded to load the latest daily build of lxc. Note that if you are on Ubuntu 12.04 you might have to do it like this:

    apt-get install --no-install-recommends lxc cgroup-lite lxc-templates

    This is due to a Recommends entry for uidgen, which is unavailable. I’m not sure why it was added, though.

    Anyway, I ran lxc-checkconfig and confirmed that all the necessary support is enabled for lxc to run all by itself. 🙂

    shinji@icarus:~$ uname -a
    Linux icarus.robertpendell.com 3.12.6-x86-linode55 #2 SMP Tue Jan 14 08:41:36 EST 2014 i686 i686 i386 GNU/Linux
    shinji@icarus:~$ sudo lxc-checkconfig
    — Namespaces —
    Namespaces: enabled
    Utsname namespace: enabled
    Ipc namespace: enabled
    Pid namespace: enabled
    User namespace: enabled
    Network namespace: enabled
    Multiple /dev/pts instances: enabled

    — Control groups —
    Cgroup: enabled
    Cgroup clone_children flag: enabled
    Cgroup device: enabled
    Cgroup sched: enabled
    Cgroup cpu account: enabled
    Cgroup memory controller: missing
    Cgroup cpuset: enabled

    — Misc —
    Veth pair device: enabled
    Macvlan: enabled
    Vlan: enabled
    File capabilities: enabled

    Note : Before booting a new kernel, you can check its configuration
    usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

    Note: disregard the memory controller being marked as missing. As far as I know, it requires a kernel startup option to be set, which we can’t do with the Linode kernels. It also isn’t enabled in the kernel config, probably for the same reason. This only prevents you from setting a memory limit on the containers.
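
    For anyone running their own kernel via pv-grub instead, memory limits can usually be switched on with exactly those boot parameters; roughly, on a GRUB-based setup (a sketch only, and it assumes CONFIG_MEMCG is built into that kernel):

    # Add the options to the kernel command line in /etc/default/grub
    # (or to the kernel line in menu.lst for pv-grub), then rebuild and reboot.
    GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
    sudo update-grub && sudo reboot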

  15. Author Photo

    Error: container_delete: Cannot destroy container d5d4d7f442d7: Driver devicemapper failed to remove init filesystem d5d4d7f442d74b17824cbcf1216cb3730053f8cfaefe8a1ea12d328451fc36d7-init: Error running removeDevice
    2014/02/19 10:55:59 Error: failed to remove one or more containers
    How do I fix it?

  16. Author Photo

    Same problem…

    Driver devicemapper failed to remove init filesystem
    Driver devicemapper failed to remove root filesystem

    Not sure how to fix this yet…

  17. Author Photo

    That’s an upstream Docker problem, not something Linode-specific. I’d recommend reaching out to them via their bug tracker.
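
    For anyone hitting it in the meantime, one workaround that is often suggested for this class of devicemapper error, offered here as a sketch rather than an official fix, is to stop the daemon, clear the stale device-mapper entry for the container, and retry the removal:

    # The service name depends on how Docker was installed (docker vs. docker.io).
    sudo service docker stop
    # Find the stale device-mapper entry; Docker's entries start with "docker-".
    sudo dmsetup ls | grep d5d4d7f442d7
    # Remove it using the exact name printed above, then restart and retry.
    sudo dmsetup remove <name-from-previous-command>
    sudo service docker start
    sudo docker rm d5d4d7f442d7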
