We’re pleased to announce our newest API call: linode.clone(). This API call allows you to easily clone an existing Linode to a new instance in the datacenter of your choosing. It will clone all configuration profiles and disk images present on the source Linode.
The method has a limit of five concurrent clones from the same source Linode in order to protect Linode infrastructure. If you attempt to execute another clone while at the concurrency limit, the API will return a “validation” error. If you wish to execute a large-scale clone process, the original Linode can be cloned five times; once those clones finish, each of the cloned Linodes can then be cloned five times concurrently, and so on. This allows you to quickly scale your infrastructure from a single Linode to hundreds.
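The fan-out described above grows geometrically. A rough sketch of the arithmetic in plain Python (no API calls involved):

```python
def fleet_size(rounds, fan_out=5):
    """Total Linodes after `rounds` of cloning, where every existing
    Linode is cloned `fan_out` times per round (the concurrency limit)."""
    total = 1  # start from a single source Linode
    for _ in range(rounds):
        total += total * fan_out  # each existing Linode spawns fan_out clones
    return total

print(fleet_size(1))  # 6
print(fleet_size(2))  # 36
print(fleet_size(3))  # 216 -- hundreds after only three rounds
```

So three rounds of waiting for clones to finish already puts you past two hundred instances.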
We hope you enjoy this new API call, and that it’s useful for your future deployments in the Linode Cloud.
We’ve been looking for something like this for a while
Are there any prerequisites for this API call?
Does the Linode need to be shut down, or will it be shut down, during the clone process?
How long would it take to clone? Is it the same as if it’s done from the console?
AWESOME! Been waiting for this for so long, at least now we can script our scaling activities!
Can we get one for checking the amount of bandwidth/transfer used for the current month next? (To compare against the existing call for finding the quota.)
> Does the Linode need to be shut down, or will it be shut down, during the clone process?
“It is recommended that the source Linode be powered down during the clone.”
The source Linode doesn’t have to be powered down, but it is recommended to ensure that a consistent copy of your data is cloned.
> How long would it take to clone? Is it the same as if it’s done from the console?
1-2 minutes per GB of data within the same datacenter as the source Linode (the Linode you’re cloning from), or 5-10 minutes per GB of data if migrating to a different datacenter.
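Using those per-GB rates, a back-of-the-envelope estimator (the rates are the ones quoted above; actual times will vary with load):

```python
def clone_minutes(gb, cross_datacenter=False):
    """Estimated clone time range in minutes, using the quoted rates:
    1-2 min/GB within the same datacenter, 5-10 min/GB across datacenters."""
    low, high = (5, 10) if cross_datacenter else (1, 2)
    return gb * low, gb * high

print(clone_minutes(20))        # (20, 40)   -- same datacenter
print(clone_minutes(20, True))  # (100, 200) -- cross-datacenter migration
```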
I assume that you should not .clone() a Linode that has static networking set up, since the static config will render the new node unreachable… or am I missing something?
You could still clone the Linode, but you’d need to use LISH to log in to the new Linode and adjust the networking configuration to the new IP address(es).
There isn’t currently a way to see how much transfer a particular Linode has used, but account.info() shows the transfer pool size and usage:
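A minimal sketch of deriving the remaining pool from that response, assuming the legacy API’s TRANSFER_POOL/TRANSFER_USED fields (the sample response below is invented for illustration, not real data):

```python
# Hypothetical shape of an account.info() response body (values invented).
sample = {
    "ACTION": "account.info",
    "DATA": {"TRANSFER_POOL": 2000, "TRANSFER_USED": 350},  # GB
    "ERRORARRAY": [],
}

def transfer_remaining(response):
    """GB left in the account-wide transfer pool for the current month."""
    data = response["DATA"]
    return data["TRANSFER_POOL"] - data["TRANSFER_USED"]

print(transfer_remaining(sample))  # 1650
```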
How would you suggest auto scaling in an HA environment where you want to clone and configure network addresses all in one go?
The limit of five clones per source Linode seems very arbitrary: you’re just offloading extra work onto the script doing the cloning. You might think we’re going to keep track of the clone count on our own, but the truth is we’re just going to iterate over existing Linodes until the clone() call returns without an error.
What difference could it possibly make on your end whether we clone a single Linode 10 times, versus cloning an initial Linode 5 times and then cloning the first clone 5 additional times? There should be no difference in your ability to “protect Linode infrastructure”, surely?
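The “iterate until clone() succeeds” strategy this comment describes can be sketched as follows. The clone call and its validation error are stubbed out here; a real script would call linode.clone() and catch the API’s actual validation error instead:

```python
class ValidationError(Exception):
    """Stand-in for the API's 'validation' error at the concurrency limit."""

def make_stub_clone(limit=5):
    """Build a fake clone() that allows `limit` in-flight clones per source."""
    in_flight = {}
    def clone(source_id):
        if in_flight.get(source_id, 0) >= limit:
            raise ValidationError(source_id)
        in_flight[source_id] = in_flight.get(source_id, 0) + 1
        return f"clone-of-{source_id}"
    return clone

def start_clones(sources, wanted, clone):
    """Round-robin over candidate sources, skipping any at the limit."""
    started = []
    for source in sources * wanted:  # cycle through the candidate sources
        if len(started) == wanted:
            break
        try:
            started.append(clone(source))
        except ValidationError:
            continue  # this source is at its limit; try the next one
    return started

clone = make_stub_clone()
print(len(start_clones(["lin-1", "lin-2"], 8, clone)))  # 8
```

With two sources and a per-source limit of five, up to ten clones can be in flight, so the request for eight succeeds without the caller ever counting clones itself.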
Xof brought up a good question about the static networking setup leaving the new node unreachable. May I know what the best solution for that is on Linode? Doug’s suggestion to log in using LISH and change it manually doesn’t make sense to me, as it defeats the purpose of autoscaling.