Planning and managing your cloud ecosystem and environments is vital for reducing production downtime and keeping your workload running. In the "Managing your cloud ecosystems" blog series, we cover different strategies for ensuring that your setup functions smoothly with minimal downtime.
To start things off, the first topic in this blog series is ensuring workload continuity during worker node upgrades.
What are worker node upgrades?
Worker node upgrades apply important security updates and patches and should be completed regularly. For more information on the types of worker node upgrades, see Updating VPC worker nodes and Updating Classic worker nodes in the IBM Cloud Kubernetes Service documentation.
During an upgrade, some of your worker nodes may become unavailable. It's important to make sure your cluster has enough capacity to continue running your workload throughout the upgrade process. Building a pipeline to update your worker nodes without causing application downtime lets you easily apply worker node upgrades on a regular basis.
For classic worker nodes
Create a Kubernetes ConfigMap that defines the maximum number of worker nodes that can be unavailable at a time, including during an upgrade. The maximum value is specified as a percentage. You can also use labels to apply different rules to different worker nodes. For full instructions, see Updating Classic worker nodes in the CLI with a ConfigMap in the Kubernetes Service documentation. If you choose not to create a ConfigMap, the default maximum number of worker nodes that can become unavailable is 20%.
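As a rough sketch, a ConfigMap like the following caps how many workers can be unavailable at once. The ConfigMap name, namespace, key names and the 30% value here are illustrative assumptions; check the linked documentation for the exact schema your cluster version expects.

```shell
# Illustrative sketch only -- the ConfigMap name, namespace and keys follow
# the pattern described in the IBM Cloud docs; verify the exact schema there.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: ibm-cluster-update-configuration   # assumed name; see the docs
  namespace: kube-system
data:
  default: "30%"   # allow at most 30% of worker nodes to be unavailable
EOF
```

With a rule like this in place, an upgrade rolls through the pool in batches no larger than the configured percentage instead of the 20% default.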
If you need your total number of worker nodes to stay up and running, use the
ibmcloud ks worker-pool resize command to temporarily add extra worker nodes to your cluster for the duration of the upgrade process. When the upgrade is complete, use the same command to remove the additional worker nodes and return your worker pool to its previous size.
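For example, the temporary resize might look like the following. The cluster name, worker pool name and sizes are placeholders; substitute your own values.

```shell
# Before the upgrade: grow the pool (e.g., from 3 to 5 workers per zone)
# so capacity stays available while some workers are being upgraded.
ibmcloud ks worker-pool resize --cluster my-cluster --worker-pool default --size-per-zone 5

# After the upgrade completes: return the pool to its previous size.
ibmcloud ks worker-pool resize --cluster my-cluster --worker-pool default --size-per-zone 3
```

Note that resizing applies per zone, so a multizone pool gains (and later sheds) that many workers in each zone.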
For VPC worker nodes
VPC worker nodes are upgraded by removing the old worker node and provisioning a new worker node that runs at the new version. You can upgrade multiple worker nodes at the same time, but if you upgrade several at once, they all become unavailable at the same time. To make sure you have enough capacity to run your workload during the upgrade, you can either resize your worker pools to temporarily add extra worker nodes (similar to the process described for classic worker nodes) or plan to upgrade your worker nodes one at a time.
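The one-at-a-time approach can be sketched as follows. The cluster name and worker ID are hypothetical placeholders; list your workers first to get the real IDs.

```shell
# List the workers in the cluster to find their IDs and current versions.
ibmcloud ks worker ls --cluster my-vpc-cluster

# Replace a single worker with a new one at the updated version. Wait for
# the replacement to reach the Normal state before replacing the next worker.
ibmcloud ks worker replace --cluster my-vpc-cluster --worker <worker_ID> --update
```

Repeating this worker by worker keeps only one node out of service at any point in the upgrade.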
Whether you choose to implement a ConfigMap, resize your worker pool or upgrade components one by one, creating a workload continuity plan before you upgrade your worker nodes can help you build a more streamlined, efficient setup with limited downtime.
Now that you have a plan to prevent disruptions during worker node upgrades, keep an eye out for the next blog in our series, which will discuss how, when and why to apply major, minor or patch upgrades to your clusters and worker nodes.