Managed Kubernetes Service
Overview
The Managed Kubernetes Service provides a streamlined way to run and manage server workloads. Kubernetes has become the industry standard, offering a unified and accessible API configured primarily with YAML, so teams can quickly understand it and keep infrastructure management consistent.
Kubernetes builds on robust software design patterns: its reconciliation loop continuously drives the cluster back to the desired state, enabling recovery from almost any failure, and it scales smoothly across large fleets of servers. This avoids the complex and outdated APIs found in traditional virtualization platforms, and the managed service removes the need to develop custom tooling or modify source code, saving valuable time and effort.
Deployment Details
The managed Kubernetes service deploys a standard Kubernetes cluster using the Cluster API, with Kamaji as the control-plane provider and KubeVirt as the infrastructure provider. This ensures a consistent and reliable setup for workloads.
Within this cluster, users can take advantage of LoadBalancer services and easily provision persistent volumes as needed. The control plane runs in containers, while the worker nodes are deployed as virtual machines, all managed seamlessly by the application.
- Docs: https://cluster-api.sigs.k8s.io/
- GitHub: https://github.com/clastix/kamaji
- GitHub: https://github.com/kubernetes-sigs/cluster-api-provider-kubevirt
- GitHub: https://github.com/kubevirt/csi-driver
How-Tos
How to access the deployed cluster:
```bash
kubectl get secret -n <namespace> kubernetes-<clusterName>-admin-kubeconfig -o go-template='{{ printf "%s\n" (index .data "super-admin.conf" | base64decode) }}' > test
```
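This writes the admin kubeconfig to a local file (`test` in this example); point kubectl at it with the `--kubeconfig` flag or the `KUBECONFIG` environment variable to work with the deployed cluster.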
Parameters
Common parameters
Name | Description | Value |
---|---|---|
host | The hostname used to access the Kubernetes cluster externally (defaults to using the cluster name as a subdomain for the tenant host). | "" |
controlPlane.replicas | Number of replicas for Kubernetes control-plane components | 2 |
storageClass | StorageClass used to store user data | replicated |
nodeGroups | nodeGroups configuration | {} |
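For orientation, the dotted parameter names above can be expressed as nested keys in a YAML values file. A minimal sketch using only the parameters from this table (values shown are the documented defaults unless noted in a comment):

```yaml
host: ""                   # empty means the cluster name is used as a subdomain of the tenant host
controlPlane:
  replicas: 2              # number of replicas for control-plane components
storageClass: replicated   # StorageClass used to store user data
nodeGroups: {}             # worker node groups; see the sketch after the instance type table below
```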
Cluster Addons
Name | Description | Value |
---|---|---|
addons.certManager.enabled | Enables the cert-manager | false |
addons.certManager.valuesOverride | Custom values to override | {} |
addons.cilium.valuesOverride | Custom values to override | {} |
addons.gatewayAPI.enabled | Enables the Gateway API | false |
addons.ingressNginx.enabled | Enable the Ingress-NGINX controller (expects nodes with the 'ingress-nginx' role) | false |
addons.ingressNginx.valuesOverride | Custom values to override | {} |
addons.ingressNginx.hosts | List of domain names that the parent cluster should pass through to this cluster | [] |
addons.gpuOperator.enabled | Enables the gpu-operator | false |
addons.gpuOperator.valuesOverride | Custom values to override | {} |
addons.fluxcd.enabled | Enables Flux CD | false |
addons.fluxcd.valuesOverride | Custom values to override | {} |
addons.monitoringAgents.enabled | Enables monitoring agents (fluent-bit and vmagent) for shipping logs and metrics; they send to the tenant's monitoring storage if tenant monitoring is enabled, otherwise to the root storage | false |
addons.monitoringAgents.valuesOverride | Custom values to override | {} |
addons.verticalPodAutoscaler.valuesOverride | Custom values to override | {} |
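To illustrate how the addon toggles combine, here is a hedged sketch that enables cert-manager, the Ingress-NGINX controller, and the monitoring agents. Only keys from the table are used; `example.org` is a placeholder domain, and anything placed inside a `valuesOverride` map is passed to the corresponding addon's chart, so the overrides are left empty here.

```yaml
addons:
  certManager:
    enabled: true
    valuesOverride: {}      # custom values for the cert-manager chart, if needed
  ingressNginx:
    enabled: true           # expects a node group carrying the 'ingress-nginx' role
    hosts:
      - example.org         # placeholder: domain passed through from the parent cluster
    valuesOverride: {}
  monitoringAgents:
    enabled: true           # logs/metrics go to tenant storage if tenant monitoring is enabled, else to root storage
    valuesOverride: {}
```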
Kubernetes control plane configuration
Name | Description | Value |
---|---|---|
controlPlane.apiServer.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | small |
controlPlane.apiServer.resources | Explicit resource configuration for the API server container (takes precedence over resourcesPreset) | {} |
controlPlane.controllerManager.resources | Explicit resource configuration for the controller manager container (takes precedence over resourcesPreset) | {} |
controlPlane.controllerManager.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | micro |
controlPlane.scheduler.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | micro |
controlPlane.scheduler.resources | Explicit resource configuration for the scheduler container (takes precedence over resourcesPreset) | {} |
controlPlane.konnectivity.server.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if resources is set (resources is recommended for production). | micro |
controlPlane.konnectivity.server.resources | Explicit resource configuration for the konnectivity server container (takes precedence over resourcesPreset) | {} |
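As an example of how presets and explicit settings interact, the sketch below keeps the default presets for most components but pins explicit resources on the API server; per the table, `resourcesPreset` is ignored once `resources` is set. The request/limit figures are arbitrary, and the assumption that `resources` follows the standard Kubernetes requests/limits layout is ours.

```yaml
controlPlane:
  apiServer:
    resourcesPreset: small   # ignored here because explicit resources are set below
    resources:               # assumed to use the usual Kubernetes requests/limits shape
      requests:
        cpu: 500m
        memory: 1Gi
      limits:
        cpu: "1"
        memory: 2Gi
  controllerManager:
    resourcesPreset: micro
  scheduler:
    resourcesPreset: micro
  konnectivity:
    server:
      resourcesPreset: micro
```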
U Series
The U Series is quite neutral and provides resources for general purpose applications.
U is the abbreviation for “Universal”, reflecting the series’ general suitability for most workloads.
VMs of these instance types share physical CPU cores with other VMs on a time-slice basis.
U Series Characteristics
Specific characteristics of this series are:
- Burstable CPU performance - The workload has a baseline compute performance but is permitted to burst beyond this baseline, if excess compute resources are available.
- vCPU-To-Memory Ratio (1:4) - A vCPU-to-Memory ratio of 1:4, for less noise per node.
O Series
The O Series is based on the U Series, with the only difference being that memory is overcommitted.
O is the abbreviation for “Overcommitted”.
O Series Characteristics
Specific characteristics of this series are:
- Burstable CPU performance - The workload has a baseline compute performance but is permitted to burst beyond this baseline, if excess compute resources are available.
- Overcommitted Memory - Memory is over-committed in order to achieve a higher workload density.
- vCPU-To-Memory Ratio (1:4) - A vCPU-to-Memory ratio of 1:4, for less noise per node.
CX Series
The CX Series provides exclusive compute resources for compute intensive applications.
CX is the abbreviation of “Compute Exclusive”.
The exclusive resources are given to the compute threads of the VM. In order to ensure this, some additional cores (depending on the number of disks and NICs) will be requested to offload the IO threading from cores dedicated to the workload. In addition, in this series, the NUMA topology of the used cores is provided to the VM.
CX Series Characteristics
Specific characteristics of this series are:
- Hugepages - Hugepages are used in order to improve memory performance.
- Dedicated CPU - Physical cores are exclusively assigned to every vCPU in order to provide fixed and high compute guarantees to the workload.
- Isolated emulator threads - Hypervisor emulator threads are isolated from the vCPUs in order to reduce emulation-related impact on the workload.
- vNUMA - Physical NUMA topology is reflected in the guest in order to optimize guest sided cache utilization.
- vCPU-To-Memory Ratio (1:2) - A vCPU-to-Memory ratio of 1:2.
M Series
The M Series provides resources for memory intensive applications.
M is the abbreviation of “Memory”.
M Series Characteristics
Specific characteristics of this series are:
- Hugepages - Hugepages are used in order to improve memory performance.
- Burstable CPU performance - The workload has a baseline compute performance but is permitted to burst beyond this baseline, if excess compute resources are available.
- vCPU-To-Memory Ratio (1:8) - A vCPU-to-Memory ratio of 1:8, for much less noise per node.
RT Series
The RT Series provides resources for realtime applications, like Oslat.
RT is the abbreviation for “realtime”.
This series of instance types requires nodes capable of running realtime applications.
RT Series Characteristics
Specific characteristics of this series are:
- Hugepages - Hugepages are used in order to improve memory performance.
- Dedicated CPU - Physical cores are exclusively assigned to every vCPU in order to provide fixed and high compute guarantees to the workload.
- Isolated emulator threads - Hypervisor emulator threads are isolated from the vCPUs in order to reduce emulation-related impact on the workload.
- vCPU-To-Memory Ratio (1:4) - A vCPU-to-Memory ratio of 1:4 starting from the medium size.
Resources
The following instancetype resources are provided by Cozystack:
Name | vCPUs | Memory |
---|---|---|
cx1.2xlarge | 8 | 16Gi |
cx1.4xlarge | 16 | 32Gi |
cx1.8xlarge | 32 | 64Gi |
cx1.large | 2 | 4Gi |
cx1.medium | 1 | 2Gi |
cx1.xlarge | 4 | 8Gi |
gn1.2xlarge | 8 | 32Gi |
gn1.4xlarge | 16 | 64Gi |
gn1.8xlarge | 32 | 128Gi |
gn1.xlarge | 4 | 16Gi |
m1.2xlarge | 8 | 64Gi |
m1.4xlarge | 16 | 128Gi |
m1.8xlarge | 32 | 256Gi |
m1.large | 2 | 16Gi |
m1.xlarge | 4 | 32Gi |
n1.2xlarge | 16 | 32Gi |
n1.4xlarge | 32 | 64Gi |
n1.8xlarge | 64 | 128Gi |
n1.large | 4 | 8Gi |
n1.medium | 4 | 4Gi |
n1.xlarge | 8 | 16Gi |
o1.2xlarge | 8 | 32Gi |
o1.4xlarge | 16 | 64Gi |
o1.8xlarge | 32 | 128Gi |
o1.large | 2 | 8Gi |
o1.medium | 1 | 4Gi |
o1.micro | 1 | 1Gi |
o1.nano | 1 | 512Mi |
o1.small | 1 | 2Gi |
o1.xlarge | 4 | 16Gi |
rt1.2xlarge | 8 | 32Gi |
rt1.4xlarge | 16 | 64Gi |
rt1.8xlarge | 32 | 128Gi |
rt1.large | 2 | 8Gi |
rt1.medium | 1 | 4Gi |
rt1.micro | 1 | 1Gi |
rt1.small | 1 | 2Gi |
rt1.xlarge | 4 | 16Gi |
u1.2xlarge | 8 | 32Gi |
u1.2xmedium | 2 | 4Gi |
u1.4xlarge | 16 | 64Gi |
u1.8xlarge | 32 | 128Gi |
u1.large | 2 | 8Gi |
u1.medium | 1 | 4Gi |
u1.micro | 1 | 1Gi |
u1.nano | 1 | 512Mi |
u1.small | 1 | 2Gi |
u1.xlarge | 4 | 16Gi |
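To connect these instance types back to the nodeGroups parameter from the common parameters table, here is a hypothetical sketch of a single general-purpose worker group. Only the instance type name (`u1.medium`) and the 'ingress-nginx' role come from this page; the field names inside the group (`instanceType`, `replicas`, `roles`) and the group name are illustrative assumptions, so check the chart's values schema for the actual structure.

```yaml
nodeGroups:
  md0:                       # hypothetical group name
    instanceType: u1.medium  # 1 vCPU / 4Gi, burstable "Universal" series (see table above)
    replicas: 3              # assumed field: number of worker VMs in this group
    roles:
      - ingress-nginx        # assumed field: role expected by addons.ingressNginx
```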