What is Kubernetes?
Kubernetes (K8s) is an open source container orchestration system. Open source software is software whose source code is publicly accessible, allowing multiple contributors to modify and redistribute it; many popular applications and even operating systems, such as Mozilla Firefox and Linux, are built this way. A container orchestration system automates the deployment, management, scaling, and networking of containers, and is commonly used to run enterprise applications. Containers are a way to package code together with the runtime and system tools/libraries it needs. Enterprise application software (EAS) is software designed for use by an entire organization (a school, government, company, etc.) rather than by a few individuals.
A few widely known enterprise applications are Zoom, Microsoft Teams, and the Google Workspace services. The advantages of enterprise applications include scalability, high performance, and good error handling. Kubernetes groups an application's containers into small, logical units so they are easier to manage and deploy. Through K8s, these containers can be updated continually without modifying the originals. This approach is called immutable infrastructure: a container or server is never modified in place; instead it is “cloned,” the changes are made to the clone, and the clone then replaces the original container or server.
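To make the immutable-infrastructure idea concrete, here is a minimal sketch using the official Kubernetes Python client (one possible tool; the Deployment name hello-web, the nginx images, and the default namespace are illustrative assumptions, not taken from this article; Deployments are defined in the next section). Changing the image does not edit the running containers: Kubernetes starts replacement pods from the new image and retires the old ones.

```python
from kubernetes import client, config

# Assumes a cluster reachable through a local kubeconfig (e.g. from minikube).
config.load_kube_config()
apps = client.AppsV1Api()

# Point the hypothetical "hello-web" Deployment at a new image. Kubernetes
# does not modify the running containers in place; it creates new pods from
# the new image and then replaces the old pods -- the "clone and replace"
# behaviour described above.
apps.patch_namespaced_deployment(
    name="hello-web",
    namespace="default",
    body={
        "spec": {
            "template": {
                "spec": {
                    "containers": [{"name": "hello-web", "image": "nginx:1.26"}]
                }
            }
        }
    },
)
```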
Basic Kubernetes Terms and Definitions
Here are some basic terms and definitions for Kubernetes; several of them appear together in the code sketch after this list:
- Pod – the smallest deployable unit in Kubernetes: one or more containers that run together and are managed as a single unit through the Kubernetes configuration
- Cluster – a set of node machines for running containerized applications
- Node – a virtual or physical machine, managed by the control plane, that contains the services necessary to run pods
- Control plane – maintains the nodes and pods and keeps the cluster in its desired state
- Volume – an abstraction for storage that the containers in a pod can access
- Namespace – a section of the cluster dedicated to a certain purpose (a team, a project, an environment, etc.)
- Controller – a control loop that takes care of routine tasks to keep the observed state of the cluster matching the desired state
- ReplicaSet – a controller that makes sure a specified number of pod replicas is running at any given time
- Deployment – provides declarative updates for pods and ReplicaSets
- Job – a temporary task: creates pods, runs the task to completion, then cleans the pods up
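Several of these objects fit together in just a few lines. The sketch below is one possible illustration using the official Kubernetes Python client; the name hello-web, the nginx:1.25 image, and the default namespace are hypothetical choices for the example. It creates a Deployment whose ReplicaSet keeps three replicas of a one-container pod running.

```python
from kubernetes import client, config

# Assumes a local kubeconfig (e.g. created by `minikube start`).
config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the Deployment's ReplicaSet keeps 3 pod replicas running
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        # The pod template: each pod runs a single nginx container.
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="hello-web",
                        image="nginx:1.25",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

# Create the Deployment in the "default" namespace.
apps.create_namespaced_deployment(namespace="default", body=deployment)
```

The Deployment owns the ReplicaSet, and the ReplicaSet owns the pods, which is why updating or scaling the Deployment is enough to change every pod it manages.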
Why use Kubernetes?
Kubernetes allows you to scale your application from the user interface or the command line (horizontal scaling). It also handles failures quickly: it automates rollouts and rollbacks so that all instances of the application are never down at the same time, restarts failed containers, and replaces nodes when they die. Storage orchestration is another well-known advantage: local storage, public cloud storage, or network storage can be mounted automatically. Together, these features give users a solid way to deploy an application to the cloud and keep it well maintained.
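As a rough illustration of horizontal scaling, the sketch below (again using the Python client and the hypothetical hello-web Deployment from the earlier example) raises the replica count; the ReplicaSet then starts or stops pods until the observed state matches the request.

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Horizontal scaling: request 5 replicas of the hypothetical "hello-web"
# Deployment; its ReplicaSet adds or removes pods to match the new count.
apps.patch_namespaced_deployment_scale(
    name="hello-web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```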
How do I use Kubernetes?
Kubernetes is easy to begin using. First, download and install the required tools (kubectl and minikube) from Install Tools | Kubernetes. From there, you can create a cluster with minikube and deploy an application with kubectl. The next step is to scale the deployment. You can then continually roll out updates to the containerized application and debug it when necessary. Another tool, kubeadm, can be used to bootstrap and manage clusters of your own.
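The official tutorials drive this workflow with kubectl commands, but the same checks can be scripted. The sketch below, using the Python client and the hypothetical app=hello-web label from the earlier examples, lists the pods behind a deployment and pulls their logs, a common first step when debugging.

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# List the pods behind the hypothetical "hello-web" Deployment and print
# each pod's phase (Pending, Running, Succeeded, Failed, ...).
pods = core.list_namespaced_pod("default", label_selector="app=hello-web")
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)

# Container logs are usually the next stop when something looks wrong.
for pod in pods.items:
    print(core.read_namespaced_pod_log(name=pod.metadata.name, namespace="default"))
```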
What can’t be done with Kubernetes?
To begin with, Kubernetes cannot build an application for you, unlike Jenkins, another popular tool. Kubernetes also does not provide application-level services such as middleware or caching. One benefit of this narrower scope is that Kubernetes does not dictate a configuration language, so you are not required to use JSON.
Recap
- Kubernetes is a tool used to scale, manage, and automate the deployment of enterprise applications.
- Using Kubernetes allows you to grow your application and improve its efficiency.
- It can be used by installing a few packages and then following the interactive tutorials on the Kubernetes website.
Hi,
The Kubernetes page https://kubernetes.io/docs/setup/best-practices/cluster-large/ says that the “maximum amount of pods is x”, but it does not say whether this refers to running pods only or also to “leftover” pods, such as the ones that remain marked as Completed after a Job has executed.
So, is this recommendation for “total pods” or only for “running pods”?
total pods