Cilium is a networking solution for Kubernetes that provides advanced networking and security features. It uses eBPF to perform high-performance networking, security, and observability tasks within Kubernetes.
In this article, we’ll explore how to use Cilium for Kubernetes networking. We will cover the basics of setting up Cilium in a cluster, configuring network policies and using Hubble for observability. We’ll also discuss best practices for using Cilium in production environments and troubleshooting common issues. Let’s get started by installing Cilium to our Kubernetes cluster!
Note: We recommend using
First of all, we need to install the Cilium CLI as described in the official Cilium documentation.
Once the CLI installation is finished, we can install Cilium to our cluster by running:
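```shell
# Installs Cilium into the cluster targeted by the current kubectl context.
# The CLI auto-detects the cluster type; available flags may vary by CLI version.
cilium install
```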
This will install Cilium to the cluster pointed to by our current kubectl context. To verify a working installation, we use:
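```shell
# Reports the deployment status of the Cilium components;
# --wait blocks until the installation is ready (or times out).
cilium status --wait
```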
The output summarizes the state of each Cilium component; once all of them report as OK, the installation is healthy.
If everything looks good, we can verify proper network connectivity by running
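```shell
# Runs Cilium's built-in end-to-end connectivity test suite
cilium connectivity test
```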
This will create a dedicated namespace and run some tests on predefined workloads in order to test the cluster network connection.
If the run succeeds, the summary at the end reports all tests as passed.
If all the tests pass, congratulations! Cilium is now up and running in our Kubernetes cluster!
Network policies in Kubernetes are used to control and filter traffic. By default, any pod running in a cluster can communicate with any other pod, which might be insecure depending on the setup. Using network policies, we can implement rules that only allow traffic that we explicitly want to allow. Cilium allows us to set rules on the HTTP level, which decouples network rules from our application code.
Now that Cilium runs in our cluster, let’s put it to the test by applying some network policies to specify what traffic is allowed inside the cluster as well as what may enter and leave it.
The commonly used “default-deny-ingress” policy can be implemented with Cilium like this:
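A minimal manifest for this, following the `CiliumNetworkPolicy` CRD (policy name chosen for illustration):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  # An empty matchLabels selector matches every endpoint in the namespace
  endpointSelector:
    matchLabels: {}
  # An ingress section with no allowed peers puts all selected
  # endpoints into default-deny for incoming traffic
  ingress:
  - {}
```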
Since the `matchLabels` key of the endpoint selector is empty, the policy applies to every endpoint, effectively locking down all ingress traffic within the cluster.
Our services still need to communicate with one another, so next we add a policy that explicitly allows ingress traffic between two services.
A simple “ingress-allow” policy could look something like this:
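A sketch of such a policy, assuming two services labeled `role: client` and `role: backend-api` (the policy name is illustrative):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: ingress-allow
spec:
  # Apply this policy to all backend-api endpoints
  endpointSelector:
    matchLabels:
      role: backend-api
  # Allow incoming traffic from endpoints labeled role: client
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: client
```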
This network policy allows ingress traffic from endpoints labeled `role: client` to endpoints labeled `role: backend-api`.
Moving up the network stack, Cilium can also filter traffic at the HTTP layer (L7). A policy restricting which HTTP requests are allowed could look like this:
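One way to express such an HTTP-aware rule, assuming services labeled `app: client` and `app: api` (policy name chosen for illustration):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-public
spec:
  endpointSelector:
    matchLabels:
      app: api
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: client
    # Only allow TCP port 80, and on it only GET requests to /public
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: /public
```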
This will allow incoming HTTP traffic from endpoints labeled `app: client` to endpoints labeled `app: api`, as long as the HTTP method is GET and the path is `/public`. Requests to ports other than 80 will be dropped, while other HTTP methods or paths on port 80 will be rejected.
Cilium Hubble is a powerful observability tool that provides deep insights into the network traffic and security of a Kubernetes cluster. In this section, we will explore how to set up and use Hubble for observability.
To use Hubble, we need to deploy it in our Kubernetes cluster as follows:
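```shell
# Enables Hubble (flow observability) in the existing Cilium installation
cilium hubble enable
```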
If we run `cilium status` again, we’ll see that Hubble is enabled and running.
To make use of the data that’s being collected, we install the Hubble CLI as described in the official Hubble documentation.
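With the Hubble CLI installed, we can forward the Hubble relay to our local machine and watch network flows in real time; for example:

```shell
# Forward the Hubble relay API to localhost (runs in the background here)
cilium hubble port-forward &

# Stream observed network flows across the cluster
hubble observe
```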
If you like graphical user interfaces, you can also deploy the Hubble UI, a web frontend that visualizes service dependencies and network flows.
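The UI ships with the Cilium CLI and can be enabled and opened like this:

```shell
# Re-enable Hubble with the bundled web UI
cilium hubble enable --ui

# Port-forward to the UI and open it in the default browser
cilium hubble ui
```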
In conclusion, Cilium offers a robust networking solution for Kubernetes, allowing users to enforce precise network policies and keep track of network activity in real-time. Its cloud native design and eBPF-based architecture make Cilium a top pick for users seeking advanced networking functionalities in their Kubernetes setups.
Cilium offers way more features than we can cover in this post, so here’s a short writeup of what else Cilium is capable of.