Kubernetes Security: Q/A with TraceRoute42

Roberto Popolizio

Every indicator suggests that the adoption of Kubernetes, and of cloud-based applications in general, will keep growing. Just consider that 94% of enterprise businesses already use a cloud service.

This makes it important to understand which cyber threats come with the use of Kubernetes and what the best ways are to keep your cloud environment safe, starting from the very basics such as running a strong antivirus.

That is why we asked Wojtek Olejniczak, Senior Kubernetes Architect at TraceRoute42, a technology consultancy with deep expertise in infrastructure architecture, design and maintenance, to walk us through the cybersecurity risks Kubernetes environments face today and the best practices for keeping your Kubernetes applications secure.

Tell us a bit about TraceRoute42. How did it start and how has it evolved since its inception?

TraceRoute42 is a technology consulting firm with expertise in infrastructure architecture, design and maintenance. We support startups and enterprises in creating bulletproof architectures and in choosing the right solution during the concept phase or the implementation of major functional changes.

We advise our partners on how to prepare the infrastructure for changing factors and parameters of system load, unexpected emergency situations, security breaches, and their impact on project resources.

We have been a team of Linux administrators for more than 10 years who, at some point, decided to move over to Kubernetes. We are currently working on automating the creation and upgrade of various k8s environments and assisting with system admin, DevOps and database-related issues.

What services do you offer at the moment?

Based on over a decade of experience cooperating with teams of different sizes and with projects at various stages of their life cycle, we proactively provide custom support models. We help startups and enterprises create bulletproof architectures and choose an optimal solution during the concept phase or the implementation of major functional changes. We advise on how to prepare for changing factors and parameters of system load, security breaches, unexpected emergency situations, and their impact on project resources.

Not only do we maintain our clients' servers, but we also offer 24/7 infrastructure monitoring and a swift response to any challenges, emerging needs, or just day-to-day problems. Working with us, you are not limited to the support of just one person. We become a solid part of your team! We are flexible, so the exact model of cooperation can be tailored to your needs.

Can you explain what Kubernetes is used for?

Kubernetes is used primarily as an environment for running containerized applications that require high reliability. Mostly these are web applications, but they can also be programs that process some kind of data. In Kubernetes, reliability is achieved, among other things, through mechanisms that ensure the continuity of application containers. When an application stops working, for example as a result of an error, the corresponding Kubernetes mechanism will restart the application container. And with the ability to run multiple replicas of application containers, the failure of a single container instance does not affect the availability of the application.
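
For illustration, here is a minimal sketch of a Deployment manifest that relies on those mechanisms; the application name, image and health-check path are invented for the example. The cluster keeps three replicas running, and the liveness probe tells the kubelet to restart any container that stops responding.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-app                        # hypothetical application name
    spec:
      replicas: 3                          # three identical replicas for availability
      selector:
        matchLabels:
          app: web-app
      template:
        metadata:
          labels:
            app: web-app
        spec:
          containers:
            - name: web-app
              image: registry.example.com/web-app:1.0   # placeholder image
              livenessProbe:               # failed checks trigger a container restart
                httpGet:
                  path: /healthz           # assumed health endpoint
                  port: 8080
                initialDelaySeconds: 10
                periodSeconds: 15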

Another important feature of the Kubernetes environment is the ability to automatically scale containers. If the application starts consuming more resources because, for example, the number of users has increased, the scaling mechanism will ensure that there are enough running containers to handle the increased traffic. In the reverse situation, you can automatically reduce the number of containers when the load generated by the application decreases. Combined with automatic scaling of Kubernetes cluster nodes, you can easily optimize infrastructure costs for applications. This is important when using Kubernetes clusters offered by cloud providers.
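
The scaling described here is usually declared with a HorizontalPodAutoscaler. The sketch below assumes the hypothetical web-app Deployment from the previous example and simply keeps average CPU utilization around a target by adding or removing replicas.

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-app
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web-app                      # the hypothetical Deployment above
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70       # scale out when average CPU usage exceeds 70%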

What are the most common security risks related to Kubernetes?

When we talk about Kubernetes and security, we need to look at it from a broader angle than just the Kubernetes software itself. In the case of Kubernetes, we think about security in the context of the layers referred to as the 4Cs: cloud (infrastructure, servers, network), cluster (Kubernetes), container (images) and code (application).

When it comes to the first layer, we mean the risks associated with the infrastructure on which the Kubernetes cluster is running. Threats can come from vulnerabilities in the operating system software or the container runtime environment on which the Kubernetes cluster runs. Risks can also come from insufficiently secured network infrastructure to which the Kubernetes cluster is connected.

In the next layer, the security risk comes from the exposure of the Kubernetes cluster components, in particular access to the API server. This is an element that many novice Kubernetes cluster users forget about. A publicly accessible Kubernetes API server can be used as a primary attack vector and, if the attack succeeds, it allows unrestricted access to all other components of the cluster. According to a research report published in May 2022 by the Shadowserver Foundation[1], over 380,000 Kubernetes API servers were publicly accessible out of roughly 450,000 that were scanned.

Another risk can arise from vulnerabilities found in container images or in container configurations. Running application containers in privileged mode can give an attacker access to the system functions of the node on which the application is running.
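
A generic way to reduce this particular risk (a sketch rather than a prescription, with a placeholder image name) is to give the container an explicit securityContext so it cannot run privileged or escalate its privileges:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.0   # placeholder image
          securityContext:
            privileged: false                       # no direct access to host devices
            allowPrivilegeEscalation: false
            runAsNonRoot: true
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]                         # drop every Linux capability the app does not need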

Finally, there is the risk on the side of the applications themselves that run in the Kubernetes cluster. If there are critical bugs in the application code, they can be used as an attack vector.

What are the best practices to secure a Kubernetes (K8s) deployment?

Avoid storing sensitive variables, such as database passwords or keys to external services, in an unsecured way. Kubernetes does have an object called 'Secret' intended for this purpose, but by default its data is only encoded, not encrypted. It is worth considering, already at the infrastructure design stage, tools that protect sensitive variables from unauthorized access. These can include solutions based on so-called 'vaults' or on encryption of sensitive variables at rest.
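
One of the routes mentioned here, encrypting Secret data at rest, can be sketched with the upstream EncryptionConfiguration file that is handed to kube-apiserver via its --encryption-provider-config flag; the key name and value below are placeholders, and vault-style tools are a separate, complementary option.

    apiVersion: apiserver.config.k8s.io/v1
    kind: EncryptionConfiguration
    resources:
      - resources:
          - secrets                                 # encrypt Secret objects stored in etcd
        providers:
          - aescbc:
              keys:
                - name: key1
                  secret: <base64-encoded 32-byte key>   # placeholder; generate your own
          - identity: {}                            # fallback so existing plain-text data stays readable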

It is good practice to implement network policies that define security rules between services running in a Kubernetes cluster. There may be many different application services running in a single cluster, and some of them need to communicate with each other. Network policy rules allow us to specify which services are allowed to communicate with which. This way, if one service is compromised, the range of a further attack is reduced to only the services it is allowed to reach.
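
As a minimal sketch of such a rule (the namespace and app labels are invented for the example), the NetworkPolicy below lets only pods labelled frontend reach the backend pods, and only on a single port:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-backend     # hypothetical policy name
      namespace: shop                     # hypothetical namespace
    spec:
      podSelector:
        matchLabels:
          app: backend                    # the policy protects the backend pods
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend           # only frontend pods may connect
          ports:
            - protocol: TCP
              port: 8080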

In addition to network policies, it is also worth thinking about implementing mechanisms to encrypt the connections between services. This minimizes the potential risk of eavesdropping on the communication between services. Often a service mesh is implemented in such situations: a tool that adds a layer of management and security to the network communication of services running in Kubernetes.
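
Service meshes differ in the details, but as one illustration, in Istio a single PeerAuthentication resource can enforce mutual TLS for every service in a namespace (the namespace name is again just an example):

    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: shop                     # hypothetical namespace from the previous sketch
    spec:
      mtls:
        mode: STRICT                      # reject plain-text traffic between meshed services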

If there is a need to run different versions of application environments (e.g. production and testing), you should avoid running them in a single Kubernetes cluster. Test versions of an application often contain bugs that can be exploited to gain unauthorized access. Using separate clusters for different environments minimizes the risk of unauthorized access to production data.

In addition, there are more valuable practices, such as Pod Security Admission, IDS/IPS, kube-apiserver audit logging, vulnerability scanning of application container images, scanning the cluster against the CIS Benchmark tests, and so on.
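
The first of those, Pod Security Admission, is built into recent Kubernetes releases and is enabled per namespace with a couple of labels; a minimal sketch, with an invented namespace name, could look like this:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: shop                                        # hypothetical namespace
      labels:
        pod-security.kubernetes.io/enforce: restricted  # reject pods that violate the Restricted profile
        pod-security.kubernetes.io/warn: restricted     # also warn during kubectl apply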

In your opinion, which trends or technologies will change the face of your industry in the near future?

The development of software supporting the so-called Hyper-Converged Infrastructure (HCI) architecture running on the Kubernetes platform. HCI combines the management of infrastructure resources, i.e. disk space, computing resources and network communication, through one centralized system. Such an infrastructure lacks the complexity of the classic three-tier architecture and can be used to build a public or private cloud. Classic HCI carries the risk of vendor lock-in because you can't connect nodes from one vendor to those of another. Using HCI systems in conjunction with Kubernetes will lift this limitation, giving new opportunities for flexible management and infrastructure expansion.

Another important trend is the development of software providing API tools for declarative management of Kubernetes clusters. These tools simplify the procedures for provisioning, upgrading and operating multiple Kubernetes clusters. With these tools, Kubernetes clusters can be quickly created in a variety of environments, both local (on-prem) and in clouds owned by different vendors.
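
A well-known example of this trend is the Cluster API project. A heavily trimmed, purely illustrative Cluster object might look like the sketch below; the names and the referenced control-plane and infrastructure objects are placeholders that would be defined separately for whichever provider is used.

    apiVersion: cluster.x-k8s.io/v1beta1
    kind: Cluster
    metadata:
      name: staging-cluster                # hypothetical cluster name
    spec:
      clusterNetwork:
        pods:
          cidrBlocks: ["192.168.0.0/16"]   # example pod network
      controlPlaneRef:                     # provider-specific control plane, defined elsewhere
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: KubeadmControlPlane
        name: staging-control-plane
      infrastructureRef:                   # provider-specific infrastructure object
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerCluster                # placeholder provider; real clusters use their cloud's provider kind
        name: staging-infra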

And what are your plans for the future?

Above all, the continued development of the company by increasing the competence of our employees. We want to keep developing and growing, and this will not be possible without a highly qualified team. That's why we are constantly recruiting, attracting the most interesting and promising talent from the 'cloud market'.

We would also like to get involved in open source projects this year. As both enthusiasts and supporters of the Open Source movement, we feel the need to make our own contribution to promoting this approach.

Finally, we are focusing all our efforts on building TraceRoute42's brand recognition. TR42 wants to be directly associated with Kubernetes and Cloud Native technologies. We want to be the No.1 provider of Kubernetes infrastructure!


[1] Source: https://www.shadowserver.org/news/over-380-000-open-kubernetes-api-servers/

About the Author

Over a decade spent helping affiliate blogs and cybersecurity companies increase revenue through conversion-focused content marketing and Digital PR linkbuilding.