<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Team Blueshoe Technology Blog</title>
        <link>https://www.blueshoe.io</link>
        <description>We write about all things cloud native and give you a look at how we approach the challenges that occur when developing on cloud-native infrastructure.</description>
        <lastBuildDate>Thu, 26 Feb 2026 13:11:33 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>Team Blueshoe</generator>
        <language>en</language>
        <copyright>Copyright © 2026 Team Blueshoe</copyright>
        <category>Django</category>
        <category>Docker</category>
        <category>FastAPI</category>
        <category>Gefyra</category>
        <category>Keycloak</category>
        <category>Kubernetes</category>
        <category>Nuxt</category>
        <category>Python</category>
        <category>Rust</category>
        <category>Tailwind CSS</category>
        <category>TypeScript</category>
        <category>Varnish</category>
        <category>Vue.js</category>
        <category>Wagtail</category>
        <category>API</category>
        <category>Operations</category>
        <category>Digitalization</category>
        <category>Documentation</category>
        <category>Development</category>
        <category>AI</category>
        <category>Performance</category>
        <category>Project Management</category>
        <category>SEO</category>
        <category>Security</category>
        <item>
            <title><![CDATA[More Freedom: Open Source for Companies]]></title>
            <link>https://blueshoe.io/blog/advantages-open-source-software-companies</link>
            <guid>https://blueshoe.io/blog/advantages-open-source-software-companies</guid>
            <pubDate>Thu, 15 May 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Open-Source Software offers companies flexibility, cost efficiency, and independence from proprietary providers. But what real advantages does it bring, and what challenges exist? In this article, we will show why using Open Source pays off – and what companies should pay attention to.</p>
<p><img src="/img/blog/opensourcebusiness.svg" alt="More Freedom: Open Source for Companies"></p>
<h2>What is Open Source – and why is it so important for companies?</h2>
<p>Open Source means that the source code of a piece of software is freely accessible. Anyone may use, modify, and distribute it – completely legally. This not only sounds good but <strong>revolutionizes how companies deploy software</strong>.</p>
<p>More and more companies are using Open-Source software to save costs, work more flexibly, and no longer be dependent on proprietary providers. But what exactly makes Open Source so attractive – and what challenges are there?</p>
<h2>The Advantages of Open-Source Software for Companies</h2>
<h3>Cost Efficiency through License Freedom</h3>
<p>One of the biggest advantages: <strong>No license costs</strong>. Proprietary software often consumes a large portion of the IT budget. With Open Source, you save not just the license fees but also expensive upgrades and hidden charges.</p>
<h3>Flexibility &#x26; Adaptability</h3>
<p>You have control:</p>
<ul>
<li>The source code belongs to you. You can customize functions, extend them, or develop entirely new ones.</li>
<li>Instead of adhering to provider roadmaps, you build your software to fit your business model.</li>
</ul>
<h3>No Vendor Lock-in with Open Source</h3>
<p>Proprietary software makes system changes difficult. With Open Source, it's different: You can decide whether you want Cloud, On-Premise, or Hybrid – without binding yourself to providers like AWS or Azure.
<strong>Example</strong>: Tools like <strong>Kubernetes</strong> or <strong>OpenStack</strong> enable exactly this.
At Blueshoe, we deliberately use <strong>Docker</strong> and <strong>Kubernetes</strong> to build scalable, containerized solutions – flexible, efficient, future-proof.</p>
<h3>Faster Innovation through Open-Source Community</h3>
<p>The community never sleeps. Worldwide, developers work on new features, bugfixes, and security updates – often <strong>faster than commercial providers</strong>. Tools like <strong>Kubernetes, Django, Vue.js</strong> or <strong>Next.js</strong> offer the most modern development possibilities – without license restrictions.</p>
<p>At Blueshoe, we combine <strong>Vue.js / Nuxt.js</strong> in the frontend with <strong>Django / Wagtail</strong> in the backend. This way, we build flexible web applications with an Open-Source stack that you can expand at any time.</p>
<h3>Open Source also means: Cloud-Native &#x26; Scalable</h3>
<p>Scalability is mandatory today. And that is exactly what Open-Source software is made for:
Technologies like <strong>Docker</strong> and <strong>Kubernetes</strong> are <strong>cloud-native from the start</strong> and make you independent of rigid platforms.</p>
<p>Whether you're growing in small steps or operating massive platforms – with Open Source, you can flexibly scale your IT infrastructure.</p>
<hr>
<h2>Challenges – and how you solve them</h2>
<h3>Support &#x26; Maintenance</h3>
<p>Not every Open-Source software comes with professional support. But don't panic:
<strong>Solution</strong>: Rely on an experienced partner like Blueshoe. We handle support, maintenance, and further development – completely without license contract.</p>
<h3>Updates &#x26; Security</h3>
<p>Security gaps? They exist everywhere. But with Open Source, you bear the responsibility for regular updates and patches.</p>
<p><strong>Our Solution</strong>: With <strong>automated maintenance and security processes</strong>, we ensure that your software remains secure – permanently.</p>
<p>:::GlobalCallAnExpert{:bg="bg-bs-blue"}
#buttonBlockTitle
If you want to learn more about Open-Source Software...
:::</p>
<h2>Open-Source Software Example: How to Successfully Transition</h2>
<p>You want to know what a transition to Open Source looks like in practice? Here's a typical example:</p>
<h3>Best Practices for Open-Source Integration in Companies</h3>
<p>For Open Source to truly take off, you need a clear strategy. Here are our tips:</p>
<ol>
<li>Implement step by step: Start pilot projects and collect experiences.</li>
<li>Build internal know-how: Equip development teams with Open-Source skills.</li>
<li>Think API-first: Cleverly plan integration into existing systems.</li>
<li>Take security seriously: Regular updates, backups &#x26; security scans.</li>
<li>Bring in experts: Blueshoe accompanies you with consulting &#x26; support.</li>
</ol>
<h2>Conclusion: Open Source = Independence + Efficiency</h2>
<p>Open-source software offers companies numerous advantages - from cost savings and greater flexibility to increased security. With the right strategy, the potential challenges can be handled. Companies that implement Open Source strategically position themselves more independently, work more efficiently in the long term, and gain not only technological freedom but also a sustainable competitive advantage.</p>
<p>At Blueshoe, we support companies in successfully integrating Open-Source technologies into their processes - from developing scalable web applications with <strong>Wagtail CMS</strong> to implementing modern <strong>Cloud Architectures with Docker and Kubernetes</strong>. Talk to us and discover how we can optimize Open Source for your company!</p>
<hr>
<h2>FAQ – Frequently Asked Questions about Open-Source Software</h2>
<h3>1. What is Open Source?</h3>
<p>Open Source means that the source code of software is <strong>publicly accessible</strong>. Companies can view, adapt, and further develop it - without expensive licensing fees. This promotes <strong>innovation, transparency, and security</strong>.</p>
<h3>2. What is Open-Source Software?</h3>
<p>Open-Source Software (OSS) is software that is developed by a community and can be freely used. Examples include <strong>Linux, Kubernetes, Nextcloud</strong> and <strong>Odoo</strong>.</p>
<h3>3. What does Open Source mean for companies?</h3>
<p>For companies, Open Source means more freedom, lower costs, and no dependency on a single provider (vendor lock-in). Additionally, Open Source enables a flexible IT strategy and better security controls.</p>
<h3>4. What are examples of Open-Source Software in companies?</h3>
<p>Here are some Open-Source Software examples for companies:</p>
<ul>
<li>ERP &#x26; CRM: Odoo, ERPNext, SuiteCRM</li>
<li>Cloud &#x26; Hosting: OpenStack, Kubernetes</li>
<li>Security: WireGuard (VPN), OpenSSL, Vault</li>
<li>Web Development: Django, Vue, FastAPI, Nuxt</li>
</ul>
<h3>5. What are the Best Practices for Open-Source Integration in Companies?</h3>
<p>✔ Gradual Introduction: First test individual Open-Source solutions, then expand.<br>
✔ Build Internal Expertise: Train IT teams, define Open-Source strategy.<br>
✔ Use APIs &#x26; Middleware: Connect Open Source seamlessly with existing systems.<br>
✔ Prioritize Security Updates: Set up automatic updates &#x26; security checks.<br>
✔ Collaborate with Open-Source Partners: Integrate experts for support &#x26; maintenance (e.g., Blueshoe 🚀).</p>
<h3>6. Is Open-Source Software Really Secure?</h3>
<p>Yes, Open Source can even be more secure than proprietary software!<br>
✓ Transparent Code → Security gaps can be found &#x26; closed faster.<br>
✓ Regular Updates → Community &#x26; companies continuously improve the software.<br>
✓ No Hidden Backdoors → Open code makes backdoors hard to conceal, whereas proprietary software can contain undisclosed vulnerabilities.</p>]]></content:encoded>
            <category>Project Management</category>
            <category>Digitalization</category>
            <enclosure url="https://blueshoe.io/img/blog/opensourcebusiness.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[An alternative to Telepresence 2: Gefyra]]></title>
            <link>https://blueshoe.io/blog/alternative-to-telepresence-2-gefyra</link>
            <guid>https://blueshoe.io/blog/alternative-to-telepresence-2-gefyra</guid>
            <pubDate>Wed, 30 Mar 2022 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>There are a couple of different approaches to developing locally with Kubernetes. One well-known tool for several scenarios, ranging from local to remote Kubernetes application development, is Telepresence. Although Telepresence 2 comes with great features, we have not been completely satisfied with the range of supported use cases. So we decided to build our own solution. May we introduce: Gefyra.</p>
<p><img src="/img/blog/alternative-to-telepresence-2-gefyra.jpg" alt="An alternative to Telepresence 2: Gefyra">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="lg" .mb-5}
Introduction
:::
:::globalParagraph
For local <a href="/blog/local-kubernetes-development">Kubernetes development</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} there are only a few ways to make writing code right within Kubernetes feasible. One of the simpler solutions with a limited feature set is host path mapping (such as the local-path-provisioner in k3d). Among others, the biggest concern with that approach is its missing portability to remote Kubernetes development scenarios.
:::
:::globalParagraph
Hence, we started to use Telepresence (back in the day, in version 1) in all of our development infrastructures. That empowered our teams to use the same tool regardless of their development setup: either locally or remotely running Kubernetes environments. The <a href="/products">Unikube CLI</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} offered this functionality to its users by building on top of the free open-source parts of Telepresence 2.
:::
:::globalParagraph
Unfortunately, we have always had trouble with Telepresence. We experienced quite a few issues on different platforms and in different environments. That’s why we decided to create an alternative to Telepresence 2 and started the development of Gefyra.
:::
:::globalParagraph
Today, Gefyra is part of the Unikube CLI and replaces Telepresence as the default development mechanism while providing the same or an even better experience. The following article goes into detail about why we decided to start Gefyra and what the biggest differences between Telepresence and Gefyra are.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Working with Telepresence
:::
:::globalParagraph
Telepresence 2 is a very comprehensive tool for creating a seamless Kubernetes-based development experience while keeping established equipment available: favorite IDEs (integrated development environments), debugging tools, code hot reloading, environment variables, and so on. Using Telepresence comes with the great advantage of having developers work with Kubernetes from the beginning without straying too far from familiar surroundings.
:::
:::globalParagraph
The makers of Telepresence 2 address a new paradigm, a new development workflow, and a new development environment: it essentially means that Kubernetes becomes part of the software it runs. And so the development workflow and tooling must be adapted, too. This is concisely written down in their documentation.
:::
:::globalParagraph
In addition to the free part, Telepresence offers commercial-only features in combination with the Ambassador Cloud, for example preview links. These allow sharing development states with other teams, even within a production(-near) environment.
:::
:::globalParagraph
Our teams have only been using the free parts and so we cannot report experiences with using the commercial version.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Challenges and issues with Telepresence 2
:::
:::globalParagraph
One of the biggest challenges for Telepresence 2 is to make “your development machine become part of the cluster”. Running on Windows, macOS, and Linux, that leads to a lot of platform-specific logic, for example around DNS resolvers. Creating special DNS and interface rules, plus maintaining them as the operating systems evolve, seems very difficult. In fact, it always requires granting <em>sudo privileges</em> in order to connect even to a local Kubernetes cluster.
:::
:::globalParagraph
We found ourselves, and users of the Unikube CLI, facing <em>timeout issues</em> with no discernible cause. A very frustrating situation.
:::
:::globalParagraph
Another architectural decision of the Telepresence team was to modify the workload components (e.g. Deployments) of the applications in question upon connection. That approach opens up great opportunities and features, but it can lead to inconsistencies and residue when not disconnecting properly (which happened to us quite often). Once the workloads are modified, they cannot be reset to their original states without applying the workload descriptors again. Cleaning up Telepresence components became a frequent task in our development clusters.
:::</p>
<p>:::globalTitle{:size="sm" .mb-5}
Bypassing of containers
:::
:::globalParagraph
However, one of the major downsides of Telepresence 2 is its <em>agent</em> concept, which incorporates a dedicated sidecar component that can intercept running Pods. No matter which port is the target of the intercept, the traffic from the services is routed directly to Telepresence’s agent (which got installed into the Pod), effectively bypassing all other containers (i.e. sidecars). From our perspective, this is the exact opposite of writing Cloud Native software, as it disregards one of the most promising Kubernetes patterns.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Gefyra: our alternative to Telepresence 2
:::
:::globalParagraph
After filing a couple of issue tickets on GitHub and taking part in their community calls, we decided to build an alternative to Telepresence 2 with a smaller feature set and a simplified architecture. Gefyra is based on other popular open-source projects, such as WireGuard and Nginx. We are committed to creating something more robust and to supporting a wider range of actual development scenarios, including all Kubernetes patterns.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
More control with 4 operations
:::
:::globalParagraph
Gefyra does not try to make your entire development machine part of the Kubernetes cluster; instead, it only connects a dedicated Docker network. That is more controllable and portable across different operating systems. In addition, this approach does not require sudo privileges as long as the development user account has access to the Docker host.
:::
:::globalParagraph
Gefyra declares four relevant operations: <em>up, down, run, bridge</em>. Similar to Telepresence 2, one has to connect to the development cluster: <em>up</em> sets up the required cluster components. A developer can <em>run</em> a container that behaves as if it were part of the cluster while it actually runs on the local Docker host. The <em>bridge</em> operation redirects traffic that hits a container in a certain Pod and proxies these requests to a local container instance. And, of course, <em>down</em> removes all cluster components.
:::
:::globalParagraph
In contrast to Telepresence, Gefyra does not modify the workload manifest in the cluster. In case something goes wrong, deleting the Pod will restore its original state.
:::
:::globalParagraph
If you want to know more about Gefyra’s architecture, please head over to the documentation: <a href="https://gefyra.dev/architecture/">https://gefyra.dev/architecture/</a>{.bs-link-blue :target="_blank"}
:::
:::globalParagraph
The following table compares the approaches of Telepresence 2 and Gefyra side-by-side:
:::</p>
<p><img src="/img/blog/alternative-to-telepresence-2-gefyra-1.jpg" alt="Telepresence 2 &#x26; Gefyra">{.object-cover .w-full .mb-1}
<img src="/img/blog/alternative-to-telepresence-2-gefyra-2.jpg" alt="Telepresence 2 &#x26; Gefyra">{.object-cover .w-full .mb-5}</p>
<p>:::GlobalPodcastSection{:videoId="tyvE9VlSWkE" :videoPosition="left" .mb-6}
::::GlobalPreTitle{:color="text-bs-green" .mb-3}
Our Kubernetes podcast
::::
::::GlobalTitle{:tag="h3" .mb-6}
Tools for the Craft: Navigating the Kubernetes ecosystem
::::
::::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert are talking in depth about the ins and outs of local Kubernetes development and also provide some real coding examples.
::::
::::globalParagraph{:font-size="lg" }
More editions of our podcast can be found here:
::::
::::GlobalButton{:url="/podcast/" :label="Show more" :color="green"}
::::
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Using Gefyra: an Overview
:::
:::globalParagraph
If you want to use Gefyra then simply head over to the installation guide: <a href="https://gefyra.dev/installation/">https://gefyra.dev/installation/</a>{.bs-link-blue :target="_blank"}. There are installation candidates for Windows, Linux, and macOS with different installation methods. Once the executable is available, you can run Gefyra actions.
:::
:::globalParagraph
But before you go on, please make sure you have a working <em>kubectl</em> connection set up. If not, or if you simply want to work with a local Kubernetes cluster, you can easily create one using <em>k3d</em>.
:::
:::globalParagraph
Also, once you have tried Gefyra, we would really appreciate feedback and have prepared a quick questionnaire. It should not take you more than 3 minutes and really helps us.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Set a UDP Connection
:::
:::globalParagraph
<strong>Important</strong>: In order to use Gefyra, you’ll need a <em>UDP</em> connection. Depending on your cluster connectivity (if it is not local) you have to set appropriate firewall rules for port <em>UDP/31820</em>.
:::
:::globalParagraph
For a working local k3d cluster (k3d version >= 5) you run:
:::
:::globalParagraph
<code>k3d cluster create mycluster --agents 1 -p 8080:80@agent:0 -p 31820:31820/UDP@agent:0</code>
:::</p>
<p>:::BlogTerminal{:input='> gefyra up'}
First thing to do is connect Gefyra with the cluster running
#lines</p>
<ul>
<li>[INFO] Installing Gefyra Operator</li>
<li>[INFO] Pulling image "quay.io/gefyra/operator:latest"</li>
<li>[INFO] Successfully pulled image "quay.io/gefyra/operator:latest" in 638.596569ms</li>
<li>[INFO] Pulling image "quay.io/gefyra/stowaway:latest"</li>
<li>[INFO] Successfully pulled image "quay.io/gefyra/stowaway:latest" in 7.092248247s</li>
<li>[INFO] Operator became ready in 15.2887 seconds</li>
<li>[INFO] Creating Docker network</li>
<li>[INFO] Created network 'gefyra' (156bace408)
:::</li>
</ul>
<p>:::BlogTerminal{:input='> gefyra run -i pyserver -N mypyserver -n default'}
Now, Gefyra is connected to the cluster.</p>
<p>All you have to do now is to run a local container with:
#lines</p>
<ul>
<li>[INFO] Container image 'pyserver:latest' started with name 'mypyserver' in Kubernetes namespace 'default'
:::</li>
</ul>
<p>:::BlogTerminal{:input='> gefyra bridge -N mypyserver -n default --deployment hello-nginxdemo --port 80:8000 --container-name hello-nginx -I myp'}
<strong>Mind the -n default</strong> which places the container <em>“mypyserver”</em> in the Kubernetes namespace of your choice. There are a couple of options available for the run action, for example you can <strong>ask Gefyra to copy the environment</strong> from a container within the cluster.</p>
<p>This is handy if you need to connect to a cluster-based database for which a container within the cluster already holds the parameters.</p>
<p>If you want to <strong>intercept a cluster-based container</strong> in a <em>pod</em> of a certain <em>deployment</em>, you can do so with the bridge action.
#lines</p>
<ul>
<li>[INFO] Creating bridge for Pod hello-nginxdemo-7d648bd866-xsd28</li>
<li>[INFO] Waiting for the bridge(s) to become active</li>
<li>[INFO] Bridge mypybridge established
:::</li>
</ul>
<p>:::BlogTerminal{:input='> gefyra down'}
This action will make the “hello-nginx” container in all pods of the deployment <em>“hello-nginxdemo”</em> to redirect their traffic to the local instance of <em>“mypyserver”</em>.</p>
<p>Now, you are able to write code or make other changes and instantly see it interacting with other Kubernetes-based components.</p>
<p>To find even more examples please check out <a href="https://gefyra.dev">https://gefyra.dev</a>.</p>
<p>To clean up everything you simply run:
#lines</p>
<ul>
<li>[INFO] Removing running bridges</li>
<li>[INFO] Uninstalling Operator</li>
<li>[INFO] Removing Cargo</li>
<li>[INFO] Stopping remainder container from Gefyra network</li>
<li>[INFO] Removing Docker network</li>
<li>[INFO] Removed 1 docker networks with name 'gefyra'
:::</li>
</ul>
<p>:::globalTitle{:size="lg" .mb-5}
Summary
:::
:::globalParagraph
In this blog post, we introduced our Telepresence 2 alternative Gefyra. Although it does not cover all (enterprise) features of Telepresence at the moment, it is already usable for the core requirements of real Cloud Native development. We hope that, from a technical perspective, the differences will make the technology less prone to failures caused by the host system. In addition, the clear UDP based connection requirements will make the life of corporate infrastructure teams much easier, as the underlying connection is much more comprehensible. However, in terms of features Gefyra is still far behind Telepresence.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Gefyra in Unikube
:::
:::globalParagraph
Gefyra will replace Telepresence 2 in the upcoming release of the Unikube CLI and become the new standard provider for the switch operation. This will significantly reduce the issues that our users are experiencing with Telepresence and improve the overall performance. Moreover, since we are now in control of the entire stack, the development team of Unikube will finally be able to help out with all problems related to this operation. We’re looking forward to receiving feedback.
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Gefyra</category>
            <category>Development</category>
            <enclosure url="https://blueshoe.io/img/blog/alternative-to-telepresence-2-gefyra.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Alternatives to Celery for Django on Kubernetes]]></title>
            <link>https://blueshoe.io/blog/alternatives-to-django-celery-in-kubernetes</link>
            <guid>https://blueshoe.io/blog/alternatives-to-django-celery-in-kubernetes</guid>
            <pubDate>Wed, 10 Sep 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Are you planning background jobs in your cluster and wondering if there are lighter options than Celery?</p>
<p>Celery is powerful, but sometimes too heavyweight for simple setups. In this article, we'll look at alternatives like <strong>Django RQ, Dramatiq, and Huey</strong>, explain why <strong>KEDA</strong> plays a key role in autoscaling, and give you <strong>mini-YAML examples</strong> for clean deployment with probes and graceful shutdown.</p>
<p><img src="/img/blog/celery-alt.svg" alt="Celery Alternativen"></p>
<h2>The Four Options in a Nutshell</h2>
<p><em>(With links to projects and repos so you can dive deeper.)</em></p>
<h3><a href="https://docs.celeryq.dev/">Celery</a></h3>
<p>The classic with a large ecosystem. Supports workers and the separate scheduler, Beat. In large setups with chains and many integrations, Celery remains the reference point.</p>
<h3><a href="https://github.com/rq/django-rq">Django RQ</a></h3>
<p>A Django-friendly integration of RQ based on Redis. Workers start in the Django context. In many cases, just setting <code>DJANGO_SETTINGS_MODULE</code> is enough. Monitoring is easy via the <a href="https://github.com/rq/rq-dashboard">RQ Dashboard</a> or admin integrations.</p>
<h3><a href="https://dramatiq.io/">Dramatiq</a></h3>
<p>Modern defaults with a focus on reliability. Runs with Redis or RabbitMQ. The Django integration <a href="https://github.com/Bogdanp/django_dramatiq"><code>django_dramatiq</code></a> provides the management command <code>rundramatiq</code>.</p>
<h3><a href="https://huey.readthedocs.io/">Huey</a></h3>
<p>Lightweight with a built-in scheduler. Clean Django integration via the <code>run_huey</code> management command, including auto-discovery of <code>tasks.py</code>.</p>
<hr>
<h2>KEDA in 60 Seconds</h2>
<p><a href="https://keda.sh/">KEDA</a> is the Kubernetes Event-Driven Autoscaler.
It scales deployments and jobs based on events like queue lengths and can scale down to zero when idle. KEDA complements the Horizontal Pod Autoscaler and works with it.</p>
<p>Why is KEDA so important?</p>
<ul>
<li>You <strong>don't need polling</strong> in the worker.</li>
<li>Pods scale dynamically when there are jobs in the queue.</li>
<li>When idle, you can <strong>reduce workers to 0</strong> and save resources.</li>
</ul>
<p>Typical triggers for Django workers are <strong>Redis Lists</strong> and <strong>RabbitMQ Queues</strong>. Both are available as scalers.</p>
<hr>
<h2>Comparison Specifically for Kubernetes</h2>
<table>
<thead>
<tr><th>Tool</th><th>Broker</th><th>KEDA Trigger</th><th>Advantages</th><th>Disadvantages</th><th>Best Choice When...</th></tr>
</thead>
<tbody>
<tr><td><strong>Celery</strong></td><td>Redis, RabbitMQ</td><td>Redis List or RabbitMQ Queue</td><td><strong>Very mature</strong>, many integrations, worker plus Beat, large community. <strong>K8s Plus:</strong> many examples for queue-based autoscaling.</td><td>More operational overhead, additional components like Beat, careful shutdown handling required.</td><td>High load, complex chains, existing Celery experience.</td></tr>
<tr><td><strong>Django RQ</strong></td><td>Redis</td><td>Redis List</td><td><strong>Very easy start</strong>, admin integration, few moving parts. <strong>K8s Plus:</strong> list length as a simple KEDA trigger.</td><td>Less feature depth, Redis is mandatory.</td><td>Web projects with clear jobs and a fast go-live.</td></tr>
<tr><td><strong>Dramatiq</strong></td><td>Redis, RabbitMQ</td><td>Redis List or RabbitMQ Queue</td><td><strong>Modern defaults</strong>, robust retries, clean Django integration via <code>rundramatiq</code>. <strong>K8s Plus:</strong> easily combined with KEDA depending on the broker.</td><td>Fewer ready-made Django UIs, learning curve for the actor model.</td><td>Demanding but lean, focus on reliability.</td></tr>
<tr><td><strong>Huey</strong></td><td>Redis</td><td>Redis List</td><td><strong>Lightweight</strong>, scheduler included, consumer as a management command. <strong>K8s Plus:</strong> simple process and KEDA coupling.</td><td>Smaller ecosystem, minimal monitoring.</td><td>Few workers, many periodic tasks.</td></tr>
</tbody>
</table>
<hr>
<h2>Mini YAML Examples</h2>
<h3>Example A: Celery Worker with RabbitMQ and KEDA</h3>
<pre><code class="language-yaml"># Deployment for Celery Worker
apiVersion: apps/v1
kind: Deployment
metadata:
  name: celery-worker
spec:
  selector: { matchLabels: { app: celery } }
  template:
    metadata: { labels: { app: celery } }
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: worker
          image: your-registry/app:latest
          command: ["celery", "-A", "proj", "worker", "--loglevel=info"]
          env:
            - name: RABBITMQ_HOST
              valueFrom:
                secretKeyRef: { name: rmq, key: amqp_uri }
---
# ScaledObject for RabbitMQ Queue
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: celery-rabbit
spec:
  scaleTargetRef: { name: celery-worker }
  minReplicaCount: 0
  maxReplicaCount: 20
  triggers:
    - type: rabbitmq
      metadata:
        hostFromEnv: RABBITMQ_HOST
        queueName: celery
        protocol: amqp
        mode: QueueLength
        value: "20"
      authenticationRef:
        name: rmq-auth
</code></pre>
<hr>
<h3>Example B: Django RQ Worker plus KEDA Redis List Scaler</h3>
<pre><code class="language-yaml"># Deployment for RQ Worker
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rq-worker
spec:
  selector: { matchLabels: { app: rq } }
  template:
    metadata: { labels: { app: rq } }
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: worker
          image: your-registry/app:latest
          command: ["bash", "-lc", "DJANGO_SETTINGS_MODULE=config.settings rq worker default"]
          env:
            - name: REDIS_HOST
              value: "redis:6379"
---
# KEDA ScaledObject with Redis List Trigger
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rq-scale
spec:
  scaleTargetRef: { name: rq-worker }
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
    - type: redis
      metadata:
        addressFromEnv: REDIS_HOST
        listName: default
        listLength: "20"  # Scale up from 20 jobs
</code></pre>
<hr>
<h3>Example C: Dramatiq under Django with Redis</h3>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: dramatiq-worker
spec:
  selector: { matchLabels: { app: dramatiq } }
  template:
    metadata: { labels: { app: dramatiq } }
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: worker
          image: your-registry/app:latest
          command: ["python", "manage.py", "rundramatiq", "--processes", "2", "--threads", "8"]
          env:
            - name: REDIS_HOST
              value: "redis:6379"
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: dramatiq-scale
spec:
  scaleTargetRef: { name: dramatiq-worker }
  minReplicaCount: 0
  maxReplicaCount: 10
  triggers:
    - type: redis
      metadata:
        addressFromEnv: REDIS_HOST
        listName: default
        listLength: "10"
</code></pre>
<hr>
<h3>Example D: Huey Worker with Redis</h3>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: huey-worker
spec:
  selector: { matchLabels: { app: huey } }
  template:
    metadata: { labels: { app: huey } }
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: worker
          image: your-registry/app:latest
          command: ["python", "manage.py", "run_huey"]
          env:
            - name: REDIS_HOST
              value: "redis:6379"
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: huey-scale
spec:
  scaleTargetRef: { name: huey-worker }
  minReplicaCount: 0
  maxReplicaCount: 5
  triggers:
    - type: redis
      metadata:
        addressFromEnv: REDIS_HOST
        listName: default
        listLength: "5"
</code></pre>
<hr>
<h2>Kubernetes Operational Aspects That Matter</h2>
<h3>Deploying a Broker</h3>
<p>You can get stable Redis and RabbitMQ via a <a href="https://artifacthub.io/">Helm Chart</a> or an operator. For RabbitMQ, there is also an official <a href="https://www.rabbitmq.com/kubernetes/operator/">Cluster Operator</a>.</p>
<h3>Configuring Probes</h3>
<p>Set <strong>Readiness, Liveness</strong>, and, for longer startups, a <strong>Startup Probe</strong>. Example:</p>
<pre><code class="language-yaml">livenessProbe:
  exec: { command: ["pgrep", "rq"] }
  initialDelaySeconds: 20
  periodSeconds: 10
</code></pre>
<p>Liveness probes restart hung workers, and readiness probes keep traffic away from pods that are not ready.</p>
<h3>Graceful Shutdown</h3>
<p>Use <code>terminationGracePeriodSeconds</code> and, if necessary, a <code>preStop</code> hook so that running tasks can finish cleanly:</p>
<pre><code class="language-yaml">lifecycle:
  preStop:
    exec:
      command: ["bash", "-c", "kill -TERM 1 &#x26;&#x26; sleep 30"]
</code></pre>
<p>Kubernetes will terminate containers after the grace period expires, no matter what—so plan for a buffer.</p>
<h3>KEDA Installation</h3>
<p>KEDA can be installed via a <a href="https://keda.sh/docs/">Helm Chart</a> or YAML. After that, you define <strong>ScaledObjects</strong> or <strong>ScaledJobs</strong> for each worker.</p>
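<p>For workloads that fit a run-to-completion model, a <strong>ScaledJob</strong> launches finite Jobs instead of scaling a long-lived Deployment. A hedged sketch for an RQ setup like the one above (image, queue name, and thresholds are placeholders); the <code>--burst</code> flag makes the worker exit once the queue is empty, so the Job completes:</p>

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: rq-burst
spec:
  jobTargetRef:
    template:
      spec:
        restartPolicy: Never
        containers:
          - name: worker
            image: your-registry/app:latest
            # --burst lets the worker exit once the queue is drained
            command: ["rq", "worker", "--burst", "default"]
  maxReplicaCount: 10
  triggers:
    - type: redis
      metadata:
        address: redis:6379
        listName: default
        listLength: "20"
```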
<hr>
<h2>Code Examples for Tasks</h2>
<p>So you can see the difference in the Django code as well:</p>
<h3>Celery</h3>
<pre><code class="language-python">from celery import shared_task

@shared_task
def send_email(user_id):
    # classic Celery task
    ...
</code></pre>
<h3>Django RQ</h3>
<pre><code class="language-python">import django_rq

def send_email(user_id):
    ...

# Add a task to the queue
queue = django_rq.get_queue('default')
queue.enqueue(send_email, user.id)
</code></pre>
<h3>Dramatiq</h3>
<pre><code class="language-python">import dramatiq

@dramatiq.actor
def send_email(user_id):
    ...

# Dispatch the task
send_email.send(user.id)
</code></pre>
<h3>Huey</h3>
<pre><code class="language-python">from huey.contrib.djhuey import task

@task()
def send_email(user_id):
    ...

# Enqueue the task (calling a @task function schedules it)
send_email(user.id)
</code></pre>
<hr>
<h2>Failure Handling in Kubernetes Clusters</h2>
<p>Sooner or later, it happens: a worker goes down or the broker fails. To ensure your tasks continue to run smoothly, you need some safeguards. Here's how you can proceed:</p>
<h3>When a Worker Crashes</h3>
<p>Kubernetes automatically restores your pods with <code>restartPolicy: Always</code>. However, it's important that jobs are only removed from the queue once they are truly processed. Otherwise, tasks will be lost.</p>
<p>All four tools (Celery, RQ, Dramatiq, Huey) support built-in retries. You should actively use these.</p>
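<p>The crucial detail is <em>when</em> a job leaves the queue. The following stdlib-only sketch illustrates the "ack late" idea behind these tools (comparable to Redis' move-to-a-processing-list pattern); the data structures are simplified stand-ins for a real broker:</p>

```python
from collections import deque

def process_safely(queue, in_progress, handler):
    """Pop a job, park it in `in_progress`, and only acknowledge
    (forget) it after the handler succeeds. On failure the job is
    re-queued instead of being lost (ack-late semantics). If the
    worker process dies mid-task, the job is still visible in
    `in_progress` and can be re-queued by a reaper."""
    if not queue:
        return
    job = queue.popleft()
    in_progress.append(job)      # the job is never "nowhere"
    try:
        handler(job)
    except Exception:
        queue.append(job)        # keep it for a retry
        raise
    finally:
        in_progress.remove(job)  # ack: processing attempt is over
```

With ack-early semantics, the pop would be the acknowledgment, and a crash between pop and completion would silently drop the job.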
<p>An example of an RQ worker deployment:</p>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: rq-worker
spec:
  replicas: 2
  selector: { matchLabels: { app: rq } }
  template:
    metadata: { labels: { app: rq } }
    spec:
      restartPolicy: Always
      containers:
        - name: worker
          image: your-registry/app:latest
          command: ["rq", "worker", "default"]
</code></pre>
<p>If a pod crashes, Kubernetes simply restarts it. The open jobs remain in Redis and are picked up by the next worker.</p>
<hr>
<h3>When the Broker Fails</h3>
<p>Nothing runs without Redis or RabbitMQ. That's why you should operate these components in a highly available manner.</p>
<h4>Redis as a StatefulSet</h4>
<p>A deployment alone is not enough if data needs to be stored permanently. A StatefulSet ensures that Redis pods have stable names and their own volume. This keeps the queue intact even after restarts.</p>
<p>In short: StatefulSets give pods a fixed identity and persistent storage.</p>
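<p>A minimal Redis StatefulSet might look like this (a sketch, not a production-grade HA setup; AOF persistence is enabled so queued jobs survive a restart):</p>

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis
  replicas: 1
  selector: { matchLabels: { app: redis } }
  template:
    metadata: { labels: { app: redis } }
    spec:
      containers:
        - name: redis
          image: redis:7
          # append-only file: persist the queue to the volume
          args: ["--appendonly", "yes"]
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata: { name: data }
      spec:
        accessModes: ["ReadWriteOnce"]
        resources: { requests: { storage: 1Gi } }
```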
<h4>RabbitMQ with Replicated Queues</h4>
<p>RabbitMQ queues are normally tied to a single node. If that node or pod fails, the queue is gone. With Quorum Queues, which are replicated and fault-tolerant queues based on the Raft protocol, you are in a safer position. The RabbitMQ Kubernetes Operator makes it easier for you to set up a highly available cluster.</p>
<p>The advantage is clear: if one pod fails, another takes over without your workers noticing much.</p>
<hr>
<h3>Kubernetes Tools You Shouldn't Forget</h3>
<ul>
<li><strong>PodDisruptionBudget (PDB):</strong> limits how many pods may be evicted at the same time during voluntary disruptions such as node drains and updates.</li>
<li><strong>Readiness Probes:</strong> ensure that only healthy pods process jobs.</li>
<li><strong>Graceful Shutdown:</strong> gives your workers time to finish tasks upon receiving a SIGTERM signal.</li>
<li><strong>Backoff and Retries:</strong> allow workers to automatically reconnect if the broker is temporarily unavailable.</li>
</ul>
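<p>A PodDisruptionBudget for the RQ worker from the examples above could look like this (a sketch; the label and threshold depend on your setup):</p>

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: rq-worker-pdb
spec:
  # keep at least one worker running during node drains and upgrades
  minAvailable: 1
  selector:
    matchLabels:
      app: rq
```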
<hr>
<p>With this setup, you are well-prepared for the most common sources of failure in the cluster, whether a worker or a broker fails.</p>
<h2>Decision Guide</h2>
<ul>
<li><strong>Simple and fast with Redis:</strong> Django RQ or Huey. Couple it with the KEDA Redis List Scaler.</li>
<li><strong>Robust with a flexible choice of broker:</strong> Dramatiq. Use Redis or RabbitMQ and connect KEDA accordingly.</li>
<li><strong>Large setup with existing know-how:</strong> Celery with ScaledObjects for each queue and optionally Beat or django-celery-beat for periodic tasks.</li>
</ul>
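<p>For the Celery route, periodic tasks are declared in a beat schedule. With Celery's Django integration (settings namespace <code>CELERY</code>), a sketch in <code>settings.py</code> might look like this; the task path is a hypothetical example:</p>

```python
# Sketch of a beat schedule in Django settings (Celery namespace "CELERY").
# "myapp.tasks.send_report" is a hypothetical task path.
CELERY_BEAT_SCHEDULE = {
    "send-report-every-hour": {
        "task": "myapp.tasks.send_report",
        # interval in seconds; celery.schedules.crontab allows cron syntax
        "schedule": 60 * 60.0,
    },
}
```

With django-celery-beat, the same schedules can instead be maintained in the Django admin and stored in the database.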
<p>Migrations are possible: many task definitions can be ported with little effort, even if retry/ACK mechanisms are implemented differently.</p>
<hr>
<h2>FAQ</h2>
<h3>1. What is KEDA and what do I use it for?</h3>
<p>KEDA scales workloads based on external events like queue length and enables scale-to-zero. It complements the Horizontal Pod Autoscaler.</p>
<h3>2. Can I autoscale all four tools with KEDA?</h3>
<p>Yes. Via the Redis Lists Scaler or the RabbitMQ Queue Scaler, depending on the broker.</p>
<h3>3. How do I choose the right broker?</h3>
<p>Redis is quick to set up and is often sufficient for web projects. RabbitMQ is worthwhile for complex routing or if you have existing AMQP experience.</p>
<h3>4. Do I need a separate scheduler for Celery?</h3>
<p>For periodic tasks, Celery uses the Beat scheduler. With <a href="https://github.com/celery/django-celery-beat">django-celery-beat</a>, you can maintain schedules in the Django admin.</p>
<h3>5. How do I start Dramatiq cleanly in Django?</h3>
<p>Via <code>python manage.py rundramatiq</code> from <code>django_dramatiq</code>. This command is intended for Django integration.</p>
<h3>6. Are there dashboards for Django RQ?</h3>
<p>Yes. There is <a href="https://github.com/rq/rq-dashboard">rq-dashboard</a> as a standalone, as well as integrations for the Django admin.</p>]]></content:encoded>
            <category>Django</category>
            <category>Kubernetes</category>
            <category>Development</category>
            <category>Operations</category>
            <enclosure url="https://blueshoe.io/img/blog/celery-alt.svg" length="0" type="image/svg"/>
        </item>
        <item>
            <title><![CDATA[Artificial Intelligence Leverages Database and API]]></title>
            <link>https://blueshoe.io/blog/artificial-intelligence-database-api</link>
            <guid>https://blueshoe.io/blog/artificial-intelligence-database-api</guid>
            <pubDate>Wed, 04 Dec 2024 00:00:00 GMT</pubDate>
<content:encoded><![CDATA[<p>Artificial Intelligence is being discussed everywhere, and we at Blueshoe are no exception. In this blog post, we build a simple yet impressive showcase with Google's Vertex AI and the Python package SQLModel, demonstrating how databases and APIs can be integrated with AI models. The goal: pose questions to our database in natural language and receive answers.</p>
<p><img src="/img/blog/vertex-ai.svg" alt="Vertex AI and SQLModel">{.object-cover .max-w-full .mb-5}</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Vertex AI and SQLModel
:::
:::GlobalParagraph
The Vertex AI platform is a Google Cloud product that lets developers integrate generative AI into their applications. The possibilities are numerous: input and output can be text, audio, or video, and various AI models are available. For this blog post, we limit ourselves to text as the input and output medium. Vertex AI tutorials show how easy it can be to integrate it into projects.
:::</p>
<p>:::GlobalParagraph
SQLModel is a Python package based on Pydantic (a validation library) that makes it possible to store Python data models in a database and read them back. It is essentially an ORM (object-relational mapping) built on Pydantic, which makes it an excellent fit for an AI-backed database and makes combining artificial intelligence with Python particularly easy.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Data Model and Data Generation
:::</p>
<p>:::GlobalParagraph
Consider the following scenario:
We run a bookstore and maintain a database of books. For each book, we know how many copies are in stock, and each book has a known author.
:::</p>
<p>:::GlobalParagraph
The data model looks like this:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-python">from datetime import date
from typing import Optional

from sqlmodel import Field, SQLModel


class Author(SQLModel, table=True):
   id: Optional[int] = Field(default=None, primary_key=True)
   first_name: str
   last_name: str
   birthday: date

class Book(SQLModel, table=True):
   id: Optional[int] = Field(default=None, primary_key=True)
   name: str
   author_id: int = Field(default=None, foreign_key="author.id")
   num_in_stock: int = 0
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph{.mb-4}
Books and authors are connected through a classic foreign key relationship. With SQLModel, queries to the database can now be easily formulated to build an AI API - e.g., selecting all authors with a specific first and last name:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-python"># select Author with matching first_name and last_name
statement = select(Author).where(Author.first_name == first_name, Author.last_name == last_name)
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
So far, so good. Using Polyfactory, we now generate some test data:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-python">class AuthorFactory(ModelFactory[Author]):
   __model__ = Author
   __faker__ = Faker(locale="de_DE")
   id = Use(lambda: None)

   @classmethod
   def first_name(cls) -> str:
       return cls.__faker__.first_name()

   @classmethod
   def last_name(cls) -> str:
       return cls.__faker__.last_name()

class BookFactory(ModelFactory[Book]):
   __model__ = Book
   __faker__ = Faker(locale="de_DE")
   id = Use(lambda: None)

   @classmethod
   def name(cls) -> str:
       return cls.__faker__.catch_phrase()

def create_authors():
   for i in range(0, 1000):
      author = AuthorFactory.build()
      with Session(engine) as session:
         session.add(author)
         session.commit()
         for _ in range(randrange(1, 11)):  # 1-10 books per author
               create_book(author_id=author.id)

def create_book(author_id):
   book = BookFactory.build(author_id=author_id)
   with Session(engine) as session:
       session.add(book)
       session.commit()

# init sqlite
engine = create_engine("sqlite:///database.db")
SQLModel.metadata.create_all(engine)
# call create fcnt
create_authors()
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
And we already have 1000 different authors, each with between 1 and 10 books. Each book is assigned an arbitrary stock quantity (num_in_stock).
:::</p>
<p>::GlobalCallAnExpert{:bg="bg-bs-blue"}
#buttonBlockTitle
We can also build AI solutions for you.
::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Connection with Vertex AI
:::</p>
<p>:::GlobalParagraph
First, we need to define what functionality our agent should have access to. We assume the agent should be able to retrieve the books of a specific author and look up the number of copies in stock for a specific book.
:::</p>
<p>:::GlobalParagraph
We add the following helper functions:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-python">from utils import (
   get_book_id_by_title,
   get_books_for_author,
   get_num_in_stock_for_book,
)
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
Vertex AI now needs information about these functions: what they do and which input parameters they expect. This is done using FunctionDeclarations:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-python">get_authors_for_book_func = FunctionDeclaration(
   name=GET_BOOK_BY_AUTHOR,
   description="Get a list of book names for an author.",
   parameters={
       "type": "object",
       "properties": {
           "first_name": {
               "type": "string",
               "description": "The first name of the author",
           },
           "last_name": {
               "type": "string",
               "description": "The last name of the author",
           },
       },
   },
)

</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
The function itself looks like the following:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-python">def get_books_for_author(first_name: str, last_name: str):
    with Session(engine) as session:
        statement = select(Author).where(Author.first_name == first_name, Author.last_name == last_name)
        author = session.exec(statement).first()
        if author is None:
            return []  # unknown author: avoid an AttributeError
        statement = select(Book).where(Book.author_id == author.id)
        books = session.exec(statement).all()
        return [book.name for book in books]
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
The functions are combined into a "Tool" and made available to the model:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-python">tools = Tool(
   function_declarations=[
       get_authors_for_book_func,
       get_book_id_by_title_func,
       get_num_in_stock_for_book_func,
   ],
)

model = GenerativeModel(
   model_name="gemini-1.5-pro-002",
   generation_config=GenerationConfig(temperature=0),
   tools=[tools],
)
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
The model can now classify a user's input and decide which of the declared functions should be called. We act on that decision as follows:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-python">if function_calls:
   api_responses = []
   for func in function_calls:
      if func.name == GET_BOOK_BY_AUTHOR:
         api_responses.append(
            {
               "name": func.name,
               "content": get_books_for_author(
                  first_name=func.args["first_name"],
                  last_name=func.args["last_name"],
               ),
            }
         )
      elif func.name == GET_BOOK_ID_BY_TITLE:
         api_responses.append(
            {
               "name": func.name,
               "content": get_book_id_by_title(
                  title=func.args["title"],
               ),
            }
         )
      elif func.name == GET_NUM_IN_STOCK_FOR_BOOK:
         api_responses.append(
            {
               "name": func.name,
               "content": get_num_in_stock_for_book(
                  title=func.args["title"],
               ),
            }
         )

   # Return the API response to Gemini
   for api_response in api_responses:
         name = api_response.pop("name")
         response = chat_session.send_message(
             [
                 Part.from_function_response(
                     name=name,
                     response=api_response,
                 )
             ]
         )
         print(response.text)

</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
The model thus extracts the parameters from the user's input. These can then be used in the function call.
:::</p>
<p>:::GlobalParagraph
Now the chat can start!
:::</p>
<p>:::GlobalParagraph{.italic .mb-[-20px]}
<strong>Which books are written by Guenther Hendriks?</strong>
:::</p>
<p>:::GlobalParagraph{.italic .mb-2}
Guenther Hendriks has written the following books: Digitized optimal circuit, Future-proofed content-based groupware, Universal clear-thinking Local Area Network, Digitized scalable service-desk, and Innovative full-range protocol.
:::</p>
<p>:::GlobalParagraph{.italic .mb-[-20px]}
<strong>How many copies of the first of those books are available?</strong>
:::</p>
<p>:::GlobalParagraph{.italic .mb-8}
There are 2089 copies of "Digitized optimal circuit" in stock.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Conclusion
:::</p>
<p>:::GlobalParagraph
The Vertex AI API generally works well. At the moment, a response always takes a few seconds, which, depending on the use case, may be too long. The interface is intuitive and easy to use. For some queries, Vertex AI returns no response at all, which must then be handled.
:::</p>
<p>:::GlobalParagraph
Vertex AI makes it possible to expose existing APIs and databases via natural language, and the results are impressive and fast, although the approach still needs some refinement. It enables efficient use of the AI database and integration into a wide range of systems. Especially for developers who want to program AI with Python, this is an exciting solution.
:::</p>]]></content:encoded>
            <category>API</category>
            <category>AI</category>
            <category>Digitalization</category>
            <category>Development</category>
            <enclosure url="https://blueshoe.io/img/blog/vertex-ai.svg" length="0" type="image/svg"/>
        </item>
        <item>
            <title><![CDATA[Cost Optimization of an Azure Kubernetes Cluster]]></title>
            <link>https://blueshoe.io/blog/azure-kubernetes-cost-optimization</link>
            <guid>https://blueshoe.io/blog/azure-kubernetes-cost-optimization</guid>
            <pubDate>Wed, 26 Feb 2025 00:00:00 GMT</pubDate>
<content:encoded><![CDATA[<p>Cloud resources are powerful and practical, but expensive - especially Kubernetes clusters. In this blog post, we show how we successfully implemented Azure Kubernetes Service cost optimization in an existing project. We present strategies, tools, and best practices that helped reduce AKS costs without compromising performance.</p>
<p><img src="/img/blog/aks.svg" alt="Cost Optimization of an Azure Kubernetes Cluster">{.object-cover .max-w-full .mb-5}</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Starting Point
:::
:::GlobalParagraph
The project is fully operated in the Azure Cloud and uses two AKS Kubernetes clusters: a Staging and a Production system. These provision cloud resources such as virtual machines, disks, load balancers, and IP addresses.
:::</p>
<p>:::GlobalParagraph
Additionally, various Azure services are used, including a Managed Database, Storage for Media Data, and a Redis instance. Before optimization, costs were around 100 € per day.
:::</p>
<p><img src="/img/blog/250212_azure-prior.png" alt="Before">{.object-cover .max-w-full .mb-5}</p>
<p>:::GlobalTitle{:size="lg" .mb-4}
Analysis with Azure Cost Management
:::
:::GlobalParagraph
The first step towards Kubernetes cluster cost management was an analysis using Azure Cost Management for Kubernetes. The tool provides detailed insights into which services cost how much - a good starting point to look more closely at which services might be worthwhile.
:::</p>
<p>:::GlobalParagraph
We immediately noticed a deactivated but still existing VM. It incurred only minor costs, but it was unnecessary and could be removed.
:::</p>
<p>:::GlobalParagraph
The biggest cost drivers were the virtual machines - especially the Node Pools of both AKS clusters. A detailed look at Kubernetes resource optimization showed that the number of nodes was not optimally matched to actual needs. Azure's monitoring of CPU and memory utilization confirmed this.
:::</p>
<p>:::GlobalParagraph
Azure Log Analytics also proved to be an unexpectedly high cost point. This was configured for the Production cluster. All logs of Kubernetes resources were stored long-term without being actively used.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-4}
Implemented Measures
:::
:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Optimize Node Pools
:::</p>
<p>:::GlobalParagraph
Based on these insights, we implemented a series of targeted optimization measures.
:::</p>
<p>:::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li>The <strong>Staging Cluster</strong> was reduced from 3 to 1-3 Nodes - with Auto-Scaling. Typically, only 2 Nodes are now running.</li>
<li>The <strong>Production Cluster</strong> now uses two Node Pools: System Mode and User Mode.</li>
</ul>
<p>:::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Adjust Virtual Machines
::
::GlobalParagraph
A VM that was too large for the Managed Database was downsized. Thanks to Azure Cost Management for Kubernetes, it was clear that the existing computing power was oversized. The change was seamless and can be reverted at any time if demand increases.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Deactivate Log Analytics
::
::GlobalParagraph
The high costs of Azure Log Analytics were disproportionate to the actual usage. Deactivation led to significant savings that only became fully noticeable after a few weeks, as logs were stored for up to 90 days.
::</p>
<p><img src="/img/blog/250212_log-analytics.png" alt="Deactivate Log Analytics">{.object-cover .max-w-full .mb-5}</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Remove Unnecessary Resources
::
::GlobalParagraph
An unused VM of an old GitLab Runner was deleted. Although it only caused about 10€ per month, such "forgotten" resources accumulate over time.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Optimize Operating Times with KEDA
::
::GlobalParagraph
Both AKS clusters are only used during working hours. Using <a href="/blog/kubernetes-autoscaling-keda/">KEDA Auto-Scaling</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}, operating times are defined so that applications outside these times are automatically scaled down. This additionally reduces AKS costs, as fewer nodes are active.
::</p>
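<p>::GlobalParagraph
Such operating times can be expressed with KEDA's cron scaler. A hedged sketch (the Deployment name, timezone, and hours are placeholders):
::</p>

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: workday-hours
spec:
  scaleTargetRef:
    name: web-app            # hypothetical Deployment name
  minReplicaCount: 0         # scale to zero outside working hours
  maxReplicaCount: 3
  triggers:
    - type: cron
      metadata:
        timezone: Europe/Berlin
        start: 0 7 * * 1-5   # up at 07:00, Mon-Fri
        end: 0 19 * * 1-5    # down at 19:00, Mon-Fri
        desiredReplicas: "2"
```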
<p>::GlobalTitle{:size="lg" .mb-4}
Result: Over 50% Cost Reduction
::
::GlobalParagraph
It took some time until the full extent of the cost savings became visible. But the optimization measures paid off. Operating costs dropped from around 100€ to <strong>45€ per day</strong> - a saving of more than 50%!
::</p>
<p>::GlobalParagraph
Some measures could have been implemented earlier. Others, like the Kubernetes Cluster Cost Management through Operating Time Optimization, required more planning but will pay off long-term.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Lessons Learned
::
::GlobalBlock{.mb-4}
✔ Regular cost analysis is worthwhile - without it, we would not have discovered unnecessary resources.<br>
✔ Consistently use Auto-Scaling - fixed node sizes are often inefficient.<br>
✔ Question logging - not every log service is necessary or economical.<br>
::</p>
<p>::GlobalParagraph
<strong>Want to learn more about Kubernetes &#x26; Cost Optimization?</strong>
::
:::GlobalButton{:url="/our-services/cloud-native-consulting/" :label="Learn more about our Kubernetes Development Services" :color="blue" .mb-6}
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5 .mt-8}
Frequently Asked Questions
:::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
1. How can I reduce the costs of my Azure Kubernetes Cluster?
::
::GlobalParagraph
You can reduce AKS costs by activating Auto-Scaling, removing unused resources, optimizing log services, and adjusting operating times with <a href="/blog/kubernetes-autoscaling-keda/">KEDA</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
2. Which tools help with Azure Kubernetes Cost Optimization?
::
::GlobalParagraph
Azure Cost Management for Kubernetes offers detailed analyses. Additionally, the Kubernetes Autoscaler and KEDA help with dynamically scaling nodes.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
3. Why is Auto-Scaling important for cost optimization in AKS?
::
::GlobalParagraph
Auto-Scaling adjusts the number of nodes to actual needs and prevents over-provisioning - an effective measure for Kubernetes resource optimization.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
4. How does deactivating Azure Log Analytics affect costs?
::
::GlobalParagraph
Log Analytics can cause high running costs. If you do not actively use the logs, deactivation can lead to significant savings.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
5. Is there a way to reduce Kubernetes Clusters outside working hours?
::
::GlobalParagraph
Yes! With <a href="/blog/kubernetes-autoscaling-keda/">KEDA Auto-Scaling</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} you can automatically scale down your clusters outside business hours and save costs.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
6. How often should I review my Azure Kubernetes Cost Management?
::
::GlobalParagraph
A monthly analysis with Azure Cost Management helps identify inefficient resources early.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
7. Which Node-Pool strategy is the most cost-efficient for AKS?
::</p>
<p>::GlobalParagraph
A combination of System-Mode and User-Mode pools with Auto-Scaling is ideal. System-Mode should remain stable, while User-Mode dynamically scales.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
8. When does it make sense to use Spot Instances in AKS?
::</p>
<p>::GlobalParagraph
Spot Instances are cost-effective, but not always available. They are suitable for non-critical workloads with flexible execution times.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
9. Can I save costs by switching to another cloud provider?
::</p>
<p>::GlobalParagraph
Yes, Azure is not always the most cost-effective option. It's worth checking alternatives like Linode or Hetzner if your requirements allow.
::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Operations</category>
            <enclosure url="https://blueshoe.io/img/blog/aks.svg" length="0" type="image/svg"/>
        </item>
        <item>
            <title><![CDATA[The End of an Era for Developers: Bitnami Discontinues Free Container Images for the Most Part]]></title>
            <link>https://blueshoe.io/blog/bitnami-and-alternatives</link>
            <guid>https://blueshoe.io/blog/bitnami-and-alternatives</guid>
            <pubDate>Wed, 24 Sep 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Bitnami, one of the most popular platforms for pre-configured application containers and images, is largely discontinuing its previously free offering.</p>
<p>The cut-off date for the transition is September 29, 2025. This move, which comes under the new ownership of Broadcom, is forcing thousands of developers, DevOps teams, and companies to act. In this post, we analyze the background, highlight the consequences, and provide you with a clear roadmap for the upcoming migration.</p>
<p><img src="/img/blog/bitnami-alternatives.svg" alt="Bitnami blurred, dollar notes, Blueshoe">{.object-cover .max-w-full .mb-5}</p>
<h2>1. Introduction: The Bombshell in the DevOps World</h2>
<p>For years, Bitnami was the shortcut to production-ready stacks: a huge portfolio of images for popular applications like WordPress, PostgreSQL, Redis, NGINX, Keycloak, or Kafka - pre-configured, well-documented, and, until now, largely free. This mix of easy installation, a wide selection, and zero entry costs made Bitnami the de facto standard in many teams.</p>
<p>With the announcement of the transition on September 29, 2025, it is now clear: <strong>"Bitnami no longer free"</strong> is more than just a headline - it's a paradigm shift. Under Broadcom's direction, the product strategy is shifting more towards paying enterprise customers and the new "Bitnami Secure Images" offering. For the majority of the existing, versioned images, this means the end of regular, freely accessible updates.</p>
<h2>2. Analysis of the Transition: What "No Longer Free" Actually Means</h2>
<h3>The "Legacy" Repository</h3>
<ul>
<li>Versioned images will be moved to a legacy or archive repository and will no longer be updated.</li>
<li>This affects virtually the entire range of tags - with the exception of a very limited selection of <code>:latest</code> images.</li>
<li>Security risk: Without patches, vulnerabilities (CVEs) accumulate, compliance requirements are violated, and audits become more difficult.</li>
</ul>
<h3>The New "Bitnami Secure Images" Subscription</h3>
<ul>
<li>Value proposition: continuous security updates, compatibility tests, SLA-driven support.</li>
<li>Pricing: Publicly mentioned figures range from approximately $50,000 - $72,000 per year, depending on the package. For many startups, smaller companies, or non-profit teams, this is hardly affordable.</li>
</ul>
<h3>What Remains Free?</h3>
<ul>
<li>A severely limited selection of <code>:latest</code> images.</li>
<li>Why <code>:latest</code> is problematic in production: no version pinning, lack of reproducibility, difficult-to-track drift behavior - a <strong>no-go</strong> for stable CI/CD and deterministic releases.</li>
</ul>
<h2>3. The Consequences for Users: From Brittle Pipelines to Security Holes</h2>
<ul>
<li><strong>Broken Builds</strong>: CI/CD pipelines that reference fixed versions will fail when images disappear or are no longer updated. Build caching, SBOM generation, and vulnerability scans become unreliable.</li>
<li><strong>Security Nightmare</strong>: Unpatched vulnerabilities in production systems increase the risk of incidents, data breaches, and compliance violations (e.g., ISO 27001, SOC 2).</li>
<li><strong>Loss of Stability</strong>: Without version pinning, rollbacks, reproducible deployments, and deterministic tests are nearly impossible. Drift between staging and production increases.</li>
</ul>
<h3>Case Study: The Unprepared Team</h3>
<p>A medium-sized SaaS company uses <code>bitnami/postgresql:15.6</code> and <code>bitnami/redis:7.2</code> in dozens of pipelines. After the transition, the versioned tags are no longer available or receive no patches. Builds break, SRE teams have to document security exceptions, and customer audits raise critical questions about the patch policy. The team improvises with <code>:latest</code> - which helps in the short term but introduces new instabilities and hard-to-reproduce errors. The cost: weeks of engineering time and an increased security risk.</p>
<h2>4. Time to Act: Your Roadmap for Migration</h2>
<h3>Step 1: Analyze Your Own Infrastructure</h3>
<ul>
<li>Take inventory of all Bitnami images in use (Container Registry, SBOMs, Deployment Manifests).</li>
<li>Search for <code>bitnami/*</code> references in <code>Dockerfile</code>, <code>docker-compose.yml</code>, Helm charts, and Kustomize overlays.</li>
<li>Create a prioritized list based on criticality (Production > Staging > Development) and exposure.</li>
</ul>
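<p>To kick off the inventory, here is a minimal sketch of such a scan (the file filter, the regex, and the <code>find_bitnami_refs</code> helper are illustrative assumptions, not a complete scanner):</p>

```python
import re
from pathlib import Path

# Matches references like "bitnami/postgresql:15.6" or "bitnami/redis".
BITNAMI_REF = re.compile(r"\bbitnami/[\w.-]+(?::[\w.-]+)?")

def find_bitnami_refs(root: str) -> dict:
    """Map each manifest file (relative path) to the Bitnami references it contains."""
    hits = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        # Only look at Dockerfiles, compose files, Helm/Kustomize YAML, and templates.
        if path.suffix not in {".yml", ".yaml", ".tpl"} and path.name != "Dockerfile":
            continue
        refs = BITNAMI_REF.findall(path.read_text(encoding="utf-8", errors="ignore"))
        if refs:
            hits[str(path.relative_to(root))] = sorted(set(refs))
    return hits
```

<p>Running this over a repository checkout yields a file-by-file list that can then be prioritized by environment and exposure, as described above.</p>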
<h3>Step 2: Short-Term Transitional Solutions (Emergency Plan)</h3>
<ul>
<li>Switching to the <code>bitnamilegacy</code> repository can save builds in the short term - but it's only a temporary bridge, as security updates are missing.</li>
<li>In parallel: tighten security gates (scanners, Policy as Code), establish clear exception rules with expiration dates.</li>
</ul>
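<p>A sketch of what an exception rule with an expiration date might look like in practice (the <code>SecurityException</code> shape and its field names are assumptions for illustration, not a real policy engine):</p>

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SecurityException:
    image: str      # the unpatched image the exception covers
    reason: str     # why the exception was granted
    expires: date   # hard expiration date - no open-ended exceptions

def expired_exceptions(exceptions, today):
    """Return all exceptions whose grace period has run out."""
    return [e for e in exceptions if e.expires < today]
```

<p>A CI gate can then fail the build whenever <code>expired_exceptions(...)</code> is non-empty, so temporary bridges like <code>bitnamilegacy</code> cannot silently become permanent.</p>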
<h3>Step 3: Long-Term Strategies and Alternatives</h3>
<ul>
<li>Define the target state: Official images, verified providers, or your own hardening pipeline?</li>
<li>Adapt Infrastructure-as-Code (IaC) and CI/CD to make version pinning, SBOM generation, signatures (Sigstore/Cosign), and regular rebuilds the standard.</li>
</ul>
<h2>5. The Best Alternatives to Bitnami</h2>
<h3>Official Docker Images (Recommended Where Available)</h3>
<ul>
<li>Advantages: Proximity to upstream, fast security updates, clear roadmaps, large community.</li>
<li>Examples: <a href="https://hub.docker.com/search?image_filter=official">Docker Hub - Official Images</a>, <a href="https://hub.docker.com/_/postgres">PostgreSQL</a>, <a href="https://hub.docker.com/_/nginx">NGINX</a>, <a href="https://hub.docker.com/_/redis">Redis</a>.</li>
</ul>
<h3>Verified Providers</h3>
<ul>
<li><strong>Chainguard</strong>: Focus on "<a href="https://github.com/wolfi-dev">Wolfi</a>"-based, minimalist, signed, and frequently updated images. Link: <a href="https://www.chainguard.dev/">https://www.chainguard.dev/</a></li>
<li><strong>Iron Bank (DoD/Platform One)</strong>: Hardened, audited images with a compliance focus. Link: <a href="https://repo1.dso.mil/">https://repo1.dso.mil/</a></li>
<li>Evaluate other enterprise offerings depending on the technology stack and compliance requirements.</li>
</ul>
<h3>The "Do-It-Yourself" Approach: Building Your Own Images</h3>
<ul>
<li>Advantages: Full control, reproducible builds, maximum transparency (SBOM, signatures), coordinated patch management.</li>
<li>Disadvantages: Higher initial effort, ongoing maintenance, and security backporting.</li>
<li>Practical Tips:
<ul>
<li>Multi-stage builds, minimizing base images (Distroless, Wolfi, Alpine with caution), rootless containers.</li>
<li>Automated security scans (Trivy, Grype), signing with Cosign, policy enforcement (OPA/Gatekeeper, Kyverno).</li>
</ul>
</li>
</ul>
<h3>Community Helm Charts and Initiatives</h3>
<ul>
<li>Keep an eye on community projects that emerge in response to the Bitnami changes (forks, maintained charts, operators).</li>
<li>Check maintainer activity, security policy, update frequency, and migration paths.</li>
</ul>
<h2>6. Conclusion: An Opportunity for More Resilience</h2>
<p>The message "<strong>Bitnami no longer free</strong>" is a wake-up call. Relying too heavily on a single provider creates dependencies with real security and stability consequences. Use this transition to modernize your container strategy: prefer official images, establish security-by-default, consistently enforce version pinning and signatures, and strengthen your own build pipelines.</p>
<p>Start the transition now, prioritize critical workloads, and document your security measures. This will increase the resilience of your platform - regardless of short-term market changes.</p>
<hr>
<h2>Frequently Asked Questions (FAQs)</h2>
<h3>1. How much does Bitnami cost now?</h3>
<p>With "Bitnami Secure Images," Bitnami is targeting enterprise customers. The prices mentioned publicly range roughly between $50,000 and $72,000 annually, depending on the package. Specific conditions vary based on scope, support level, and contract duration.</p>
<h3>2. What are the alternatives to Bitnami?</h3>
<ul>
<li>Official Docker images, where available.</li>
<li>Verified providers like Chainguard or Iron Bank.</li>
<li>Custom, hardened images with automated pipelines, signatures, and SBOMs.</li>
</ul>
<h3>3. Can I continue to use Bitnami images after August 2025?</h3>
<p>In the short term, yes - often via legacy repositories or <code>:latest</code>. For production, this is not recommended due to a lack of patches and reproducibility. Plan a migration to alternative, maintained images.</p>
<hr>
<p>What are your experiences with the Bitnami transition? Which <strong>Bitnami alternatives</strong> do you recommend, and how are you addressing the <strong>Bitnami prices</strong> in your budget? Share your approaches, lessons learned, and tool recommendations in the comments.</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Docker</category>
            <category>Security</category>
            <category>Operations</category>
            <enclosure url="https://blueshoe.io/img/blog/bitnami-alternatives.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[API Strategy Essentials: How to Choose the Right Technology for Your Application]]></title>
            <link>https://blueshoe.io/blog/choose-right-api-technology</link>
            <guid>https://blueshoe.io/blog/choose-right-api-technology</guid>
            <pubDate>Wed, 15 May 2024 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Can you imagine a world without APIs? A world without the ease and reliability of an OpenAPI schema? Yeah, me neither - and yet only a few years ago developers had to come up with ingenious ways to let systems talk to each other.</p>
<p><img src="/img/blog/api-technologies-rest-graphql.svg" alt="API Strategy Essentials: How to Choose the Right Technology for Your Application">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="lg" .mb-5}
The Significance of APIs in Modern Software Engineering
:::
:::globalParagraph
There is a very good article "The land before modern APIs" by Darius Kazemi about the early days of ARPANET in 1970 and how decision-making back then is still influencing software development today. One of those decisions was the specification for error and response codes for the "RJE protocol" resulting in the HTTP status codes we all know and "love" today.
:::
:::globalParagraph
But why are these early decisions so important and the modern API design so significant for our daily work?
:::
:::globalParagraph
Well, the path paved by developers in 1970 led to the development of web APIs in the early 2000s. Kin Lane's article "Intro to APIs: History of APIs" sums up the history of APIs very well. In short, APIs went from commercial use cases via the socialization of the internet to powering next-generation devices. Heck, there is even an "API hub" for APIs! This all shows that APIs play a big role in modern software engineering.
:::
:::globalParagraph
Although we love our APIs, there is still an ongoing debate on how an API should work under the hood. Before 2015 you would usually go for a REST-based API design. But then Facebook open-sourced the GraphQL language for API development and gave us a new way of interacting with our data.
:::
:::globalParagraph
In this post, I want to give a small overview of REST and GraphQL, compare those two, and finally mention some other approaches to API design.
:::
:::globalParagraph{.mb-5}
So shall we start?
:::</p>
<p>:::GlobalButton{:url="/our-services/api-development-company/" :label="Learn more about our API development services" :color="blue" .mb-6}
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Understanding the Fundamentals and Strengths of REST APIs
:::
:::globalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Characteristics of REST APIs
:::
:::globalParagraph
REST <a href="/our-services/api-development-company/">APIs</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} feature two fundamental characteristics that underpin their design: statelessness and the use of HTTP methods. Firstly, REST APIs implement the stateless paradigm, meaning each request from a client to the server must contain all necessary information to understand and fulfill it, without relying on any previous interactions. This simplifies server implementation and scalability, as it eliminates the need to maintain session state on the server. Secondly, REST APIs leverage the HTTP protocol's versatile methods, such as GET, POST, PUT, and DELETE, to perform various actions on resources. These methods enable developers to create APIs that align closely with the principles of CRUD (Create, Read, Update, Delete), enhancing the clarity, predictability, and accessibility of their API endpoints.
:::
:::globalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Use cases of REST
:::
:::globalParagraph
Whenever you are planning an <a href="/our-services/api-development-company/">API</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} with a well-defined client-server contract (that means, among other things, less flexibility in server responses), REST is a very good option. Not only is REST efficient and easy to implement, it also enjoys wide support in every programming language that can work with HTTP methods.
:::
:::globalParagraph
Although REST expects predefined requests and returns fixed responses, it still offers great flexibility for designing an API. You can build anything using REST!
:::
:::globalParagraph
A simple example is an API that fetches images from http.cat and returns them to the user:
:::
::BlogCode{.mb-5}</p>
<pre><code class="language-python">from fastapi import FastAPI
from fastapi.responses import Response
import requests

app = FastAPI()

@app.get("/{status_code}", description="Get cat status code", response_class=Response)
def get_cat_status_code(status_code: int):
    # Plain "def": requests is blocking, so FastAPI runs this in a threadpool
    # instead of blocking the event loop.
    response = requests.get(f"https://http.cat/{status_code}")
    return Response(response.content, media_type="image/jpeg")
</code></pre>
<p>::
:::globalParagraph
Note that http.cat is itself an API returning an image (it’s not uncommon to call other APIs from your own API). This example simply shows what is possible with a simple REST API and how it adheres to the HTTP method model. Let’s dissect it!
:::
:::globalParagraph
In this example, I used the popular FastAPI package
for Python. It allows for easy creation of REST APIs.
:::
:::globalParagraph
<code>@app.get</code> defines an HTTP GET method on the root path with the path parameter <code>status_code</code>. In the method <code>get_cat_status_code</code> I then make a GET request to the http.cat API (using the provided status_code) and finally return the image bytes as a FastAPI Response object.
:::
:::globalParagraph{.mb-6}
This concludes this short REST example.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
GraphQL: An API “newcomer”
:::
:::globalParagraph
Although GraphQL has been around since 2015, it’s still fairly new to the API world compared to REST or even older technologies like SOAP-based APIs.
:::
:::globalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Characteristics of GraphQL
:::
:::globalParagraph
GraphQL <a href="/our-services/api-development-company/">APIs</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} possess distinctive characteristics that differentiate them from traditional REST APIs, offering a more flexible and efficient approach to data retrieval. One notable feature of GraphQL is its ability to optimize queries and minimize data transfer. Unlike REST APIs, where endpoints are fixed and return predefined data structures, GraphQL allows clients to specify exactly what data they need using a single query. This capability eliminates the over-fetching and under-fetching issues commonly encountered with REST APIs, where clients may receive more or less data than required.
:::
:::globalParagraph
Furthermore, GraphQL's type system enables clients to request nested or related data in a single query, reducing the need for multiple round trips to the server. By providing a precise and declarative syntax for data fetching, GraphQL empowers clients to fetch only the necessary data, resulting in more efficient network usage and improved performance.
:::
:::globalParagraph{.mb-5}
Additionally, GraphQL supports features like query batching and caching, further optimizing data transfer and reducing latency for client applications. Overall, these query optimization capabilities enhance the efficiency and responsiveness of API interactions, making GraphQL a great choice for modern application development.
:::
:::globalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Use cases of GraphQL
:::
:::globalParagraph
The use case for GraphQL APIs differs a little from REST. GraphQL should be used in data-rich environments with complex requirements. It’s also worth considering when you have multiple clients with different data needs.
:::</p>
<p>:::globalParagraph
Let’s set up the same example like the REST API and get some http.cat images with GraphQL!
:::
:::globalParagraph
This is the main code for the API using FastAPI and its GraphQL integration with strawberry:
:::
::BlogCode{.mb-5}</p>
<pre><code class="language-python">import strawberry
from fastapi import FastAPI
from strawberry.asgi import GraphQL

@strawberry.type
class Cat:
    url: str

@strawberry.type
class Query:
    @strawberry.field
    def cat(self, status_code: int) -> Cat:
        return Cat(url=f"https://http.cat/{status_code}")

schema = strawberry.Schema(query=Query)

graphql_app = GraphQL(schema)

app = FastAPI()
app.add_route("/graphql", graphql_app)
app.add_websocket_route("/graphql", graphql_app)
</code></pre>
<p>::
:::globalParagraph
First we define a strawberry type called <em>Cat</em> with a single field <em>url</em>, then another one called <em>Query</em>. Query has a field called <em>cat</em> that returns a <em>Cat</em> instance. The GraphQL schema is then created with <code>strawberry.Schema</code>, passing the Query class to it.
:::
:::globalParagraph
As you can see, GraphQL cannot return the image bytes directly, so you need to handle binary data yourself. It also needs a client on the other side that can run a query against our API. Luckily, strawberry ships with a built-in server providing such a client:
<img src="/img/blog/GraphQL-strawberry.png" alt="Use cases of GraphQL">{.object-cover .max-w-full .mb-5}
:::
:::globalParagraph
On the left side, you can see the query I put together, and on the right the JSON response from the GraphQL API.
:::</p>
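<p>For reference, a query against this schema could look like the following (strawberry exposes the snake_case parameter <code>status_code</code> as camelCase by default; the status code 404 is just an example):</p>

```graphql
{
  cat(statusCode: 404) {
    url
  }
}
```

<p>which should return a JSON response along the lines of <code>{"data": {"cat": {"url": "https://http.cat/404"}}}</code>.</p>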
<p>:::globalTitle{:size="lg" .mb-5}
Comparing REST and GraphQL
:::
:::globalParagraph
You often read about REST vs GraphQL on the web. In my opinion, you can’t let those two technologies compete against each other. They are two very different approaches to interacting with data that we want to access. Thus a comparison between these two is not fair and you cannot generally say one approach is better than the other. I’d rather suggest when to use either one.
:::
:::globalParagraph
When you are in an environment where you have to satisfy the needs of multiple clients that want data that is differently shaped you should probably go with GraphQL. With its easy querying language, you can achieve fast, efficient responses for your clients and their different needs.
:::
:::globalParagraph
On the other hand, if you can be a bit more strict about server responses and want good caching of requests a REST-based API might be what you are looking for. Due to its standardization and usage of HTTP methods REST is widely applicable and usable. With its call for clear separation of concerns and stateless communication architecture it is also a very scalable solution.
:::</p>
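<p>The over-fetching point can be illustrated without any framework: a fixed REST endpoint returns the whole resource, while a GraphQL-style selection keeps only the requested fields. The user record and the <code>select</code> helper below are invented for illustration:</p>

```python
# A full resource, as a fixed REST endpoint would return it.
user = {
    "id": 1,
    "name": "Ada",
    "email": "ada@example.com",
    "address": {"city": "London", "zip": "N1"},
}

def select(data, selection):
    """Naive GraphQL-style field selection: keep only the requested keys."""
    out = {}
    for key, sub in selection.items():
        value = data[key]
        # A non-empty sub-selection means we recurse into a nested object.
        out[key] = select(value, sub) if isinstance(sub, dict) and sub else value
    return out

# The client asks only for what it needs - no over-fetching.
slim = select(user, {"name": {}, "address": {"city": {}}})
```

<p>Here <code>slim</code> contains only the name and the city, while the fixed REST response would always ship the full record over the wire.</p>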
<p>:::globalTitle{:size="lg" .mb-5}
Overview of Alternative API Technologies
:::
:::globalParagraph
In the end, I want to mention two alternative approaches to <a href="/our-services/api-development-company/">API development</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} that we also use and offer at Blueshoe.
:::
:::globalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
gRPC and its role in Microservices
:::
:::globalParagraph
gRPC (the g does not stand for Google) is an open-source remote procedure call (RPC) framework initially developed by Google. It is built on top of HTTP/2, Protocol Buffers (protobuf), and other modern technologies, offering a robust and efficient way to connect services across different environments.
:::
:::globalParagraph
gRPC offers many benefits in Microservice architectures. A few of those are:
:::
::GlobalBlock{:font-size="lg" :color="text-bs-text" .ul-disk .mb-5}</p>
<ul>
<li><strong>Efficient Serialization with Protocol Buffers</strong>: gRPC uses Protocol Buffers as its interface definition language (IDL) for defining service contracts and serializing data. Protocol Buffers offer a compact binary format and efficient serialization and deserialization processes, resulting in smaller message sizes and faster data transmission. This efficiency is particularly beneficial in microservices environments with high data throughput requirements.</li>
<li><strong>Strong Typing and Code Generation</strong>: The same Protocol Buffer technology is also used for typing and code generation. This allows for faster development while minimizing human error.</li>
<li><strong>Bi-directional Streaming and Flow Control</strong>: gRPC supports various communication patterns, including unary, server streaming, client streaming, and bidirectional streaming. This flexibility enables services to exchange data efficiently in real time, supporting use cases such as chat applications, real-time analytics, and event-driven architectures. Additionally, gRPC's built-in flow control mechanisms handle backpressure, ensuring optimal resource utilization and system stability.
::</li>
</ul>
<p>:::globalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
SOAP as the enterprise solution
:::
:::globalParagraph
SOAP (Simple Object Access Protocol), a protocol for exchanging structured information in the implementation of web services, has been a part of enterprise applications for decades. It provides a standardized way for applications to communicate over networks, facilitating interoperability between diverse systems and platforms. Despite criticisms of its verbosity and complexity compared to newer protocols like REST, SOAP continues to thrive in enterprise environments due to its reliability, extensibility, and comprehensive security capabilities.
:::</p>
<p>:::globalTitle{:color="text-bs-green" :font="font-oswald" :size="sm" :tag="h4" .mb-5}
Security Features of SOAP
:::
::GlobalBlock{:font-size="lg" :color="text-bs-text" .ul-disk .mb-5}</p>
<ul>
<li><strong>Message-level Security</strong>: SOAP supports message-level security, allowing for the encryption and signing of individual messages exchanged between client and server. This ensures data confidentiality, integrity, and authentication, mitigating the risk of eavesdropping, tampering, and unauthorized access to sensitive information. By encrypting payloads and attaching digital signatures to messages, SOAP helps enterprises maintain the confidentiality and integrity of their data, even in transit across untrusted networks.</li>
<li><strong>WS-Security Standard</strong>: SOAP integrates seamlessly with the WS-Security standard, a widely adopted specification for securing web services. WS-Security provides a framework for incorporating various security mechanisms, such as encryption, digital signatures, and authentication tokens, into SOAP messages. This standardization ensures interoperability and compatibility between different implementations, allowing enterprises to leverage a broad ecosystem of security tools and solutions to protect their SOAP-based services.
::</li>
</ul>
<p>:::globalTitle{:size="md" .mb-5}
Recap
:::
:::globalParagraph
As you can see, there are many ways to implement an <a href="/our-services/api-development-company/">API</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}, and choosing the right technology can be tricky. I hope this short overview gives you a hint about when to use which approach to build your API. And if you ever need a tailored solution, hit us up!
:::</p>
<p>:::GlobalPodcastSection{:videoId="tyvE9VlSWkE" :videoPosition="left" .mb-6}
::::GlobalPreTitle{:color="text-bs-green" .mb-3}
Our Kubernetes podcast
::::
::::GlobalTitle{:tag="h3" .mb-6}
Tools for the Craft: Navigating the Kubernetes ecosystem
::::
::::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert are talking in depth about the ins and outs of local Kubernetes development and also provide some real coding examples.
::::
::::globalParagraph{:font-size="lg" }
More editions of our podcast can be found here:
::::
::::GlobalButton{:url="/podcast/" :label="Show more" :color="green"}
::::
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>API</category>
            <category>Python</category>
            <category>Development</category>
            <enclosure url="https://blueshoe.io/img/blog/api-technologies-rest-graphql.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Using Cilium for Kubernetes networking and observability]]></title>
            <link>https://blueshoe.io/blog/cilium-kubernetes-networking-observability</link>
            <guid>https://blueshoe.io/blog/cilium-kubernetes-networking-observability</guid>
            <pubDate>Wed, 19 Apr 2023 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>In this article, we'll provide an introduction to Cilium, a networking solution for Kubernetes that uses eBPF for high-performance networking, security, and observability tasks. We cover the installation of Cilium, network policy configuration, and using Hubble for observability.</p>
<p><img src="/img/blog/cilium-kubernetes.jpg" alt="cilium_kubernetes">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Introduction to Cilium and its use in Kubernetes
:::
:::GlobalParagraph
Cilium is a networking solution for Kubernetes that provides advanced networking and security features. It uses eBPF to perform high-performance networking, security and observability tasks within Kubernetes.
:::
:::GlobalParagraph
In this article, we’ll explore how to use Cilium for Kubernetes networking. We will cover the basics of setting up Cilium in a cluster, configuring network policies and using Hubble for observability. We’ll also discuss best practices for using Cilium in production environments and troubleshooting common issues. Let’s get started by installing Cilium to our Kubernetes cluster!
:::
:::GlobalParagraph
Note: We recommend using kind to try this out on your local machine. K3d (which uses k3s under the hood) does not ship bash in its node images, causing the Cilium installation to fail.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Installing Cilium
:::
:::GlobalParagraph
First of all, we need to install the Cilium CLI as described in the documentation.
:::
:::GlobalParagraph
Once the CLI installation is finished, we can install Cilium to our cluster by running:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-shell">$ cilium install
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
This will install Cilium to the cluster pointed to by our current kubectl context. To verify a working installation, we use:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-shell">$ cilium status --wait
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
The output should look something like this:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-shell">/¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble:         disabled
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

DaemonSet         cilium             Desired: 1, Ready: 1/1, Available: 1/1
Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Containers:       cilium             Running: 1
                  cilium-operator    Running: 1
Cluster Pods:     0/3 managed by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: 1
                  cilium-operator    quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: 1
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
If everything looks good, we can verify proper network connectivity by running
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-shell">$ cilium connectivity test
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
This will create a dedicated namespace and run some tests on predefined workloads in order to test the cluster network connection.
:::
:::GlobalParagraph
The successful output looks like this:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-shell">All 31 tests (151 actions) successful, 0 tests skipped, 1 scenarios skipped.
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
If all the tests ran successfully, congratulations! We have successfully installed Cilium to our Kubernetes cluster!
:::</p>
<p>:::GlobalPodcastSection{:videoId="5wNTUUSk1jA" :videoPosition="left" .mb-6}
::::GlobalPreTitle{:color="text-bs-green" .mb-3}
OUR PODCAST: TOOLS FOR THE CRAFT
::::
::::GlobalTitle{:tag="h3" .mb-6}
E3: Deep dive into Getdeck
::::
::::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert are introducing and demonstrating Getdeck as well as comparing local and remote Kubernetes and preproduction clusters
::::
::::globalParagraph{:font-size="lg" }
More editions of our podcast can be found here:
::::
::::GlobalButton{:url="/podcast/" :label="Show more" :color="green"}
::::
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Configuring network policies with Cilium
:::
:::GlobalParagraph
Network policies in Kubernetes are used to control and filter traffic. By default, any pod running in a cluster can communicate with any other pod, which might be insecure depending on the setup. Using network policies, we can implement rules that only allow traffic that we explicitly want to allow. Cilium allows us to set rules on the HTTP level, which decouples network rules from our application code.
:::
:::GlobalParagraph
Now that Cilium runs in our cluster, let’s put it to the test by applying some network policies to specify what traffic is allowed inside the cluster as well as ingressing/egressing.
:::
:::GlobalParagraph
The commonly used “default-deny-ingress” policy can be implemented with Cilium like this:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "deny-all-ingress"
spec:
  endpointSelector:
    matchLabels: {}
  ingress:
  - {}
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
Since the matchLabels key is empty, this will be applied to every endpoint, effectively locking down any ingress traffic within the cluster.
:::
:::GlobalParagraph
We need our services to communicate with one another, therefore we add a policy that specifically allows ingress traffic between two services.
:::
:::GlobalParagraph
A simple “ingress-allow” policy could look something like this:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "ingress-allow"
spec:
  endpointSelector:
    matchLabels:
      role: backend-api
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: client
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
This network policy allows ingress traffic to endpoints with the label <code>role: backend-api</code> only from endpoints with the label <code>role: client</code>.
:::
:::GlobalParagraph
Moving up the OSI model layers, we can also lock down HTTP methods and paths like this:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "Allow HTTP GET /api from app=client to app=api"
  endpointSelector:
    matchLabels:
      app: api
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: client
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/api"
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
This will allow incoming HTTP traffic from endpoints labeled with <code>app: client</code> to endpoints labeled with <code>app: api</code>, as long as the HTTP method is GET and the path is “/api”. Requests to ports other than 80 will be dropped, while other HTTP verbs and other paths will be rejected.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Using Cilium for observability
:::
:::GlobalParagraph
Cilium Hubble is a powerful observability tool that provides deep insights into the network traffic and security of a Kubernetes cluster. In this section, we will explore how to set up and use Hubble for observability.
:::</p>
<p>:::GlobalTitle{:size="md" :tag="h3" .mb-5}
Setting up Hubble
:::
:::GlobalParagraph
To use Hubble, we need to deploy it in our Kubernetes cluster as follows:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-shell">$ cilium hubble enable --ui
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
If we run “cilium status” again, we’ll see that Hubble is enabled and running.
:::
:::GlobalParagraph
To make use of the data that’s being collected, we install the Hubble CLI as described in the documentation. Once the installation is complete, we can verify Hubble API access by first creating a port forward for hubble and then using the Hubble CLI to inspect Hubble status and to query the API, like this:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-shell">$ cilium hubble port-forward&#x26;
$ hubble status
$ hubble observe
Apr  4 07:14:29.119: 10.244.0.166:37906 (host) -> cilium-test/echo-same-node-9f8754876-ns7tx:8181 (ID:2357) to-endpoint FORWARDED (TCP Flags: ACK)
Apr  4 07:14:29.120: 10.244.0.166:41980 (host) &#x3C;- cilium-test/echo-same-node-9f8754876-ns7tx:8080 (ID:2357) to-stack FORWARDED (TCP Flags: ACK, PSH)
Apr  4 07:14:29.121: 10.244.0.166:41980 (host) -> cilium-test/echo-same-node-9f8754876-ns7tx:8080 (ID:2357) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Apr  4 07:14:29.121: 10.244.0.166:41980 (host) &#x3C;- cilium-test/echo-same-node-9f8754876-ns7tx:8080 (ID:2357) to-stack FORWARDED (TCP Flags: ACK, FIN)
Apr  4 07:14:29.121: 10.244.0.166:41980 (host) -> cilium-test/echo-same-node-9f8754876-ns7tx:8080 (ID:2357) to-endpoint FORWARDED (TCP Flags: ACK)
Apr  4 07:14:30.119: 10.244.0.166:41986 (host) -> cilium-test/echo-same-node-9f8754876-ns7tx:8080 (ID:2357) to-endpoint FORWARDED (TCP Flags: SYN)
Apr  4 07:14:30.119: 10.244.0.166:41986 (host) &#x3C;- cilium-test/echo-same-node-9f8754876-ns7tx:8080 (ID:2357) to-stack FORWARDED (TCP Flags: SYN, ACK)
Apr  4 07:14:30.119: 10.244.0.166:41986 (host) -> cilium-test/echo-same-node-9f8754876-ns7tx:8080 (ID:2357) to-endpoint FORWARDED (TCP Flags: ACK)
Apr  4 07:14:30.119: 10.244.0.166:37912 (host) -> cilium-test/echo-same-node-9f8754876-ns7tx:8181 (ID:2357) to-endpoint FORWARDED (TCP Flags: SYN)
Apr  4 07:14:30.119: 10.244.0.166:37912 (host) &#x3C;- cilium-test/echo-same-node-9f8754876-ns7tx:8181 (ID:2357) to-stack FORWARDED (TCP Flags: SYN, ACK)
Apr  4 07:14:30.119: 10.244.0.166:37912 (host) -> cilium-test/echo-same-node-9f8754876-ns7tx:8181 (ID:2357) to-endpoint FORWARDED (TCP Flags: ACK)
Apr  4 07:14:30.119: 10.244.0.166:37912 (host) -> cilium-test/echo-same-node-9f8754876-ns7tx:8181 (ID:2357) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Apr  4 07:14:30.119: 10.244.0.166:41986 (host) -> cilium-test/echo-same-node-9f8754876-ns7tx:8080 (ID:2357) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Apr  4 07:14:30.120: 10.244.0.166:37912 (host) &#x3C;- cilium-test/echo-same-node-9f8754876-ns7tx:8181 (ID:2357) to-stack FORWARDED (TCP Flags: ACK, PSH)
Apr  4 07:14:30.120: 10.244.0.166:37912 (host) &#x3C;- cilium-test/echo-same-node-9f8754876-ns7tx:8181 (ID:2357) to-stack FORWARDED (TCP Flags: ACK, FIN)
Apr  4 07:14:30.120: 10.244.0.166:37912 (host) -> cilium-test/echo-same-node-9f8754876-ns7tx:8181 (ID:2357) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Apr  4 07:14:30.120: 10.244.0.166:37912 (host) -> cilium-test/echo-same-node-9f8754876-ns7tx:8181 (ID:2357) to-endpoint FORWARDED (TCP Flags: ACK)
Apr  4 07:14:30.121: 10.244.0.166:41986 (host) &#x3C;- cilium-test/echo-same-node-9f8754876-ns7tx:8080 (ID:2357) to-stack FORWARDED (TCP Flags: ACK, PSH)
Apr  4 07:14:30.121: 10.244.0.166:41986 (host) -> cilium-test/echo-same-node-9f8754876-ns7tx:8080 (ID:2357) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Apr  4 07:14:30.121: 10.244.0.166:41986 (host) &#x3C;- cilium-test/echo-same-node-9f8754876-ns7tx:8080 (ID:2357) to-stack FORWARDED (TCP Flags: ACK, FIN)
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
If you like graphical user interfaces, you can also deploy Hubble UI to your cluster. Hubble UI gives you access to a graphical service map that can be used to visually inspect traffic within the cluster. For our example setup, Hubble UI looks like this:
:::</p>
<p><img src="/img/blog/hubble-ui.jpg" alt="hubble_ui">{.object-cover .max-w-full .mb-6}</p>
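<p>:::GlobalParagraph
Deploying Hubble UI can be sketched with the Cilium CLI; this is a minimal example, so verify the commands against the documentation for your Cilium version:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-shell"># Enable Hubble together with the UI component
$ cilium hubble enable --ui
# Port-forward to the Hubble UI service and open it in the browser
$ cilium hubble ui
</code></pre>
<p>:::</p>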
<p>:::GlobalTitle{:size="lg" .mb-5}
Conclusion
:::
:::GlobalParagraph
In conclusion, Cilium offers a robust networking solution for Kubernetes, allowing users to enforce precise network policies and keep track of network activity in real-time. Its cloud native design and eBPF-based architecture make Cilium a top pick for users seeking advanced networking functionalities in their Kubernetes setups.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Going further
:::
:::GlobalParagraph
Cilium offers way more features than we can cover in this post, so here’s a short writeup of what else Cilium is capable of.
:::</p>
<p>:::GlobalTitle{:size="md" .mb-5}
Cluster Mesh
:::
:::GlobalParagraph
With cluster mesh, we can extend networking paths across multiple Kubernetes clusters, allowing endpoints in connected clusters to communicate with each other while also enforcing network policies.
:::</p>
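<p>:::GlobalParagraph
Setting up a cluster mesh can be sketched with the Cilium CLI like this; the kubectl context names are placeholders for your own clusters:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-shell"># Enable cluster mesh support in both clusters
$ cilium clustermesh enable --context cluster-1
$ cilium clustermesh enable --context cluster-2
# Connect the clusters and verify the mesh status
$ cilium clustermesh connect --context cluster-1 --destination-context cluster-2
$ cilium clustermesh status --context cluster-1
</code></pre>
<p>:::</p>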
<p>:::GlobalTitle{:size="md" .mb-5}
Service Mesh
:::
:::GlobalParagraph
Cilium service mesh aims to replace more traditional service mesh solutions like Linkerd. It has one key difference, though: While Linkerd relies on sidecar proxies to manage traffic between pods, Cilium service mesh uses eBPF on the node level to manage traffic, improving performance, reducing load and further decoupling the service mesh from the actual workloads.
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Security</category>
            <category>Documentation</category>
            <enclosure url="https://blueshoe.io/img/blog/cilium-kubernetes.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Exploring the cloud-native conferences in 2023]]></title>
            <link>https://blueshoe.io/blog/cloud-native-conferences</link>
            <guid>https://blueshoe.io/blog/cloud-native-conferences</guid>
            <pubDate>Mon, 24 Jul 2023 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Cloud-native technology has revolutionized the tech industry by enabling scalable and flexible software development. It focuses on containerization, microservices, and continuous integration, which are essential for modern applications. As the demand for cloud-native solutions grows, conferences in 2023 offer invaluable insights and networking opportunities for professionals in this field.</p>
<p><img src="/img/blog/kcd-kubernetes.jpg" alt="kcd_kubernetes">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalParagraph
Cloud-native conferences serve as hubs of innovation, bringing together experts, enthusiasts, and learners to discuss and explore the latest trends and advancements in the field.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
<strong>Kubernetes Community Days (KCDs)</strong>
:::
:::GlobalParagraph
The Kubernetes Community Days (KCDs) are vibrant community-driven conferences that bring together adopters and technologists from open-source and cloud-native communities to foster learning, collaboration, and networking. Supported by the Cloud Native Computing Foundation (CNCF), these events play a crucial role in the growth and sustainability of Kubernetes and other cloud-native communities. KCDs can take various forms, including in-person, virtual, or hybrid, with a strong preference for in-person gatherings whenever feasible. Among the KCDs, the <a href="/blog/kubernetes-community-days-munich-2023/">event in Munich</a>{.bs-link-blue} scheduled for July 2023 was an exciting opportunity for cloud-native enthusiasts to come together and explore the latest trends and innovations in the field.
:::</p>
<p>:::GlobalTitle{:size="md" :tag="h3" .mb-5}
Upcoming KCDs:
:::</p>
<p>:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>UK</li>
<li>Denmark</li>
<li>Utrecht
:::</li>
</ul>
<p>:::GlobalTitle{:size="lg" .mb-5}
<strong>KubeCon + CloudNativeCon Europe</strong>
:::
:::GlobalParagraph
KubeCon + CloudNativeCon Europe, held in Amsterdam in April 2023, was a prominent conference organized by the Cloud Native Computing Foundation. This flagship event brought together top technologists from open source and cloud-native communities to advance the education and development of cloud-native computing. With a focus on <a href="/blog/local-kubernetes-development/">Kubernetes</a>{.bs-link-blue} and other cloud-native technologies, the conference explored how these innovative solutions empower organizations to achieve faster software development at a reduced cost compared to traditional infrastructure. As cloud-native architecture revolutionizes application design by orchestrating containers within a microservices framework, KubeCon + CloudNativeCon served as a crucial platform for the community to exchange knowledge and expertise in this emerging and increasingly popular field. The conference attracted a diverse range of attendees, including application developers, IT operations professionals, technical managers, executive leaders, product managers, end-users, service providers, CNCF contributors, and anyone interested in learning more about cloud-native concepts.
:::</p>
<p><img src="/img/blog/kubecon-cloud-native-con.jpg" alt="kubecon_cloud_native_con">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
<strong>Cloud Native Rejekts</strong>
:::
:::GlobalParagraph
Cloud Native Rejekts, held in Amsterdam in April 2023, was a conference that provided a second chance for rejected talks from KubeCon + CloudNativeCon. Embracing the spirit of resilience and innovation, this in-person event welcomed both rejected proposals from KubeCon and new submissions that covered cutting-edge topics. With slots for 30-minute full-length talks and 5-minute lightning talks, Cloud Native Rejekts encouraged discussions on core concepts, open-source components and tooling, user experiences, security, as well as new developments and experiments.
:::</p>
<p><img src="/img/blog/cloud-native-rejekts.jpg" alt="cloud_native_rejekts">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
<strong>Django Con EU</strong>
:::
:::GlobalParagraph
The Django Con EU held in Edinburgh in 2023 was a haven for web development fans seeking to expand their horizons. With a diverse range of topics, including <a href="/blog/django-development-for-kubernetes/">Django</a>{.bs-link-blue} internals and the trials of modern web development, attendees were sure to gain invaluable insights. From wild ideas and clever hacks to surprising use cases, this conference promised to ignite creativity and inspire innovation. Moreover, the event aimed to enhance the lives of <a href="/blog/django-development-for-kubernetes/">Django</a>{.bs-link-blue} and <a href="/blog/php-vs-python/">Python</a>{.bs-link-blue} developers, equipping them with tools to overcome challenges and push the boundaries of their craft. Delving into the fundamental aspects of Django through a modern lens, participants discovered novel perspectives that illuminated the framework's core principles. Notably, the conference provided a platform to explore the vibrant Django and Python community, delving into its rich culture, history, and future trajectory. With a dedicated focus on security, attendees gained essential knowledge and strategies to safeguard their projects in an ever-evolving digital landscape. In essence, Django Con EU is a dynamic and immersive experience tailored to the diverse needs and aspirations of its participants, epitomizing the spirit of collaboration and growth in the realm of web development.
:::</p>
<p><img src="/img/blog/djangocon2023.jpg" alt="django_con_2023">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
<strong>DevOpsCon Munich</strong>
:::
:::GlobalParagraph
DevOpsCon, taking place in Munich in December 2023, is the go-to conference for professionals seeking insights into continuous delivery, microservices, containers, cloud technologies, and lean business practices. As DevOps fundamentally transforms the IT landscape and paves the way for successful business transformation, this conference provides a platform to learn about the latest tools and strategies. Explore the synergy between cloud platforms, microservices, <a href="/blog/local-kubernetes-development/">Kubernetes</a>{.bs-link-blue}, and comprehensive automation, while tackling the challenges of increasing organizational complexity. By attending DevOpsCon, you'll have the opportunity to connect with internationally recognized thought leaders of the DevOps movement and benefit from their expertise, ensuring you stay at the forefront of this rapidly evolving field.
:::</p>
<p><img src="/img/blog/devopscon-munich.jpg" alt="devops_con_munich">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
<strong>AgileTD Open Air</strong>
:::
:::GlobalParagraph
The AgileTD Open Air conference in Cologne in June brought together 200 agile software and QA engineers for an exciting event filled with learning, sharing, and networking. Over 2 and a half days, participants gained up-to-date insights into software testing, quality assurance, test automation, and more. It was a great opportunity for professionals to connect, exchange ideas, and stay ahead in the ever-evolving world of agile software development.
:::</p>
<p><img src="/img/blog/agiletd-open-air.jpg" alt="agiletd_open_air">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
<strong>Container Days</strong>
:::
:::GlobalParagraph
The Container Days 2023 in Hamburg promise to be an exceptional cloud-native experience. From September 11-12, attendees will immerse themselves in a world of Kubernetes, cloud-native, container security, DevOps, GitOps, edge computing, cloud computing, and more. With a growing CDS family, this conference offers a laid-back atmosphere for open source and cloud-native enthusiasts to connect, network, and share insightful experiences. The schedule is packed with technical talks and discussions on five stages, making it a must-attend event for Kubernauts worldwide. Whether attending in person or virtually, the Container Days ensures an engaging and personalized experience. Don't miss this opportunity to explore container and cloud-native technologies, microservices, DevSecOps, next-gen platforms, hybrid and multi-cloud, and cloud-native journeys.
:::</p>
<p><img src="/img/blog/container-days-hamburg.jpg" alt="container_days_hamburg">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
<strong>Kube Native 2023</strong>
:::
:::GlobalParagraph
Kube Native is making a comeback! Calling all cloud-native enthusiasts, Kubernetes lovers, and tech fans. Whether you've embraced microservices or, like Istio, witnessed the return to a monolith, this conference is for you. Join like-minded individuals online to discuss all things cloud and cloud-native, including running in the cloud, adopting Kubernetes and related technologies, <a href="/our-services/microservice-architecture-consulting">microservices</a>{.bs-link-blue}, service meshes, distributed systems, monitoring and alerting, and valuable lessons from production failures.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
<strong>Agile Testing Days in Potsdam</strong>
:::
:::GlobalParagraph
The Agile Testing Days in Potsdam are celebrating their 15th anniversary in November 2023! Over 600 international agile software testers and engineers will come together for an enriching experience. Immerse yourself in up-to-date insights on software testing, quality assurance, and more. With over 100 speakers and thought leaders offering more than 150 sessions in the form of talks and workshops, there's something for everyone. Explore hot topics like machine learning, ethics in tech, chaos engineering, API testing, test automation, conscious leadership, and many others. Don't miss the chance to network, learn, and grow in this dynamic and knowledge-packed event!
:::</p>
<p><img src="/img/blog/agile-testing-days-potsdam.jpg" alt="agile_testing_days_potsdam">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
<strong>Continuous Lifecycle and ContainerConf</strong>
:::
:::GlobalParagraph
The Continuous Lifecycle and ContainerConf in Mannheim in November 2023 are the meeting point for IT professionals who are responsible for the fast and reliable deployment and secure operation of their software. This dual conference covers the entire software lifecycle, with a focus on processes and tools in the areas of continuous delivery and DevOps, including DevSecOps and GitOps. Additionally, it provides practical insights into containerization and familiarizes attendees with key processes for delivering high-quality software based on the container ecosystem.
:::</p>
<p><img src="/img/blog/continuous-lifecycle-containerconf.jpg" alt="continuous_lifecycle_containerconf">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalPodcastSection{:videoId="5wNTUUSk1jA" :videoPosition="left" .mb-6}
::::GlobalPreTitle{:color="text-bs-green" .mb-3}
Our Kubernetes podcast
::::
::::GlobalTitle{:tag="h3" .mb-6}
TftC E3: Deep dive into Getdeck
::::
::::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert are introducing and demonstrating Getdeck as well as comparing local and remote Kubernetes and preproduction clusters
::::
::::globalParagraph{:font-size="lg" }
More editions of our podcast can be found here:
::::
::::GlobalButton{:url="/podcast/" :label="Show more" :color="green"}
::::
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
<strong>Benefits of attending cloud-native conferences</strong>
:::
:::GlobalParagraph
Attending cloud-native conferences in 2023 offers numerous benefits for professionals and enthusiasts alike. One key advantage is staying up-to-date with the latest developments in the rapidly evolving <a href="/our-services/cloud-native-consulting/">cloud-native</a>{.bs-link-blue} landscape. By attending these events, you'll gain invaluable knowledge on cutting-edge technologies, tools, and platforms shaping the industry.
:::
:::GlobalParagraph
Networking with industry experts is another significant benefit. These conferences provide an excellent platform to connect with the cloud-native community, share experiences, and forge lasting relationships. You'll have the opportunity to engage with thought leaders, innovators, and key decision-makers, potentially opening doors to new opportunities.
:::
:::GlobalParagraph
Lastly, cloud-native conferences offer a glimpse into the future of the industry. Through keynote speeches, panel discussions, and hands-on workshops, you'll gain insights into emerging trends, best practices, and future challenges. This knowledge will equip you with a competitive edge, helping you stay ahead in your career or business.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
<strong>Tips for attending cloud-native conferences</strong>
:::
:::GlobalParagraph
Attending cloud-native conferences? Preparation is key. Begin by outlining your schedule. Many conferences offer mobile apps with scheduling features. Use this to highlight must-attend sessions and avoid scheduling conflicts. Networking is crucial too. Bring plenty of business cards and don't hesitate to introduce yourself to others. You never know where the next big opportunity will come from. When picking sessions, think quality over quantity. Prioritize sessions that align with your goals and interests. Remember, it's not just about absorbing information, but also about contributing to discussions. Lastly, stay energized! Conferences can be exhausting, so ensure you're well-rested and hydrated to make the most of your experience.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Unleashing the power of cloud-native conferences: shaping the future of tech
:::
:::GlobalParagraph
In wrapping up, we can't stress enough the significance of <a href="/our-services/cloud-native-consulting">cloud-native</a>{.bs-link-blue} conferences. They're a goldmine for knowledge sharing, networking, and staying on top of industry trends. These gatherings are a catalyst for technological advancement, pushing the boundaries of what's possible in the cloud. They offer a platform to hear from the experts, learn from real-world case studies, and even get hands-on with new tools. So, gear up, mark your calendars, and get ready to dive into the world of cloud-native. Your participation could help shape the future of this ever-evolving tech landscape. Don't miss out!
:::</p>]]></content:encoded>
            <category>Team Blueshoe</category>
            <enclosure url="https://blueshoe.io/img/blog/kcd-kubernetes.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Comparison of Container Orchestration Options]]></title>
            <link>https://blueshoe.io/blog/container-orchestration-options-comparison</link>
            <guid>https://blueshoe.io/blog/container-orchestration-options-comparison</guid>
            <pubDate>Mon, 30 Aug 2021 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>There are plenty of possibilities for orchestrating your container landscape and we want to give you a quick little introduction to your possibilities. Taking a look at both cloud-offerings and on-site options.</p>
<p><img src="/img/blog/container-orchestration-options-comparison.jpg" alt="Comparison of Container Orchestration Options">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
You are building applications in a containerized manner and really love the advantages of service-based application development. Now you have to decide how to tackle the orchestration of these containers so that you ideally get low cost, high reliability and high performance.
:::
:::globalParagraph
Or are there other factors you need to consider? For example, your company may not allow you to use public cloud services.
:::
:::globalParagraph
The decision for a method is also based upon how many parts of your application's landscape you want to manage yourself. We identified the following integral parts:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>Applications</li>
<li>Data</li>
<li>Container Orchestration</li>
<li>Operating System</li>
<li>Virtualization</li>
<li>Networking</li>
<li>Storage</li>
<li>Server
:::
:::globalParagraph
There are many ways to orchestrate your container landscape, and we would like to give you a brief introduction to your options.
:::</li>
</ul>
<p><img src="/img/blog/unikube_coec_Infographic.jpg" alt="unikube_coec_Infographic">{.object-cover .w-full .mb-5}</p>
<p>:::globalTitle{:size="lg" :color="text-bs-green" .mb-5}
Cloud Offerings
:::
:::globalParagraph
The general advantage of public cloud offerings is that you usually have very little ramp-up time. Once you’ve made the decision to opt for a cloud service, you have to make a few more decisions about how you want to run your infrastructure. In general, you have the options to go for a “Platform as a Service” (PaaS) offer, an “Infrastructure as a Service” (IaaS) offer, or a “Managed Kubernetes” offer.
:::
:::globalParagraph
All of these come with different features, difficulties and flexibility. The following explanations can hopefully help you with this decision.
:::</p>
<p>:::globalTitle{:size="lg" :color="text-bs-blue" .mb-5}
Cloud Platform - native Services (PaaS)
:::
:::globalParagraph
If you are looking for a cloud offering that is really simple to set up and very reliable, you could opt for a PaaS offering. You only need to manage your applications and the data while the PaaS vendor will take care of the container orchestration, OS, virtualization, networking, storage and server.
:::
:::globalParagraph
So, if it’s simple and reliable, what’s the catch? Well, the simplicity comes with a strong vendor lock-in. Once you are set up, it is really hard to change your setup: you can’t just port your environment to a different provider; you have to start from scratch. That also makes you susceptible to any changes in pricing or policies that your provider may implement in the future.
:::
:::globalParagraph
Also, you are limited to the exact services that your PaaS provider has to offer. If you see a service you like at a different provider, you may have to connect to service providers external to your cloud or provide the service yourself via IaaS.
:::</p>
<p>:::globalTitle{:size="lg" :color="text-bs-blue" .mb-5}
Managed Kubernetes
:::
:::globalParagraph
Kubernetes has been all the rage over the last few years but as a platform, it can be very complex. So, if you want to use all the great advantages that Kubernetes brings but don’t want to do everything yourself, then you could work with a Managed Kubernetes Service of any cloud provider that you may choose.
:::
:::globalParagraph
In the end, it’s very convenient again as the only thing you’ll have to worry about is the management of your application and the respective data. Your cloud provider will again take care of orchestration, OS, virtualization, networking, storage and the server.
:::
:::globalParagraph
Also, in a Managed Kubernetes environment your provider will take care of updates and patches, and you will generally need far fewer experts on your team to run your infrastructure.
:::
:::globalParagraph
And if you want to take things even further, you can have a look at GKE autopilot, a service by Google built for the Google Kubernetes Engine, that takes even more of the hard work out of your hands by providing a very automated and opinionated configuration of GKE.
:::
:::globalParagraph
But what’s the difference to the PaaS offerings of the cloud providers? Well, mostly you would be putting your money on the current and future industry standard, that is, Kubernetes. Also, you would now have a portable infrastructure and are no longer eternally attached to your provider. Portability is clearly one of Kubernetes’ strong sides.
:::
:::globalParagraph
However, the whole setup is a little more complicated than when using a PaaS offering.
:::</p>
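<p>:::globalParagraph
To give an impression of how little setup a managed offering needs: as a sketch, creating a GKE Autopilot cluster takes just a few commands (cluster name and region here are placeholders; check the current gcloud reference for details):
:::</p>
<pre><code class="language-shell"># Create an Autopilot cluster and fetch kubectl credentials
$ gcloud container clusters create-auto demo-cluster --region europe-west3
$ gcloud container clusters get-credentials demo-cluster --region europe-west3
$ kubectl get nodes
</code></pre>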
<p>:::globalTitle{:size="lg" :color="text-bs-blue" .mb-5}
Infrastructure as a Service
:::
:::globalParagraph
Well, you may enjoy the simplicity of the services you get offered by cloud providers but you just wish you had a bit more flexibility. Then an infrastructure as a service offering could be the right choice for you.
:::
:::globalParagraph
When you use an IaaS offering, the provider predominantly takes care of your virtualization, the networking, the storage and the server. But you are completely flexible in deciding which orchestration service you use. A preferred Kubernetes distribution such as RKE, MicroK8s, OKD or OpenShift, or even Docker Swarm? You decide and take care of the configuration.
:::
:::globalParagraph
The same is true for the operating system. When using IaaS, you can decide on which OS you want to be operating on and how - including underlying software and updates.
:::
:::globalParagraph
Of course, additionally, you also have to manage your applications and data again.
:::</p>
<p>:::globalTitle{:size="lg" :color="text-bs-green" .mb-5}
On-Site
:::
:::globalParagraph
There are sometimes good reasons why you don’t want to rely on the services of a cloud provider and run your infrastructure completely in-house. Mostly, this has to do with privacy and data security compliance guidelines in companies or because you simply don’t want to rely on an external provider.
:::
:::globalParagraph
If that’s the case, you clearly also need some serious expert knowledge in-house as with this option you are responsible for managing your whole infrastructure. Of course, aside from security this also gives you all the flexibility you want or need. But if you really want to run your infrastructure on-site, you still have a couple of options:
:::</p>
<p>:::globalTitle{:size="lg" :color="text-bs-blue" .mb-5}
Unmanaged Kubernetes
:::
:::globalParagraph
As mentioned earlier in this article, if you decided to go with Kubernetes, you definitely went with the current and future industry standard and should be set for the years to come. However, if you run a Kubernetes-based infrastructure yourself, you better buckle up, because the setup can be really complex. Here you not only have to manage your application and data but also configure the orchestration and manage the OS, your virtualization, networking, storage and server.
:::
:::globalParagraph
In addition to the management of all these infrastructure parts, you’ll also have to take care of the maintenance of your hardware, which causes additional overhead. Just to keep in mind.
:::
:::globalParagraph
But is there anything to gain? Yes, of course. In a complex setup like this, if done properly, you’ll have a maximum of security, great reliability and a portable and flexible infrastructure that can be adapted to every specific need of your company that may arise. If for example you want to decide to make the move to a cloud infrastructure later, such as Managed Kubernetes, you can still do that.
:::</p>
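<p>:::globalParagraph
To give an impression of the complexity, bootstrapping even a single control-plane node with kubeadm already looks roughly like this; the pod CIDR and the CNI manifest are placeholders that depend on the network plugin you choose:
:::</p>
<pre><code class="language-shell"># Initialize the control plane (on a prepared host with a container runtime)
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# Make kubectl work for your user
$ mkdir -p $HOME/.kube
$ sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
# Install a CNI plugin of your choice (replace with your plugin's manifest)
$ kubectl apply -f cni-manifest.yaml
</code></pre>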
<p>:::globalTitle{:size="lg" :color="text-bs-blue" .mb-5}
Unmanaged Infrastructure
:::
:::globalParagraph
If you like challenges, this setup is for you. You may decide that you want to run everything yourself, but you either don’t have Kubernetes knowledge in-house or are not yet convinced of Kubernetes (what?! - tell us why). Well, then the setup is of course also very complex and prone to misconfiguration, which can lead to constant infrastructure problems all over the place. Seriously, this is hard.
:::
:::globalParagraph
And as with the Unmanaged Kubernetes setup, you don’t only have to manage all parts of your infrastructure but also have to take care of the hardware maintenance. All of this while you only gain on the security spectrum and have endless flexibility. However, reliability can be questionable and you have zero portability.
:::</p>
<p>::GlobalPodcastSection{:videoId="KapP6ouEL3E" :videoPosition="left" :bg="bg-bs-gray" .mb-5}
:::GlobalPreTitle{:color="text-bs-green" .mb-3}
OUR PODCAST: TOOLS FOR THE CRAFT
:::
:::GlobalTitle{:tag="h3" .mb-6}
E2: Remote Kubernetes development environments
:::
:::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert are talking about the various options developers have for running remote Kubernetes development environments.
:::
:::globalParagraph{:font-size="lg" .mb-4}
More editions of our podcast can be found here:
:::
::::GlobalButton{:url="/podcast/" :label="More podcasts!" :color="green"}
::::
::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Conclusion
:::
:::globalParagraph
Of course, we would like to give a recommendation of what your choice should be, but that is unfortunately impossible as it depends entirely on your individual requirements. In general, however, we would say: if you want to be ready for the future, go with one of the setups that entail a Kubernetes infrastructure. We made that switch once ourselves and have never looked back.
:::
:::globalParagraph
Also, check out <a href="/products">Unikube</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}, which makes Kubernetes really accessible for your developers. <a href="/blog/evolution-of-application-development-to-cloud-native/">Cloud Native Development</a>{.bs-link-blue} out of the box, so to say.
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Operations</category>
            <category>Digitalization</category>
            <enclosure url="https://blueshoe.io/img/blog/container-orchestration-options-comparison.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Error-Free Web Applications with Cypress]]></title>
            <link>https://blueshoe.io/blog/cypress-end-to-end-testing</link>
            <guid>https://blueshoe.io/blog/cypress-end-to-end-testing</guid>
            <pubDate>Tue, 18 Feb 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Testing web applications is a central component of modern software development. Manual testing is often time-consuming, error-prone, and difficult to scale. Cypress is an end-to-end testing tool that was specifically developed for the requirements of automated testing of web applications.</p>
<p><img src="/img/blog/cypress.svg" alt="Error-free web applications with Cypress">{.object-cover .max-w-full .mb-5}</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Introduction
::</p>
<p>::GlobalParagraph
In today's web development, ensuring the quality of an application is crucial for success. To guarantee a seamless user experience, automated end-to-end (E2E) testing has established itself as an indispensable component of the development process. One of the most performant tools for automated testing is <a href="https://github.com/cypress-io/cypress">Cypress</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}, which we would like to explore in more detail in this article.
::</p>
<p>::GlobalParagraph
Cypress enables developers to conduct tests directly in the browser, thus realistically reproducing the user experience. With its focus on speed, user-friendliness, and stability, Cypress is ideal for comprehensively testing web applications and identifying potential errors early. In this article, you will learn why Cypress plays an important role in modern web projects and how you can successfully implement it.
::</p>
<p>::GlobalTitle{:size="lg" .mb-4}
What is Cypress?
::
::GlobalParagraph
Cypress is a modern end-to-end testing framework specifically developed for web applications. It is based on JavaScript and deeply integrated into the browser. This means that tests are carried out in the same environment in which users interact with the application - namely directly in the browser. This ensures particularly realistic results.
::</p>
<p>::GlobalTitle{:size="lg" .mb-4}
Why is Cypress so special?
::
::GlobalParagraph
Cypress distinguishes itself through its close integration into the browser and its user-friendliness. Here are some of its outstanding characteristics:
::
::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li><strong>Simple Setup:</strong> No complex setup - install Cypress with one command, and you can start immediately.</li>
<li><strong>Tests in the Browser:</strong> Your tests run in the same environment as the application, making the results particularly realistic.</li>
<li><strong>Fast Debugging:</strong> With screenshots, videos, and interactive runtimes, you can immediately see where the problem is.
::
::GlobalParagraph
Compared to other testing frameworks like Selenium, Cypress offers a simpler user experience. The integration into the browser and faster test execution make it the optimal choice for automated testing of modern web applications.
::</li>
</ul>
<p><img src="/img/blog/Cypress_Logotype_Dark-Color.svg" alt="Cypress logo">{.mx-auto .h-48 .max-w-full .my-4}</p>
<p>::GlobalTitle{:size="lg" .mb-4}
Advantages of Cypress
::
::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Performance and Speed
::
::GlobalParagraph
Cypress provides real-time feedback, allowing developers to immediately recognize and fix errors. This makes the entire development process more efficient. You can write and execute tests quickly, which is a significant advantage, especially for extensive projects.
::
::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Realistic Tests
::
::GlobalParagraph
Since Cypress tests are executed directly in the browser, they reflect the actual user experience. This ensures that applications function error-free under real conditions.
::
::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
User-Friendliness and Easy Integration
::
::GlobalParagraph
Cypress uses an intuitive syntax that enables quick onboarding. Moreover, Cypress can be seamlessly integrated into existing development workflows, including Continuous Integration (CI) and Continuous Deployment (CD).
::
::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Flexibility and Adaptability
::
::GlobalParagraph
Cypress offers a variety of APIs and configuration options that allow tests to be tailored to a project's specific requirements. In addition to testing user interfaces, Cypress also supports API tests and can be seamlessly integrated into CI/CD pipelines.
::</p>
<p>::GlobalTitle{:size="lg" .mb-4}
Cypress in Practice
::
::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Example 1: Cypress in E-Commerce
::
::GlobalParagraph
A leading e-commerce company implemented Cypress to automate testing of the entire checkout process. This allowed problems in payment and shipping processes to be detected early, improving the user experience and reducing purchase abandonment.
::
::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Example 2: Cypress for SaaS Platforms
::
::GlobalParagraph
A provider of a project management platform uses Cypress to test critical functions like user registration and team collaboration. This helped shorten development cycles and improve the application's stability.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Example 3: Financial Platform with Continuous Integration
::
::GlobalParagraph
A financial platform with high security requirements uses Cypress for automated end-to-end testing in Continuous Integration. Errors in transaction processes are thus detected and resolved in real-time.
::
::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Integration into Existing Systems
::
::GlobalParagraph
Thanks to comprehensive documentation and support for various CI/CD tools, Cypress can be seamlessly integrated into existing development environments.
::</p>
<p>::GlobalTitle{:size="lg" .mb-4}
Implementing Cypress: Step-by-Step Guide
::
::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Step 1: Installing Cypress
::
::GlobalParagraph
Installing Cypress is the first step toward integrating automated tests into your project. Cypress is installed as an npm package, which means you need Node.js and npm (the Node package manager) set up in your project.
::
::GlobalParagraph
Installation command:
::
::GlobalBlock{.ol-decimal .mb-4}</p>
<ol>
<li>Open your terminal or command line and navigate to the root directory of your project.</li>
<li>Execute the following command to install Cypress as a development dependency:
::
::BlogCode{.mb-4}</li>
</ol>
<pre><code class="language-bash">$ npm install cypress --save-dev
</code></pre>
<p>::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Step 2: Starting Cypress
::
::GlobalParagraph
After installation, you can start Cypress for the first time and explore the interactive test environment. Use the following command:
::
::BlogCode{.mb-4}</p>
<pre><code class="language-bash">$ npx cypress open
</code></pre>
<p>::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Step 3: Creating and Structuring the First Tests
::
::GlobalParagraph
Cypress expects your tests to be stored in a dedicated directory: since Cypress 10 this is cypress/e2e (older versions used cypress/integration). When you open Cypress for the first time, this directory is created automatically.
::
::GlobalBlock{.ol-decimal .mb-4}</p>
<ol>
<li>Create a new test file in this directory, e.g. <code>example.cy.js</code> (current Cypress versions pick up files with the <code>.cy.js</code>/<code>.cy.ts</code> suffix by default).</li>
<li>Write a simple test to ensure that Cypress is functioning. Here's an example of a basic test that checks whether your website loads correctly:
::
::BlogCode{.mb-4}</li>
</ol>
<pre><code class="language-typescript">describe('My first Cypress test suite', () => {
    it('Visits the start page', () => {
        cy.visit('https://your-website.com');
        cy.contains('Welcome');
    });
});
</code></pre>
<p>::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Step 4: Test Execution
::
::GlobalParagraph
After creating your first test, you can run it in the Cypress Test Runner interface. Simply select the previously created example.cy.js file in the Cypress window, and the test runs automatically in the browser. You can observe each test step in real time.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Step 5: Integrating Cypress into CI/CD Pipelines
::
::GlobalParagraph
A key advantage of Cypress is the easy integration into CI/CD pipelines, allowing your tests to be automatically executed with every code change. Here's a general workflow for integrating Cypress into a CI environment like GitLab CI, Jenkins, or GitHub Actions:
::
::GlobalBlock{.ol-decimal .mb-4}</p>
<ol>
<li>Create a CI configuration file in your project (e.g. <code>.gitlab-ci.yml</code> or <code>.github/workflows/test.yml</code>).</li>
<li>Include Cypress commands in the pipeline configuration. Here's a simple example for GitLab CI:
::
::BlogCode{.mb-4}</li>
</ol>
<pre><code class="language-yaml">stages:
  - test

cypress-test:
  image: cypress/base:14
  stage: test
  script:
    - npm ci
    - npx cypress run
</code></pre>
<p>::</p>
<p>::GlobalBlock{.ol-decimal .mb-4}
3. Execute the pipeline to ensure that Cypress tests are automatically run with each new commit or merge request.
::</p>
<p>::GlobalCallAnExpert{:bg="bg-bs-blue"}
#buttonBlockTitle
We can also make your application error-free with Cypress tests.
::</p>
<p>::GlobalTitle{:size="lg" .mb-4}
Best Practices for Writing Cypress Tests
::
::GlobalParagraph
To get the most out of Cypress and create efficient and maintainable tests, there are several best practices you should follow:
::
::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li><strong>Avoid direct dependencies between tests:</strong> Each test should be independent of the others. This simplifies debugging and makes your test suite more robust.</li>
<li><strong>Use mocks and stubs to reduce API dependencies:</strong> Use Cypress's <code>cy.intercept()</code> function to simulate API calls and avoid unnecessary dependencies.</li>
<li><strong>Choose selectors deliberately:</strong> Use dedicated data attributes like <code>data-cy="element-name"</code> to keep your tests stable even when the layout changes.</li>
<li><strong>Avoid excessive "wait" commands:</strong> Rely on the built-in retry behavior of commands like <code>cy.get()</code> to wait for content, instead of adding arbitrary <code>cy.wait()</code> commands.</li>
<li><strong>Use the Cypress Dashboard:</strong> For larger test suites, the dashboard provides detailed reports, screenshots, and videos of failed tests.
::</li>
</ul>
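<p>::GlobalParagraph
Two of these practices can be combined in a single test: stub the backend with <code>cy.intercept()</code> and select elements via data attributes. A sketch (the <code>/api/todos</code> endpoint, the fixture file, and the <code>data-cy</code> names are illustrative assumptions, not part of any real application):
::
::BlogCode{.mb-4}</p>
<pre><code class="language-typescript">describe('todo list', () => {
    it('renders stubbed todos', () => {
        // Stub the backend call so the test does not depend on a live API
        cy.intercept('GET', '/api/todos', { fixture: 'todos.json' }).as('getTodos');
        cy.visit('/todos');
        // Wait for the intercepted request instead of an arbitrary cy.wait(ms)
        cy.wait('@getTodos');
        // Stable data-cy selectors survive layout and class-name changes
        cy.get('[data-cy="todo-item"]').should('have.length.greaterThan', 0);
    });
});
</code></pre>
<p>::</p>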
<p>::GlobalTitle{:size="lg" .mb-4}
Error Handling and Debugging with Cypress
::
::GlobalParagraph
Cypress offers powerful debugging tools to help you quickly find and fix problems:
::</p>
<p>::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li><strong>Interactive runtime:</strong> When a test fails, the Test Runner interface shows which step failed and why.</li>
<li><strong>Screenshots and videos:</strong> Cypress automatically captures screenshots and videos of every failed test, letting you replay the exact moment of the error.</li>
<li><strong>Chrome DevTools integration:</strong> You can use Chrome DevTools while your tests are running to investigate network and console errors in detail.
::</li>
</ul>
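<p>::GlobalParagraph
Screenshot and video capture can be tuned in the project configuration. A minimal <code>cypress.config.ts</code> sketch (the <code>baseUrl</code> is a placeholder; option names follow current Cypress versions):
::
::BlogCode{.mb-4}</p>
<pre><code class="language-typescript">import { defineConfig } from 'cypress';

export default defineConfig({
    // Record a video of every headless run ("cypress run")
    video: true,
    // Capture a screenshot automatically whenever a test fails
    screenshotOnRunFailure: true,
    e2e: {
        baseUrl: 'https://your-website.com',
    },
});
</code></pre>
<p>::</p>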
<p>:::GlobalTitle{:size="lg" .mb-5 .mt-8}
Frequently Asked Questions
:::
::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
1. What is Cypress and What is it Used For?
::
::GlobalParagraph
Cypress is an end-to-end testing framework for web applications. It enables developers to run automated tests directly in the browser to ensure that an application works as expected.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
2. What Advantages Does Cypress Offer Compared to Selenium?
::
::GlobalParagraph
Cypress differs from Selenium in that:
::
::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li><strong>Faster test execution</strong> directly in the browser</li>
<li><strong>Simpler setup</strong> without an additional WebDriver or plugins</li>
<li><strong>Better debugging options</strong>, including screenshots and videos of failed tests
::</li>
</ul>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
3. How Can Cypress Be Integrated into CI/CD Pipelines?
::</p>
<p>::GlobalParagraph
Cypress can be seamlessly integrated into Continuous Integration/Deployment (CI/CD). It is defined as a test step in the CI configuration (e.g. GitLab CI, GitHub Actions) and automatically executes tests for every code change.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
4. What Are the Best Practices for Cypress Tests?
::</p>
<p>::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li><strong>Independent Tests</strong>: Each test should run independently</li>
<li><strong>Targeted Selectors</strong>: Using <code>data-cy</code> attributes for stable tests</li>
<li><strong>Use API Interception</strong>: Monitor API responses with <code>cy.intercept()</code> instead of fixed wait times</li>
<li><strong>Modular Test Design</strong>: Outsource recurring test logic to custom commands or helper functions
::</li>
</ul>
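<p>::GlobalParagraph
For modular test design, recurring flows can be moved into a custom command in <code>cypress/support/commands.ts</code>. A hedged sketch (the login route, the <code>data-cy</code> selectors, and the use of <code>cy.session()</code> for caching are assumptions about the application under test; the TypeScript declaration for <code>cy.login</code> is omitted for brevity):
::
::BlogCode{.mb-4}</p>
<pre><code class="language-typescript">// cypress/support/commands.ts
Cypress.Commands.add('login', (email: string, password: string) => {
    // cy.session() caches the logged-in state between tests for speed
    cy.session([email, password], () => {
        cy.visit('/login');
        cy.get('[data-cy="email"]').type(email);
        cy.get('[data-cy="password"]').type(password, { log: false });
        cy.get('[data-cy="submit"]').click();
    });
});

// Usage in a spec: cy.login('user@example.com', 'secret');
</code></pre>
<p>::</p>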
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
5. For which applications is Cypress not suitable?
::
:::GlobalParagraph
Cypress supports only <strong>modern web applications</strong>. Limitations exist with:
:::
:::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li><strong>Tests in multiple browser tabs or across different domains simultaneously</strong> (workarounds such as <code>cy.origin()</code> are necessary).</li>
<li><strong>Mobile App Testing</strong>, since Cypress is not designed for native apps.</li>
<li><strong>Tests for Internet Explorer or older browsers</strong>, as Cypress only supports modern browsers.
:::</li>
</ul>]]></content:encoded>
            <category>Development</category>
            <category>TypeScript</category>
            <category>Security</category>
<enclosure url="https://blueshoe.io/img/blog/cypress.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Handoff from Designer to Developer: What We Need for Frontend Implementation]]></title>
            <link>https://blueshoe.io/blog/designer-to-developer-handoff</link>
            <guid>https://blueshoe.io/blog/designer-to-developer-handoff</guid>
            <pubDate>Tue, 25 Nov 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>The transition from design to development is a critical moment in any frontend project. A well-thought-out handoff can save weeks of development time, while an incomplete handoff leads to endless questions, delays, and compromises.</p>
<p><img src="/img/blog/design_handoff.webp" alt="Handoff from Designer to Developer – Frontend Checklist"></p>
<p>In this article, we explain what information designers should provide so that developers can implement frontend projects efficiently and precisely. Based on our experience from numerous client projects, we have identified the most common problems and show how they can be avoided.</p>
<p>:GlobalDesignHandoffChecklist</p>
<h2>Why a well-thought-out design handoff is so important</h2>
<h3>The Problem: Incomplete Design Documentation</h3>
<p>In many projects, developers receive design files that are visually appealing but lack important technical details. This leads to situations where developers have to decide for themselves what components should look like in different states, which breakpoints to use, or how the mobile navigation should work.</p>
<p><strong>The consequences:</strong></p>
<ul>
<li><strong>Delays</strong> due to questions and need for clarification</li>
<li><strong>Inconsistencies</strong> between design and implementation</li>
<li><strong>Extra work</strong> due to subsequent adjustments</li>
<li><strong>Frustration</strong> on both sides – designers and developers</li>
</ul>
<h3>The Solution: Structured Design Handoff</h3>
<p>A structured design handoff is more than just sharing a Figma file. It is comprehensive documentation that covers all aspects of the visual and interactive design. Developers should be able to understand and implement the design without constant follow-up questions.</p>
<h3>Alternative Approach: Using Pre-built Design Systems</h3>
<p>An alternative to custom designs is using pre-built design systems or UI component libraries. These systems offer ready-made components that developers can customize with design tokens (colors, typography, spacing). Examples include frameworks like <a href="https://flowbite.com/">Flowbite</a>{target="_blank"}, which provides Figma design files that designers can customize, allowing developers to use familiar UI libraries.</p>
<p><strong>When pre-built design systems make sense:</strong></p>
<ul>
<li>The team has agreed to use a specific UI library</li>
<li>The project can work within the constraints of the design system</li>
<li>Developers are already familiar with the chosen framework</li>
<li>Time-to-market is critical and customization needs are minimal</li>
</ul>
<p><strong>The reality:</strong>
However, in practice, many designers aren't familiar with these frameworks or work with custom, brand-specific designs that don't align with standard systems. Many projects require unique visual identities that go beyond what pre-built systems offer.</p>
<p><strong>The best approach:</strong>
Ideally, designers and developers should align early on whether to use a pre-built design system or create custom designs. If a framework is chosen, designers can build on it and customize the design tokens. If custom designs are created, a complete handoff with all the information described in this article is essential for precise implementation.</p>
<p>This article focuses on the scenario where designers deliver custom designs – in these cases, developers need comprehensive documentation to implement the design accurately.</p>
<h2>The Essential Components of a Design Handoff</h2>
<h3>1. Color Palette: The Foundation of Visual Identity</h3>
<p>A complete color palette is fundamental to any frontend implementation. It should not only contain the basic colors but also all variants used in the project.</p>
<p><strong>What a complete color palette should include:</strong></p>
<ul>
<li><strong>Primary colors</strong> with all variants (light, dark, saturated)</li>
<li><strong>Secondary colors</strong> for accents and highlights</li>
<li><strong>Neutral colors</strong> for backgrounds, texts, and borders</li>
<li><strong>State colors</strong> for hover, focus, disabled, error, success</li>
<li><strong>Hex values or CSS variables</strong> for direct implementation</li>
</ul>
<p><strong>Example structure of a color palette:</strong></p>
<pre><code class="language-css">/* Primary colors */
--color-primary: #0066CC;
--color-primary-hover: #0052A3;
--color-primary-light: #E6F2FF;
--color-primary-dark: #004080;

/* Secondary colors */
--color-secondary: #FF6B35;
--color-secondary-hover: #E55A2B;

/* Neutral colors */
--color-background: #FFFFFF;
--color-surface: #F5F5F5;
--color-text: #1A1A1A;
--color-text-light: #666666;
--color-border: #E0E0E0;

/* State colors */
--color-error: #DC3545;
--color-success: #28A745;
--color-warning: #FFC107;
--color-info: #17A2B8;
--color-disabled: #CCCCCC;
</code></pre>
<p><strong>Why this is important:</strong>
Without a complete color palette, developers have to guess or define color values themselves, which leads to inconsistencies. A precise color palette enables consistent implementation and facilitates later maintenance and adjustments.</p>
<h3>2. Components with Different States</h3>
<p>Components are the heart of any frontend application. For a precise implementation, developers need not only the default design but all states and variants of a component.</p>
<p><strong>States that should be documented:</strong></p>
<ul>
<li><strong>Default:</strong> The standard state of the component</li>
<li><strong>Hover:</strong> What does the component look like when you hover over it?</li>
<li><strong>Focus:</strong> How is the focus visually represented (important for accessibility)?</li>
<li><strong>Active:</strong> The state during an interaction</li>
<li><strong>Disabled:</strong> What does the disabled variant look like?</li>
<li><strong>Error:</strong> Error states with corresponding visual cues</li>
<li><strong>Success:</strong> Success states for confirmations</li>
</ul>
<p><strong>Additional variants:</strong></p>
<ul>
<li><strong>Sizes:</strong> Small, Medium, Large</li>
<li><strong>Color variants:</strong> Primary, Secondary, Tertiary</li>
<li><strong>Layout variants:</strong> With/without icon, different text lengths</li>
<li><strong>Context variants:</strong> In different environments (header, footer, modal)</li>
</ul>
<p><strong>Example: Button Component</strong></p>
<p>A fully documented button component should include the following variants:</p>
<pre><code>Button Primary
├── Default (Background: Primary color, Text: White)
├── Hover (Background: Primary color darker, Cursor: Pointer)
├── Focus (Outline: 2px solid focus color, Outline-offset: 2px)
├── Active (Background: Primary color even darker)
├── Disabled (Background: Gray, Opacity: 0.6, Cursor: not-allowed)
└── Loading (Spinner icon, Text: "Loading...")

Button Secondary
├── Default (Background: Transparent, Border: Primary color)
├── Hover (Background: Primary color light, Text: Primary color)
└── [further states...]

Sizes
├── Small (Padding: 8px 16px, Font-Size: 14px)
├── Medium (Padding: 12px 24px, Font-Size: 16px)
└── Large (Padding: 16px 32px, Font-Size: 18px)
</code></pre>
<p><strong>Why this is important:</strong>
Missing state definitions mean that developers have to decide for themselves what components look like in different situations. This leads to inconsistencies and can impair the user experience.</p>
<h3>3. Typography: More Than Just Fonts</h3>
<p>Typography is a central part of the design that is often underestimated. A complete typography documentation should include all text styles used in the project.</p>
<p><strong>What a typography documentation should include:</strong></p>
<ul>
<li><strong>Font families:</strong> Which fonts are used (web fonts, system fonts)?</li>
<li><strong>Font sizes:</strong> All used font sizes with corresponding line heights</li>
<li><strong>Font weights:</strong> Regular, Medium, Bold, etc.</li>
<li><strong>Text styles:</strong> Headings (H1-H6), body text, captions, labels</li>
<li><strong>Responsive typography:</strong> How do font sizes change at different breakpoints?</li>
<li><strong>Color assignments:</strong> Which text color is used in which context?</li>
</ul>
<p><strong>Example Typography System:</strong></p>
<pre><code class="language-css">/* Headings (font shorthand: weight size/line-height family) */
--font-heading-1: 700 48px/56px 'Inter', sans-serif;
--font-heading-2: 700 36px/44px 'Inter', sans-serif;
--font-heading-3: 600 28px/36px 'Inter', sans-serif;
--font-heading-4: 600 24px/32px 'Inter', sans-serif;
--font-heading-5: 500 20px/28px 'Inter', sans-serif;
--font-heading-6: 500 18px/24px 'Inter', sans-serif;

/* Body Text */
--font-body-large: 400 18px/28px 'Inter', sans-serif;
--font-body: 400 16px/24px 'Inter', sans-serif;
--font-body-small: 400 14px/20px 'Inter', sans-serif;

/* Special */
--font-caption: 400 12px/16px 'Inter', sans-serif;
--font-label: 500 14px/20px 'Inter', sans-serif;
--font-button: 500 16px/24px 'Inter', sans-serif;
</code></pre>
<p><strong>Why this is important:</strong>
Typography significantly affects readability, hierarchy, and the overall appearance of a website. Unclear typography definitions lead to inconsistent text displays and can negatively impact the user experience.</p>
<h3>4. Breakpoints: The Basis for Responsive Design</h3>
<p>Responsive design is standard today, but many designers only provide 2-3 breakpoints, although 6 breakpoints would be optimal for a precise implementation. We recommend using the Bootstrap standard breakpoints as a basis, as they provide a proven and consistent foundation for responsive layouts.</p>
<p><strong>Bootstrap Standard Breakpoints (recommended):</strong></p>
<ul>
<li><strong>Extra Small (xs):</strong> &#x3C;576px (small smartphones in portrait mode)</li>
<li><strong>Small (sm):</strong> ≥576px (large smartphones in landscape mode)</li>
<li><strong>Medium (md):</strong> ≥768px (tablets)</li>
<li><strong>Large (lg):</strong> ≥992px (standard desktops)</li>
<li><strong>Extra Large (xl):</strong> ≥1200px (large desktops)</li>
<li><strong>Extra Extra Large (xxl):</strong> ≥1400px (very large screens)</li>
</ul>
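<p>As a quick illustration, the thresholds above can be encoded in a small helper that resolves the active breakpoint for a given viewport width (names and cutoffs are taken directly from the list; this is purely a sketch):</p>
<pre><code class="language-typescript">// Bootstrap 5 min-width thresholds in px, as listed above
const breakpoints = { xs: 0, sm: 576, md: 768, lg: 992, xl: 1200, xxl: 1400 } as const;

type Breakpoint = keyof typeof breakpoints;

// Return the breakpoint that applies at the given viewport width
function activeBreakpoint(width: number): Breakpoint {
    let current: Breakpoint = 'xs';
    for (const [name, min] of Object.entries(breakpoints) as [Breakpoint, number][]) {
        if (width >= min) current = name;
    }
    return current;
}
</code></pre>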
<p><strong>What should be documented for each breakpoint:</strong></p>
<ul>
<li><strong>Layout changes:</strong> How does the grid system change?</li>
<li><strong>Component adjustments:</strong> Which components are displayed differently?</li>
<li><strong>Navigation:</strong> How does the navigation work on different screen sizes?</li>
<li><strong>Typography:</strong> How do font sizes change?</li>
<li><strong>Spacing:</strong> How do the distances between elements change?</li>
</ul>
<p><strong>Example Breakpoint Documentation (Bootstrap Standard):</strong></p>
<pre><code>Extra Small (xs) - &#x3C;576px
├── Navigation: Hamburger menu, Full-screen overlay
├── Grid: 1 column, Padding: 16px
├── Typography: H1: 32px, Body: 14px
└── Components: Stack layout, full width

Small (sm) - ≥576px
├── Navigation: Hamburger menu, Side-drawer
├── Grid: 1-2 columns, Padding: 20px
├── Typography: H1: 36px, Body: 15px
└── Components: Adapted layout, larger touch targets

Medium (md) - ≥768px
├── Navigation: Hamburger menu, side-drawer or compact menu
├── Grid: 2 columns, Padding: 24px
├── Typography: H1: 40px, Body: 16px
└── Components: 2-column layout possible

Large (lg) - ≥992px
├── Navigation: Horizontal menu, dropdowns
├── Grid: 12 columns, Padding: 32px
├── Typography: H1: 48px, Body: 16px
└── Components: Flex layout, different widths

Extra Large (xl) - ≥1200px
├── Navigation: Horizontal menu with advanced features
├── Grid: 12 columns, max-width container, Padding: 40px
├── Typography: H1: 48px, Body: 16px
└── Components: Optimized layout for large screens

Extra Extra Large (xxl) - ≥1400px
├── Navigation: Horizontal menu, extended navigation
├── Grid: 12 columns, max-width container, Padding: 48px
├── Typography: H1: 52px, Body: 18px
└── Components: Maximum widths, optimized spacing
</code></pre>
<p><strong>Why this is important:</strong>
Too few breakpoints lead to compromises in implementation. Developers then have to decide for themselves what the layout should look like on screen sizes that are not defined in the design. Using the Bootstrap standard breakpoints offers several advantages: they are proven, well-documented, and already understood by many developers. They also enable consistent implementation, especially when Bootstrap or similar frameworks are used. More breakpoints mean more control and more precise implementation across all device classes.</p>
<h3>5. Spacing between components: The invisible design element</h3>
<p>Spacing is a fundamental design element that is often overlooked. Consistent spacing creates visual hierarchy and improves readability.</p>
<p><strong>What should be documented:</strong></p>
<ul>
<li><strong>Spacing system:</strong> A consistent spacing system (e.g., 4px, 8px, 16px, 24px, 32px, 48px, 64px)</li>
<li><strong>Component spacing:</strong> Specific distances between different components</li>
<li><strong>Container padding:</strong> Inner padding of containers and sections</li>
<li><strong>Responsive spacing:</strong> How do distances change at different breakpoints?</li>
</ul>
<p><strong>Example Spacing System:</strong></p>
<pre><code class="language-css">--spacing-xs: 4px;
--spacing-sm: 8px;
--spacing-md: 16px;
--spacing-lg: 24px;
--spacing-xl: 32px;
--spacing-2xl: 48px;
--spacing-3xl: 64px;
--spacing-4xl: 96px;
</code></pre>
<p><strong>Application examples:</strong></p>
<ul>
<li>Spacing between buttons: <code>--spacing-md</code> (16px)</li>
<li>Spacing between sections: <code>--spacing-3xl</code> (64px)</li>
<li>Padding in cards: <code>--spacing-lg</code> (24px)</li>
<li>Spacing between form elements: <code>--spacing-md</code> (16px)</li>
</ul>
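<p>In stylesheets, these tokens are then consumed via <code>var()</code>, which keeps spacing changes centralized in one place:</p>
<pre><code class="language-css">.card {
  padding: var(--spacing-lg);       /* 24px inner padding */
}

.section + .section {
  margin-top: var(--spacing-3xl);   /* 64px between sections */
}
</code></pre>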
<p><strong>Why this is important:</strong>
Inconsistent spacing leads to an unprofessional appearance. A documented spacing system enables consistent implementation and facilitates maintenance.</p>
<h2>The Biggest Challenges: Navigation and Mobile Design</h2>
<h3>Navigation with Hover Effects: A Common Problem</h3>
<p>Navigation is often one of the most complex areas of a website, especially when hover effects and dropdown menus are involved. In the past, we have often had the problem that a lot of information was missing or not well-thought-out.</p>
<p><strong>What is often missing in navigation designs:</strong></p>
<ul>
<li><strong>Hover states:</strong> What do hover effects look like exactly? Which animations are used?</li>
<li><strong>Dropdown behavior:</strong> How do dropdowns open? Are there animations? How do they close?</li>
<li><strong>Mobile navigation:</strong> How does the navigation work on mobile devices? Is there a hamburger menu? What does it look like?</li>
<li><strong>Active states:</strong> How is the active menu item marked?</li>
<li><strong>Transitions:</strong> Which transitions are used?</li>
</ul>
<p><strong>What a complete navigation documentation should include:</strong></p>
<pre><code>Desktop Navigation
├── Default state
│   ├── Logo position and size
│   ├── Menu item styling (font, size, color)
│   └── Spacing between menu items
├── Hover state
│   ├── Background color or underline
│   ├── Text color change
│   ├── Transition duration and easing
│   └── Cursor change
├── Dropdown menu
│   ├── Trigger behavior (hover vs. click)
│   ├── Dropdown position and alignment
│   ├── Dropdown styling (background, shadow, border)
│   ├── Animation (fade-in, slide-down, etc.)
│   └── Sub-menu behavior
└── Active state
    ├── Visual marking
    └── Styling differences

Mobile Navigation
├── Hamburger menu
│   ├── Icon position and size
│   ├── Animation on open/close
│   └── Icon transformation (to X)
├── Off-canvas menu
│   ├── Slide direction (from left, right, top)
│   ├── Background overlay
│   ├── Menu width and position
│   └── Animation and transition
└── Menu items
    ├── Layout (list, grid, etc.)
    ├── Spacing between items
    └── Touch target sizes (at least 44x44px)
</code></pre>
<p><strong>Why this is important:</strong>
Navigation is often the first point of contact for users. Unclear navigation designs lead to delays in implementation and can significantly impair the user experience. It is particularly problematic when desktop designs are available, but mobile versions are missing.</p>
<h3>Mobile Implementation: Often Neglected, but Critical</h3>
<p>Mobile implementation is one of the most common pain points in design handoff. Many designers only provide desktop designs and expect developers to decide for themselves what the mobile version should look like.</p>
<p><strong>Common problems with mobile designs:</strong></p>
<ul>
<li><strong>Missing mobile designs:</strong> Only desktop designs are provided</li>
<li><strong>Incomplete mobile designs:</strong> Some pages have mobile designs, others do not</li>
<li><strong>Unclear breakpoints:</strong> It is not clear at which screen size which version is used</li>
<li><strong>Touch interactions:</strong> No specifications for touch gestures and interactions</li>
<li><strong>Performance:</strong> No consideration of loading times and performance on mobile devices</li>
</ul>
<p><strong>What a complete mobile documentation should include:</strong></p>
<ul>
<li><strong>Mobile-first approach:</strong> Designs for small screens first</li>
<li><strong>Touch targets:</strong> Minimum sizes for clickable elements (44x44px)</li>
<li><strong>Navigation:</strong> Detailed mobile navigation designs</li>
<li><strong>Component adjustments:</strong> What do components look like on mobile?</li>
<li><strong>Layout changes:</strong> What layout changes are necessary on mobile?</li>
<li><strong>Interactions:</strong> Touch gestures, swipe behavior, etc.</li>
</ul>
<p><strong>Example Mobile Specifications:</strong></p>
<pre><code>Mobile Navigation
├── Hamburger menu
│   ├── Position: Top-left or top-right
│   ├── Size: 44x44px (touch target)
│   ├── Icon: 3 lines, animation to X on open
│   └── Z-index: Above all other elements
├── Off-canvas menu
│   ├── Slide direction: From the left
│   ├── Width: 80% of the screen width, max. 320px
│   ├── Background: White with shadow
│   ├── Overlay: Dark background with 50% opacity
│   └── Animation: 300ms ease-in-out
└── Menu items
    ├── Layout: Vertical list
    ├── Padding: 16px per item
    ├── Font-Size: 18px (larger for better readability)
    └── Touch Target: At least 44px height

Mobile Components
├── Buttons: Full width or adjusted size
├── Forms: Stack layout, larger input fields
├── Cards: Full width, adjusted padding
└── Images: Responsive, optimized for mobile
</code></pre>
<p><strong>Why this is important:</strong>
Mobile usage exceeds desktop usage in many projects. Missing mobile designs lead to delays as developers have to decide for themselves what the mobile version should look like. A thorough mobile implementation is essential for project success.</p>
<h2>Best Practices for a Successful Design Handoff</h2>
<h3>1. Structured Figma File</h3>
<p>A well-organized Figma file makes the work much easier. Developers should be able to quickly find what they are looking for.</p>
<p><strong>Recommended structure:</strong></p>
<pre><code>Design System
├── Colors
│   ├── Primary
│   ├── Secondary
│   ├── Neutrals
│   └── States
├── Typography
│   ├── Headings
│   ├── Body
│   └── Special
├── Components
│   ├── Buttons (all states)
│   ├── Forms
│   ├── Cards
│   └── Navigation
├── Spacing
│   └── Spacing System
├── Breakpoints
│   └── Responsive Guidelines
└── Pages
    ├── Desktop
    │   ├── Homepage
    │   ├── About
    │   └── Contact
    └── Mobile
        ├── Homepage
        ├── About
        └── Contact
</code></pre>
<h3>2. Comments and Annotations</h3>
<p>Comments in Figma can provide important contextual information that is not obvious in the design.</p>
<p><strong>What should be commented on:</strong></p>
<ul>
<li><strong>Interactions:</strong> How do hover effects, animations, etc. work?</li>
<li><strong>States:</strong> What states are there and when are they used?</li>
<li><strong>Breakpoints:</strong> At what screen size does the layout change?</li>
<li><strong>Special requirements:</strong> Accessibility features, performance considerations, etc.</li>
</ul>
<h3>3. Export Options</h3>
<p>Developers need assets in different formats and sizes. A clear export strategy saves time.</p>
<p><strong>Recommended export formats:</strong></p>
<ul>
<li><strong>Icons:</strong> SVG (scalable, small file size)</li>
<li><strong>Images:</strong> WebP or optimized JPG/PNG</li>
<li><strong>Logos:</strong> SVG for vector graphics, PNG for raster graphics</li>
<li><strong>Colors:</strong> Hex values or CSS variables</li>
</ul>
<h2>Common Mistakes and How to Avoid Them</h2>
<h3>Mistake 1: Only providing desktop designs</h3>
<p><strong>Problem:</strong> Many designers only provide desktop designs and expect developers to develop the mobile version themselves.</p>
<p><strong>Solution:</strong> Always provide mobile designs, at least for the most important pages. If time is short, at least design the navigation and main components on mobile.</p>
<h3>Mistake 2: Missing state definitions</h3>
<p><strong>Problem:</strong> Components are only shown in the default state, hover, focus, and other states are missing.</p>
<p><strong>Solution:</strong> Document all states. If time is short, at least define the most important states (hover, focus, disabled).</p>
<h3>Mistake 3: Unclear breakpoints</h3>
<p><strong>Problem:</strong> It is not clear at which screen size which version is used.</p>
<p><strong>Solution:</strong> Explicitly define and document breakpoints. Use comments in Figma to mark breakpoint changes.</p>
<h3>Mistake 4: Inconsistent spacing</h3>
<p><strong>Problem:</strong> Spacing is set "by feel" without a consistent system.</p>
<p><strong>Solution:</strong> Define a spacing system and apply it consistently. This can also be documented afterwards.</p>
<h3>Mistake 5: Missing navigation specifications</h3>
<p><strong>Problem:</strong> Navigation is only shown visually, without details on hover effects, dropdowns, or mobile versions.</p>
<p><strong>Solution:</strong> Think through the navigation thoroughly and document all interactions. Do not neglect mobile navigation in particular.</p>
<h2>Conclusion: Investment in Quality Pays Off</h2>
<p>A well-thought-out design handoff is an investment that pays off several times over. The time that designers invest in complete documentation saves developers weeks of work and leads to better results.</p>
<p><strong>The most important takeaways:</strong></p>
<ul>
<li><strong>Completeness:</strong> Document all states, variants, and breakpoints</li>
<li><strong>Mobile-first:</strong> Do not neglect mobile designs</li>
<li><strong>Structure:</strong> Organize design files well</li>
<li><strong>Communication:</strong> Use comments and annotations for context</li>
<li><strong>Consistency:</strong> Define design systems and spacing systems</li>
</ul>
<p>A good design handoff is like a detailed map – it not only shows developers the destination but also all the important waypoints and obstacles along the way. With complete documentation, developers can work efficiently and deliver high-quality results.</p>
<hr>
<p><em>Have you had experience with design handoffs? Share your insights in the comments and let us know what information is most important to you!</em></p>
<hr>
<p>:GlobalDesignHandoffChecklist</p>
<h2>FAQ – Frequently Asked Questions about Design Handoff</h2>
<h3>1. What information do developers need for a successful design handoff?</h3>
<p>Developers need a well-thought-out Figma file or similar with color palettes, components in different states (hover, focus, disabled), typography definitions, at least 6 breakpoints (Bootstrap standard: xs, sm, md, lg, xl, xxl), spacing between components, as well as detailed navigation designs including mobile versions. Well-thought-out hover effects and a thorough mobile implementation are particularly important.</p>
<h3>2. Why is the mobile implementation so important in the design handoff?</h3>
<p>The mobile implementation is critical because many designers only provide desktop designs. However, developers need detailed mobile versions for navigation, component layouts, and interactions. Missing mobile designs lead to delays, as developers have to decide for themselves how elements should be displayed on small screens.</p>
<h3>3. How many breakpoints should be defined in the design?</h3>
<p>Ideally, 6 breakpoints should be defined, based on the Bootstrap standard: Extra Small (xs) &#x3C;576px, Small (sm) ≥576px, Medium (md) ≥768px, Large (lg) ≥992px, Extra Large (xl) ≥1200px, and Extra Extra Large (xxl) ≥1400px. These breakpoints provide a precise responsive implementation and are proven, well-documented, and already understood by many developers. In practice, often only 2-3 breakpoints are provided, which leads to compromises in the implementation. More breakpoints mean more control over the layout on different screen sizes.</p>
<h3>4. What does a complete component documentation include?</h3>
<p>A complete component documentation should include all states of a component: default, hover, focus, disabled, active, error, and success. In addition, variants for different sizes, colors, or contexts should be documented. This allows developers to implement them precisely without having to ask questions.</p>
<h3>5. Why are hover effects in navigation problematic?</h3>
<p>Hover effects in navigation are problematic because they are often not well-thought-out. Developers need clear specifications for transitions, animations, dropdown behavior, and mobile alternatives. Missing information leads to delays and compromises in implementation.</p>
<h3>6. What role does the color palette play in the design handoff?</h3>
<p>The color palette is fundamentally important as it defines the visual identity. Developers need not only the basic colors but also variants for hover states, error states, success messages, and disabled states. A complete color palette with hex values or CSS variables makes implementation much easier.</p>
<h3>7. How should a Figma file be structured for handoff?</h3>
<p>A well-structured Figma file should have clear areas for the design system (colors, typography, components), spacing system, breakpoints, and page designs (desktop and mobile). Comments and annotations help to convey context. A logical folder structure makes it easier for developers to navigate.</p>]]></content:encoded>
            <category>Project Management</category>
            <category>Development</category>
            <enclosure url="https://blueshoe.io/img/blog/design_handoff.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Mastering Django Caching: The Ultimate Guide from Cachalot to the Low-Level API]]></title>
            <link>https://blueshoe.io/blog/django-caching-cachalot-performance-guide</link>
            <guid>https://blueshoe.io/blog/django-caching-cachalot-performance-guide</guid>
            <pubDate>Wed, 14 Jan 2026 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Is your Django application slower than it should be? Long loading times not only frustrate your users but are also penalized by search engines. The solution often lies in a smart caching strategy. But where do you start?</p>
<p><img src="/img/blog/django-caching.svg" alt="Django Caching Low-level and Cachalot">{.object-cover .max-w-full .mb-5}</p>
<p>In this guide, we'll walk you through the world of Django caching. We'll introduce you to the "magical" helper <a href="https://django-cachalot.readthedocs.io/en/latest/">Django Cachalot</a>{target="_blank"}, which saves you a lot of work, and compare it with Django's powerful built-in tools—from template caching to the granular low-level API. By the end, you'll know exactly which technique is right for your use case.</p>
<h2>1. Why is Caching in Django essential? The hunt for milliseconds</h2>
<p>When a Django app becomes slow, it's almost always due to the same culprits:</p>
<ul>
<li>Too many or too expensive database queries</li>
<li>Complex template rendering</li>
<li>Computationally intensive logic in the views</li>
</ul>
<p>These problems add up to long loading times, impatient users, and poor SEO rankings.</p>
<p>Caching is the game-changer here. It stores frequently used results and delivers them in a flash, without Django having to recalculate everything. This saves server load, reduces database load, and significantly improves the user experience.</p>
<hr>
<h2>2. Django Cachalot: The Autopilot for your ORM</h2>
<p>Django Cachalot is an open-source package (<a href="https://django-cachalot.readthedocs.io/en/latest/">https://django-cachalot.readthedocs.io/en/latest/</a>{target="_blank"}) that automatically caches all ORM queries. You don't have to write a single line of cache logic for it.</p>
<h3>How does it work?</h3>
<p>Cachalot remembers the results of your ORM queries and automatically deletes them as soon as you change data, i.e., on <strong>INSERT</strong>, <strong>UPDATE</strong>, or <strong>DELETE</strong>.
The auto-invalidation ensures that your cache always stays up-to-date without you having to actively manage it.</p>
<h2>The big advantage: Auto-invalidation</h2>
<p>So, for example, when a new post is saved:</p>
<pre><code class="language-python">post.title = "New Title"
post.save()
</code></pre>
<p>Cachalot then automatically invalidates:</p>
<ul>
<li>all cached querysets of the Post model</li>
<li>querysets that access Post via ForeignKeys</li>
<li>querysets that contain joins affected by Post</li>
</ul>
<h3>Setup in 3 steps:</h3>
<pre><code class="language-bash">pip install django-cachalot
</code></pre>
<p>Then add to your <code>settings.py</code>:</p>
<pre><code class="language-python">INSTALLED_APPS = [
    # ...
    "cachalot",
]
</code></pre>
<p>Choose your cache backend, for example, Redis:</p>
<pre><code class="language-python">CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
    }
}
</code></pre>
<h3>Advantages of Django Cachalot</h3>
<ul>
<li>Enormous time savings during development</li>
<li>No manual cache logic needed in the code</li>
<li>Especially effective for read-intensive applications</li>
</ul>
<h3>Disadvantages and limits of Django Cachalot</h3>
<ul>
<li>Less granular control</li>
<li>Potential pitfalls with very complex queries or Raw-SQL</li>
<li>Sometimes outdated results are served if automatic invalidation fails</li>
</ul>
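<p>Some of this control can be reclaimed through Cachalot's own settings. A minimal sketch for <code>settings.py</code>, assuming the setting names from the Cachalot documentation (verify them against the version you use; the excluded table is just an example):</p>
<pre><code class="language-python"># settings.py
CACHALOT_ENABLED = True     # global on/off switch
CACHALOT_TIMEOUT = 60 * 60  # maximum lifetime of cached querysets, in seconds
CACHALOT_UNCACHABLE_TABLES = [
    "django_session",       # example: skip tables with very frequent writes
]
</code></pre>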
<hr>
<h2>3. The Standard Repertoire: Django's built-in Caching Tools</h2>
<p>Sometimes you need more control than Cachalot can offer. In these cases, Django's built-in caching tools are worthwhile.</p>
<h3>3a) Template-Fragment-Caching: The static islands</h3>
<p>Template-fragment-caching is suitable if you only want to cache certain areas of a page, for example, navigations, teaser boxes, or sidebar components with many database queries.</p>
<pre><code class="language-html">{% raw %}{% load cache %}{% endraw %}
{% raw %}{% cache 600 sidebar %}{% endraw %}
  &#x3C;div class="sidebar">
    {% raw %}{% for post in popular_posts %}{% endraw %}
      &#x3C;a href="{{ post.get_absolute_url }}">{{ post.title }}&#x3C;/a>
    {% raw %}{% endfor %}{% endraw %}
  &#x3C;/div>
{% raw %}{% endcache %}{% endraw %}
</code></pre>
<h3>3b) View-Level-Caching: The whole page at once</h3>
<p>If a page looks the same for all visitors, for example, a blog overview or a landing page, you can cache the entire view.</p>
<pre><code class="language-python">from django.views.decorators.cache import cache_page
from django.shortcuts import render
from .models import Post

@cache_page(60 * 15)  # 15 minutes
def blog_list(request):
    posts = Post.objects.all()
    return render(request, "blog/list.html", {"posts": posts})
</code></pre>
<h3>3c) The Low-Level Cache API: Maximum control for pros</h3>
<p>With the low-level API, you control which data gets into the cache, how long it stays there, and when it should be renewed. Perfect for complex objects, external API calls, or specific querysets.</p>
<pre><code class="language-python">from django.core.cache import cache

def get_weather_data(city):
    cache_key = f"weather_{city}"
    data = cache.get(cache_key)

    if data is None:
        # External API call, for example, OpenWeather
        data = fetch_weather_from_api(city)
        cache.set(cache_key, data, timeout=3600)  # 1 hour

    return data
</code></pre>
<p>Great! You now know the basics of Django Cachalot and the built-in caching mechanisms. But when do you use what? The next section will help you decide.</p>
<h2>4. Head-to-Head: Cachalot vs. Low-Level API, when to use which</h2>
<p>In many projects, the question arises: Is an automatic tool like Django Cachalot enough, or do I need the full control of the low-level API? The following table helps you with the classification.</p>
<table>
<thead>
<tr><th>Criterion</th><th>Django Cachalot</th><th>Low-Level API</th></tr>
</thead>
<tbody>
<tr><td>Control</td><td>Low, as it is automatic</td><td>Maximum, as it is completely manual</td></tr>
<tr><td>Setup effort</td><td>Very low</td><td>Rather high</td></tr>
<tr><td>Area of application</td><td>ORM queries</td><td>Any objects and data sources</td></tr>
<tr><td>Ideal for</td><td>Quick wins, read-intensive pages</td><td>Complex logic and fine performance tuning</td></tr>
</tbody>
</table>
<h2>5. Practical Guide: The right strategy for your project</h2>
<p>The theory is nice, but what does it look like in real projects? Take a look at these three typical scenarios.</p>
<h3>Scenario 1: A blog or a news portal</h3>
<p>Many read accesses, comparatively few write accesses, but many returning visitors.</p>
<p>Recommendation:</p>
<ul>
<li>Use Django Cachalot for the article lists, for example, for the homepage and categories.</li>
<li>Use view-level caching for the detail pages, i.e., individual articles that do not change with every page load.</li>
</ul>
<h3>Scenario 2: An e-commerce shop</h3>
<p>In the shop, product data is changed rather infrequently, but shopping carts and personalized recommendations are very dynamic.</p>
<p>Recommendation:</p>
<ul>
<li>Use template-fragment-caching for product lists and category pages.</li>
<li>Use the low-level API for the shopping cart, checkout, and personalized recommendations, as this involves highly user-dependent data.</li>
</ul>
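<p>The crucial point with personalized data is the cache key: it must include the user, otherwise users would see each other's data. A minimal, framework-free sketch of the pattern (a dict stands in for the cache backend here; <code>compute_recommendations</code> is a hypothetical placeholder for the expensive logic):</p>
<pre><code class="language-python">_cache = {}  # stand-in for django.core.cache.cache in this sketch

def compute_recommendations(user_id):
    # hypothetical expensive computation (DB queries, scoring, ...)
    return [f"product-{user_id}-{n}" for n in range(3)]

def get_recommendations(user_id):
    # the user id is part of the key, so personalized results never leak
    cache_key = f"recommendations_{user_id}"
    if cache_key not in _cache:
        _cache[cache_key] = compute_recommendations(user_id)
    return _cache[cache_key]
</code></pre>
<p>With Django's low-level API, the dict access becomes <code>cache.get(cache_key)</code> and <code>cache.set(cache_key, ..., timeout=...)</code> with a short timeout, since recommendations change frequently.</p>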
<h3>Scenario 3: A complex B2B application with a lot of data</h3>
<p>In B2B applications, a lot of data is often aggregated, filtered, and calculated. The business logic is usually much more complex.</p>
<p>Recommendation:</p>
<ul>
<li>Use the low-level API to specifically cache calculated key figures, reports, or dashboards.</li>
<li>Use Django Cachalot for standard list views, i.e., where many data records are simply displayed.</li>
</ul>
<hr>
<h2>6. Conclusion: The mix makes the difference</h2>
<p>Caching is not an all-or-nothing feature, but a modular system. With the right building blocks, you ensure that your Django app feels fast, even when complex things are happening in the background.</p>
<p>A sensible approach is:</p>
<ul>
<li>Start simple, for example with Django Cachalot or view-level caching.</li>
<li>Measure the effect with tools like the <a href="https://django-debug-toolbar.readthedocs.io/en/latest/">Django Debug Toolbar</a>{target="_blank"}.</li>
<li>Optimize specifically where you see the biggest performance problems, for example, with the low-level API.</li>
</ul>
<p>The rule of thumb is:</p>
<blockquote>
<p>Start as simple as possible and add complexity only where it is really worthwhile.</p>
</blockquote>
<p>Which caching technique is your favorite for Django projects? Do you have another secret tip? Share your experiences in the comments and help other developers improve their Django performance.</p>
<hr>
<h2>7. Frequently Asked Questions (FAQ)</h2>
<h3>Can I use Django Cachalot with Redis?</h3>
<p>Yes, this is a very common combination. Redis is extremely fast and very reliable as a cache backend. You just need to configure your <code>CACHES</code> setting accordingly.</p>
<h3>How do I measure if my caching is working?</h3>
<p>You can use the <a href="https://django-debug-toolbar.readthedocs.io/en/latest/">Django Debug Toolbar</a> to see how many database queries are made per request and whether this number decreases after implementing caching. Additionally, monitoring tools like Sentry Performance or APM solutions help to keep an eye on response times.</p>
<h3>Can the cache with Django Cachalot become outdated?</h3>
<p>As a rule, no. Django Cachalot automatically invalidates all cached results as soon as data changes, for example through saving in the admin or via an API. This ensures that users do not see outdated data; only in edge cases, such as raw SQL that bypasses the ORM, can stale results slip through.</p>
<h3>When should you use the Django Low-Level Cache?</h3>
<p>The Low-Level Cache API is worthwhile whenever you want to decide for yourself what data is cached and for how long. Typical examples are:</p>
<ul>
<li>External API requests</li>
<li>Expensive calculations</li>
<li>Complex composite objects from multiple sources</li>
</ul>
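<p>For expensive calculations that only need to be cached within a single process, Python's standard library already offers memoization via <code>functools.lru_cache</code> — a quick sketch (note: unlike Django's cache, this is not shared between workers and is lost on restart):</p>
<pre><code class="language-python">from functools import lru_cache

@lru_cache(maxsize=128)
def fib(n):
    # naive recursive calculation, made cheap through per-process memoization
    return n if n < 2 else fib(n - 1) + fib(n - 2)
</code></pre>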
<h3>Can caching cause errors?</h3>
<p>Caching can cause problems when outdated data is returned or when business logic changes, but the cache still contains old results. Therefore, a well-thought-out invalidation strategy is important. Django Cachalot greatly simplifies this part, as changes to models automatically lead to the deletion of the affected cache entries.</p>]]></content:encoded>
            <category>Django</category>
            <category>Development</category>
            <category>Performance</category>
            <enclosure url="https://blueshoe.io/img/blog/django-caching.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Mastering Django Celery: Managing asynchronous tasks like a pro]]></title>
            <link>https://blueshoe.io/blog/django-celery-in-production</link>
            <guid>https://blueshoe.io/blog/django-celery-in-production</guid>
            <pubDate>Tue, 15 Jul 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Integrating Celery into your Django application enables powerful, asynchronous task processing. Learn how to effectively set up Django-Celery and optimise it for production environments to make your application stable and performant.</p>
<p>Have you ever experienced this? An order comes in - and suddenly your Django app hangs because emails are being sent or PDFs are being generated? This is where <a href="https://docs.celeryq.dev/en/latest/django/first-steps-with-django.html">Django Celery</a>{target="_blank"} comes into play. The integration into your Django application enables powerful, asynchronous task processing. Learn how to effectively set up Django-Celery and optimise it for production environments to make your application stable and performant.</p>
<p><img src="/img/blog/celery.svg" alt="Everything you need to know about Django-Celery"></p>
<h2>Why Django-Celery?</h2>
<p>Many tasks in web applications do not need to be processed synchronously. These include, for example, sending emails, generating PDFs or processing large amounts of data. This is exactly where Celery comes into play: it allows you to perform these tasks in the background (asynchronously).</p>
<p><strong>Typical use cases for Celery in Django:</strong></p>
<ul>
<li>Sending emails</li>
<li>Image processing</li>
<li>External API calls</li>
<li>Database cleanup or analysis</li>
<li>Recurring tasks (e.g. report generation with celery beat)</li>
</ul>
<h2>How to set up Celery</h2>
<h3>1. Installation of the required packages</h3>
<pre><code class="language-bash">pip install celery redis
</code></pre>
<p>If you need periodic tasks:</p>
<pre><code class="language-bash">pip install django-celery-beat
</code></pre>
<h3>2. Prepare project structure</h3>
<p>Create a file <code>celery.py</code> in your project folder:</p>
<pre><code class="language-python"># myproject/celery.py
import os
from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

app = Celery("myproject")
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()
</code></pre>
<p>In <code>myproject/__init__.py</code>:</p>
<pre><code class="language-python">from .celery import app as celery_app
__all__ = ["celery_app"]
</code></pre>
<h3>3. Example configuration in <code>settings.py</code></h3>
<pre><code class="language-python">CELERY_BROKER_URL = "redis://localhost:6379/0"
CELERY_RESULT_BACKEND = "redis://localhost:6379/0"
CELERY_ACCEPT_CONTENT = ["json"]
CELERY_TASK_SERIALIZER = "json"
</code></pre>
<p>Test your setup with a simple task.</p>
<hr>
<h2>This is what Celery tasks look like in Django</h2>
<h3>1. Simple task</h3>
<pre><code class="language-python">from celery import shared_task

@shared_task
def send_email_to_user(user_id):
    print(f"E-Mail to user {user_id} sent")
</code></pre>
<p>This is now called with:</p>
<pre><code class="language-python">send_email_to_user.delay(user.id)
</code></pre>
<h3><code>.delay()</code> vs. <code>.apply_async()</code></h3>
<ul>
<li><code>.delay()</code> is a shortcut for <code>.apply_async()</code> with standard parameters.</li>
<li>With <code>.apply_async()</code> you can use <code>countdown</code>, <code>eta</code> or <code>retry</code>, for example.</li>
</ul>
<h3>2. More complex tasks with transactions and logging</h3>
<pre><code class="language-python">from celery import shared_task

@shared_task
def run_customer_basket_groups_processing(basket_id):
    from shop.order.processing import BasketProcessor
    from shop.models import Basket
    from shop.exceptions import ReachedAdvertisingMediumQuotaWarning
    from django.db import transaction
    import logging

    logger = logging.getLogger(__name__)

    basket = Basket.objects.get(pk=basket_id)

    try:
        if basket.user.groups.exists():
            with transaction.atomic():
                for cbg in basket.customerbasketgroups.all():
                    user = cbg.consignee.user
                    customer_basket = Basket.objects.create(
                        user=user,
                        field_staff=basket.user,
                        discount=basket.discount,
                        discount_code=basket.discount_code,
                        shipment_options=basket.shipment_options
                    )
                    customer_basket.add_basket_lines_to_basket(
                        cbg.basket_lines, as_stock_order=True, check_quotas=False
                    )
                    basket_processor = BasketProcessor(basket=customer_basket)
                    basket_processor.process_basket()
    except Exception as e:
        logger.error(f"Error when processing basket {basket_id}: {e}")

</code></pre>
<p><code>transaction.atomic()</code> ensures that no half-finished order is created in the event of an error.</p>
<hr>
<h2>Periodic tasks with <code>django-celery-beat</code></h2>
<h3>Activate the app:</h3>
<pre><code class="language-python">INSTALLED_APPS += ["django_celery_beat"]
</code></pre>
<h3>Run the migrations:</h3>
<pre><code class="language-bash">python manage.py migrate
</code></pre>
<h3>Create example task:</h3>
<pre><code class="language-python">from django_celery_beat.models import PeriodicTask, IntervalSchedule

schedule, _ = IntervalSchedule.objects.get_or_create(every=10, period=IntervalSchedule.SECONDS)
PeriodicTask.objects.create(interval=schedule, name="Example Task", task="myapp.tasks.send_email_to_user")
</code></pre>
<p>Done! Your first periodic task has been created. This means Celery runs the task at this interval - for example, to send a notification or reminder email at regular intervals. This configuration is particularly useful for recurring tasks that should be triggered on a schedule.</p>
<p>The admin interface then shows you all the tasks set up with status and schedule - ideal for monitoring and management in live operation.</p>
<p><img src="/img/blog/celery-1.png" alt="Django-Celery Admin"></p>
<hr>
<h2>Celery in production</h2>
<h3>1. Start Worker</h3>
<pre><code class="language-bash">celery -A myproject worker -l info
</code></pre>
<h3>2. Start 'Beat'</h3>
<pre><code class="language-bash">celery -A myproject beat -l info
</code></pre>
<h3>3. Use deployment tools</h3>
<ul>
<li>Use Supervisor or systemd for process management.</li>
<li>Log errors e.g. with Sentry or external tools.</li>
<li>Implement retries for temporary errors.</li>
</ul>
<h2>Frequent stumbling blocks</h2>
<ul>
<li>Redis not started → <code>ConnectionRefusedError</code></li>
<li><code>.autodiscover_tasks()</code> forgotten → Tasks are not found</li>
<li>Task hangs → Worker not started or deadlock in database</li>
</ul>
<h2>Best practices for clean tasks</h2>
<ul>
<li>Keep tasks small, fast and repeatable</li>
<li>Use logging (e.g. <code>logger.info()</code>, <code>logger.error()</code>)</li>
<li>Plan timeouts and retry logic</li>
<li>Use <code>transaction.atomic()</code> for database actions</li>
<li>Use <code>.apply_async(countdown=...)</code> for scheduled execution</li>
</ul>
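<p>Global timeouts, for example, can be set directly in <code>settings.py</code>. With the <code>namespace="CELERY"</code> configuration shown above, the uppercase names map to Celery's <code>task_time_limit</code> and <code>task_soft_time_limit</code> options; the values here are only illustrative:</p>
<pre><code class="language-python"># settings.py
CELERY_TASK_TIME_LIMIT = 300       # hard limit: the worker kills the task after 5 minutes
CELERY_TASK_SOFT_TIME_LIMIT = 240  # SoftTimeLimitExceeded is raised first, allowing cleanup
</code></pre>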
<h2>Conclusion</h2>
<p>Celery is a powerful tool that can make your Django app more flexible, faster and more robust. Whether simple email tasks or complex order processing - with the right setup and a few best practices, you'll be on the safe side.</p>
<p>Get started now - and bring Django Celery to your app!</p>]]></content:encoded>
            <category>Django</category>
            <category>Docker</category>
            <category>Development</category>
            <category>Operations</category>
            <enclosure url="https://blueshoe.io/img/blog/celery.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Django-CORS: Security & Best Practices]]></title>
            <link>https://blueshoe.io/blog/django-cors-in-production</link>
            <guid>https://blueshoe.io/blog/django-cors-in-production</guid>
            <pubDate>Fri, 17 Jan 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Configuring CORS (Cross-Origin Resource Sharing) correctly is crucial for the security and functionality of your Django application. Learn how to effectively implement Django-CORS to handle external requests securely and reliably.</p>
<p><img src="/img/blogs/cors.webp" alt="Django CORS in Production">{.object-cover .max-w-full .mb-5}</p>
<p>::GlobalTitle{:size="lg" .mb-4}
Optimized CORS Configuration for Modern Web Applications
::</p>
<p>::GlobalParagraph
Django CORS is an essential tool for secure communication between services in a cross-origin environment. Especially in distributed architectures and container environments, a flexible and well-configured CORS strategy plays a key role. Learn which packages you need, how to install and configure them, and what the best practices are.
::</p>
<p>:::GlobalButton{:url="/technologies/python-django-agency/" :label="Learn more about our Django development services" :color="blue" .mb-6}
:::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Introduction to Django CORS
::</p>
<p>::GlobalParagraph
CORS (Cross-Origin Resource Sharing) defines which domains can access your application's resources. Browsers block such requests by default when they come from a different domain. With <a href="https://github.com/adamchainz/django-cors-headers">django-cors-headers</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}, you can selectively remove this restriction and make <a href="/our-services/api-development-company/">API</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}-access secure and controlled.
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Packages and Installation
::</p>
<p>::GlobalParagraph
For CORS configuration in Django, the <a href="https://github.com/adamchainz/django-cors-headers">django-cors-headers</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} package is used. This package is specifically developed for Django and integrates seamlessly with the existing middleware.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
1. Install the package with pip
::</p>
<p>::BlogCode</p>
<pre><code class="language-bash">pip install django-cors-headers
</code></pre>
<p>::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
2. Add the package to your Django settings
::</p>
<p>::BlogCode</p>
<pre><code class="language-python">INSTALLED_APPS += [
    'corsheaders',
]
</code></pre>
<p>::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
3. Register the middleware - it must be at the top of the list
::</p>
<p>::BlogCode</p>
<pre><code class="language-python">MIDDLEWARE = [
    'corsheaders.middleware.CorsMiddleware',
    *MIDDLEWARE,
]
</code></pre>
<p>::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Example: Web Application with CORS
::</p>
<p>::GlobalParagraph
For a typical web application where frontend and backend run on different domains, you can configure <code>CORS_ALLOWED_ORIGINS</code> like this:
::</p>
<p>::BlogCode</p>
<pre><code class="language-python">CORS_ALLOWED_ORIGINS = [
    "https://frontend.example.com",
    "https://partner.example.com",
]
</code></pre>
<p>::</p>
<p>::GlobalParagraph
If you need additional headers, for example for authorization, you can allow them with:
::</p>
<p>::BlogCode</p>
<pre><code class="language-python">from corsheaders.defaults import default_headers
CORS_ALLOW_HEADERS = list(default_headers) + [
    "Authorization",
    "X-Custom-Header",
]
</code></pre>
<p>::</p>
<p>::GlobalCallAnExpert{:bg="bg-bs-blue"}
#buttonBlockTitle
We can also make CORS work for your Django apps.
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
CORS in Container Environments
::</p>
<p>::GlobalParagraph
When using Django as a backend in containerized and distributed systems, it's often accessed by various services (frontend, API gateways, authentication servers). Setting up CORS rules correctly is crucial, especially for internal container communication.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Challenges:
::</p>
<p>::GlobalBlock{.ol-decimal .my-4}</p>
<ol>
<li><strong>Dynamic Origins:</strong> Services in containers may use different IP addresses or dynamic subdomains.</li>
<li><strong>Preflight Requests:</strong> OPTIONS requests can be blocked by proxies or load balancers, leading to issues.
::</li>
</ol>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Solution:
::</p>
<p>::GlobalParagraph
Using regex-based whitelists and flexible methods:
::
<p>::BlogCode</p>
<pre><code class="language-python">CORS_ALLOWED_ORIGIN_REGEXES = [
    r"^https://.*\.example\.com$",
]
CORS_ALLOW_METHODS = [
    "GET",
    "POST",
    "PUT",
    "DELETE",
    "OPTIONS",
]
</code></pre>
<p>::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Common Pitfalls
::</p>
<p>::GlobalBlock{.ol-decimal .my-4}</p>
<ol>
<li><strong>Wildcard Configuration</strong>: Avoid using <code>CORS_ALLOW_ALL_ORIGINS=True</code> in production - it's a security risk.</li>
<li><strong>Preflight Requests in Containers</strong>: Make sure API gateways or proxies properly forward OPTIONS requests.</li>
<li><strong>Logs and Debugging</strong>: Monitor failed CORS requests using Django logs to quickly identify misconfigurations.
::</li>
</ol>
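<p>To verify the second point in practice, a preflight check can also be scripted instead of clicked through in a browser. A minimal sketch using only the standard library (the URLs are placeholders, and <code>origin_allowed</code> is a helper invented here, not part of django-cors-headers):</p>

```python
import urllib.request

def preflight_headers(url, origin, method="GET"):
    """Send a CORS preflight (OPTIONS) request and return the response headers."""
    req = urllib.request.Request(url, method="OPTIONS", headers={
        "Origin": origin,
        "Access-Control-Request-Method": method,
    })
    with urllib.request.urlopen(req) as resp:
        return dict(resp.headers)

def origin_allowed(headers, origin):
    """True if the response headers permit cross-origin requests from `origin`."""
    allowed = headers.get("Access-Control-Allow-Origin")
    return allowed in ("*", origin)

# Example usage (placeholder URL):
# headers = preflight_headers("https://api.example.com/users/", "https://app.example.com")
# print(origin_allowed(headers, "https://app.example.com"))
```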
<p>::GlobalTitle{:size="lg" .mb-5}
Best Practices for Django CORS
::</p>
<p>::GlobalBlock{.ol-decimal .my-4}</p>
<ol>
<li>Use environment variables to separate configurations between development and production environments.</li>
<li>Regularly test your CORS configuration using tools like <a href="https://www.postman.com/">Postman</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} or browser developer tools.</li>
<li>Keep the number of allowed origins and headers to the absolute minimum needed.
::</li>
</ol>
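<p>For the first point, the origin list can be derived from an environment variable so the same settings file works in every environment. A minimal sketch (the variable name <code>DJANGO_CORS_ORIGINS</code> is our own convention here, not one defined by django-cors-headers):</p>

```python
import os

# e.g. DJANGO_CORS_ORIGINS="https://app.example.com,https://admin.example.com"
_raw_origins = os.environ.get("DJANGO_CORS_ORIGINS", "")

# Production sets the variable explicitly; without it we fall back to
# typical local development servers.
CORS_ALLOWED_ORIGINS = [o.strip() for o in _raw_origins.split(",") if o.strip()] or [
    "http://localhost:3000",
    "http://127.0.0.1:8000",
]
```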
<p>::GlobalTitle{:size="lg" .mb-5}
Conclusion
::</p>
<p>::GlobalParagraph
Django-cors-headers helps ensure that cross-origin requests are handled securely and controlled - whether you're developing locally or working in a scalable container environment. Proper CORS configuration is especially crucial in distributed systems to ensure smooth communication between services.
::</p>
<p>:::GlobalButton{:url="/technologies/python-django-agency/" :label="Want more Django insights?" :color="blue" .mb-6}
:::</p>
<p>::GlobalParagraph
Got questions or need support with implementation? <a href="https://blueshoe.io/michael/">Contact us</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} – we'll help you successfully implement your Django apps!
::</p>
<p>:::GlobalTitle{:size="lg" .mb-5 .mt-8}
Frequently Asked Questions
:::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
1. What is Django CORS and why is it important?
::</p>
<p>::GlobalParagraph
Django CORS (Cross-Origin Resource Sharing) allows you to specifically define which domains can access resources of a Django application. This is particularly relevant in modern, distributed systems to make APIs secure and flexible.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
2. How do I configure CORS in Django?
::</p>
<p>::GlobalParagraph
Configuration is done using the django-cors-headers package. It's installed via pip and added to <code>INSTALLED_APPS</code> and <code>MIDDLEWARE</code>. Domains that are allowed to access are defined in settings like <code>CORS_ALLOWED_ORIGINS</code>.
::</p>
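<p>As a minimal sketch, the relevant <code>settings.py</code> excerpt looks like this (app label and middleware path are the documented django-cors-headers values; the order matters - <code>CorsMiddleware</code> should sit above <code>CommonMiddleware</code>):</p>

```python
# settings.py (excerpt)
INSTALLED_APPS = [
    # ... your other apps ...
    "corsheaders",
]

MIDDLEWARE = [
    "corsheaders.middleware.CorsMiddleware",      # as high as possible,
    "django.middleware.common.CommonMiddleware",  # and before CommonMiddleware
    # ... your other middleware ...
]

CORS_ALLOWED_ORIGINS = [
    "https://frontend.example.com",  # placeholder domain
]
```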
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
3. What challenges arise with CORS in container environments?
::</p>
<p>::GlobalParagraph
Container architectures often face issues with dynamic origins and blocked preflight requests. These can be resolved using regex-based whitelists (<code>CORS_ORIGIN_REGEX_WHITELIST</code>) and proper proxy configuration.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
4. What are the best practices for secure Django CORS configuration?
::</p>
<p>::GlobalParagraph</p>
<ul>
<li>Avoid using <code>CORS_ALLOW_ALL_ORIGINS=True</code> in production</li>
<li>Keep allowed origins and headers to a minimum</li>
<li>Use environment variables to separate configurations between development and production environments
::</li>
</ul>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
5. How do I test my Django CORS configuration?
::</p>
<p>::GlobalParagraph{.mb-4}
CORS settings can be checked using tools like Postman or browser developer tools. Preflight requests should be properly forwarded and logs should be analyzed for errors.
::</p>]]></content:encoded>
            <category>Django</category>
            <category>Docker</category>
            <category>API</category>
            <category>Development</category>
            <category>Operations</category>
            <category>Security</category>
            <enclosure url="https://blueshoe.io/img/blogs/cors.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Django development for Kubernetes]]></title>
            <link>https://blueshoe.io/blog/django-development-for-kubernetes</link>
            <guid>https://blueshoe.io/blog/django-development-for-kubernetes</guid>
            <pubDate>Thu, 21 Jul 2022 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>We’ve had another think about how to make the running of Django applications more Cloud Native and ready for Kubernetes. The result was Django-Hurricane which we’d like to introduce in this blog post and which we will make available as an open-source project.</p>
<p><img src="/img/blog/django-development-for-kubernetes.jpg" alt="Django development for Kubernetes">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
We've always been fans of <strong>open source</strong> and we use it in many of our projects. Amongst other things, we use the <strong>Django framework</strong> a lot and also make an effort to support open-source projects when working on maintenance and further development. On top of this, <strong>Kubernetes</strong> has also been a regular presence for some time now – which is why we're increasingly striving to face the challenge of developing in a <strong>Cloud Native</strong> environment as much as possible. We've already written a few blog posts about this – check out Cloud Native Kubernetes development, for example.
:::
:::globalParagraph
The logical consequence was to think about how to make the running of Django applications more Cloud Native. The result was <strong>Django-Hurricane</strong> which we’d like to introduce in this blog post and will make available as an open-source tool.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
"Classic" deployment
:::
:::globalParagraph
First, let’s have a quick look at how Django is often run on a server or a VM. To do this, we usually use a stack with Nginx as a web server as well as uWSGI as an application server which runs the Django code. App servers like uWSGI have a <strong>highly optimised</strong> process model for this type of deployment, with many CPUs, several threads etc. With enough knowledge of the application and sufficient experience in the execution of Django applications, the application server can be configured in the most optimal way possible – also because there are typically enough options available.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Kubernetes deployment of Django applications
:::
<img src="/img/blog/django-development-for-kubernetes-1.jpg" alt="Kubernetes deployment of Django">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
In Kubernetes, however, things look a little different. You can still run your Django application in a <strong>container</strong> using uWSGI; however, an <strong>optimisation</strong> of the application server that is tailored to the hardware is out of place here – for starters, you can’t directly influence which node of the cluster the container is actually run on. In Kubernetes, the <strong>horizontal</strong> scaling of the application can still be configured using the <strong>replication value</strong>. On top of this, the Horizontal Pod Autoscaler can be used to specify a minimum and maximum number of pods. If the app was unable to cope with the requests, for example, Kubernetes could simply start a <strong>new, parallel</strong> container.
:::
:::globalParagraph
There are some other points in our Django application which we have to keep in mind when running it via Kubernetes – the <strong>liveness, readiness</strong> and <strong>startup</strong> probes, for example. Our application has to provide endpoints for the probes which will be regularly checked by Kubernetes in order to judge whether a container needs to be restarted, for example, or whether <strong>traffic</strong> can be received at that given moment. This can lead to <strong>boilerplate code</strong>, or even to the Django application having to provide functionality purely for the <strong>hosting</strong> setup.
:::
:::globalParagraph
We don’t really want any of this – we want an <strong>application server which is more directly linked with Kubernetes</strong>.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Django-Hurricane
:::
<img src="/img/blog/django-development-for-kubernetes-2.jpg" alt="Django Hurricane">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
That’s why we’ve developed <strong>Django Hurricane</strong> which we are building on the Tornado web server. Hurricane can be run as a simple Django <strong>management command</strong> and can also be configured using <strong>flags</strong>. No further configuration files are required.
:::
:::globalParagraph
The installation can simply take place using <em>pip</em>:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-bash">pip install hurricane
</code></pre>
<p>:::</p>
<p>:::globalParagraph
After that, Hurricane has to be added to the INSTALLED_APPS of the Django project. A logger should also be configured so that logging information will be displayed:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-python">INSTALLED_APPS += (
  'hurricane',
)

LOGGING = {
    # [...]
    "loggers": {
        # [...]
        "hurricane": {
          "handlers": ["console"],
          "level": os.getenv("HURRICANE_LOG_LEVEL", "INFO"),
          "propagate": False,
        },
    },
    # [...]
}
</code></pre>
<p>:::</p>
<p>:::globalParagraph
Hurricane can be started with the management command:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-bash">python manage.py serve
</code></pre>
<p>:::</p>
<p>:::globalParagraph
The configuration also takes place entirely via management command options. Here’s a list of the options that currently exist:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>static: serve static files</li>
<li>media: serve media files</li>
<li>autoreload: load code changes dynamically</li>
<li>debug: set a Tornado debug flag (not to be confused with Django’s DEBUG=True)</li>
<li>port: the port on which Tornado should run (default: 8000)</li>
<li>probe: the path for the liveness and readiness probes (default: /alive)</li>
<li>probe-port: the port under which the probes can be reached (default: the port after --port)</li>
<li>no-probe: deactivate a probe endpoint
:::</li>
</ul>
<p>:::globalParagraph
These basic configuration options already cover a fair few scenarios, and they are enough to run Hurricane on <strong>production systems</strong>.
:::
:::globalParagraph
In order for the probes to do their job, they actually have to be able to make a statement with regard to the <strong>accessibility</strong> and <strong>availability</strong> of the application; they mustn’t just be an endpoint which can be reached independently of the application. And while the Django code and the probe endpoint are served on two different ports with Hurricane, as the entire code basis is run in the same <strong>asyncio loop</strong>, a conclusion can still implicitly be drawn about the accessibility and availability of the application.
:::
:::globalParagraph
More logic might well be necessary to establish the liveness or readiness of an application than just an endpoint which sends back a 200. Hurricane’s probe endpoint therefore runs Django’s check framework, which makes it possible to express any <strong>additional logic</strong> the application requires as Django checks.
:::</p>
<p>:::GlobalPodcastSection{:videoId="KapP6ouEL3E" :videoPosition="left" .mb-6}
::::GlobalPreTitle{:color="text-bs-green" .mb-3}
OUR PODCAST: TOOLS FOR THE CRAFT
::::
::::GlobalTitle{:tag="h3" .mb-6}
E2: Remote Kubernetes development environments
::::
::::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert are talking about the various options developers have for running remote Kubernetes development environments.
::::
::::globalParagraph{:font-size="lg" }
More editions of our podcast can be found here:
::::
::::GlobalButton{:url="/podcast/" :label="More podcasts!" :color="green"}
::::
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Roadmap and further development
:::
<img src="/img/blog/django-development-for-kubernetes-3.jpg" alt="Roadmap">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
Even if Hurricane can already be used for production systems, the development is still in its <strong>infancy</strong>. We still have some <strong>further features</strong> in mind and the to-do list in the GitHub Repository has some remaining unticked boxes left. <strong>Support</strong> in the further development is always welcome, of course, just like in all open-source projects. Either way, we'll keep the ball rolling and there's surely going to be one or two blog posts in the future that deal with <strong>new Hurricane features</strong>, application scenarios and other related subjects.
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Django</category>
            <category>Python</category>
            <category>Development</category>
            <category>Operations</category>
            <enclosure url="https://blueshoe.io/img/blog/django-development-for-kubernetes.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Django Hurricane: Get Your Django Application on Course for Kubernetes]]></title>
            <link>https://blueshoe.io/blog/django-hurricane-kubernetes</link>
            <guid>https://blueshoe.io/blog/django-hurricane-kubernetes</guid>
            <pubDate>Fri, 22 Aug 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>With Django Hurricane, you can develop scalable applications that are a perfect fit for Kubernetes. Learn how to optimize your workflows and harness the full power of modern container orchestration.</p>
<p>Anyone who loves <a href="https://www.djangoproject.com/">Django</a>{target="_blank"} appreciates its strengths: rapid development, a robust ORM, and a huge community. But when it comes to deployment in modern, cloud-native environments like Kubernetes, the traditional Django setup hits its limits. This is where Django Hurricane comes in - a project that gets your Django application ready for orchestration.</p>
<p><img src="/img/blog/django-hurricane.svg" alt="Django Hurricane">{.object-cover .max-w-full .mb-5}</p>
<p>::GlobalBlogLevelInfo</p>
<ul>
<li>Kubernetes Basics</li>
<li>The Project <a href="https://django-hurricane.io/">Django Hurricane</a>{target="_blank"}</li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/">Kubernetes Probes</a>{:target="_blank"}</li>
<li><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/">Kubernetes Horizontal Pod Autoscaler</a>{:target="_blank"}</li>
<li>Optional <a href="https://cloud.google.com/kubernetes-engine?hl=en">Google Kubernetes Engine</a>{:target="_blank"}</li>
</ul>
<p>If you have any questions or if anything is unclear, you can use the comment function below the article.
::</p>
<p><a href="https://django-hurricane.io/">Django Hurricane</a>{target="_blank"} integrates concepts like liveness and readiness probes directly into the Django management context. This means Kubernetes can understand the state of your application at any time and react intelligently – for example, by only sending traffic to a new pod when it's truly ready.</p>
<p>But that's just the beginning. The latest updates (versions 1.5.0 and 1.6.0) have introduced a range of powerful new features that make developers' lives even easier. Let's take a closer look at the highlights.</p>
<h2>New Superpowers: Monitoring, Resource Control, and Fine-Tuning</h2>
<p>The latest versions of Hurricane have focused on giving you more control and better insights into your application – exactly what you need in a dynamic Kubernetes environment.</p>
<h2>Better Monitoring and Logging with structlog &#x26; Prometheus (v1.5.0)</h2>
<p>Good observability is essential for stable systems. Hurricane takes a big step forward here.</p>
<h3>Prometheus Metrics Out-of-the-Box</h3>
<p>When you start the server via <code>$ python manage.py serve</code>, a metrics endpoint is provided by default at <code>/metrics</code> on the <em>internal port</em>. These metrics are in Prometheus format and can be easily scraped by your monitoring system. This is the foundation for alerts and dashboards. If you don't need this, you can simply disable it with the <code>--no-metrics</code> option. 📈</p>
<h3>Structured Logging</h3>
<p>As soon as you add <code>structlog</code> to your project, Hurricane integrates automatically. Structured logging is a game-changer because logs are no longer walls of free-form text, but machine-readable data. This makes analysis and troubleshooting much easier.
Tip: For users of <em>Google Kubernetes Engine (GKE)</em>, it's also recommended to use <code>structlog-gcp</code> to perfectly format logs for Google Cloud Logging.</p>
<h2>Intelligent Resource Management (v1.5.0)</h2>
<p>In a container world, resources like memory are precious and must be managed well.</p>
<h3>Set Memory Limit</h3>
<p>With <code>--max-memory</code>, you can set an upper limit for memory usage (e.g., <code>--max-memory 512M</code>). If your application exceeds this limit, it is restarted in a controlled manner. This prevents uncontrolled crashes due to "Out of Memory" errors and lets Kubernetes manage the pod's restart.</p>
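<p>The general pattern behind such a limit - measure the process's own memory and exit cleanly above a threshold so the orchestrator restarts the pod - can be sketched in a few lines (this illustrates the idea only, not Hurricane's actual implementation):</p>

```python
import resource

def parse_size(spec: str) -> int:
    """Parse a human-readable size such as '512M' into bytes."""
    units = {"K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    if spec and spec[-1].upper() in units:
        return int(spec[:-1]) * units[spec[-1].upper()]
    return int(spec)

def over_limit(limit_bytes: int) -> bool:
    """Compare the process's peak RSS against the limit
    (ru_maxrss is reported in KiB on Linux)."""
    rss_kib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return rss_kib * 1024 > limit_bytes

# A supervisor loop would call sys.exit(...) once over_limit(...) is True,
# letting Kubernetes restart the container in a controlled way.
```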
<h3>Adjust Worker Threads</h3>
<p>The <code>--workers</code> option gives you control over the number of threads in the <code>ThreadPoolExecutor</code>. This allows you to optimize performance for I/O-intensive tasks.</p>
<hr>
<h2>More Flexibility and Control (v1.6.0)</h2>
<p>Version 1.6.0 focuses on configuration and handling special use cases.</p>
<h3>Handling Large Amounts of Data</h3>
<p>Does your app process large file uploads or huge JSON payloads? With <code>--max-body-size</code> and <code>--max-buffer-size</code>, you can precisely control the behavior of the underlying Tornado server. This prevents buffer overflows and ensures stability even with high data volumes.</p>
<h3>Configuration Your Way</h3>
<p>Flexibility is key. Hurricane can now be configured in three ways: <strong>classic CLI arguments</strong>, entries in the <strong>Django Settings</strong>, or – and this is especially ideal for Kubernetes deployments – via <strong>environment variables</strong>. This follows the proven <a href="https://12factor.net/">"12-Factor App" principle</a>{target="_blank"} and allows for clean, environment-specific configurations without code changes.</p>
<hr>
<h2>Deep Dive: API Scaling with HPA and Prometheus Metrics</h2>
<p>One of the greatest strengths of Kubernetes is its ability to autoscale. But standard scaling based on CPU or memory utilization is often not ideal for I/O-intensive applications like Django APIs. An API can be under full load (e.g., due to many slow database queries) without a significant increase in CPU load.</p>
<p>A much better metric is the <em>request backlog</em> – the number of requests currently being actively processed by a pod. If this number increases, it means the application is reaching its load limit and we need more instances.</p>
<p>Below, I'll show you exactly how to implement this with Django Hurricane on the Google Kubernetes Engine (GKE).</p>
<h3>Step 1: Identify the Right Metric</h3>
<p>Django Hurricane provides a variety of metrics via its Prometheus endpoint <code>/metrics</code>. For our use case, a metric like <code>request_queue_length</code> is ideal. It shows us the number of concurrently processed requests per pod. Our goal is to say: "If the average number of active requests per pod is more than 5, start a new pod."</p>
<p><img src="/img/blogs/gke-hpa-1.png" alt="GKE Metrics">{.mx-auto .w-1/2}</p>
<h3>Step 2: Collect Prometheus Metrics in GKE</h3>
<p>The easiest way to use Prometheus metrics in GKE is the <a href="https://cloud.google.com/stackdriver/docs/managed-prometheus">Google Cloud Managed Service for Prometheus</a>{target="_blank"}. If this is enabled in your cluster, you just need to tell Kubernetes where to find your application's metrics.</p>
<p>We instruct Kubernetes to scrape metrics from <em>Port 8081</em> (the internal port of Django Hurricane).
Here is an example of a Deployment manifest:</p>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-django-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-django-api
  template:
    metadata:
      labels:
        app: my-django-api
    spec:
      containers:
      - args:
        - python /app/manage.py serve --req-queue-len 100 --command "collectstatic
          --no-input" --port 8080 --static --max-memory 950
        name: django-hurricane
        image: your-django-app-image:latest
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        - containerPort: 8081
          name: metrics
          protocol: TCP
</code></pre>
<p>Then we also need a <code>PodMonitoring</code>:</p>
<pre><code class="language-yaml">apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: my-django-api
spec:
  endpoints:
  - interval: 5s  # Interval to scrape the metrics endpoint
    metricRelabeling:
    - action: keep
      regex: request_queue_.+  # We only export this one metric
      sourceLabels:
      - __name__
    port: metrics  # This is the name of the port (see Deployment above)
  selector:
    matchLabels:
      app: my-django-api  # The Pod Selector (must match the Deployment's pod labels)
  targetLabels:
    metadata:
    - pod
    - container
</code></pre>
<p>With that, the data is already landing in the Google Cloud Console.</p>
<h3>Step 3: Create the Horizontal Pod Autoscaler (HPA)</h3>
<p>Now comes the most exciting part. We define a <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/">Horizontal Pod Autoscaler (HPA)</a>{target="_blank"} that reads the metric from the Managed Prometheus Service and reacts to it.</p>
<p>The metrics collected by Google Cloud Managed Prometheus receive a special prefix. Our <code>request_queue_length</code> metric will be available in Google Cloud Monitoring as <code>prometheus.googleapis.com|request_queue_length|gauge</code>.</p>
<p>The HPA manifest then looks like this:</p>
<pre><code class="language-yaml">apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-django-api-hpa
spec:
  # Description of the scaling behavior
  behavior:
    scaleDown:
      policies:
      - periodSeconds: 15
        type: Percent
        value: 100
      selectPolicy: Max
      stabilizationWindowSeconds: 300
    scaleUp:
      policies:
      - periodSeconds: 15
        type: Pods
        value: 4
      - periodSeconds: 15
        type: Percent
        value: 100
      selectPolicy: Max
      stabilizationWindowSeconds: 0
  # Target deployment to be scaled
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-django-api
  
  # Scaling limits
  minReplicas: 2
  maxReplicas: 20
  
  # Metrics used for scaling
  metrics:
  - type: Pods # We are measuring a value per pod
    pods:
      metric:
        # The full name of the metric in Google Cloud Monitoring
        name: "prometheus.googleapis.com|request_queue_length|gauge"
      target:
        type: AverageValue # We are targeting an average value
        averageValue: "5" # Scale up if the average is above 5
</code></pre>
<p><strong>What exactly is happening here?</strong></p>
<ol>
<li>Django Hurricane delivers: Each pod of your Django application provides the number of active requests at <code>:8081/metrics</code>.</li>
<li>Prometheus collects: The Managed Prometheus Collector in GKE sees the annotations, regularly scrapes the metrics, and stores them.</li>
<li>HPA observes: The HPA continuously queries the value for <code>prometheus.googleapis.com|request_queue_length|gauge</code>.</li>
<li>HPA reacts: It calculates the average value across all running pods. If this value is above our target of 5, the HPA instructs the Deployment to increase the number of replicas (up to a maximum of 20). If the value falls again, pods are elegantly scaled down (to a minimum of 2).</li>
</ol>
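<p>The scaling decision in step 4 follows the documented HPA formula <code>desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)</code>. A simplified sketch of that arithmetic (ignoring the tolerance band and the stabilization windows configured above):</p>

```python
import math

def desired_replicas(per_pod_queue_lengths, target_average=5,
                     min_replicas=2, max_replicas=20):
    """Core HPA arithmetic for a per-pod (AverageValue) metric."""
    current = len(per_pod_queue_lengths)
    current_average = sum(per_pod_queue_lengths) / current
    desired = math.ceil(current * current_average / target_average)
    # Clamp to the configured minReplicas/maxReplicas bounds.
    return max(min_replicas, min(max_replicas, desired))

# Two pods each handling 8 concurrent requests -> scale up to 4 pods:
print(desired_replicas([8, 8]))  # 4
# Load drops off -> scale down again, but never below minReplicas:
print(desired_replicas([1, 0]))  # 2
```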
<p>With this setup, you create a highly efficient and responsive API that adapts perfectly to the actual load – all with the built-in tools of Django Hurricane, Kubernetes, and GKE.</p>
<h2>Conclusion: Django and Kubernetes – A Real Dream Team</h2>
<p>Django Hurricane closes the gap between the developer-friendliness of Django and the operational requirements of Kubernetes. The new features for monitoring, resource management, and flexible configuration make it an indispensable tool for anyone who wants to run scalable and robust web applications in the cloud.</p>
<p>If you want to take your Django projects to the next level, give <a href="/products/django-hurricane/">Django Hurricane</a> a try. Your application – and your DevOps team – will thank you for it.</p>]]></content:encoded>
            <category>Django</category>
            <category>Django Hurricane</category>
            <category>Operations</category>
            <category>API</category>
            <enclosure url="https://blueshoe.io/img/blog/django-hurricane.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Django Ninja vs. FastAPI: A Detailed Comparison]]></title>
            <link>https://blueshoe.io/blog/django-ninja-vs-fastapi</link>
            <guid>https://blueshoe.io/blog/django-ninja-vs-fastapi</guid>
            <pubDate>Wed, 23 Jul 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>FastAPI has revolutionized Python web development - but with Django Ninja, there is an alternative that combines the best of two worlds.</p>
<p>While FastAPI scores with its modern, asynchronous architecture and maximum flexibility, Django Ninja brings the elegant API syntax of FastAPI into the proven Django ecosystem. In this comprehensive comparison, you will not only learn about the technical differences in setup, schema definitions, and middleware concepts, but also gain a foundational decision-making aid: When is the radical freedom of FastAPI the right path, and when do you benefit more from Django Ninja's seamless integration into the Django world?</p>
<p><img src="/img/blog/djninja-fastapi.svg" alt="Django Ninja vs. FastAPI Comparison">{.object-cover .max-w-full .mb-5}</p>
<p>::GlobalBlogLevelInfo</p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/Representational_State_Transfer">REST API</a>{target="_blank"}</li>
<li><a href="https://www.djangoproject.com/">Django</a>{target="_blank"} Basics, <a href="https://docs.djangoproject.com/en/stable/topics/db/">ORM</a>{target="_blank"}, <a href="https://docs.djangoproject.com/en/stable/topics/http/middleware/">Middlewares</a>{target="_blank"}</li>
<li><a href="https://fastapi.tiangolo.com/">FastAPI</a>{target="_blank"}</li>
<li>Optional <a href="https://en.wikipedia.org/wiki/JSON_Web_Token">JSON Web Token</a>{:target="_blank"}</li>
</ul>
<p>If you have any questions or if anything is unclear, you can use the comment function below the article.
::</p>
<h2>Django Ninja vs FastAPI: A Detailed Comparison of Two Modern Python Web Frameworks</h2>
<p>When you want to develop modern REST APIs in Python, you'll undoubtedly turn to <a href="https://fastapi.tiangolo.com/">FastAPI</a>{target="_blank"} - the framework that has revolutionized Python web development in recent years. But are you also familiar with <a href="https://django-ninja.dev/">Django Ninja</a>{target="_blank"}? This elegant alternative brings the speed and type safety of FastAPI into the Django world. Let's explore together how these two frameworks differ and when you should use which one.</p>
<h2>Quick Overview: Django Ninja vs. FastAPI at a Glance</h2>
<p>| Feature         | Django Ninja                               | FastAPI                                      |
|----------------|--------------------------------------------|----------------------------------------------|
| <strong>Foundation</strong>  | Built on Django                           | Standalone framework (based on Starlette)    |
| <strong>Performance</strong> | Very good, but synchronous                | Excellent, primarily asynchronous (ASGI)     |
| <strong>Database</strong>   | Django ORM (integrated)                   | Any ORM (e.g., SQLAlchemy, Tortoise ORM)     |
| <strong>Admin Panel</strong> | Django Admin (out-of-the-box)             | Must be built/integrated manually            |
| <strong>Authentication</strong> | Django Auth (integrated)              | Must be implemented manually                 |
| <strong>Learning Curve</strong> | Very gentle for Django developers        | Gentle for Python developers                   |
| <strong>Flexibility</strong> | Less flexible, but "batteries-included"   | Maximum flexibility, but more setup required  |
| <strong>Ideal For</strong>  | Existing Django projects, quick CRUD APIs  | High-performance microservices, new projects |</p>
<h2>The Core Ideas Behind Both Frameworks</h2>
<p>Before we dive into the technical details, let me briefly explain what makes these frameworks special. FastAPI is an independent, asynchronous web framework, designed from the ground up for modern API development. Django Ninja, on the other hand, is an extension of Django that allows writing APIs with a FastAPI-like syntax while still being able to use the entire Django ecosystem.</p>
<h2>Setup Comparison: The First Impression</h2>
<h3>FastAPI Setup</h3>
<p>With FastAPI, you practically start from zero. This has its pros and cons. The advantage is absolute freedom - you decide on every aspect of your application. A minimal FastAPI project looks like this:</p>
<pre><code class="language-python"># main.py
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

# Start with: fastapi dev main.py
</code></pre>
<p>That's it! With just a few lines, you have a functioning API. But as soon as you need databases, authentication, or other features, you must configure everything yourself:</p>
<pre><code class="language-python"># A more realistic FastAPI Setup
from fastapi import FastAPI
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
import uvicorn

# Database Setup
SQLALCHEMY_DATABASE_URL = "postgresql://user:password@localhost/dbname"
engine = create_engine(SQLALCHEMY_DATABASE_URL)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

app = FastAPI(title="My API", version="1.0.0")

# Dependency for database sessions
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()
</code></pre>
<h3>Django Ninja Setup</h3>
<p>With Django Ninja, you already have a complete Django project as a foundation. This means database configuration, migrations, admin interface, and much more are already available:</p>
<pre><code class="language-python"># After django-admin startproject and pip install django-ninja

# views.py
from ninja import NinjaAPI

api = NinjaAPI()

@api.get("/hello")
def hello(request):
    return {"message": "Hello from Django Ninja"}

# urls.py
from django.urls import path
from my_app.views import api

urlpatterns = [
    path("api/", api.urls),
]
</code></pre>
<p>The big difference? With Django Ninja, you get the best of both worlds: the intuitive API syntax of FastAPI and the proven infrastructure of Django. You don't have to start from scratch but build on a solid foundation.</p>
<h2>Model Schemas: The Biggest Difference</h2>
<p>Here it gets really interesting! Both frameworks use Python Type Hints for validation, but the implementation differs fundamentally.</p>
<h3>FastAPI with Pydantic</h3>
<p>FastAPI relies entirely on Pydantic for data validation. Pydantic models are independent classes that define your data structures:</p>
<pre><code class="language-python">from pydantic import BaseModel, Field, validator
from datetime import datetime

class UserBase(BaseModel):
    # Pydantic also ships an EmailStr type for real email validation!
    email: str = Field(..., description="The user's email address")
    username: str = Field(..., min_length=3, max_length=20)
    
    @validator("email")
    def email_must_be_valid(cls, v):
        if '@' not in v:
            raise ValueError("Invalid email address")
        return v

class UserCreate(UserBase):
    password: str = Field(..., min_length=8)

class UserResponse(UserBase):
    id: int
    created_at: datetime
    is_active: bool = True
    
    class Config:
        # Allows use of ORM objects
        # (Pydantic v2 renames this to from_attributes in model_config)
        orm_mode = True
</code></pre>
<p>The strength of Pydantic lies in its flexibility and comprehensive validation options. You can implement complex validation logic, perform automatic type conversions, and nest models within each other.</p>
<h3>Django Ninja's Schema System</h3>
<p>Django Ninja offers you two possibilities: You can either use Pydantic or Ninja's own schema system. Ninja's schema system is specifically optimized for integration with Django ORM:</p>
<pre><code class="language-python">from ninja import Schema, ModelSchema
from django.contrib.auth.models import User
from typing import Optional

# Option 1: Manual Schema
class UserSchema(Schema):
    id: int
    username: str
    email: str
    first_name: Optional[str] = None
    last_name: Optional[str] = None

# Option 2: ModelSchema - Automatically from Django Model
class UserModelSchema(ModelSchema):
    class Config:
        model = User
        model_fields = ['id', 'username', 'email', 'first_name', 'last_name']

# Option 3: Extended ModelSchema with additional fields
class UserDetailSchema(ModelSchema):
    full_name: str
    post_count: int = 0
    
    class Config:
        model = User
        model_fields = '__all__'
    
    @staticmethod
    def resolve_full_name(obj):
        return f"{obj.first_name} {obj.last_name}"
</code></pre>
<p>The crucial advantage of Django Ninja's schema system is the seamless integration with Django Models. You don't have to constantly convert between ORM objects and Pydantic models. The <code>ModelSchema</code> class automatically generates schemas based on your Django Models, avoiding duplication.</p>
<h3>The Deeper Comparison: Pydantic vs Ninja Schema</h3>
<p>Let's understand the "philosophical" differences:</p>
<p><strong>Pydantic in FastAPI:</strong></p>
<ul>
<li>Completely decoupled from the database</li>
<li>Explicit conversion between ORM and Pydantic required</li>
<li>More control, but also more boilerplate</li>
<li>Perfect for complex validations and transformations</li>
</ul>
<pre><code class="language-python"># FastAPI with SQLAlchemy
@app.post("/users/", response_model=UserResponse)
def create_user(user: UserCreate, db: Session = Depends(get_db)):
    # Manual conversion necessary
    db_user = UserModel(
        email=user.email,
        username=user.username,
        hashed_password=hash_password(user.password)
    )
    db.add(db_user)
    db.commit()
    db.refresh(db_user)
    # Pydantic converts automatically thanks to orm_mode
    return db_user
</code></pre>
<p><strong>Ninja Schema:</strong></p>
<ul>
<li>Closely integrated with Django ORM</li>
<li>Automatic conversion between Django Models and Schemas</li>
<li>Less code, but also less flexibility</li>
<li>Ideal for CRUD operations with Django Models</li>
</ul>
<pre><code class="language-python"># Django Ninja
@api.post("/users/", response=UserModelSchema)
def create_user(request, user_data: UserCreateSchema):
    # Direct work with Django ORM
    user = User.objects.create_user(
        username=user_data.username,
        email=user_data.email,
        password=user_data.password
    )
    # Automatic conversion to Response Schema
    return user
</code></pre>
<h2>Endpoint Definition: Subtle but Important Differences</h2>
<p>At first glance, the endpoint definitions look very similar, but the devil is in the details.</p>
<h3>FastAPI Endpoints</h3>
<pre><code class="language-python">from fastapi import FastAPI, Path, Query, Body, Header, Depends
from typing import Optional, List

app = FastAPI()

@app.get("/items/{item_id}")
async def read_item(
    item_id: int = Path(..., title="The Item's ID", ge=1),
    q: Optional[str] = Query(None, max_length=50),
    x_token: str = Header(...),
    current_user: User = Depends(get_current_user)
):
    # Async is optional but recommended
    return {"item_id": item_id, "q": q}

@app.post("/items/")
async def create_item(
    item: Item = Body(..., embed=True),
    importance: int = Body(...)
):
    return {"item": item, "importance": importance}
</code></pre>
<h3>Django Ninja Endpoints</h3>
<pre><code class="language-python">from ninja import Router, Path, Query, Body, Header
from django.shortcuts import get_object_or_404

router = Router()

@router.get("/items/{item_id}")
def read_item(
    request,  # Django Request Object is always the first parameter
    item_id: int = Path(..., title="The Item's ID", ge=1),
    q: Optional[str] = Query(None, max_length=50),
    x_token: str = Header(...)
):
    # Use Django's ORM directly
    item = get_object_or_404(Item, id=item_id)
    return {"item_id": item.id, "q": q}

@router.post("/items/", response=ItemSchema)
def create_item(request, item_data: ItemCreateSchema):
    # Django's Authentication is directly available
    if not request.user.is_authenticated:
        return 401, {"detail": "Not authenticated"}
    
    item = Item.objects.create(**item_data.dict(), owner=request.user)
    return item
</code></pre>
<p>The most important differences:</p>
<ol>
<li>
<p><strong>Request Object</strong>: Django Ninja always passes the Django Request object as the first parameter. This gives you access to sessions, User object and other Django features.</p>
</li>
<li>
<p><strong>Async Support</strong>: FastAPI is async from the ground up, while Django Ninja works synchronously by default (async endpoints are supported as well).</p>
</li>
<li>
<p><strong>Response Handling</strong>: Django Ninja allows you to directly return Django Model instances, while FastAPI requires explicit conversion.</p>
</li>
</ol>
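<p>Point 2 in the list above is the one with practical consequences: in an async framework, I/O waits overlap instead of queueing. A framework-free <code>asyncio</code> sketch (the sleep stands in for a slow database query or external API call):</p>
<pre><code class="language-python">import asyncio
import time

async def fake_io_call(delay):
    # Stand-in for a database query or external API call
    await asyncio.sleep(delay)
    return delay

async def main():
    start = time.perf_counter()
    # Three 0.1-second "calls" overlap instead of running one after another
    await asyncio.gather(*(fake_io_call(0.1) for _ in range(3)))
    return time.perf_counter() - start

# Total is roughly 0.1s, not 0.3s, because the waits overlap
elapsed = asyncio.run(main())
print(f"{elapsed:.2f}s")
</code></pre>
<p>For CPU-bound work, by contrast, async buys you nothing; that is why the synchronous default of Django Ninja is rarely a problem in practice.</p>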
<h2>Middlewares and Lifecycles: Different Philosophies</h2>
<h3>FastAPI Middleware and Lifecycle</h3>
<p>FastAPI offers a minimalistic but powerful middleware system:</p>
<pre><code class="language-python">from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
import time

app = FastAPI()

# CORS Middleware
app.add_middleware(
    CORSMiddleware,
    # WARNING: never use wildcard origins like this in production
    allow_origins=["*"],
    allow_methods=["*"],
    allow_headers=["*"],
)

# Custom Middleware
@app.middleware("http")
async def add_process_time_header(request: Request, call_next):
    start_time = time.time()
    response = await call_next(request)
    process_time = time.time() - start_time
    response.headers["X-Process-Time"] = str(process_time)
    return response

# Lifecycle Events (newer FastAPI versions recommend the lifespan
# context manager instead of on_event)
@app.on_event("startup")
async def startup_event():
    # Initialization, e.g. database connection
    print("Starting up...")

@app.on_event("shutdown")
async def shutdown_event():
    # Cleanup
    print("Shutting down...")
</code></pre>
<h3>Django Ninja Middleware</h3>
<p>Django Ninja plugs into Django's mature middleware system:</p>
<pre><code class="language-python"># Django Middleware (in middleware.py)
class ProcessTimeMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        start_time = time.time()
        response = self.get_response(request)
        process_time = time.time() - start_time
        response['X-Process-Time'] = str(process_time)
        return response

# In settings.py
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'myapp.middleware.ProcessTimeMiddleware',
    # ... other Middlewares
]

# Django Ninja specific features
from ninja import NinjaAPI

api = NinjaAPI()

# API-Level Middleware/Hooks
@api.exception_handler(ValidationError)
def validation_error_handler(request, exc):
    return api.create_response(
        request,
        {"detail": exc.errors()},
        status=422
    )
</code></pre>
<p>The main philosophical difference: FastAPI gives you a blank slate on which you build your middleware pipeline yourself. Django, in contrast, ships with a comprehensive middleware collection (security, sessions, CSRF, etc.) that Django Ninja picks up automatically.</p>
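<p>Both middleware styles reduce to the same pattern: callables wrapping a handler. A framework-free sketch of the Django-style <code>__call__</code> pattern (the <code>view</code> function and dict-based request/response are stand-ins, not real Django objects):</p>
<pre><code class="language-python">import time

def view(request):
    # Innermost handler, stands in for a Django view
    return {"status": 200, "headers": {}}

class TimingMiddleware:
    # Same shape as a Django middleware: wraps get_response
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        start = time.perf_counter()
        response = self.get_response(request)
        response["headers"]["X-Process-Time"] = str(time.perf_counter() - start)
        return response

# Django's MIDDLEWARE list is conceptually just this nested wrapping
app = TimingMiddleware(view)
response = app({"path": "/"})
print(sorted(response["headers"]))
</code></pre>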
<h2>Automatic Documentation Generation: Both Shine</h2>
<p>Both FastAPI and Django Ninja automatically generate interactive API documentation. But there are subtle differences.</p>
<h3>FastAPI Documentation</h3>
<pre><code class="language-python">from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI(
    title="My Awesome API",
    description="This is a comprehensive description of my API",
    version="2.5.0",
    terms_of_service="https://blueshoe.de/terms/",
    contact={
        "name": "API Support",
        "email": "support@blueshoe.de"
    }
)

class Item(BaseModel):
    """An Item in our system"""
    name: str = Field(..., example="Example Item", description="The name of the item")
    price: float = Field(..., gt=0, example=29.99, description="Price in EUR")
    tax: Optional[float] = Field(None, example=2.5)

@app.post(
    "/items/",
    response_model=Item,
    summary="Create a new item",
    description="Create a new item with all details",
    response_description="The created item",
    tags=["items"]
)
async def create_item(item: Item):
    """
    Create an Item with all information:
    
    - **name**: Unique name of the item
    - **price**: Price must be greater than 0
    - **tax**: Optional tax information
    """
    # Actual code .....
    return item
</code></pre>
<p>FastAPI generates both Swagger UI (under <code>/docs</code>) and ReDoc (under <code>/redoc</code>). The documentation is highly customizable and uses Pydantic models for examples.</p>
<h3>Django Ninja Documentation</h3>
<pre><code class="language-python">from ninja import NinjaAPI, Schema
from typing import Optional

api = NinjaAPI(
    title="Django Ninja API",
    version="1.0.0",
    description="My Django-based API"
)

class ItemSchema(Schema):
    """Schema for Items"""
    name: str = Field(..., example="Example Item")
    price: float = Field(..., gt=0, example=29.99)
    tax: Optional[float] = None
    
    class Config:
        schema_extra = {
            "example": {
                "name": "Nice Product",
                "price": 49.99,
                "tax": 3.5
            }
        }

@api.post(
    "/items/",
    response=ItemSchema,
    summary="Create a new item",
    tags=["items"],
    operation_id="create_item"
)
def create_item(request, item: ItemSchema):
    """
    Create a new item.
    
    This function saves a new item in Django's database.
    """
    # Actual code ....
    return item
</code></pre>
<p>Django Ninja also generates a Swagger UI (by default under <code>/api/docs</code>). One advantage is the integration with Django's authentication - the documentation can automatically show the available authentication methods.</p>
<h2>Authentication &#x26; Security in Practice</h2>
<p>API security is crucial for protecting sensitive data. Let's compare the authentication capabilities in both frameworks.</p>
<h3>Token Authentication</h3>
<p><strong>FastAPI with JWT:</strong></p>
<pre><code class="language-python">from fastapi import Depends, FastAPI, HTTPException, status
from fastapi.security import OAuth2PasswordBearer, OAuth2PasswordRequestForm
from jose import JWTError, jwt
from passlib.context import CryptContext

app = FastAPI()

# Configuration
SECRET_KEY = "your_secret_token"
ALGORITHM = "HS256"
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")

@app.post("/token")
async def login(form_data: OAuth2PasswordRequestForm = Depends()):
    # User validation would go here
    user_dict = {"sub": form_data.username}  # "sub" is the standard JWT subject claim
    access_token = jwt.encode(user_dict, SECRET_KEY, algorithm=ALGORITHM)
    return {"access_token": access_token, "token_type": "bearer"}
</code></pre>
<p><strong>Django Ninja with JWT:</strong></p>
<pre><code class="language-python">from ninja.security import HttpBearer
from ninja import NinjaAPI
import jwt

class AuthBearer(HttpBearer):
    def authenticate(self, request, token):
        try:
            return jwt.decode(token, "your_secret_token", algorithms=["HS256"])
        except jwt.PyJWTError:
            return None

api = NinjaAPI(auth=AuthBearer())

@api.post("/token")
def login(request, username: str, password: str):
    # User validation would go here
    return {"token": jwt.encode({"username": username}, "your_secret_token")}
</code></pre>
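<p>Whichever library you use, an HS256 JWT is nothing magical: two base64url-encoded JSON segments plus an HMAC-SHA256 signature. A stdlib-only sketch of what <code>jwt.encode</code> produces under the hood (illustration only - use a maintained library in production, which also handles decoding and expiry checks):</p>
<pre><code class="language-python">import base64
import hashlib
import hmac
import json

def b64url(data):
    # JWT uses base64url encoding without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def hs256_jwt(payload, secret):
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    digest = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(digest)}"

token = hs256_jwt({"username": "alice"}, "your_secret_token")
print(token.count("."))  # 2 -- header.payload.signature
</code></pre>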
<h3>Permissions</h3>
<p><strong>FastAPI:</strong></p>
<pre><code class="language-python">from fastapi import Depends, HTTPException, status

def get_current_user(token: str = Depends(oauth2_scheme)):
    credentials_exception = HTTPException(
        status_code=status.HTTP_401_UNAUTHORIZED,
        detail="Could not validate credentials",
        headers={"WWW-Authenticate": "Bearer"},
    )
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
        username: str = payload.get("sub")
        if username is None:
            raise credentials_exception
        return username
    except JWTError:
        raise credentials_exception

@app.get("/secure/")
async def read_secure(current_user: str = Depends(get_current_user)):
    return {"message": f"Hello {current_user}"}
</code></pre>
<p><strong>Django Ninja:</strong></p>
<pre><code class="language-python">from ninja.security import django_auth
from django.contrib.auth.models import User

@api.get("/secure/", auth=django_auth)
def secure_route(request):
    return f"Hello {request.user.username}"
</code></pre>
<h3>Security Middleware</h3>
<p><strong>FastAPI Security Headers:</strong></p>
<pre><code class="language-python">from fastapi import FastAPI
from fastapi.middleware.httpsredirect import HTTPSRedirectMiddleware
from fastapi.middleware.trustedhost import TrustedHostMiddleware

app = FastAPI()

# Force HTTPS
app.add_middleware(HTTPSRedirectMiddleware)

# Allow only specific hosts
app.add_middleware(
    TrustedHostMiddleware,
    allowed_hosts=["example.com", "*.example.com"],
)
</code></pre>
<p><strong>Django Ninja (inherits from Django):</strong></p>
<pre><code class="language-python"># settings.py
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    # ...
]

# Security settings
SECURE_HSTS_SECONDS = 31536000  # 1 year
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_SSL_REDIRECT = True
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
</code></pre>
<h3>Security Features Summary</h3>
<ul>
<li>
<p><strong>FastAPI</strong>: Offers maximum flexibility but requires manual configuration of many security aspects. Ideal for developers who need full control.</p>
</li>
<li>
<p><strong>Django Ninja</strong>: Leverages Django's battle-tested security features including CSRF protection, clickjacking protection, and secure cookies. Provides higher security through sensible defaults but is less flexible in customization.</p>
</li>
</ul>
<h2>Project Structure and Deployment</h2>
<p>A clear project structure and smooth deployment are crucial for your API's success. Here are the best practices for both frameworks.</p>
<h3>Project Structure</h3>
<p><strong>FastAPI Recommended Structure:</strong></p>
<pre><code>fastapi_project/
├── app/
│   ├── __init__.py
│   ├── main.py              # Main application
│   ├── config.py            # Configuration
│   ├── database.py          # Database connection
│   ├── models/              # Pydantic models
│   │   └── schemas.py
│   ├── api/                 # API endpoints
│   │   ├── __init__.py
│   │   ├── v1/              # API version 1
│   │   │   ├── __init__.py
│   │   │   ├── endpoints/   # Endpoint modules
│   │   │   └── deps.py      # Dependencies
│   ├── core/                # Core logic
│   └── tests/               # Tests
├── alembic/                 # Database migrations
├── static/                  # Static files
├── requirements.txt
└── .env                    # Environment variables
</code></pre>
<p><strong>Django Ninja Recommended Structure:</strong></p>
<pre><code>django_project/
├── manage.py
├── requirements.txt
├── .env
├── config/
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── asgi.py
├── apps/
│   ├── __init__.py
│   └── api/
│       ├── __init__.py
│       ├── schemas.py      # Pydantic schemas
│       ├── routers.py      # API routers
│       └── tests/
└── static/
</code></pre>
<h3>Deployment with Docker</h3>
<p><strong>FastAPI Dockerfile:</strong></p>
<pre><code class="language-dockerfile"># Base image with Python 3.9
FROM python:3.9-slim

# Set working directory
WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application
COPY . .

# Expose port
EXPOSE 8000

# Run command
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
</code></pre>
<p><strong>Docker Compose for FastAPI:</strong></p>
<pre><code class="language-yaml">version: '3.8'

services:
  web:
    build: .
    ports:
      - "8000:8000"
    env_file: .env
    depends_on:
      - db
    restart: unless-stopped

  db:
    image: postgres:13
    environment:
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - "5432:5432"

volumes:
  postgres_data:
</code></pre>
<p><strong>Django Ninja Dockerfile:</strong></p>
<pre><code class="language-dockerfile"># Base image with Python 3.9
FROM python:3.9-slim

# Set working directory
WORKDIR /app

# Install system dependencies
RUN apt-get update &#x26;&#x26; apt-get install -y --no-install-recommends \
    build-essential \
    libpq-dev \
    &#x26;&#x26; rm -rf /var/lib/apt/lists/*

# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application
COPY . .

# Collect static files
RUN python manage.py collectstatic --noinput

# Expose port
EXPOSE 8000

# Run command
CMD ["gunicorn", "config.wsgi:application", "--bind", "0.0.0.0:8000"]
</code></pre>
<h3>Deployment Options</h3>
<p><strong>FastAPI:</strong></p>
<ul>
<li><strong>Uvicorn/ASGI Server</strong>: For production behind a reverse proxy</li>
<li><strong>Gunicorn with Uvicorn Workers</strong>: For better load handling</li>
<li><strong>Cloud Providers</strong>: AWS (ECS, EKS), Google Cloud Run, Azure App Service</li>
<li><strong>Serverless</strong>: AWS Lambda with Mangum</li>
</ul>
<p><strong>Django Ninja:</strong></p>
<ul>
<li><strong>Gunicorn/Uvicorn</strong>: For synchronous/asynchronous endpoints</li>
<li><strong>Daphne</strong>: ASGI server for full async support</li>
<li><strong>Cloud Providers</strong>: Same as FastAPI, plus specialized Django hosting providers</li>
<li><strong>PaaS</strong>: Heroku, PythonAnywhere, Railway</li>
</ul>
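<p>The "Gunicorn with Uvicorn Workers" option boils down to a single command. A sketch assuming the project layouts shown above (the module paths <code>app.main:app</code> and <code>config.wsgi:application</code> / <code>config.asgi:application</code> are those structures' entrypoints):</p>
<pre><code class="language-bash"># FastAPI: gunicorn as process manager with uvicorn async workers
gunicorn app.main:app -k uvicorn.workers.UvicornWorker -w 4 -b 0.0.0.0:8000

# Django Ninja: classic WSGI workers...
gunicorn config.wsgi:application -w 4 -b 0.0.0.0:8000

# ...or ASGI via uvicorn workers if you use async endpoints
gunicorn config.asgi:application -k uvicorn.workers.UvicornWorker -w 4 -b 0.0.0.0:8000
</code></pre>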
<h3>CI/CD Integration</h3>
<p>Both frameworks can be easily integrated into CI/CD pipelines. Here's an example for GitHub Actions:</p>
<pre><code class="language-yaml">name: Django CI

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:13
        env:
          POSTGRES_USER: test
          POSTGRES_PASSWORD: test
          POSTGRES_DB: test
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
    - uses: actions/checkout@v3
    - name: Set up Python
      uses: actions/setup-python@v4
      with:
        python-version: '3.9'
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt
    - name: Run tests
      env:
        DATABASE_URL: postgresql://test:test@localhost:5432/test
      run: |
        python manage.py test
</code></pre>
<h2>Practical Pro Tips</h2>
<h3>1. Using HTTP Status Codes Effectively</h3>
<p><strong>Problem:</strong> Unlike FastAPI, which ships a <code>status</code> module with named constants, Django Ninja brings no status-code helpers of its own - but Python's standard library does.</p>
<p><strong>Solution:</strong> Create a reusable <code>http_status.py</code> file:</p>
<pre><code class="language-python"># http_status.py
from http import HTTPStatus
from typing import Any, Dict, Optional
from ninja import Response

def json_response(
    data: Any = None,
    status: int = HTTPStatus.OK,
    headers: Optional[Dict[str, str]] = None,
) -> Response:
    """Create a JSON response with proper status code."""
    return Response(
        content=data,
        status=status.value,
        headers=headers or {},
        content_type="application/json",
    )

# Usage:
# return json_response({"message": "Success!"}, status=HTTPStatus.CREATED)
</code></pre>
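<p>The standard-library <code>HTTPStatus</code> enum used above also carries names and reason phrases, which keeps magic numbers out of your handlers:</p>
<pre><code class="language-python">from http import HTTPStatus

status = HTTPStatus.CREATED
print(status.value, status.phrase)  # 201 Created

# Members compare like plain integers, so no hardcoded numbers are needed
assert status == 201
assert HTTPStatus.OK in range(200, 300)
assert HTTPStatus.NOT_FOUND == 404
</code></pre>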
<h3>2. Performance Optimization for Django Ninja</h3>
<pre><code class="language-python"># 1. Use select_related and prefetch_related
@api.get("/users/{user_id}")
def get_user(request, user_id: int):
    user = get_object_or_404(User.objects.select_related('profile'), id=user_id)
    return {"name": user.get_full_name(), "email": user.email}

# 2. Pagination for better performance
from ninja.pagination import paginate, PageNumberPagination

@api.get("/articles/", response=List[ArticleSchema])
@paginate(PageNumberPagination, page_size=20)
def list_articles(request):
    return Article.objects.all()
</code></pre>
<h3>3. FastAPI Performance Tuning</h3>
<pre><code class="language-python"># 1. Use Pydantic's orm_mode for better performance
class UserResponse(BaseModel):
    id: int
    username: str
    email: str
    
    class Config:
        orm_mode = True

# 2. Caching for frequently accessed endpoints (fastapi-cache2 package;
#    initialize FastAPICache with a backend once at application startup)
from fastapi_cache import FastAPICache
from fastapi_cache.backends.redis import RedisBackend
from fastapi_cache.decorator import cache

@router.get("/expensive-query/")
@cache(expire=300)  # Cache for 5 minutes
async def expensive_query():
    # Slow query here
    return {"result": "Data from cache or computed"}
</code></pre>
<h3>4. Error Handling</h3>
<p><strong>Django Ninja:</strong></p>
<pre><code class="language-python">from ninja import NinjaAPI
from ninja.errors import HttpError

api = NinjaAPI()

@api.exception_handler(ValueError)
def handle_validation_error(request, exc):
    return api.create_response(
        request,
        {"detail": str(exc)},
        status=400,
    )
</code></pre>
<p><strong>FastAPI:</strong></p>
<pre><code class="language-python">from fastapi import FastAPI, HTTPException, Request
from fastapi.responses import JSONResponse

app = FastAPI()

@app.exception_handler(ValueError)
async def validation_exception_handler(request: Request, exc: ValueError):
    return JSONResponse(
        status_code=400,
        content={"detail": str(exc)},
    )
</code></pre>
<h3>5. Improving Documentation</h3>
<p><strong>FastAPI:</strong></p>
<pre><code class="language-python">@app.get(
    "/items/{item_id}",
    response_model=Item,
    summary="Get an item",
    description="""
    This route returns a single item.
    - **item_id**: The unique identifier for the item
    """,
    responses={
        200: {"model": Item, "description": "The requested item"},
        404: {"description": "Item not found"},
    },
)
async def read_item(item_id: int):
    return {"id": item_id, "name": "Example"}
</code></pre>
<p><strong>Django Ninja:</strong></p>
<pre><code class="language-python">@api.get(
    "/items/{item_id}",
    response=ItemSchema,
    summary="Get an item",
    description="""
    This route returns a single item.
    - **item_id**: The unique identifier for the item
    """,
    tags=["Items"],
    operation_id="get_item",
)
def get_item(request, item_id: int):
    return {"id": item_id, "name": "Example"}
</code></pre>
<h3>6. Managing Environment Variables</h3>
<p>Create a <code>config.py</code> for both frameworks:</p>
<pre><code class="language-python"># config.py
import os
from pydantic import BaseSettings, PostgresDsn

class Settings(BaseSettings):
    # Database
    DATABASE_URL: str = "postgresql://user:password@localhost/dbname"
    
    # Security
    SECRET_KEY: str = "your-secret-key"
    ALGORITHM: str = "HS256"
    ACCESS_TOKEN_EXPIRE_MINUTES: int = 30
    
    class Config:
        env_file = ".env"
        case_sensitive = True

settings = Settings()
</code></pre>
<h3>7. Setting Up Logging</h3>
<p><strong>For both frameworks:</strong></p>
<pre><code class="language-python">import logging
from pathlib import Path

# Configure logger
log_dir = Path("logs")
log_dir.mkdir(exist_ok=True)

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    handlers=[
        logging.FileHandler(log_dir / "api.log"),
        logging.StreamHandler(),
    ],
)

logger = logging.getLogger(__name__)

# Usage:
# logger.info("API started")
# logger.error("An error occurred", exc_info=True)
</code></pre>
<h2>Conclusion: When should you choose which framework?</h2>
<p>After this deep insight, you're surely asking yourself: "Which framework is the right one for me?" The answer depends on your requirements.</p>
<p><strong>Choose FastAPI when:</strong></p>
<ul>
<li>You're developing a new API from scratch</li>
<li>Maximum performance and async support are critical</li>
<li>You want full control over every aspect of your application</li>
<li>Your team already has experience with modern Python patterns</li>
<li>You primarily develop APIs (no traditional web views)</li>
</ul>
<p><strong>Choose Django Ninja when:</strong></p>
<ul>
<li>You already have a Django project or plan one</li>
<li>You want to use Django's utilities (ORM, Admin, Auth, etc.)</li>
<li>Your team has Django experience</li>
<li>You need a mix of API and traditional Django views</li>
<li>Quick development is more important than maximum performance</li>
</ul>
<p>Both frameworks are excellent and the decision is not an either-or question. In practice, you can even use both in different projects or microservices. The most important thing is to understand the strengths and weaknesses and choose the right tool for your specific task.</p>
<p>The decision between FastAPI and Django Ninja is not purely a technology question - it's about finding the right tool for your specific requirements. Both frameworks enable you to develop modern, performant APIs that meet the highest standards. The art lies in optimally using the strengths of the chosen framework and developing an architecture that not only works today, but is also expandable tomorrow.</p>
<p>Are you facing the decision of which framework to choose for your next project? Or are you planning a migration? At Blueshoe, we have extensive experience with both frameworks and are happy to support you in finding the optimal solution for your requirements. From architecture consulting to implementation to performance optimization - let's make your API vision a reality together. Contact us for a non-binding consultation and benefit from our expertise in modern Python web development!</p>
<h2>Frequently Asked Questions (FAQs)</h2>
<h3>1. Can I migrate from FastAPI to Django Ninja (or vice versa)?</h3>
<p>Migration is definitely possible, but the effort depends strongly on your codebase. Going from FastAPI to Django Ninja is often easier: you "only" need to adapt the routing, restructure the code slightly, and build a Django project around it. The API syntax is so similar that many endpoints can be carried over with minimal adjustments. The greatest effort lies in migrating the database layer from SQLAlchemy to the Django ORM.</p>
<p>The reverse direction (Django Ninja to FastAPI) requires more work, as you must replace the entire Django infrastructure. You must newly implement authentication, database access, middleware, and other Django features. My tip: Plan a step-by-step migration over multiple sprints and use the transition period to run both APIs in parallel.</p>
<h3>2. Which framework is faster in performance?</h3>
<p>FastAPI usually comes out ahead in pure benchmarks, especially for asynchronous operations. The numbers can be impressive - FastAPI can sometimes process twice as many requests per second. But caution: these synthetic benchmarks rarely reflect reality!</p>
<p>In practice, the bottleneck is almost never the web framework itself, but database access, external API calls, or complex business logic. Django Ninja's performance is more than sufficient for most applications. If you really need every millisecond, FastAPI with async/await is the better choice. But remember: Premature optimization is the root of all evil. Only optimize once you've identified actual performance problems.</p>
<h3>3. How does testing look?</h3>
<p>Both frameworks offer excellent testing support, but with different approaches.</p>
<p>With FastAPI you typically use Starlette's TestClient:</p>
<pre><code class="language-python">from fastapi.testclient import TestClient
client = TestClient(app)
response = client.get("/items/1")
assert response.status_code == 200
</code></pre>
<p>Django Ninja benefits from the mature Django test framework with all its features like fixtures, transactional tests, and a test database. You can use both Django's TestCase and Ninja's own test client. The big advantage: You can seamlessly combine your API tests with other Django tests and have access to tools like Factory Boy or Model Bakery (formerly Model Mommy) for test data.</p>
<h3>4. Can I use both frameworks in the same project?</h3>
<p>Technically yes, but I advise against it! You could theoretically use FastAPI for high-performance endpoints and Django (with Ninja) for the rest. In practice, this leads to unnecessary complexity: two different routing systems, two types of middleware, potential dependency conflicts.</p>
<p>A better strategy would be a microservice architecture, where different services use different frameworks. For example: Django Ninja for your main API with user management and business logic, FastAPI for a specialized service doing real-time data processing.</p>
<h3>5. Which framework has the steeper learning curve?</h3>
<p>That depends strongly on your background! If you already know Django, you'll feel immediately at home with Django Ninja. The API syntax is intuitive, and you can directly apply your Django knowledge.</p>
<p>FastAPI has an initially flatter learning curve when you start from zero. The concepts are modern and clearly structured. But once you go beyond "Hello World", you'll need to make many decisions and implement things yourself. This can be overwhelming if you don't know exactly what you need.</p>
<p>My advice: If you want to learn Python web development, start with FastAPI to understand modern API concepts. If you want to be productive and already have some experience, Django Ninja might be the faster path.</p>
<h3>6. How do the future prospects look for both frameworks?</h3>
<p>FastAPI has gained enormous momentum and is supported by a very active community. Large companies such as Microsoft, Netflix and Uber use it. Development is very active and the framework is constantly being improved.</p>
<p>Django Ninja is younger and has a smaller community, but benefits from the stability of the Django ecosystem. As long as Django remains relevant (and there is every indication that it will), Django Ninja will also be a solid choice. The advantage: Even if Django Ninja were to stop development, you could migrate to Django REST Framework or pure Django Views relatively easily.</p>
<h3>7. Are there situations where neither of the frameworks fits?</h3>
<p>Absolutely! If you want to build a GraphQL API, take a look at Strawberry or Graphene. For WebSocket-heavy applications, Starlette or Django Channels might be more suitable.</p>
<p>For very simple APIs or prototypes, Flask with Flask-RESTX could also suffice. And if you're in the Data Science world, Streamlit or Gradio could be interesting for API-like interfaces. The beauty of the Python ecosystem is its diversity - there's the right tool for every use case!</p>]]></content:encoded>
            <category>Django</category>
            <category>FastAPI</category>
            <category>API</category>
            <category>Development</category>
            <category>Performance</category>
            <enclosure url="https://blueshoe.io/img/blog/djninja-fastapi.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[django-o365: A Modern Mail Backend for Django and Exchange Online]]></title>
            <link>https://blueshoe.io/blog/django-o365-a-modern-mail-backend-for-django-and-exchange-online</link>
            <guid>https://blueshoe.io/blog/django-o365-a-modern-mail-backend-for-django-and-exchange-online</guid>
            <pubDate>Wed, 04 Feb 2026 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p><a href="https://github.com/Blueshoe/django-o365"><em>django-o365</em></a>{target="_blank"} is an open-source mail backend for Django that enables sending emails via Exchange Online using OAuth 2.0. Born from real-world project requirements, this package is aimed at teams using Microsoft 365 who want to implement their email dispatch securely, maintainably, and without workarounds. This article provides an overview of the motivation, features, and usage.</p>
<p><img src="/img/blog/django-mailing-o365.svg" alt="django-o365 mail backend for Django and Exchange Online">{.object-cover .max-w-full .mb-5}</p>
<h2>Modern Email Authentication for Django Applications</h2>
<p>Emails are a central component of many web applications – and for that very reason, they are particularly critical when they don't function reliably. Many companies rely on Microsoft 365 and Exchange Online, while classic SMTP authentication is increasingly being restricted or completely disabled. OAuth 2.0 has become the recommended standard.</p>
<p>With <a href="https://github.com/Blueshoe/django-o365">django-o365</a>{target="_blank"}, we have developed a mail backend for <a href="/technologies/python-django-agency/">Django</a>{target="_blank"} that allows sending emails via Exchange Online with OAuth 2.0 authentication and integrates seamlessly into the existing Django mail workflow. In this post, we will show you what components it is built on and how you can use it in your own project.</p>
<h2>The Starting Point: Mail Dispatch with django-exchange</h2>
<p>In a legacy project for one of our clients, email dispatch was previously handled with a local mailbox via Microsoft Exchange. For this purpose, we developed the package <a href="https://github.com/Blueshoe/django-exchange">django-exchange</a>{target="_blank"} back then. The mailbox has now been migrated to Exchange Online with OAuth 2.0 authentication. Since there was no suitable package for Django yet, we developed django-o365. Our main motivation was to achieve a clean Django integration so that we would only need to replace the email backend in our project.</p>
<h2>Technical Foundation: python-o365</h2>
<p>As its technical foundation, django-o365 relies on the established Python package <a href="https://github.com/O365/python-o365">python-o365</a>{target="_blank"}. This provides a complete abstraction of the Microsoft Graph API and handles, among other things, authentication via OAuth 2.0 and the actual sending of emails through Exchange Online.</p>
<p>django-o365 specifically uses python-o365 as a transport layer but completely encapsulates its functionality behind the familiar Django mail interface. This allows us to benefit from a stable and actively maintained base without having to use it directly in the application code.</p>
<p>We deliberately decided against implementing our own version of the Microsoft Graph API: python-o365 already covers the required functionality reliably and allows us to focus on a clean Django integration, maintainability, and ease of use.</p>
<h2>Architecture &#x26; How It Works</h2>
<p>django-o365 is intentionally built to be lean and follows the principle of seamlessly integrating into existing Django structures. The core of the package is a custom Django mail backend that implements the standardized Django mail API. As soon as Django sends an email, django-o365 hands it over to python-o365, which in turn handles the dispatch via the Microsoft Graph API.</p>
<p>Authentication is handled entirely via OAuth 2.0. Access and refresh tokens are managed and renewed by python-o365 without the application code needing to be aware of it. Thus, django-o365 acts as a bridge between Django and Exchange Online: it translates Django's mail dispatch into Graph API calls without forcing additional logic into the application.</p>
<p>This clear separation has two advantages: firstly, the Django code remains unchanged and independent of Microsoft-specific details, and secondly, we can rely on a proven library for the complex OAuth and API logic.</p>
<h2>Usage in Django</h2>
<p>Since django-o365 is not yet published on PyPI, it is installed directly from the Git repository, for example, via the requirements.txt file. Other dependencies, especially python-o365, are installed automatically.</p>
<p>On the Microsoft side, an app registration in Microsoft Entra ID (formerly Azure AD) is required. An application is created there that receives the permission to send emails via Microsoft Graph (e.g., <code>Mail.Send</code>). For OAuth authentication, a Client ID, a Client Secret, and the Tenant ID are needed. Additionally, it must be specified whether the dispatch should occur via a user account or a shared mailbox.</p>
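<p>For context, the client credentials flow that python-o365 performs behind the scenes boils down to a single token request against the Microsoft identity platform. The following sketch only assembles such a request without sending it; the endpoint and scope are the documented Microsoft defaults, while the credential values are placeholders matching the settings below:</p>

```python
from urllib.parse import urlencode

# Placeholder credentials -- in a real project these come from your
# app registration in Microsoft Entra ID.
tenant_id = "my-tenant-id"
client_id = "my-client-id"
client_secret = "my-super-secret"

# Token endpoint of the Microsoft identity platform (v2.0).
token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"

# Form body of the client credentials grant: the app authenticates with
# its own secret and requests a token for the Microsoft Graph API.
body = urlencode({
    "grant_type": "client_credentials",
    "client_id": client_id,
    "client_secret": client_secret,
    "scope": "https://graph.microsoft.com/.default",
})
```

<p>python-o365 performs this exchange for you, including caching and refreshing the resulting token, so the snippet is purely illustrative.</p>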
<p>In Django, the configuration is limited to the settings. Only the mail backend is switched to django-o365, and the required credentials are provided:</p>
<pre><code class="language-python">EMAIL_O365_TENANT_ID = 'my-tenant-id'
EMAIL_O365_CLIENT_ID = 'my-client-id'
EMAIL_O365_CLIENT_SECRET = 'my-super-secret'
EMAIL_O365_SENDER = 'mysender@example.com'

# set backend
EMAIL_BACKEND = 'django_o365.backend.O365EmailBackend'
</code></pre>
<p>No changes are needed in the actual application code. Existing calls to <code>send_mail</code>, <code>EmailMessage</code>, or <code>EmailMultiAlternatives</code> continue to work as before. The switch to Exchange Online with OAuth 2.0 is done exclusively through configuration – which was one of the central goals in the development of django-o365.</p>
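<p>The reason no application code changes is Django's pluggable mail backend contract: a backend only has to implement <code>send_messages()</code> and return the number of messages sent. A framework-free sketch of that contract (the classes below are an illustration of the pattern, not the actual django-o365 implementation):</p>

```python
class BaseEmailBackend:
    """Minimal stand-in for Django's mail backend base class."""

    def send_messages(self, email_messages):
        """Send the given messages and return how many were sent."""
        raise NotImplementedError


class InMemoryBackend(BaseEmailBackend):
    """Toy backend that 'delivers' messages to a list -- django-o365
    delivers them to Exchange Online via python-o365 instead."""

    def __init__(self):
        self.outbox = []

    def send_messages(self, email_messages):
        # Django's mail API expects the number of successfully sent
        # messages as the return value.
        self.outbox.extend(email_messages)
        return len(email_messages)


backend = InMemoryBackend()
sent = backend.send_messages(["message-1", "message-2"])
```

<p>Because every backend honors this interface, swapping <code>EMAIL_BACKEND</code> is all it takes to reroute an application's mail.</p>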
<h2>Open Source &#x26; Outlook</h2>
<p>We at Blueshoe are firm believers in the open source approach. Therefore, django-o365 is, of course, designed as an open-source project and is developed openly on GitHub. The current focus is deliberately on a clear, stable core: reliable mail dispatch via Exchange Online with OAuth 2.0 authentication using the credentials workflow. Sending emails, including attachments, is already fully supported and covers the common use cases in web projects.</p>
<p>For the future, other authentication workflows are conceivable, such as more user-centric scenarios. The aspiration remains the same: a clean Django integration without unnecessary complexity in the application code. Feedback, suggestions, and contributions from the community are expressly welcome – they help to further develop the package in a targeted and practical manner.</p>]]></content:encoded>
            <category>Python</category>
            <category>Django</category>
            <category>Development</category>
            <enclosure url="https://blueshoe.io/img/blog/django-mailing-o365.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Docker Desktop and Kubernetes]]></title>
            <link>https://blueshoe.io/blog/docker-desktop-and-kubernetes</link>
            <guid>https://blueshoe.io/blog/docker-desktop-and-kubernetes</guid>
            <pubDate>Wed, 08 Mar 2023 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>In this article, we will take a glance at Docker Desktop in 2023 and focus on how developers can work with Kubernetes. Our team at Blueshoe just recently published a custom Docker Desktop extension for our open-source development tool Gefyra. We want to provide the most convenient developer experience (“DX”) for Kubernetes-based development workflows and Docker Desktop may be a good foundation. So let’s see.</p>
<p><img src="/img/blog/docker-desktop-and-kubernetes.jpg" alt="Docker Desktop and Kubernetes">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Installation
:::
:::globalParagraph
The installation of Docker Desktop is quite simple on all major platforms. You will find installation candidates for both Apple’s new silicon and Intel-based chipsets, too. That’s convenient and makes it easy for any developer who wants to start a journey with container-based development.
:::
:::globalParagraph
However, as someone who works with Linux (specifically Ubuntu), it bothers me a bit to download a >500 MiB install package from the browser to my machine. I usually want to add a software repository that automatically configures an update strategy. If I want to update Docker Desktop (yes, Docker Desktop notifies me in the UI), I have to download another installation package from the website and run the updater accordingly.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
VM backend on all platforms needed
:::
:::globalParagraph
Docker Desktop needs a virtual machine (VM) backend on all platforms, including Linux. That is required because software containers are a Linux-native technology and thus not directly supported on Windows and macOS. On Windows, you can either use the WSL2 subsystem (a lightweight VM with a limited Hyper-V toolset to run the Linux kernel) or a Hyper-V backend; depending on the Windows edition, the first or the second option is recommended. On macOS, Docker Desktop runs with HyperKit, a toolkit for embedding hypervisor capabilities into an application. On Linux, it uses QEMU.
:::
:::globalParagraph
The question of why exactly Docker Desktop needs a VM on Linux is answered here. I found the points raised relatable, although it means I do have to sacrifice performance.
:::
:::globalParagraph
At least on Linux, the alternative is to run only the Docker daemon (a.k.a Docker engine). That allows me to run and manage containers with system-level performance. It’s a huge advantage compared to other platforms. It doesn’t seem possible to use Docker Desktop with a native Docker daemon, though. It would be the best of both worlds, the features and convenience coming with Docker Desktop and the performance of a native Docker engine. Well, this option is probably never going to happen.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Performance &#x26; convenience
:::
:::globalParagraph
Speaking about convenience: What exactly is the developer experience with Docker Desktop about? Let me start with the most important disadvantage in 2023.
:::
:::globalParagraph
Because I work with the Docker CLI (together with the Docker engine) on Linux on a daily basis, I am very used to it. Yet, the Docker daemon was not (prominently) available on Windows or macOS and was only installable via the Docker Desktop VM on these platforms. In the past, my team at Blueshoe experienced significant differences in the speed of container execution and in the resource consumption of the VM-based workaround of Docker Desktop. For example, we used to run local Kubernetes clusters with k3d, based on Docker containers, which was fine for Linux users but nearly impossible for Mac users, even with comparable or better hardware specs (i.e. CPU, memory, etc.).
:::</p>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Some real computing power needed
:::
:::globalParagraph
The startup time of Docker Desktop, especially on macOS, is very long, and the container performance is, frankly, quite poor. Since you can only adjust the system resources for the entire VM, you have to cut a lot of available capacity from your host system in favor of the Docker VM. The downside is a very inflexible resource allocation: if you put much capacity into the Docker VM, you will probably miss it on your host, and vice versa. Only a very powerful machine (more than 8 cores and more than 16 GiB of memory) will be enough for serious container-based development tasks. In reality, you also run a resource-demanding IDE, a web browser with many open StackOverflow tabs, an instant messenger, and more on your development machine. That eats up a lot of capacity, too. Running Docker Desktop adds another huge demand that will most probably bring a common development computer to its knees. This takes away a lot of the convenience and developer experience of Docker Desktop.
:::
:::globalParagraph
Starting the integrated Kubernetes distribution takes a very long time, too. This got a lot better with the WSL2 integration on Windows; I suspect that the resource allocation of WSL2 is much more flexible than that of any Hyper-V-based VM. On macOS, this is still very time-consuming and frustrating if done multiple times a day.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Much better than just a CLI
:::
:::globalParagraph
But leaving the performance aside, Docker Desktop presents a very clean and intuitive graphical user interface. In my opinion, it’s very easy to use if you have worked with containers before. For example, it’s a great advantage to jump from logs to interactive terminals to environment variables, all at the click of a button. That’s a huge advantage compared to the plain CLI (even if you are very fast with it) and makes Docker Desktop feel like a development tool rather than something designed to “just run a software container”. Do you want to work with Kubernetes? Fine. Just activate the checkbox in the settings and wait for Kubernetes to become available.
:::
:::globalParagraph
All you additionally need is kubectl, the primary interface to any Kubernetes cluster. The kubeconfig is automatically prepared with the right context so that kubectl works out of the box.
:::</p>
<p><img src="/img/blog/docker-desktop-and-kubernetes-1.jpg" alt="kubernetes">{.object-cover .w-full .mb-5}</p>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Setting up Docker Desktop Kubernetes
:::
:::globalParagraph
Working with Kubernetes in Docker Desktop requires a rather powerful development machine. Aside from that requirement, you may face the limitation that there is only one Kubernetes version available: the one shipping with the installed Docker Desktop version. That may be a problem if you want to match the specific Kubernetes version of your production system (speaking about dev/prod parity).
:::
:::globalParagraph
<a href="/blog/minikube-vs-k3d-vs-kind-vs-getdeck-beiboot/">If you want to install a specific Kubernetes version check out this article</a>{.bs-link-blue}. You could use minikube with the --kubernetes-version flag and the Docker driver to install any available Kubernetes version on Docker Desktop. That might be even faster regarding the boot-up time.
:::
:::globalParagraph
The Kubernetes distribution coming with Docker Desktop is unopinionated. That’s an advantage if you want to work with a production-close setup. However, the downside is that developers have to provide every building block themselves. For example, there is no ingress controller available by default. Compared to k3d, it takes a few additional steps to make one available:
:::</p>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Which ingress controller should be used?
:::
:::globalParagraph
You can choose from a range of different ingress controllers for Kubernetes. The platform “learnk8s” maintains a very comprehensive spreadsheet that compares basically all Kubernetes ingress controllers: <a href="https://docs.google.com/spreadsheets/d/191WWNpjJ2za6-nbG4ZoUMXMpUK8KlCIosvQB0f-oq3k/edit#gid=907731238">https://docs.google.com/spreadsheets/d/191WWNpjJ2za6-nbG4ZoUMXMpUK8KlCIosvQB0f-oq3k/edit#gid=907731238</a>{.bs-link-blue :target="_blank"}
:::</p>
<p><img src="/img/blog/docker-desktop-and-kubernetes-2.jpg" alt="kubernetes">{.object-cover .w-full .mb-5}</p>
<p>:::globalParagraph
This decision should not matter much for developers, though. If you don’t know which ingress controller to use with Docker Desktop, chances are you can choose any of them. I’d personally recommend the Ingress-Nginx controller, which also happens to be an official Kubernetes project.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Install Ingress-Nginx to your Docker Desktop Kubernetes
:::
:::globalParagraph
To install it, you first need to make sure your kubeconfig context is set to docker-desktop. In a terminal, you can run the following command:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-shell">> kubectl config current-context

docker-desktop
</code></pre>
<p>:::</p>
<p>:::globalParagraph
If this command tells something different, set the context to docker-desktop with:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-shell">> kubectl config use-context docker-desktop

Switched to context "docker-desktop".
</code></pre>
<p>:::</p>
<p>:::globalParagraph
Then, just apply the ingress controller configs for Kubernetes, like so:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-shell">> kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.6.4/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p>:::</p>
<p>:::globalParagraph
You can check the ingress controller with the following command, which should report something like this:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-shell">> kubectl -n ingress-nginx get pod

  NAME                                        READY   STATUS      RESTARTS   AGE
  ingress-nginx-admission-create-szn97        0/1     Completed   0          70s
  ingress-nginx-admission-patch-plpcx         0/1     Completed   0          70s
  ingress-nginx-controller-6b94c75599-vpjjd   1/1     Running     0          70s
</code></pre>
<p>:::</p>
<p>:::globalParagraph
Et voilà, your local Kubernetes cluster is ready to serve real HTTP traffic.
:::
:::globalParagraph
<strong>Important:</strong> the Kubernetes of Docker Desktop binds directly to the ports of your local machine. That means the ingress-controller we just installed will be started on port 80 which must be free on your machine.
:::</p>
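<p>:::globalParagraph
If you are unsure whether something on your machine already occupies port 80, a quick TCP connect check reveals it. A minimal sketch (any language works; Python here):
:::</p>

```python
import socket

def port_in_use(host: str, port: int) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        return sock.connect_ex((host, port)) == 0

# If this prints True, stop the conflicting service before
# installing the ingress controller.
print(port_in_use("127.0.0.1", 80))
```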
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Testing the Ingress-Controller with a little Demo App
:::
:::globalParagraph
Let’s confirm it’s working by applying a test workload and exposing it with an ingress object:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-shell">> kubectl apply -f https://raw.githubusercontent.com/gefyrahq/gefyra/main/testing/workloads/hello_dd.yaml
</code></pre>
<p>:::</p>
<p>:::globalParagraph
(This is the “hello-nginx” application, from our Kubernetes development tool Gefyra)
:::
:::globalParagraph
If you check the ingress objects with:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-shell">> kubectl get ingress

NAME            CLASS    HOSTS                    ADDRESS   PORTS   AGE
hello-ingress   &#x3C;none>   hello.127.0.0.1.nip.io             80      7s
</code></pre>
<p>:::
:::globalParagraph
you will find the hello-ingress object to be reachable at http://hello.127.0.0.1.nip.io (this requires nip.io domains to work in your current network; if they don’t, look into “DNS rebind protection”). Once you point your browser to that address, you will be welcomed with the Hello Nginx screen.
:::</p>
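<p>:::globalParagraph
The nip.io service needs no setup because the target IP address is embedded in the hostname itself: a DNS query for any name of the form <code>&#x3C;anything>.&#x3C;ip>.nip.io</code> simply answers with that IP. Conceptually (a simplified sketch of the naming scheme, not the actual resolver):
:::</p>

```python
import re
from typing import Optional

def nip_io_target(hostname: str) -> Optional[str]:
    """Extract the IPv4 address embedded in a *.nip.io hostname."""
    match = re.search(r"(\d{1,3}(?:\.\d{1,3}){3})\.nip\.io$", hostname)
    return match.group(1) if match else None

# The ingress host from above resolves to the local machine.
print(nip_io_target("hello.127.0.0.1.nip.io"))
```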
<p><img src="/img/blog/docker-desktop-and-kubernetes-3.jpg" alt="kubernetes">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
It tells a bit about your cluster: the pod name of the process running this simple page and the IP address of the pod.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
What does the Docker Desktop GUI look like?
:::
:::globalParagraph
Going back to Docker Desktop, you will also find all containers we just installed to Kubernetes in the “Containers” section.
:::</p>
<p><img src="/img/blog/docker-desktop-and-kubernetes-4.jpg" alt="kubernetes">{.object-cover .w-full .mb-5}</p>
<p>:::globalParagraph
By clicking on the “View details” button at the “k8s_hello-nginx_hello-nginxdemo” container, you will be presented with the logs and a lot of other useful information. You can even drop into an interactive shell by clicking on the “Terminal” tab.
:::</p>
<p><img src="/img/blog/docker-desktop-and-kubernetes-5.jpg" alt="kubernetes">{.object-cover .w-full .mb-5}</p>
<p>:::globalParagraph
That is very handy if you want to inspect the container state or execute a one-off management command of your application.
:::</p>
<p>:::GlobalPodcastSection{:videoId="tyvE9VlSWkE" :videoPosition="left" .mb-6}
::::GlobalPreTitle{:color="text-bs-green" .mb-3}
OUR PODCAST: TOOLS FOR THE CRAFT
::::
::::GlobalTitle{:tag="h3" .mb-6}
E3: Deep dive into Getdeck
::::
::::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert are introducing and demonstrating Getdeck as well as comparing local and remote Kubernetes and preproduction clusters
::::
::::globalParagraph{:font-size="lg" }
More editions of our podcast can be found here:
::::
::::GlobalButton{:url="/podcast/" :label="Show more" :color="green"}
::::
:::</p>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Writing Code with Docker Desktop Kubernetes
:::
:::globalParagraph
As a developer, it is just natural that I want to write code and iterate fast. Many modern production environments are Kubernetes-based, so it is only consistent that Kubernetes-based development environments are trending.
:::
:::globalParagraph
If DevOps engineers put a lot of effort into writing good Kubernetes configs for all the resources that make up the application, why don’t we take these resources and roll them out to our developers as well?
:::
:::globalParagraph
Let’s say you want to bootstrap a Kubernetes-based development environment. Here is what you need:
:::</p>
<p>:::GlobalBlock{.ol-decimal .mb-5}</p>
<ol>
<li>Pull all resources for provisioning a Kubernetes cluster together (e.g. you need Helm charts, Kustomize resources or plain YAML files). If in doubt, ask your DevOps to provide you with guidance.</li>
<li>Install all components that make up the application to your local Kubernetes cluster (i.e. required databases, caches, indexes, <a href="/our-services/api-development-company/">APIs</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}, and other applications). Alternatively, you can just install the components that are really required for your specific application to save computing capacity.</li>
<li>Make sure everything is running.
:::
:::globalParagraph
From here you have basically a few options to set up the development tooling.
:::</li>
</ol>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Running local container images in Kubernetes
:::
:::globalParagraph
Kubernetes usually pulls images from a container registry, which would mean you have to push and pull the image you have built after every change. Docker Desktop makes this easy by using the dockershim to share the image cache between the Docker engine and Kubernetes.
:::
:::globalParagraph
The dockershim is a component coming with Docker Desktop Kubernetes that translates between the Kubernetes kubelet and the Docker Engine.
:::
:::globalParagraph
This is a huge advantage for developers as they don’t have to push and pull their container images before they can run them in Docker Desktop Kubernetes. This feature is somewhat similar to what you can achieve with k3d, minikube, and kind by importing the container image to their cluster nodes. Yet, with Docker Desktop you literally only need to build the container image as usual and don’t have to import it to a Kubernetes node. So that is even faster than what you get with other local Kubernetes solutions.
:::
:::globalParagraph
If you wanted to modify the output of the “hello-nginx” application from above, then first take a look at the workload specification:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-nginxdemo
spec:
  selector:
    matchLabels:
      app: hello-nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-nginx
    spec:
      containers:
        - name: hello-nginx
          image: nginxdemos/hello
          ports:
            - containerPort: 80
</code></pre>
<p>:::</p>
<p>:::globalParagraph
It says the container image running is “nginxdemos/hello”, pulled from Docker Hub. You can find that information with Docker Desktop in the “Containers” section as well (see the screenshot above).
:::
:::globalParagraph
Now you can either build a new version of that container image with <code>docker build . -t nginxdemos/hello</code> and delete the Kubernetes pod:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-shell">> kubectl get pod

NAME                               READY   STATUS    RESTARTS   AGE
hello-nginxdemo-6c4f96b577-2rkcp   1/1     Running   0          51m

> kubectl delete pod hello-nginxdemo-6c4f96b577-2rkcp

pod "hello-nginxdemo-6c4f96b577-2rkcp" deleted
</code></pre>
<p>:::</p>
<p>:::globalParagraph
You will get a new pod almost immediately, but this time running your custom container image that you just built.
:::
:::globalParagraph
Instead of reusing the original container image coming with the Kubernetes resource, you could also build a custom container image and name it anything you want. Then, you can simply modify the deployment and run that image:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-shell">> kubectl patch deployment hello-nginxdemo -p '{"spec":{"template":{"spec":{"containers":[{"name":"hello-nginx","image":"my-custom-image"}]}}}}'
</code></pre>
<p>:::</p>
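<p>:::globalParagraph
Hand-writing the inline JSON for such patches is error-prone. The same payload can just as well be generated, for example in Python (a sketch; the image name is the placeholder from above):
:::</p>

```python
import json

# Strategic merge patch: only the fields to change are listed;
# Kubernetes merges them into the existing Deployment spec.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"name": "hello-nginx", "image": "my-custom-image"}
                ]
            }
        }
    }
}

payload = json.dumps(patch)
command = f"kubectl patch deployment hello-nginxdemo -p '{payload}'"
print(command)
```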
<p>:::globalParagraph
This will lead to the same result: the pod gets restarted with the new image that you just created. Kubernetes won’t need to pull the image, since the Docker engine you use for building the container image is exactly the same one used by Docker Desktop Kubernetes. They share the same image cache, which makes it very convenient to run an arbitrary image in the local cluster.
:::
:::globalParagraph
By the way, the container will have all Kubernetes resources and configs available. This makes it very close to the production environment.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Using additional tooling for even faster interaction
:::
:::globalParagraph
If you don’t want to build the container image every time you make a code change, you will need some additional tooling. If you want to attach a debugger to the container process, you won’t be able to do this simply with the local Kubernetes. In addition, if you need to override environment variables (let’s say you want to set a DEBUG flag) this must be done by modifying your Kubernetes configs in the cluster.
:::
:::globalParagraph
Blueshoe created Gefyra for exactly that use case. With it, you can simply docker run …, pardon me, I meant gefyra run … (the syntax is almost identical) your container image without even forcing any Kubernetes pod to reload. That way, you can mount your current working tree into the container and activate the hot-reloading flag of your process (if available). You can easily set environment variables and, best of all, expose that container on a dedicated local port to iterate even faster. The container will behave as if it ran within a Kubernetes pod (including networking semantics), hence reaching all Kubernetes-based resources such as databases or other applications.
:::
:::globalParagraph
This works with the command line application gefyra or alternatively, developers can take advantage of the Gefyra Docker Desktop extension from the marketplace.
:::
:::globalParagraph
Check out Gefyra and our getting started guide to use it with Docker Desktop Kubernetes or the Docker Desktop extension.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Closing Remarks
:::
:::globalParagraph
Although Docker Desktop makes development work with Kubernetes simple, there are still a few challenges. The resource requirements are overwhelming, and setting up a cluster that is close to production takes time and knowledge. I created a discussion on Docker's idea board on GitHub to make this process simpler in the future. Maybe it will be picked up and realized.
:::
:::globalParagraph
There are a couple of other interesting Docker Desktop extensions already in the marketplace that can improve the development experience even further. Stay tuned for another article in which we want to present existing Kubernetes-based development extensions.
:::
:::globalParagraph
In the long run, I actually don’t see local Kubernetes development as a sustainable option, though. Remote development environments are the future! Getdeck Beiboot will run all Kubernetes-based resources, and with tools like Gefyra, we enable developers to work in a real Kubernetes-based development environment with just a few containers running locally.
:::
:::globalParagraph
If you want to know more about Kubernetes-based development follow me on LinkedIn or drop us a line at Blueshoe.
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Docker</category>
            <category>Development</category>
            <enclosure url="https://blueshoe.io/img/blog/docker-desktop-and-kubernetes.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Docker vs. Podman]]></title>
            <link>https://blueshoe.io/blog/docker-vs-podman</link>
            <guid>https://blueshoe.io/blog/docker-vs-podman</guid>
            <pubDate>Mon, 13 Feb 2023 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>In this article, we'll compare Podman and Docker to see how they stack up against each other. We'll start with an overview of what each tool is and why you might want to choose one over the other. Then, we'll dive into the details of what makes each tool unique before coming to our conclusion about which one is best for your needs: Podman or Docker!</p>
<p><img src="/img/blog/docker-vs-podman.jpg" alt="Docker vs. Podman">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="lg" .mb-5}
What is Docker in 2023?
:::
:::globalParagraph
Docker is a long-time player in the container world, and it's been around since 2013. If you've been following the industry for some time, you surely have heard of Docker before, or even used it yourself!
:::
:::globalParagraph
For a few years now, the company has been focusing on developer experience and making sure that containers can be used effectively by developers at all stages of their application lifecycle. It comes with a rich feature set of operating containers in production, too. The company behind the technology, Docker Inc., also made a fantastic business pivot and is now very profitable with its subscription-based model. Relying on Docker, as a commercially backed product, might be a robust decision for the future.
:::
:::globalParagraph
Given that Docker has been on the table for a long time, it also supports many features, such as the rootless mode (more on this later). This ensures that users don't need to run the Docker daemon with root privileges on their servers anymore. That makes it easier for everyone, as you can use containers without having to worry about having privileged access or the security issues that you would otherwise encounter as a root user.
:::
:::globalParagraph
Docker provides very comprehensive documentation for virtually any topic that comes up when working with containers.
:::</p>
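As a sketch of how the rootless mode mentioned above is set up in practice (following Docker's documented tooling; paths and package names vary by distribution):

```shell
# Install and start the rootless daemon for the current (non-root) user
dockerd-rootless-setuptool.sh install

# Point the CLI at the rootless daemon and run a container without root
docker context use rootless
docker run -d -p 8080:80 nginx
```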
<p>:::globalTitle{:size="lg" .mb-5}
What is special about Podman?
:::
:::globalParagraph
Podman is a relatively new container runtime, but it has already become part of many standard Linux software repositories. You don’t need to add third-party sources to install it on your host; it is sometimes even available on a fresh installation already.
:::
:::globalParagraph
Podman runs daemon-less and has a development experience that feels very close to Docker's, i.e., most of the commands in the Podman CLI are identical to the Docker CLI. Podman Desktop, a graphical user interface for Podman, also looks almost identical to Docker Desktop.
:::
:::globalParagraph
The documentation of Podman is, frankly, a bit awful and omits certain topics entirely.
:::
:::globalParagraph
The following figure, taken from Google Trends, clearly shows the growing interest in Podman over the last few years.
:::
<img src="/img/blog/docker-vs-podman-1.jpg" alt="docker-vs-podman">{.object-cover .max-w-full .mb-5}
:::globalParagraph
Podman is a community-driven open-source project that is sponsored by Red Hat.
:::</p>
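Because the two CLIs are nearly identical, the same workflow works with either tool; many users even alias one to the other (image name is just an example):

```shell
# The same command works with both tools
docker run -d -p 8080:80 nginx
podman run -d -p 8080:80 nginx

# A common migration shortcut
alias docker=podman
```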
<p>:::globalTitle{:size="lg" .mb-5}
What sets Podman apart from Docker?
:::
:::globalParagraph
Podman and Docker share many similar characteristics. Both are tools for managing containers on Linux, built on the same kernel features (such as namespaces and cgroups) that enable them to run processes in isolation from one another (‘sandboxing’). The user interface is almost identical, which makes migrating between the two very simple and practical. You can even stick to most of the container images you already have (as long as they are compliant with the OCI container image format).
:::
:::globalParagraph
Let’s have a closer look at the differences between Podman and Docker.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Daemon – or no daemon
:::
:::globalParagraph
Docker runs a daemon process (‘dockerd’) on the host system, usually equipped with root privileges. What is the daemon process doing in the depths of the system? Basically everything that is needed to manage containers on the host: overseeing running container instances, managing container images, providing storage volumes and much more. It creates container networks on request and delegates the low-level container work to containerd and runc. The daemon process exposes its functionality through an HTTP-based <a href="/our-services/api-development-company/">API interface</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} that serves all kinds of end-user interfaces, including the Docker CLI. Depending on the platform, this interface is materialised as a Unix socket, named pipe or TCP port (with lots of options to make it secure). The Docker daemon runs with very low resource consumption.
:::</p>
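You can observe this API directly. Assuming the default Unix socket location, the daemon answers plain HTTP requests, which is exactly what the Docker CLI does under the hood:

```shell
# Query the Docker daemon's HTTP API over its Unix socket
curl --unix-socket /var/run/docker.sock http://localhost/version

# The Docker CLI performs essentially the same request
docker version
```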
<p>:::globalTitle{:size="sm" :tag="h3" .mb-5}
No daemon with podman
:::
:::globalParagraph
Podman, on the other hand, does without a daemon process (‘daemonless container engine’). Container management happens from within the client itself. Hence, calling Podman as a user other than root will only allow operations that this user is authorised to perform. Naturally, this limits the possibilities for the user – but also for intruders that compromise a container from within.
:::
:::globalParagraph
So from a security perspective, going ‘rootless’ is a pretty good idea. And yet, its advantages reach their limits rather quickly. We will look at this more closely in the next section.
:::</p>
<p>:::globalTitle{:size="sm" :tag="h3" .mb-5}
Podman for production - a little more tricky
:::
:::globalParagraph
A more crucial drawback of not running a daemon process becomes apparent when trying to run Podman for production workloads. For example, with Docker you can simply specify a ‘restart policy’ for containers and be sure they are restarted in the case of a crash. Podman does not run a process monitor and thus must delegate this activity somewhere else: enter our good ol’ friend systemd. Although systemd is very common and probably well understood by the majority of system administrators, it is still a very complex additional solution that comes with its own costs. Podman supports the user by generating the systemd units (the configuration that lets systemd know how to probe and manage a process), but this is a completely different ecosystem. Coming from Docker, this could mean a certain ramp-up time to get everything running with the desired behaviour. On the other hand, another plus for systemd is that a unit can be run with lowered user privileges just as well.
:::
:::globalParagraph
Be it Docker or Podman – if production deployments are to be taken seriously, a daemon process will ultimately have to be involved somewhere in either case. And of course, if you want to use the Podman REST API, there is a service process (daemon) for it, too.
:::</p>
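For illustration (the container name is a placeholder), Podman can generate the systemd units that take over restart duties:

```shell
# Generate a systemd unit file, with restart handling, for an existing container
podman generate systemd --new --files --name mycontainer

# Install and enable it for the current (non-root) user
mv container-mycontainer.service ~/.config/systemd/user/
systemctl --user enable --now container-mycontainer.service
```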
<p>:::GlobalPodcastSection{:videoId="tyvE9VlSWkE" :videoPosition="left" .mb-6}
::::GlobalTitle{:tag="h3" .mb-6}
Tools for the Craft - Edition 1: Kubernetes development environments
::::
::::globalParagraph{:font-size="lg" .mb-4}
Check out the first edition of our Kubernetes podcast "Tools for the Craft: Navigating the Kubernetes ecosystem" here. Michael and Robert are talking in depth about the ins and outs of local Kubernetes development and also provide some real coding examples.
::::
::::globalParagraph{:font-size="lg" }
More editions of our podcast can be found here:
::::
::::GlobalButton{:url="/podcast/" :label="Show more podcast editions" :color="green"}
::::
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Rootful and rootless
:::
:::globalParagraph
A highly useful (and sometimes underestimated) feature of Docker is its overlay networks. These are like ‘real’ (virtual) networks on a host machine. Docker networks allow all kinds of complex connection topologies, with routes, NATs, IP pools and so on. That is particularly useful in situations where it is desired to mirror a certain production environment and to loosely couple the different services that make up an application. In fact, each container runs within its own namespace in the Linux kernel, making it possible to set resource constraints, network settings and more for every container. One of the basic ideas of segregating the Linux kernel into multiple namespaces was process security. At the moment, most of this is only possible with root privileges. Yet, sandboxing with namespaces is possible for unprivileged users, too.
:::</p>
<p>:::globalTitle{:size="sm" :tag="h3" .mb-5}
How it looks in reality
:::
:::globalParagraph
Important features disappear when running Docker in rootless mode, and it’s the same with Podman. There is, however, an option to run Podman rootful in order to regain these capabilities, especially proper networking.
:::
:::globalParagraph
In practice, the Podman development team created what is, in my humble opinion, a questionable workaround for the missing networking by introducing the ‘pod’ concept as an alternative.
:::
:::globalParagraph
Using Podman, you can combine multiple containers in a pod. ‘Pod’ is the name for a higher-level organisation of kernel namespaces: all containers sharing the same pod are in fact in the same kernel namespace(s). Most importantly, they share the same network namespace, which makes it possible for the container processes to talk to each other via TCP sockets. For example, you can bind one process to localhost:8000 and another to localhost:8001, and both will be able to communicate over localhost. This would not be possible with two separate podman run … (or docker run …) invocations, as they are isolated from each other by default. Using the pod concept ultimately removes the need for container networking entirely, and thus the need for rootful operation.
:::</p>
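A quick sketch of the pod workflow (pod, container and image names are examples):

```shell
# Create a pod; port bindings are attached to the pod, not to the containers
podman pod create --name mypod -p 8080:80

# Both containers join the pod's network namespace
# and can reach each other via localhost
podman run -d --pod mypod --name web nginx
podman run -d --pod mypod --name app my-backend-image
```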
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
The Podman "Infra Container"
:::
:::globalParagraph
By the way: each Podman pod gets a special container called the ‘infra container’. It does nothing but sleep once the pod is created. All attributes defining the pod are actually assigned to this special container, including port bindings, kernel namespaces, resource constraints and so on. Once the pod is created, these attributes can never be changed again. Let’s say you create a new pod and later add a container that needs to bind an additional host port – Podman will not be able to do this. You have to recreate the entire pod with the new port binding (or other attributes, respectively).
:::</p>
<p>:::globalTitle{:size="sm" :tag="h3" .mb-5}
Privileges vs. capabilities
:::
:::globalParagraph
Running any process with dropped privileges comes with considerable cuts in capabilities. That makes sense, especially to prevent exploited container processes from performing system modifications or accessing other processes. Dropping the execution privileges is generally preferable, and I take this topic very seriously. However, weakening the sandbox mechanism (by sharing kernel namespaces) to compensate for the missing network capabilities introduces another class of system vulnerabilities (something we have experienced a lot in the past).
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Podman pods and Kubernetes
:::
:::globalParagraph
The Podman team claims that working with Podman pods will make the transition to Kubernetes much easier. In fact, you can create a pod with Podman (putting in all the containers you need and setting certain attributes) and automatically generate a valid Kubernetes YAML file from it. And yes, the technical foundation is the same. But still, who asked for that feature?
:::</p>
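For example (pod and file names are placeholders), a Podman pod can be exported to a Kubernetes manifest and replayed:

```shell
# Export a running pod as a Kubernetes manifest
podman generate kube mypod > mypod.yaml

# Replay it with Podman itself, or apply it to an actual cluster
podman play kube mypod.yaml
kubectl apply -f mypod.yaml
```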
<p>:::globalTitle{:size="sm" :tag="h3" .mb-5}
The monolithic pod
:::
:::globalParagraph
I found a tutorial that suggests putting a web server, an application server and the database into one Podman pod. That would be handy if I wanted to operate this setup with Podman on a server host. But here is the point: who, with an experienced background in Kubernetes, would ever apply such a pod definition in a Kubernetes environment? I would get one monolithic pod containing everything that makes up an application. What about scalability, resiliency and, of course, security? A serious Kubernetes deployment employs abstract workload definitions declared in ‘Deployments’, ‘StatefulSets’ and other higher-level Kubernetes objects. I have never seen this pattern in the real world (which does not mean it doesn’t exist). Applying bare pods does not seem to be a practical approach for Kubernetes at all. If the feature were used to construct real Kubernetes deployment patterns, such as sidecars or adapters, I would be very happy, though.
:::</p>
<p>:::globalTitle{:size="sm" :tag="h3" .mb-5}
Unfulfilled promises
:::
:::globalParagraph
Hence, I find this feature misleading, especially in the way it is communicated in Podman’s documentation. No, I cannot define a Podman pod on a local machine and migrate it that easily to a production Kubernetes cluster. On Kubernetes, we use strong networking mechanisms such as load balancers, IP routing and network policies – and with them, loose coupling.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Conclusion
:::
:::globalParagraph
I hope this article has given you a better understanding of the differences between Podman and Docker. As you can see, there are many similarities between the two tools, but they also have some key differences that could make one option more suitable than the other, depending on your use case. While Podman is still in its early stages of development, it has already shown signs of being a worthy alternative to Docker by providing a simpler user experience while still maintaining compatibility with existing images from other registries like Docker Hub or Google Container Registry (GCR). I look forward to seeing how these tools evolve over time as they both continue to add new features. Be sure to have a look at Podman Desktop, too. I am not sure whether Podman is also going down the developer experience (‘DX’) path, just like Docker, or whether it is aiming at operating production servers. Let me know what you think.
:::
:::globalParagraph
Feel free to follow me on LinkedIn or join our Discord.
:::</p>]]></content:encoded>
            <category>Docker</category>
            <category>Development</category>
<enclosure url="https://blueshoe.io/img/blog/docker-vs-podman.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[The evolution of application development to cloud native]]></title>
            <link>https://blueshoe.io/blog/evolution-of-application-development-to-cloud-native</link>
            <guid>https://blueshoe.io/blog/evolution-of-application-development-to-cloud-native</guid>
            <pubDate>Mon, 23 Aug 2021 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Becoming a truly cloud-native company is not easy. It takes evolution. But what stages of evolution do companies have to go through to arrive at the desired stage? And what does that stage entail? Take a look with us!</p>
<p><img src="/img/blog/evolution-of-application-development-to-cloud-native.jpg" alt="The evolution of application development to cloud native">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="lg" .mb-5}
Stage 1: Normalisation
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>Application development teams use version control.</li>
<li>Application development teams use standard development procedures.
:::
:::globalParagraph
This happens, for example, by using Git or SVN. A main branch and individual feature branches make it easier to keep control of the application code, and release procedures are versioned as well.
:::
:::globalParagraph
If you have not yet arrived at this stage, you must have been living under a rock for the last 20 years. Go and get version control implemented. Now.
:::</li>
</ul>
<p>:::globalTitle{:size="lg" .mb-5}
Stage 2: Standardisation
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>Teams build on a standard set of technology.</li>
<li>Teams deploy to a standard operating platform.
:::
:::globalParagraph
Standardisation drastically increases the efficiency of your organisation. That’s not only true for your tech team – but for them, it means working with container technology such as <a href="/blog/strategies-for-slim-docker-images/">Docker</a>{.bs-link-blue}, Docker Compose or <a href="/blog/local-kubernetes-development/">Kubernetes</a>{.bs-link-blue}.
:::
:::globalParagraph
This stage is the groundwork for moving towards the Cloud Native organisation.
:::</li>
</ul>
<p><img src="/img/blog/unikube_c-ndem_Infographic.jpg" alt="unikube_c-ndem_Infographic">{.object-cover .w-full .mb-5}</p>
<p>:::globalTitle{:size="lg" .mb-5}
Stage 3: Expansion
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>Applications consist of multiple smaller moving parts, loosely coupled services.</li>
<li>Applications are designed for scale, resiliency and pace of change.
:::
:::globalParagraph
Your organisation is using service-oriented architectures, message brokering, event streams and loosely coupled (REST, GraphQL, etc.) interfaces. This helps with building specialist teams, faster development and better handling of complex applications by applying the “divide and conquer” principle.
:::
:::globalParagraph
To arrive at this stage, it is important to already plan out your application as a structure consisting of individually developed services from the get-go, since splitting up a monolithic application structure later on is a real struggle.
:::</li>
</ul>
<p>:::globalTitle{:size="lg" .mb-5}
Stage 4: Automated Application Delivery
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>Teams reuse deployment patterns.</li>
<li>Version control for deployment patterns and configurations.</li>
<li>Automated development environment provisioning.</li>
<li>Teams use a standard set of build and test systems.</li>
<li>Service discovery is used in applications.</li>
<li>Security teams are involved in design and deployment.</li>
<li>Automated code and manifest security profiling.
:::
:::globalParagraph
Technologies and principles you may have implemented in stage 4 could include Helm, Quay, GitHub Actions, Continuous Integration, ArgoCD, Service Mesh, Network Policies, Pod Disruption Budget.
:::
:::globalParagraph
Applications, or better yet, resilient small services have a high release cadence. End-to-end automated deployment patterns (e.g. using Helm for all parts) with GitOps are in place. A “push” to the source management system triggers traceable changes to the infrastructure and applications. All team members (with a special focus on developers) know the pivotal elements of the continuous integration pipeline and solve challenges on their own. In addition, members of a dedicated security team are involved in the creation of architectures and services. Security fixes are rolled out just as fast as they appear.
:::</li>
</ul>
<p>:::globalTitle{:size="lg" .mb-5}
Stage 5: Automated Application Lifecycle Management
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>Development teams can access all services for development.</li>
<li>Production can be replicated to development.</li>
<li>Applications use advanced operating platform patterns.</li>
<li>Applications manage themselves and the operating platform.
:::
:::globalParagraph
Examples: Operators, CRD (Custom Resource Definitions), Auto Scaling and Probes.
:::</li>
</ul>
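The probes and auto scaling mentioned above can be sketched in a Deployment manifest like this (all names, images and paths are placeholders, not taken from a real project):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service            # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: app
          image: registry.example.com/my-service:1.0.0
          ports:
            - containerPort: 8000
          # The application tells Kubernetes about its own health:
          readinessProbe:      # ready to process requests?
            httpGet:
              path: /ready
              port: 8000
          livenessProbe:       # still alive, or should it be restarted?
            httpGet:
              path: /healthz
              port: 8000
```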
<p>:::globalParagraph
Your development team hunts a complex bug that embarrassingly sits between all of your services? No problem, since in stage 5 your teams can provision the complexity of your production environment with the flick of a finger. In addition, all services manage their own lifecycle without the need for manual intervention.
:::
:::globalParagraph
There is an update that requires a data migration? No problem for your Kubernetes Operators, which detect the available update in your registry and automatically apply the needed scripts to keep your application’s data consistent. Applications themselves tell Kubernetes about their health – are they ready to process requests, or is more capacity required? May we scale down a bit to save money? In stage 5, that’s nothing your team has to take care of anymore.
:::
:::globalParagraph
Are you at stage 5 yet? Yes? Congratulations! But if not, don’t worry. From our own assessment, very few companies have truly arrived at stage 5 yet. We feel that every organisation should do its best to arrive at stage 3 sooner rather than later and immediately start looking towards the mid-term goal of reaching stage 4. With that strategy, you are well on your way to being set up for future success.
:::
:::globalParagraph
If you want more insights into the Kubernetes ecosystem, feel free to follow Michael Schilonka on LinkedIn.
:::</p>
<p>::GlobalPodcastSection{:videoId="KapP6ouEL3E" :videoPosition="left" :bg="bg-bs-gray" .mb-5}
:::GlobalPreTitle{:color="text-bs-green" .mb-3}
OUR PODCAST: TOOLS FOR THE CRAFT
:::
:::GlobalTitle{:tag="h3" .mb-6}
E2: Remote Kubernetes development environments
:::
:::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert are talking about the various options developers have for running remote Kubernetes development environments.
:::
:::globalParagraph{:font-size="lg" .mb-4}
More editions of our podcast can be found here:
:::
::::GlobalButton{:url="/podcast/" :label="Show more podcasts!" :color="green"}
::::
::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Gefyra</category>
            <category>Security</category>
            <category>Development</category>
            <category>Project Management</category>
<enclosure url="https://blueshoe.io/img/blog/evolution-of-application-development-to-cloud-native.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[FastAPI in Production: Here's How It Works!]]></title>
            <link>https://blueshoe.io/blog/fastapi-in-production</link>
            <guid>https://blueshoe.io/blog/fastapi-in-production</guid>
            <pubDate>Tue, 10 Dec 2024 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>With FastAPI, you can benefit from a wide range of advantages, including speed, modernity, and flexibility. However, to ensure that your FastAPI application runs successfully in production, you need the right strategies and tools. Here you'll find the most important best practices and tools to operate your application efficiently and securely.</p>
<p><img src="/img/blogs/api-technologies-fastapi.svg" alt="Blueshoe und FastAPI in production">{.object-cover .max-w-full .mb-5}</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Introduction
:::
:::GlobalParagraph
In today's digital world, implementing FastAPI applications in production is an important step for companies and developers. FastAPI offers a quick, modern, and flexible way to develop and operate APIs. But how do you successfully bring your FastAPI application into production? Learn which best practices and tools will help you operate your FastAPI application performantly and securely. We'll start with a few basic conceptual principles before we look at optimizing the production environment.
:::</p>
<p>:::GlobalButton{:url="/our-services/api-development-company/" :label="Learn more about our API development services" :color="blue" .mb-6}
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
What is FastAPI anyway?
:::
:::GlobalParagraph
FastAPI is a modern, fast (high-performance) web framework for Python, based on Starlette and Pydantic. It was written by <a href="https://github.com/tiangolo">Sebastián Ramírez a.k.a. tiangolo</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} and is one of the fastest-growing Python web frameworks. Writing APIs in FastAPI is also very efficient, as it leverages the advantages of Python type hints and asynchronous programming models. Moreover, FastAPI automatically generates interactive API documentation with Swagger UI and ReDoc, which significantly reduces development effort.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
The 12 Factor App - What Makes a Good Application
:::
:::GlobalParagraph
Blueshoe is a strong proponent of the 12 Factor App approach. The 12 Factor App is a methodology for building software-as-a-service apps that (to quote the authors) should meet the following requirements:
:::</p>
<p>:::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li>Use declarative formats for automation of configuration to minimize time and costs for new developers in the project</li>
<li>Have a clean contract with the underlying operating system, offering maximum portability between execution environments</li>
<li>Be suitable for deployment on modern cloud platforms, avoiding the need for servers and server administration</li>
<li>Minimize the difference between development and production to enable continuous deployment for maximum agility</li>
<li>Be scalable without significant changes in tooling, architecture, or development processes
:::</li>
</ul>
<p>:::GlobalParagraph
Thus, the approach encompasses concepts such as a clean implementation of application configuration or parity between dev and production environments. All factors can be found on the <a href="https://12factor.net/de/">12 Factor App website</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}. With FastAPI, this approach can be implemented well.
Thanks to the lightweight nature of the framework, for example, Factor IX (disposability) can be fulfilled quite simply, and a quick start of the application can be guaranteed.
:::</p>
<p>:::GlobalParagraph
Having clarified the concepts and basic principles for <em>the</em> SaaS app, let's move on to the perfect FastAPI production setup.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Server Runner - Who Runs Your App?
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Gunicorn + Uvicorn - The Old Way
:::</p>
<p><img src="/img/blog/old_way.gif" alt="old way gif">{.object-contain .w-128}</p>
<p>:::GlobalParagraph
A few months ago, the combination of Gunicorn with Uvicorn workers was the preferred choice for serving a FastAPI API. Gunicorn was used because of its ability to manage a large number of processes.
:::
:::GlobalParagraph
This made it possible to distribute the load across multiple processes, keep performance consistent, and increase the API's availability. Uvicorn was used as the worker process to handle requests, as it offers an efficient implementation of the ASGI standard. However, until recently, Uvicorn did not directly support worker management and restarting, which made using Gunicorn as a process manager necessary. By combining Gunicorn and Uvicorn, developers could leverage the advantages of both tools to build a scalable and reliable API infrastructure.
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
The FastAPI CLI - The New Way
:::</p>
<p>:::GlobalParagraph
In the meantime, this combination of Gunicorn and Uvicorn is no longer needed, as Uvicorn now also handles worker management itself.
:::
:::GlobalParagraph
You can now use Uvicorn alone, <em>or</em> use the new FastAPI CLI. This is based on Uvicorn, but significantly simplifies developer life by providing management commands for FastAPI. It can also be used as an entrypoint command to serve production APIs.
:::</p>
<p>:::GlobalParagraph
To maximize performance in a multiprocessor environment, the FastAPI CLI <em>run</em> command can be given a flag to set the number of service workers. The resulting command looks like this:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-bash">fastapi run --workers 4 &#x3C;path to main file>
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
This starts Uvicorn with production optimizations and creates 4 workers that can process requests in parallel.
When you want to host your application in Kubernetes, it is recommended to handle replication at the cluster level rather than inside each individual container. In that case you don't need the <em>workers</em> flag and should configure the number of replicas in your deployment configuration instead.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Reverse Proxies
:::</p>
<p>:::GlobalParagraph
A Reverse Proxy is a server that sits between clients (e.g. browsers or other applications) and the backend servers of an application and functions as an intermediary. Instead of clients communicating directly with the backend, the Reverse Proxy forwards requests and returns the backend's responses to the clients. This conceals the actual backend servers from the clients and serves as a central access point for the application.
:::</p>
<p>:::GlobalParagraph
Reverse Proxies like <code>Nginx</code> or <code>Traefik</code> are an indispensable tool for deploying modern API applications, as they offer numerous advantages that improve both performance and security and scalability. By taking over tasks like SSL/TLS termination, a Reverse Proxy ensures that the API is securely accessible via HTTPS without burdening the backend application itself. Simultaneously, it enables load balancing by efficiently distributing incoming requests across multiple backend instances, which increases the stability and availability of the application. Moreover, a Reverse Proxy can directly serve static content like images or CSS files, further boosting performance by unburdening the API server. Not least, a Reverse Proxy provides additional security functions like Rate Limiting or IP Blocking to protect APIs from malicious attacks. This combination of performance optimization, security, and flexibility makes Nginx, Traefik, and similar tools an indispensable component of every professional API deployment strategy.
:::</p>
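As an illustrative sketch of these responsibilities (domain, certificate paths, ports and the rate-limit values are placeholders, not taken from a real setup), an Nginx server block that terminates TLS, serves static files, rate-limits and proxies to a FastAPI backend could look roughly like this:

```nginx
# Allow ~10 requests/second per client IP (zone name and limits are arbitrary)
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    listen 443 ssl;
    server_name api.example.com;

    # TLS termination, e.g. with certificates from Let's Encrypt
    ssl_certificate     /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;

    # Serve static content directly, unburdening the API server
    location /static/ {
        root /var/www;
    }

    location / {
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```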
<p>::GlobalCallAnExpert{:bg="bg-bs-blue"}
#buttonBlockTitle
Let us run your FastAPI app.
::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
And finally, Security
:::</p>
<p>:::GlobalParagraph
Security is a central aspect of API deployment, and there are proven measures that help protect your application from attacks. HTTPS should always be used to encrypt the communication between client and server. This ensures secure data transmission and prevents unauthorized access and potential data interception.</p>
<p>An SSL certificate can be provided easily and free of charge through services like Let's Encrypt. Furthermore, security headers increase protection against common attacks like Cross-Site Scripting (XSS); in FastAPI, this can be implemented with Starlette middleware. Additionally, using the TrustedHostMiddleware protects against Host header attacks by only allowing defined domains:</p>
<p>:::BlogCode{.mb-4}</p>
<pre><code class="language-python">from fastapi import FastAPI
from starlette.middleware import Middleware
from starlette.middleware.trustedhost import TrustedHostMiddleware

app = FastAPI(
  middleware=[
    Middleware(
      TrustedHostMiddleware,
      allowed_hosts=["yourdomain.com", "*.yourdomain.com"]
    ),
  ]
)
</code></pre>
<p>:::
:::GlobalParagraph
When your API interacts with external clients, a correct configuration of CORS (Cross-Origin Resource Sharing) is crucial to only allow trustworthy origins:</p>
<p>:::BlogCode{.mb-4}</p>
<pre><code class="language-python">from fastapi.middleware.cors import CORSMiddleware

app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://trusteddomain.com"],
    allow_methods=["GET", "POST"],
    allow_headers=["*"],
)
</code></pre>
<p>:::
:::GlobalParagraph
For protecting database access, sensitive information like credentials should never be hardcoded. Instead, environment variables or secrets management tools should be used. To prevent SQL injection, you should use ORM tools like SQLAlchemy or Tortoise ORM that securely parameterize queries.
Here, the 12 Factor App concept comes into play again.
:::
:::GlobalParagraph
Additionally, you can limit the frequency of incoming requests through Rate Limiting and thus prevent both misuse and overloads. Reverse Proxies like Nginx or Traefik are excellently suited to implement this efficiently. With these measures, you can significantly increase the security of your API.
:::</p>
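The parameterization point above can be illustrated with the standard library's sqlite3 module (table and variable names are invented for the example); ORMs like SQLAlchemy perform the same parameter binding under the hood:

```python
import os
import sqlite3

# Credentials/DSNs come from the environment, never hardcoded
# (the variable name DATABASE_PATH is illustrative).
db_path = os.environ.get("DATABASE_PATH", ":memory:")

conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES (?)", ("alice",))

# Parameterized query: user input is passed as a bound parameter,
# not interpolated into the SQL string -> no SQL injection possible.
user_input = "alice'; DROP TABLE users; --"
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # no match, and the table is untouched
```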
<p>:::GlobalTitle{:size="lg" .mb-5}
Example Dockerfile
:::</p>
<p>:::GlobalParagraph
Here is an example of a Dockerfile for a production environment in Kubernetes (the cluster takes over the replication):</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-dockerfile">FROM python:3.12-slim

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# Set the working directory
WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY src/ .

# Expose the port the app runs on
EXPOSE 8000

# Command to run the FastAPI application using the FastAPI CLI -> no workers flag as Kubernetes handles replication
CMD ["fastapi", "run", "main.py", "--host", "0.0.0.0", "--port", "8000"]
</code></pre>
<p>:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Honorable Mentions
:::</p>
<p>:::GlobalBlock{.ul-disk}</p>
<ul>
<li>Caching can also do a lot for an API's performance. At Blueshoe, we primarily use <a href="https://varnish-cache.org/">Varnish</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.</li>
</ul>
<ul>
<li>I/O and resource-intensive tasks should be executed in <a href="https://fastapi.tiangolo.com/tutorial/background-tasks/">FastAPI's BackgroundTasks</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}
:::</li>
</ul>
<p>:::GlobalTitle{:size="lg" .mb-5 .mt-8}
Conclusion
:::</p>
<p>:::GlobalParagraph
FastAPI is not only fast and flexible, but through its integration with modern deployment and security standards, it provides an ideal foundation for production-ready APIs. With the presented best practices and tools, applications can be operated performantly, securely, and scalably. Developers benefit from quick startup times, asynchronous processing, and integrated optimization tools. The use of Reverse Proxies, security measures, and modern deployment strategies make FastAPI the ideal choice for future-oriented API development.
:::</p>]]></content:encoded>
            <category>FastAPI</category>
            <category>API</category>
            <category>Development</category>
            <category>Performance</category>
            <enclosure url="https://blueshoe.io/img/blogs/api-technologies-fastapi.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[FastAPI vs. Robyn: A Detailed Comparison]]></title>
            <link>https://blueshoe.io/blog/fastapi-v-robyn</link>
            <guid>https://blueshoe.io/blog/fastapi-v-robyn</guid>
            <pubDate>Tue, 03 Jun 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>In the world of modern API development, developers are often faced with the question: Which framework is right for my project? FastAPI and Robyn are two of the (more or less) rising stars in Python API development. Both offer modern features and high performance, but differ in several important aspects. In this article, we will explore these differences.
We take a detailed look at the commonalities and differences between these two frameworks.</p>
<p><img src="/img/blogs/robyn-fastapi.svg" alt="FastAPI vs. Robyn Comparison">{.object-cover .max-w-full .mb-5}</p>
<h2>Introduction</h2>
<p>Choosing the right API framework is crucial for the success of a project. While <a href="https://fastapi.tiangolo.com/">FastAPI</a>{target="_blank"} has been established for several years, <a href="https://robyn.tech/">Robyn</a>{target="_blank"} is gaining popularity as a newer player. Both frameworks promise high performance and modern development approaches, but they differ in their implementation and strengths. In this article, we will compare the most important aspects of both frameworks and help you make the right choice for your project.</p>
<h2>Commonalities</h2>
<p>FastAPI and Robyn share some fundamental characteristics that make them modern and efficient API frameworks:</p>
<ul>
<li>Both are modern, asynchronous web frameworks</li>
<li>They offer high performance through asynchronous processing</li>
<li>Both support OpenAPI/Swagger documentation</li>
<li>They use modern Python features like Type Hints</li>
<li>Both are lightweight and modular in design</li>
</ul>
<h2>Differences</h2>
<h3>Runtime</h3>
<p>FastAPI is based on Starlette and Uvicorn as ASGI server, while Robyn has implemented its own runtime in Rust. This leads to interesting differences: FastAPI benefits from the mature Python ecosystem integration and broad community support, while Robyn potentially offers better performance for certain workloads through its Rust implementation. However, the Rust implementation also means Robyn is less flexible when integrating Python libraries and might require more maintenance.</p>
<h3>Endpoint Handling</h3>
<p>While both frameworks pursue similar goals, they differ in their endpoint implementation:</p>
<p>FastAPI uses a decorator-based approach with a strong focus on typing and validation, as well as request destructuring and injection:</p>
<pre><code class="language-python">from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    id: str
    name: str
    price: float

@app.post("/items/")
async def create_item(item: Item):
    # create item in db
    # ...
    return item

@app.put("/items/{item_id}")
async def update_item(item_id: str, item: Item):
    """
    A client would call me like: PUT {base_url}/items/0485fd43-1345-4336-877c-4b4775810
    
    And `item_id` is automatically made available to this endpoint!
    """
    # update item in db
    # ...
    return item

@app.put("/items/{item_id}/move")
async def move_item(item_id: str, item: Item, directory: str | None = None):
    """
    A client would call me like: PUT {base_url}/items/0485fd43-1345-4336-877c-4b4775810/move?directory=new-n-shiny

    And `directory` is automatically made available to this endpoint!
    """
    # move item
    # ...
    return item
</code></pre>
<p>Robyn, on the other hand, offers a more flexible, <strong>in my opinion</strong> more cumbersome approach to path and query parameter injection:</p>
<pre><code class="language-python"># Why import types from three different modules?
from robyn import Robyn, Request
from robyn.types import PathParams
from robyn.robyn import QueryParams

app = Robyn(__file__)

@app.post("/items/")
async def create_item(request: Request):
    data = await request.json()
    return data

@app.put("/items/:item_id")
async def update_item(
    request: Request,
    path_parameters: PathParams,  # NOTE: variable name has to be `path_parameters` for injection to work
):
    item_id: str = str(path_parameters["item_id"])
    item = await request.json()
    # update item in db
    # ...
    return item

@app.put("/items/:item_id/move")
async def move_item(
    request: Request,
    path_parameters: PathParams,
    query_parameters: QueryParams,  # NOTE: variable name has to be `query_parameters` for injection to work
):
    item_id: str = str(path_parameters["item_id"])
    directory: str | None = query_parameters.get("directory", None)
    item = await request.json()
    # move item
    # ...
    return item
</code></pre>
<p>Path and query parameters are also available on the Request model, so in principle the injection can be skipped.</p>
<h3>ORM Support</h3>
<p>Database operation support is an important aspect of framework selection:</p>
<p>FastAPI:</p>
<ul>
<li>Allows easy integration with ORMs like SQLAlchemy – thanks to Pydantic</li>
<li>Supports asynchronous ORMs like Tortoise ORM</li>
<li>Has an active community with numerous examples and best practices for ORM usage</li>
<li>Automatically generates OpenAPI documentation based on database models</li>
<li>Offers <a href="https://sqlmodel.tiangolo.com/">SQLModel</a>{target="_blank"} – its own ORM built on SQLAlchemy and Pydantic</li>
</ul>
<p>Robyn:</p>
<ul>
<li>More flexible in the choice of database solutions (including Rust ORMs!)</li>
<li>Fewer predefined patterns for database operations</li>
</ul>
<h3>Serialization and Validation</h3>
<p>The way data is validated and serialized differs significantly:</p>
<p>FastAPI:</p>
<ul>
<li>Uses <a href="https://docs.pydantic.dev/latest/">Pydantic</a>{target="_blank"} for validation and serialization</li>
<li>Strict typing and validation</li>
<li>Automatic generation of OpenAPI schemas</li>
<li>Extensive validation options (through Pydantic)</li>
</ul>
<p>Robyn:</p>
<ul>
<li>More flexible validation options</li>
<li>Less strict typing</li>
<li>Manual serialization/deserialization (for example via <code>jsonify</code>), even though this is not always strictly required. Robyn's documentation is not entirely clear here:
<img src="/img/blogs/robyn-jsonify-docs.png" alt="Robyn jsonify Docs">
(<code>jsonify</code> is imported but never used; in our test we didn't even have to import <code>jsonify</code> — returning a plain dict was completely sufficient)</li>
</ul>
<h3>FFIs</h3>
<p>At least when using CPython, one can already convert selected parts of Python code to C / C++ for speed (where this hasn't been done already, as in much of the <code>stdlib</code>). Since Robyn ships its own Rust runtime, the framework makes integrating Rust code remarkably easy!</p>
<p>For our small performance comparison, I'll generate Fibonacci numbers up to the 30th term. No tricks like memoization are used, in order to show raw Python performance:</p>
<pre><code class="language-python">def py_fib(n: int):
    if n &#x3C; 2:
        return n
    return py_fib(n - 1) + py_fib(n - 2)
</code></pre>
<p>I've also implemented this "generator" in Rust (crudely, and without recursion):</p>
<pre><code class="language-rust">//rustimport:pyo3

//:
//: [dependencies]
//: num-bigint = "0.4"
//: num-traits = "0.2"

use pyo3::prelude::*;
use num_bigint::BigUint;
use num_traits::{Zero, One};

#[pyfunction]
fn fibonacci(term: u64) -> PyResult&#x3C;String> {
    let (mut a, mut b): (BigUint, BigUint) = (Zero::zero(), One::one());
    for _ in 1..=term {
        let temp = b.clone();
        b = a + b;
        a = temp;
    }

    // after `term` iterations, `a` holds fib(term) — matching `py_fib` above
    Ok(a.to_string())
}
</code></pre>
<p>Effortless, as I only had to worry about the implementation itself. Dependencies can be declared via comments and are automatically resolved by the Robyn CLI.</p>
<p>After writing the Rust code, the following command suffices:</p>
<pre><code class="language-bash">robyn --compile-rust-path "my-robyn-rust-dir"
</code></pre>
<p>And just like magic (via PyO3), the Rust code is compiled and packaged as a native Python extension module!</p>
<p>This can then be imported like a normal Python module (my Rust file is called <code>native_fib.rs</code> and is in the same directory as the Robyn <code>main.py</code>):</p>
<pre><code class="language-python">from native_fib import fibonacci as rs_fibonacci
</code></pre>
<h2>Performance Comparison</h2>
<p>As already mentioned, the generation of Fibonacci numbers is used to compare performance, with the following setup:</p>
<ol>
<li>An endpoint <code>/</code> which returns a simple JSON:
<pre><code class="language-python"># fastapi implementation
@app.get("/")
async def root():
    return {"response": "success"}

# robyn implementation
@app.get("/")
async def root(request: Request):
    return {"response": "success"}
</code></pre>
</li>
<li>A <a href="https://locust.io/"><code>locust</code></a>{target="_blank"} configuration file with a task:
<ol>
<li>Calls <code>/</code> with 10, 1000 and 10,000 users</li>
</ol>
</li>
<li>An endpoint <code>/fibonacci/:size</code> with optional query parameter <code>use_native</code></li>
<li>A <a href="https://locust.io/"><code>locust</code></a>{target="_blank"} configuration file with three tasks:
<ol>
<li>Call <code>/fibonacci/10</code>; with weighting 3</li>
<li>Call <code>/fibonacci/20</code>; with weighting 2</li>
<li>Call <code>/fibonacci/30</code>; with weighting 1</li>
<li>Distributed over 100 users with 1 ramp user per second</li>
</ol>
</li>
<li>A <a href="https://locust.io/"><code>locust</code></a>{target="_blank"} configuration file with three tasks (for the Rust implementation):
<ol>
<li>Call <code>/fibonacci/10?use_native=true</code>; with weighting 3</li>
<li>Call <code>/fibonacci/20?use_native=true</code>; with weighting 2</li>
<li>Call <code>/fibonacci/30?use_native=true</code>; with weighting 1</li>
<li>Distributed over 100 users with 1 ramp user per second</li>
</ol>
</li>
</ol>
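<p>Stripped of framework wiring, the dispatch inside the <code>/fibonacci/:size</code> endpoint boils down to the following sketch (for illustration, <code>rs_fibonacci</code> is stubbed here with an iterative pure-Python stand-in for the compiled Rust function):</p>
<pre><code class="language-python">def py_fib(n: int) -> int:
    # the naive recursive implementation from above
    if n == 0 or n == 1:
        return n
    return py_fib(n - 1) + py_fib(n - 2)


def rs_fibonacci(term: int) -> str:
    # pure-Python, iterative stand-in for the compiled Rust function
    a, b = 0, 1
    for _ in range(term):
        a, b = b, a + b
    return str(a)


def fibonacci_handler(size: int, use_native: bool = False) -> dict:
    # what the endpoint body boils down to
    if use_native:
        return {"result": rs_fibonacci(size)}
    return {"result": str(py_fib(size))}
</code></pre>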
<p>In all runs, <em>1</em> process and <em>1</em> worker were used.</p>
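<p>A locustfile matching the setup above might look as follows; this is a sketch with the endpoint paths and weights as described (the target host is passed on the locust command line), not necessarily the exact file used for the tests:</p>
<pre><code class="language-python">from locust import HttpUser, between, task


class FibonacciUser(HttpUser):
    wait_time = between(0.5, 1.5)

    @task(3)
    def fib_small(self):
        self.client.get("/fibonacci/10")

    @task(2)
    def fib_medium(self):
        self.client.get("/fibonacci/20")

    @task(1)
    def fib_large(self):
        self.client.get("/fibonacci/30")
</code></pre>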
<h3>Test Machine</h3>
<p>Lenovo ThinkPad P14s Gen 2i</p>
<ul>
<li>Processor: 11th Gen Intel® Core™ i7-1165G7</li>
<li>RAM Size: 32GiB</li>
</ul>
<p>Then let's look at the results!</p>
<h3>Simple</h3>
<p>This load test calls the very simple JSON response endpoint and steadily increases the number of users up to a maximum of 10,000.</p>
<p>As can be seen, FastAPI (at least on my laptop) already reaches its limits at 246.4 requests per second. Increasingly, errors occur where locust requests are processed too late or not at all.</p>
<p>A <code>ConnectionResetError</code> is a network error that occurs when the counterparty (in this case the server) abruptly terminates an existing TCP connection, typically by sending an RST packet (Reset). This is not an orderly termination (like a FIN packet) and often signals that something unexpected has happened or the server can no longer process the connection.</p>
<p>Under high load, a server can be overwhelmed. This can have various causes: exhaustion of system resources (such as CPU, memory, file descriptors) or overloading of the event loop responsible for processing asynchronous operations. If the event loop cannot quickly enough accept new connections or process data on existing sockets, timeouts can occur or the server must forcibly close connections to free resources or prevent overload.</p>
<p>In the context of Python and ASGI servers like Uvicorn (used by FastAPI), the way Python handles network I/O plays a role. Although Python with <code>asyncio</code> and ASGI enables asynchronous operations and theoretically can manage many connections simultaneously while waiting on I/O, there are limits. The execution of Python code itself is subject to the Global Interpreter Lock (GIL), which prevents multiple native threads from simultaneously executing Python bytecode on multiple CPU cores. In very high load scenarios, this becomes a bottleneck.</p>
<p>Finally, with endpoints as I/O-light as these simple JSON responses, the sheer number of context switches, the scheduling of coroutines, and the short time each request spends in Python code (even if it's just JSON serialization) can overwhelm the event loop. The server process spends too much time executing Python code (even asynchronous code) to react quickly enough to new or existing socket events.</p>
<p>The <code>ConnectionResetError</code>s in this test show that FastAPI (or the underlying Python/Uvicorn layer) reaches its limits quite early. The event loop cannot process incoming requests fast enough, so the server resets connections instead of serving them in an orderly manner. This indicates that the Python runtime becomes a bottleneck under this specific extreme load. Robyn, by contrast, with its GIL-free Rust runtime, should navigate this kind of bottleneck better when processing network connections.</p>
<p>The Load Test Result for Robyn looks significantly more performant:</p>
<p>What do we see? Not only zero failures (all requests could be processed) but also consistently faster response times across the board!
Exciting, but now on to the load tests that pose a greater challenge.</p>
<h3>Fibonacci</h3>
<p>Here are the results (FastAPI on the left, Robyn on the right):</p>
<p><img src="/img/blog/py-fib-comparison.png" alt="Python Fibonacci comparison">{.object-cover .max-w-full .mb-5}</p>
<p>As we can see, Robyn is a few milliseconds faster in response times. Also, under FastAPI <em>a single</em> request failed:</p>
<p><img src="/img/blog/fastapi-py-fib-fail.png" alt="FastAPI Python Fibonacci Fail">{.object-cover .max-w-full .mb-5}</p>
<p>Interestingly, this occurred between 25 and 27 <strong>Requests per Second (RPS)</strong> and then didn't reappear. This could have been an isolated incident and might be attributable to my not specifically configured API hosting hardware.</p>
<p>But: Robyn also supports native Rust code. How does that look?</p>
<p><img src="/img/blog/py-fib-native-fib-comparison.png" alt="Python vs. native Fibonacci comparison">{.object-cover .max-w-full .mb-5}</p>
<p>The result? Drastic.</p>
<p>Joking aside, as we can see, the response times of the (far from optimised) Rust implementation are almost negligible. Here is the locust summary:</p>
<p><img src="/img/blog/native-fib-request-summary.png" alt="Native Fibonacci Response Time Stats">{.object-cover .max-w-full .mb-5}</p>
<p>Oddly, requesting a larger Fibonacci number seems to take less time on average and at maximum. A possible explanation is that the "handover" from Python to Rust (and back from Rust to Python) accounts for most of the time.</p>
<h3>Summary</h3>
<p>We tested FastAPI and Robyn with a simple JSON endpoint and a more complex Fibonacci endpoint under load.</p>
<ul>
<li>FastAPI reaches about 246 RPS on my laptop, then increasingly shows errors.</li>
<li>Robyn remains stable even at 10,000 users - faster and without errors.</li>
<li>In Fibonacci calculation (Python), Robyn is a few milliseconds faster, while FastAPI shows a disruption at 25-27 RPS.</li>
<li>With Rust-native code in Robyn, we see a drastic improvement in response times - even less time with larger Fibonacci numbers!</li>
</ul>
<p><strong>Conclusion</strong>: Robyn scales better than FastAPI, especially under high load. And Rust impressively shows how much more performance is possible when we eliminate Python bottlenecks.</p>
<h2>Project Structure and Developer Workflow</h2>
<p>How do you organize a project with FastAPI compared to Robyn?</p>
<ul>
<li>
<p><strong>FastAPI Project Structure:</strong></p>
<ul>
<li>Typically modular, often separated by features or domains (e.g., <code>routers/</code>, <code>models/</code>, <code>services/</code>).</li>
<li>Heavily relies on the Python ecosystem for tooling: <code>black</code> for formatting, <code>pytest</code> for testing, <code>alembic</code> for database migrations.</li>
<li>The <code>main.py</code> is often just the entry point that brings together various routers and configurations.</li>
</ul>
</li>
<li>
<p><strong>Robyn Project Structure:</strong></p>
<ul>
<li>Often more monolithic since the framework is less opinionated. Many developers start with a single <code>app.py</code>.</li>
<li>Rust integration requires a clear separation between Python and Rust code (e.g., in a <code>native/</code> directory).</li>
<li>Robyn comes with its own CLI for compiling Rust code and starting the application, which slightly changes the workflow.</li>
</ul>
</li>
<li>
<p><strong>Development Server:</strong></p>
<ul>
<li><strong>FastAPI:</strong> Uses <code>fastapi dev main.py</code> for a fast hot-reload development server.</li>
<li><strong>Robyn:</strong> Offers a built-in hot-reload server with <code>robyn --dev</code> that can also detect and recompile Rust code changes.</li>
</ul>
</li>
</ul>
<h2>Deployment in Practice: From Local to Live</h2>
<ul>
<li>
<p><strong>Containerization (Docker):</strong></p>
<ul>
<li><strong>FastAPI:</strong> The <code>Dockerfile</code> is straightforward. It's based on a Python image and starts the application with the <code>fastapi-cli</code> (which calls uvicorn internally).
<pre><code class="language-dockerfile">FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["fastapi", "run", "main.py", "--host", "0.0.0.0", "--port", "80"]
</code></pre>
</li>
<li><strong>Robyn:</strong> The <code>Dockerfile</code> is more complex when Rust code is involved. It often requires a multi-stage build: one stage to install the Rust toolchain and compile the code, and a second stage to copy the compiled library into a slim Python image.
<pre><code class="language-dockerfile"># Stage 1: Build Rust
FROM rust:latest as builder
WORKDIR /app
COPY native/ .
# Rust compilation step would go here

# Stage 2: Final Image
FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /app/target/release/libnative_fib.so .
# ... Rest of the setup
</code></pre>
</li>
</ul>
</li>
<li>
<p><strong>Production Server:</strong></p>
<ul>
<li><strong>FastAPI:</strong> Runs on any ASGI-compatible server (Uvicorn, Hypercorn, Daphne).</li>
<li><strong>Robyn:</strong> Comes with its own Rust-based server runtime and doesn't require a separate ASGI server.</li>
</ul>
</li>
</ul>
<h2>Error Handling and Debugging</h2>
<ul>
<li>
<p><strong>FastAPI:</strong></p>
<ul>
<li>Offers a robust system for exception handlers. You can define global handlers for <code>HTTPException</code> or custom exceptions.</li>
<li>Pydantic validation errors are extremely detailed and specify exactly which field in the request body is invalid.</li>
<li>Debugging follows standard Python debugging practices.</li>
</ul>
</li>
<li>
<p><strong>Robyn:</strong></p>
<ul>
<li>Also provides exception handlers, though they are less structured than in FastAPI.</li>
<li>Debugging can be more complex when errors occur in the Rust runtime or at the Python-Rust interface, potentially requiring knowledge of both languages.</li>
</ul>
</li>
</ul>
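<p>To sketch FastAPI's exception handler mechanism (the exception class and endpoint semantics here are invented for illustration): a global handler translates a domain exception into a structured HTTP response:</p>
<pre><code class="language-python">from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()


class OutOfStockError(Exception):
    # hypothetical domain exception for illustration
    def __init__(self, item_id: str):
        self.item_id = item_id


@app.exception_handler(OutOfStockError)
async def out_of_stock_handler(request: Request, exc: OutOfStockError):
    # translate the domain exception into a structured 409 response
    return JSONResponse(
        status_code=409,
        content={"detail": f"Item {exc.item_id} is out of stock"},
    )
</code></pre>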
<h2>When which framework?</h2>
<p>The choice between FastAPI and Robyn depends on your project's specific requirements:</p>
<p>FastAPI is ideal for:</p>
<ul>
<li>Projects with complex data classes</li>
<li>Teams who prefer strict typing</li>
<li>Applications with extensive API documentation requirements</li>
<li>Projects with an established community and ecosystem</li>
<li>Teams that benefit from a wealth of available resources</li>
</ul>
<p>Robyn is better suited for:</p>
<ul>
<li>Simpler API projects</li>
<li>Teams wanting more flexibility in implementation</li>
<li>Projects with specific performance requirements that can be "fine-tuned" with native Rust integration and runtime</li>
</ul>
<hr>
<h2>Conclusion</h2>
<p>Both FastAPI and Robyn are modern, high-performance API frameworks with different strengths. FastAPI offers a more comprehensive solution with strong typing and validation, while Robyn provides more flexibility and a minimalistic approach. The final choice should depend on the specific requirements of your project.</p>
<hr>
<h2>FAQ – Frequently Asked Questions about Choosing the Right Framework</h2>
<h3>Which framework offers better performance?</h3>
<p>Robyn shows better results in performance tests, especially for simple endpoints and high load. However, FastAPI offers a more mature ecosystem integration and community support.</p>
<h3>Can I use Rust code in both frameworks?</h3>
<p>Robyn offers native Rust code integration through its Rust runtime, while FastAPI does not directly support this. With FastAPI, one would need to rely on traditional CPython FFIs.</p>
<h3>Which framework is better suited for complex data models?</h3>
<p>FastAPI is better suited for complex data models, as it uses Pydantic for validation and serialization and offers strict typing. Robyn provides more flexibility but fewer predefined structures.</p>
<h3>How do the ORM possibilities differ?</h3>
<p>FastAPI offers native integration with SQLAlchemy, Tortoise ORM, and SQLModel. Robyn has no native ORM integration but allows using Rust ORMs and provides more flexibility in database selection.</p>
<h3>For what type of projects is each framework better suited?</h3>
<p>FastAPI is better for complex projects with extensive API documentation requirements and strict typing. Robyn is ideal for simpler API projects and teams wanting more flexibility and native Rust integration.</p>]]></content:encoded>
            <category>FastAPI</category>
            <category>Performance</category>
            <category>API</category>
            <category>Development</category>
            <enclosure url="https://blueshoe.io/img/blogs/robyn-fastapi.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[FastAPI Documentation with Programming Examples]]></title>
            <link>https://blueshoe.io/blog/fastapi-with-examples</link>
            <guid>https://blueshoe.io/blog/fastapi-with-examples</guid>
            <pubDate>Thu, 14 Nov 2024 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>There is one thing that always drives us when we want to build a good API: the documentation. Especially for publicly consumable APIs, one never knows who will use them and what assumptions the respective developers bring along. To promote a particularly good DX (= Developer Experience), we enrich our OpenAPI documentation with code examples in many languages.</p>
<p><img src="/img/blogs/api-technologies-fastapi.svg" alt="Blueshoe and FastAPI: Documentation with Programming Examples">{.object-cover .max-w-full .mb-5}</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Documentation via OpenAPI
:::</p>
<p>:::GlobalParagraph
First the basics - <a href="https://fastapi.tiangolo.com/">FastAPI</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} is one of the frameworks we use to write APIs. It is based on Python and delivers very good performance. FastAPI is strongly typed, which means that input and output types are largely known in the program (to the extent possible in Python).
:::</p>
<p>:::GlobalParagraph
Through the known types, an API schema can be generated. This schema contains all the necessary information about the available endpoints; OpenAPI is used as the standard format here.
Example of an OpenAPI endpoint:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">openapi: 3.1.0
info:
  title: Redocly Museum API
  description: An imaginary, but delightful Museum API for interacting with museum services and information. Built with love by Redocly.
  version: 1.0.0
  contact:
    email: team@redocly.com
    url: 'https://redocly.com/docs/cli/'
  x-logo:
    url: 'https://redocly.github.io/redoc/museum-logo.png'
    altText: Museum logo
  license:
    name: MIT
    url: 'https://opensource.org/licenses/mit/'
servers:
  - url: 'https://api.fake-museum-example.com/v1'
paths:
  /museum-hours:
    get:
      summary: Get museum hours
      description: Get upcoming museum operating hours
      operationId: getMuseumHours
      x-badges:
        - name: 'Beta'
          position: before
          color: purple
      tags:
        - Operations
      parameters:
        - $ref: '#/components/parameters/StartDate'
        - $ref: '#/components/parameters/PaginationPage'
        - $ref: '#/components/parameters/PaginationLimit'
      responses:
        '200':
          description: Success
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/GetMuseumHoursResponse'
              examples:
                default:
                  summary: Museum opening hours
                  value:
                    - date: 2023-09-11
                      timeOpen: '09:00'
                      timeClose: '18:00'
                    - date: 2023-09-12
                      timeOpen: '09:00'
                      timeClose: '18:00'
                    - date: 2023-09-13
                      timeOpen: '09:00'
                      timeClose: '18:00'
                    - date: 2023-09-17
                      timeOpen: '09:00'
                      timeClose: '18:00'
                closed:
                  summary: The museum is closed
                  value: []
        '400':
          description: Bad request
        '404':
          description: Not found
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
The generated specification contains examples, possible responses, and a description of the endpoint parameters.
:::
:::GlobalParagraph
Programs like Swagger or Redoc use this schema and generate a human-readable view in the <a href="https://redocly.github.io/redoc/">Browser</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}:
:::</p>
<p><img src="/img/blogs/fastapi-redoc1.png" alt="Redoc Example">{.mx-auto .w-1/2}</p>
<p>:::GlobalParagraph
Here, information such as authentication, pagination, and filtering for the endpoint is presented.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Examples of API Documentation
:::</p>
<p>:::GlobalParagraph
A major added value of the OpenAPI standard is that examples can also be generated from the structured data. In Redoc, for example, this looks as follows:
:::</p>
<p><img src="/img/blogs/fastapi-redoc2.png" alt="Redoc API Call">{.mx-auto .w-1/2}</p>
<p>:::GlobalParagraph
The examples can be explicitly specified during development or automatically generated with simple data (such as 0 for integers, "string" for strings).
Such example data for an HTTP POST request can then be generated automatically from our FastAPI, though this only covers the payloads.
:::
:::GlobalParagraph
To support the developers who use our API, we want to embed concrete programming examples.
This can be easily implemented with Redoc via the <a href="https://redocly.com/docs/api-reference-docs/specification-extensions/x-code-samples">x-codeSamples</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} tag. But where do we get the examples without learning every programming language ourselves?
:::</p>
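<p>:::GlobalParagraph
To sketch how explicit examples are specified in FastAPI: Pydantic's Field accepts an examples argument that is carried through into the generated OpenAPI schema (the model below mirrors the museum-hours data and is purely illustrative):
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-python">from pydantic import BaseModel, Field


class MuseumHours(BaseModel):
    # `examples` ends up in the generated OpenAPI schema
    date: str = Field(examples=["2023-09-11"])
    timeOpen: str = Field(examples=["09:00"])
    timeClose: str = Field(examples=["18:00"])


schema = MuseumHours.model_json_schema()
</code></pre>
<p>:::</p>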
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Generation of Code Examples - Packages
::</p>
<p>::GlobalParagraph
On GitHub, we found two packages that seem suitable for generating API code examples: <a href="https://github.com/postmanlabs/postman-code-generators">postman-code-generators</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} and <a href="https://github.com/Kong/httpsnippet">httpsnippet</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}. <em>postman-code-generators</em> originates from <a href="https://www.postman.com/">Postman</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}, a company focused on developer tooling for APIs. It has its own data structure to model HTTP requests.
::</p>
<p>::GlobalParagraph
<em>httpsnippet</em> comes from Kong, a company that primarily develops an API Gateway. It works with the HTTP Archives standard.
Both packages are well-maintained and for pragmatic reasons, we choose <em>postman-code-generators</em>. The data we have in our OpenAPI Schema can be quite easily transferred to Postman's structures.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Converting OpenAPI into Postman Structures
::</p>
<p>::GlobalParagraph
First, the setup. To work with <em>postman-code-generators</em>, the <code>postman-collection</code> SDK is additionally used.
::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-typescript">const codegen = require('postman-code-generators')
const sdk = require('postman-collection')

const baseUrl = 'https://our-api.io'
const openapiJSON = getOpenApiJSON()
const supportedCodegens = codegen.getLanguageList()
</code></pre>
<p>::</p>
<p>::GlobalParagraph
<em>postman-code-generators</em> comes with a list of languages, which we can simply iterate through to generate code examples.
::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-typescript">let language, languageLabel, variant

for (const generator of supportedCodegens) {
  language = generator.key
  languageLabel = generator.label
  for (const variation of generator.variants) {
    variant = variation.key
    generateSamples()
  }
}
</code></pre>
<p>::</p>
<p>::GlobalParagraph
Each language has different variants, which essentially represent different ways of making HTTP requests - e.g. <code>http.client</code> and <code>requests</code> for Python.
Our <code>generateSamples</code> function now simply generates a code example for each endpoint (here only POST endpoints) in the current language:
::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-typescript">function generateSamples() {
  // generate samples for open api endpoint paths (POST endpoints only)
  const paths = openapiJSON.paths
  for (const [path, operation] of Object.entries(paths)) {
    if (!operation.post) continue
    currentPath = path
    const data = generateExamplePayload(operation.post, openapiJSON)
    convertEndpoint(path, 'POST', data, addEntry)
  }
}
</code></pre>
<p>::</p>
<p>::GlobalParagraph
After generating the example payload (<code>generateExamplePayload</code>), the Postman Request object is created with a <code>buildRequest</code> helper:
::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-typescript">function buildRequest(url, method, data) {
  // build postman request
  return new sdk.Request({
    url: `${baseUrl}${url}`,
    method,
    body: buildBody(data),
    header: {
      ...header,
    }
  })
}
</code></pre>
<p>::</p>
<p>::GlobalParagraph
Since we are working with schemas in our openapi.json file, we use another package that helps us generate example payloads for these schemas:
::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-typescript">const OpenAPISampler = require('openapi-sampler')
const schema = spec.components.schemas[name]
// Generate sample data
return OpenAPISampler.sample(
  schema,
  {},
  spec
)
</code></pre>
<p>::</p>
<p>::GlobalParagraph
Finally, we wrap up our loop and the code generation.
::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-typescript">function convertEndpoint(path, method, data, cb) {
  const request = buildRequest(path, method, data)
  codegen.convert(language, variant, request, sampleFormattingOptions, cb)
}
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
As a last step, we add the generated examples to our openapi.json file:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-typescript">function addEntry(error, snippet) {
  if (error) {
    console.log(error, language, variant)
    return
  }
  const xCodeSample = {
    lang: languageLabel,
    label: `${languageLabel} (${variant})`,
    source: snippet
  }
  if (openapiJSON.paths[currentPath].post['x-codeSamples']) {
    openapiJSON.paths[currentPath].post['x-codeSamples'].push(xCodeSample)
  }
  else {
    openapiJSON.paths[currentPath].post['x-codeSamples'] = [xCodeSample]
  }
}
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
And now our openapi.json file contains code examples for all supported languages.
:::</p>
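<p>:::GlobalParagraph
For illustration, an enriched path entry in the resulting openapi.json has roughly the following shape. Note that the endpoint path and the snippet text below are invented for this example, not taken from a real spec.
:::</p>

```typescript
// Illustrative shape of a path entry after addEntry() has run; the
// endpoint '/items' and the snippet contents are made up for this example.
const enrichedPath = {
  '/items': {
    post: {
      'x-codeSamples': [
        {
          lang: 'Python',
          label: 'Python (requests)',
          source: "import requests\nrequests.post('https://api.example.com/items', json={})",
        },
      ],
    },
  },
}
```

<p>:::GlobalParagraph
Redoc picks up the <code>x-codeSamples</code> extension and renders one tab per entry.
:::</p>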
<p><img src="/img/blogs/fastapi-redoc3.png" alt="Redoc all Clients">{.mx-auto .w-1/2}</p>
<p>:::GlobalTitle{:size="lg" .mb-5 .mt-8}
Conclusion
:::</p>
<p>:::GlobalParagraph{.mb-8}
Developing an API and writing its documentation is not an end in itself. It's about the users and developers who will consume our REST API. Generated code examples meet API consumers where they are, making the API even easier to adopt – which should be the goal for public interfaces in particular.
:::</p>]]></content:encoded>
            <category>FastAPI</category>
            <category>API</category>
            <category>Python</category>
            <category>Documentation</category>
            <category>Development</category>
            <enclosure url="https://blueshoe.io/img/blogs/api-technologies-fastapi.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Exploring the Pros and Cons: Function-as-a-Service (FaaS) vs Container Orchestration with Kubernetes]]></title>
            <link>https://blueshoe.io/blog/function-as-a-service-faas-vs-kubernetes</link>
            <guid>https://blueshoe.io/blog/function-as-a-service-faas-vs-kubernetes</guid>
            <pubDate>Thu, 20 Jul 2023 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Explore the crucial differences between Function-as-a-Service (FaaS) and Kubernetes in this detailed analysis. With insights on benefits, drawbacks, use cases, and cost considerations, this post guides tech professionals in choosing the optimal platform for their specific cloud computing needs.</p>
<p><img src="/img/blog/superluminar-aws-kubernetes.jpg" alt="Function-as-a-Service (FaaS) vs Kubernetes">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Function-as-a-Service (FaaS) vs Kubernetes in the world of cloud computing
:::
:::GlobalParagraph
The world of modern computing is a vast and ever-evolving landscape, with an array of platforms and frameworks to choose from. In this blog post, we want to explore and compare two prominent paradigms: Functions as a Service (FaaS)/Serverless and Kubernetes. These platforms have gained significant traction and attention, revolutionizing the way applications are developed, deployed, and managed.
:::
:::GlobalParagraph
As technology advances, organizations seek ways to optimize their infrastructure, enhance scalability, and improve development workflows. FaaS, with its promise of scalability and event-driven execution, has emerged as a compelling option for developers. On the other hand, Kubernetes, the industry standard for container orchestration, offers a robust and flexible platform that empowers organizations to build, deploy, and manage applications at scale.
:::
:::GlobalParagraph
In this comparison, we dive into the fundamental principles and characteristics of FaaS and Kubernetes, aiming to shed light on their unique strengths and weaknesses. By examining their architectural differences, resource management capabilities, developer experiences, operational complexities, and cost considerations, we aim to equip you with the knowledge necessary to make informed decisions when selecting the optimal platform for your specific use cases.
:::
:::GlobalParagraph
Please note that the following content is intended to provide a broad overview and comparison of FaaS/Serverless and Kubernetes without going into exhaustive detail. The focus lies on the unique characteristics and considerations of each platform, allowing you to make informed decisions based on your requirements. Let's begin this exciting journey into the realm of FaaS and Kubernetes!
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Serverless computing with Function as a Service (FaaS)
:::
:::GlobalParagraph
Function as a Service (FaaS) is a relatively new approach to computing that is changing the way businesses build and deploy applications. With FaaS, developers write and deploy small, stateless functions that are triggered by events such as HTTP requests, database changes, and messages from other services. This approach eliminates the need to manage servers and infrastructure, allowing developers to focus on writing code and delivering value to their customers.
:::</p>
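<p>:::GlobalParagraph
To make this concrete: a FaaS function is typically just a small exported handler that reacts to a single event. Here is a minimal sketch in the style of an AWS Lambda HTTP handler – the event shape is deliberately simplified for illustration; a real Lambda event carries many more fields.
:::</p>

```typescript
// Minimal sketch of an event-driven FaaS handler (AWS Lambda style).
// The event type is simplified for illustration only.
interface HttpEvent {
  body: string | null
}

async function handler(event: HttpEvent) {
  // parse the triggering HTTP event and return an HTTP-style response;
  // scaling, routing, and infrastructure are handled by the platform
  const payload = event.body ? JSON.parse(event.body) : {}
  return {
    statusCode: 200,
    body: JSON.stringify({ received: payload }),
  }
}
```

<p>:::GlobalParagraph
Everything outside this function – provisioning, scaling to zero, routing events – is the platform's job, which is exactly the appeal of the model.
:::</p>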
<p>:::GlobalTitle{:size="md" :color="text-bs-blue" :tag="h3" .mb-5}
Advantages of FaaS
:::
:::GlobalParagraph
FaaS offers several advantages over traditional server-based architectures:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li><strong>Reduced infrastructure management overhead:</strong> With serverless computing, developers do not have to manage the underlying infrastructure, which can significantly reduce infrastructure costs.</li>
<li><strong>Faster time-to-market:</strong> Serverless computing simplifies deployment and speeds up writing and shipping code, enabling businesses to bring products and services to market faster.</li>
<li><strong>High scalability:</strong> FaaS automatically scales resources up or down based on demand, which ensures that applications can handle sudden spikes in traffic without any downtime.</li>
<li><strong>Increased resiliency:</strong> Serverless platforms typically offer built-in fault tolerance and can automatically handle failures, which ensures that applications are highly resilient and can handle unexpected errors without any downtime.</li>
<li><strong>Cost flexibility:</strong> With FaaS, you only pay for the compute time your code actually uses, allowing for more precise cost control and potentially lower overall costs than traditional server-based architectures, where you need to pay for the entire server or a fixed amount of computing resources regardless of the actual usage.</li>
<li><strong>Seamless cloud integration:</strong> AWS Lambda integrates seamlessly with other AWS cloud services such as DynamoDB and SQS, making it easy for developers to build complex and scalable applications that leverage multiple services. This allows businesses to create customized solutions that meet their specific needs and easily integrate with existing workflows.
:::</li>
</ul>
<p>:::GlobalTitle{:size="md" :color="text-bs-blue" :tag="h3" .mb-5}
Potential Drawbacks of FaaS
:::
:::GlobalParagraph
Despite the advantages of FaaS, there are also some limitations to consider. Functions are limited in terms of memory and execution time, and cold start latency (the initial start time of your function's runtime) can impact application performance for functions that are invoked infrequently. However, it's possible to pre-provision infrastructure, which can help mitigate cold start latency. It's also important to note that serverless functions often run on a proprietary platform, which means they may not be compatible with other cloud platforms or on-premise infrastructure.
:::
:::GlobalParagraph
Furthermore, functions are stateless by design, meaning they have no memory of previous invocations or interactions. However, this limitation can be overcome by leveraging serverless databases or other stateful services. By using these services, functions can create and access data that persists across invocations, allowing for stateful behavior in an otherwise stateless environment.
:::</p>
<p>:::GlobalTitle{:size="md" :color="text-bs-blue" :tag="h3" .mb-5}
Use cases for FaaS
:::
:::GlobalParagraph
Function as a Service has been used by businesses of all sizes to build innovative and exciting solutions. Here are some examples:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li><strong>Event-driven processing:</strong> FaaS is an ideal solution for event-driven processing, where you only need to execute code in response to specific events. This can include processing data from sensors, responding to user actions in a web application, or automating business processes.</li>
<li><strong>Serverless APIs:</strong> FaaS is a great option for building serverless APIs, where you only pay for the compute resources you use. This can be useful for creating small, focused microservices that can be easily deployed and scaled without the need for complex infrastructure management.</li>
<li><strong>Lightweight applications:</strong> FaaS is a great option for lightweight applications, where you need to quickly deploy and scale applications without the need for heavy infrastructure management. This can be especially useful for small startups or individual developers who need to focus on building their applications, rather than managing infrastructure.
:::
:::GlobalParagraph
AWS Lambda is one of the most popular FaaS solutions available today. If you want to unlock the full potential of serverless computing, superluminar are the experts you need on your side. As an AWS Advanced Consulting Partner, they specialize in creating cutting-edge, cost-effective solutions, particularly with serverless computing including AWS Lambda at their core.
:::</li>
</ul>
<p>:::GlobalTitle{:size="lg" .mb-5}
Understanding Kubernetes
:::
:::GlobalParagraph
<a href="/blog/local-kubernetes-development/">Kubernetes</a>{.bs-link-blue} is an open-source container orchestration platform that has revolutionized the way applications are managed and deployed. It provides a comprehensive solution for automating the deployment, scaling, and management of containerized applications. With its robust set of features and powerful components, Kubernetes offers a scalable and flexible platform for modern application development.
:::</p>
<p>:::GlobalTitle{:size="md" :color="text-bs-blue" :tag="h3" .mb-5}
Advantages of using Kubernetes
:::
:::GlobalParagraph
One of the key advantages of Kubernetes is its ability to automate and simplify application management. Kubernetes enables developers to define and declare their application's desired state, and the platform takes care of orchestrating the deployment and scaling of containers accordingly. This automation reduces the complexity and manual effort required to manage large-scale containerized environments.
:::
:::GlobalParagraph
Kubernetes also provides built-in support for autoscaling, load balancing, and service discovery, ensuring that traffic is efficiently distributed across application instances. It offers advanced networking capabilities, allowing containers to communicate with each other seamlessly. This facilitates the development of distributed and microservices-based architectures.
:::
:::GlobalParagraph
Another significant advantage of Kubernetes is its fault tolerance and self-healing capabilities. Kubernetes continuously monitors the health of application instances and automatically restarts or replaces any failed containers. This ensures high availability and resilience, minimizing downtime and enhancing the overall reliability of the application.
:::</p>
<p>:::GlobalTitle{:size="md" :color="text-bs-blue" :tag="h3" .mb-5}
Potential Drawbacks of Kubernetes
:::
:::GlobalParagraph
While Kubernetes has gained popularity as a robust container orchestration system, it does have certain drawbacks, including higher management costs and complexities. Implementing and maintaining Kubernetes clusters necessitates specialized skills, which can lead to increased operational expenses. Moreover, the learning curve associated with Kubernetes is steep, requiring extensive training and ongoing education for teams. <a href="/blog/service-availability-using-kubernetes-probes/">Continuous monitoring</a>{.bs-link-blue}, scaling, and troubleshooting add to the management burden. Additionally, deploying and configuring applications on Kubernetes can be intricate, particularly for smaller projects or teams without dedicated resources. These factors collectively contribute to the potential downsides of Kubernetes.
:::</p>
<p>:::GlobalTitle{:size="md" :color="text-bs-blue" :tag="h3" .mb-5}
Use Cases for Kubernetes
:::
:::GlobalParagraph
Kubernetes is particularly well-suited for managing complex and scalable applications. Its flexible architecture and resource management capabilities make it an ideal choice for <a href="/our-services/microservice-architecture-consulting/">microservices architectures</a>{.bs-link-blue}, where applications are composed of multiple small, independent services. Kubernetes enables efficient deployment, scaling, and monitoring (using additional tools like Prometheus) of these services, allowing organizations to build highly resilient and adaptable systems.
:::
:::GlobalParagraph
Data-intensive applications, such as those involving big data processing or machine learning, can greatly benefit from Kubernetes. It allows organizations to leverage the power of distributed computing by efficiently distributing workloads across a cluster of machines. Kubernetes provides the necessary tools and abstractions to manage large-scale data processing pipelines, making it an attractive option for data-driven applications.
:::
:::GlobalParagraph
In addition, it's also possible to run a sleek FaaS architecture on Kubernetes itself, using software like OpenFaaS.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Comparing FaaS and Kubernetes
:::
:::GlobalTitle{:size="md" :color="text-bs-blue" :tag="h3" .mb-5}
Architecture and scalability
:::
:::GlobalParagraph
When comparing Functions as a Service (FaaS) and Kubernetes, one of the key aspects to consider is their architectural differences. FaaS platforms, such as AWS Lambda or Azure Functions, are designed for event-driven, serverless computing. They execute individual functions in response to events, allowing for high scalability and resource efficiency. On the other hand, Kubernetes follows a container-based architecture, where applications are packaged into smaller, isolated units (containers) that can be scaled independently. Kubernetes offers robust scalability capabilities, allowing organizations to scale their applications both horizontally and vertically based on demand.
:::</p>
<p>:::GlobalTitle{:size="md" :color="text-bs-blue" :tag="h3" .mb-5}
Development experience and deployment workflow
:::
:::GlobalParagraph
The development experience and deployment workflow also differ between FaaS and Kubernetes. With FaaS, developers focus primarily on writing and deploying individual functions. FaaS platforms handle the underlying infrastructure and automatically scale functions based on incoming events. This simplified development model reduces the operational burden on developers. In contrast, Kubernetes requires developers to package their applications into containers and define the desired state of the system using YAML or configuration files. Developers have more control over the infrastructure and can deploy complex, multi-container applications. However, it also involves more upfront configuration and management.
:::</p>
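<p>:::GlobalParagraph
As an illustration of that declarative model, a minimal Kubernetes Deployment manifest describes only the desired state – which image to run and how many replicas – and leaves the "how" to the cluster. The names and image below are placeholders, not from a real project.
:::</p>

```yaml
# Minimal Deployment sketch: declares the desired state (3 replicas of
# one container image); Kubernetes reconciles the cluster toward it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                  # placeholder name
spec:
  replicas: 3                     # desired number of identical pods
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: registry.example.com/demo-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

<p>:::GlobalParagraph
Applying this manifest with <code>kubectl apply</code> hands the reconciliation work over to the cluster – but as described above, writing and maintaining such configuration is exactly the upfront effort FaaS spares you.
:::</p>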
<p>:::GlobalTitle{:size="md" :color="text-bs-blue" :tag="h3" .mb-5}
Ops complexity and maintenance
:::
:::GlobalParagraph
Operational complexity and maintenance considerations differ between FaaS and Kubernetes. FaaS platforms abstract away much of the underlying infrastructure, allowing developers to focus solely on the code. This simplifies operations as the platform handles scalability, fault tolerance, and infrastructure maintenance. In contrast, Kubernetes requires organizations to manage and maintain the entire container orchestration infrastructure, including managing the cluster, monitoring, scaling, and ensuring high availability. While Kubernetes offers more control, it also requires more operational expertise and effort.
:::</p>
<p>:::GlobalTitle{:size="md" :color="text-bs-blue" :tag="h3" .mb-5}
Cost Considerations
:::
:::GlobalParagraph
Cost considerations play a significant role when comparing FaaS and Kubernetes. FaaS platforms operate on a pay-per-use model, where organizations are charged based on the number of function invocations and their resource consumption. This can be cost-effective for applications with sporadic or unpredictable traffic patterns. Kubernetes, on the other hand, requires organizations to provision and manage their own infrastructure, which may involve upfront costs for hardware or cloud resources. Kubernetes applications need to maintain continuous operation to ensure service availability, regardless of whether they receive incoming requests or not. However, once set up, Kubernetes offers more cost predictability and control for long-running or continuously active applications.
:::</p>
<p>:::GlobalTitle{:size="lg" :tag="h3" .mb-5}
Reflections on Function-as-a-Service (FaaS) and Kubernetes
:::
:::GlobalParagraph
In conclusion, both Functions as a Service (FaaS)/Serverless and Kubernetes offer unique strengths and capabilities for modern application development and deployment. Understanding the nuances and trade-offs of each platform is crucial in making informed decisions for your specific use cases.
:::
:::GlobalParagraph
FaaS platforms excel in their event-driven, serverless architecture, allowing developers to focus solely on writing functions without worrying about infrastructure management. They offer seamless scalability and resource efficiency, making them an attractive choice for applications with sporadic or unpredictable traffic patterns.
:::
:::GlobalParagraph
On the other hand, Kubernetes provides a robust container orchestration platform, empowering organizations to build, deploy, and manage complex applications at scale. It offers flexibility, control, and advanced capabilities for managing distributed systems, making it suitable for microservices architectures and data-intensive applications.
:::
:::GlobalParagraph
Ultimately, there is no one-size-fits-all solution. The choice between FaaS and Kubernetes depends on the nature of your application, scalability needs, development preferences, and budget constraints. In some cases, a hybrid approach or combining both platforms might be the best strategy.
:::
:::GlobalParagraph
As technology continues to evolve, new advancements and platforms will emerge, further expanding the options available to developers and organizations. Keeping abreast of industry trends and continuously evaluating the suitability of platforms will ensure that you stay at the forefront of innovation and deliver optimal solutions for your applications.
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <enclosure url="https://blueshoe.io/img/blog/superluminar-aws-kubernetes.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Gefyra Roadmap 2025]]></title>
            <link>https://blueshoe.io/blog/gefyra-roadmap-2025</link>
            <guid>https://blueshoe.io/blog/gefyra-roadmap-2025</guid>
            <pubDate>Thu, 27 Mar 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Gefyra has big plans for 2025! From improved developer tools to new integrations and better performance – the roadmap is packed with exciting updates. In this blog post, we’ll walk you through what’s coming and how Gefyra aims to keep revolutionizing development and debugging in Kubernetes. Stick around for a glimpse into the next chapter!</p>
<p><img src="/img/blog/gefyra-roadmap-2025.svg" alt="Gefyra Roadmap 2025">{.object-cover .max-w-full .mb-5}</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Gefyra for Kubernetes development: What’s coming in 2025
::
::GlobalParagraph
You’re developing in Kubernetes and tired of slow testing and endless deploy loops? Then you’re going to love Gefyra – and 2025 even more. The upcoming features will seriously level up your developer experience. Here's what the <strong>Gefyra roadmap</strong> has in store: performance boosts, a new config file, CI integration, and the game-changer – <strong>User Bridges</strong>.
::</p>
<p>:::GlobalButton{:url="https://gefyra.dev/" :label="Discover how Gefyra changes your dev setup" :color="blue" :target="_blank" .mb-6}
:::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
What is Gefyra?
::
::GlobalParagraph
If you're new to <a href="https://gefyra.dev/">Gefyra</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}: it's an open-source tool that runs your local code directly inside a Kubernetes cluster – without any build or push cycles. This saves you tons of time and nerves when developing and debugging.
::</p>
<p>::GlobalParagraph
Since version 2.0 was released at the end of 2023, a lot has happened. So it's time for a clear <strong>roadmap for 2025</strong>, with everything that's coming.
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Q1 2025: What’s already shipped
::
::GlobalParagraph
With the first quarter of 2025 almost over, here's a quick recap of what’s already landed:
::
::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li>New CLI parameter <code>--registry</code>: Allows you to define a custom Docker registry to pull Gefyra images from – especially useful in air-gapped environments.</li>
<li>New CLI parameter <code>--mtu</code>: Lets you change the default WireGuard MTU value (1340). This is handy when you're already connected to your Kubernetes cluster via VPN and want to set up a Gefyra connection.
::</li>
</ul>
<p><img src="/img/blog/Demonstration-neue-Paramenter-Gefyra.gif" alt="New Gefyra parameters">{.mx-auto .max-w-full}</p>
<p>::GlobalParagraph
Here’s a quick demo of the new parameters and how they affect the client file.
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
What else is coming in 2025
::</p>
<p>::GlobalParagraph
There’s even more planned for the rest of 2025!
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
1. Copy &#x26; Copy Sync
::
::GlobalParagraph
Let’s start with the <strong>Copy &#x26; Copy Sync Feature</strong>: It will allow you to sync local files with files in the cluster – and vice versa. As you can imagine, this is especially useful for config files introduced by a new feature that haven’t been deployed to the cluster yet.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
2. UDP-over-TCP-Support
::
::GlobalParagraph
Since April 2024 (and even a bit earlier), there’s been a <a href="https://github.com/gefyrahq/gefyra/issues/586">GitHub issue</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} about Gefyra connections only being possible over UDP. We’re finally tackling that! Soon you’ll be able to use <a href="https://github.com/mullvad/udp-over-tcp">udp-over-tcp</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} to run Gefyra in Kubernetes clusters that only support TCP load balancing.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
3. Gefyra configuration file
::
::GlobalParagraph
Another exciting update is the Gefyra configuration file. It’s going to play a key role in 2025 and lay the groundwork for many upcoming features. The idea is to make Gefyra setups simpler and more reproducible. For example, you’ll be able to configure <code>gefyra run</code> to start two containers locally – and automatically create a bridge for one of them.
::
::GlobalParagraph
Say goodbye to digging through your Bash history to find the right <code>run</code> or <code>bridge</code> command.
::</p>
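<p>::GlobalParagraph
To give an idea of where this is heading, a configuration file could look something like the following. This is purely speculative: the actual schema hasn’t been published yet, and every key and value below is our own invention for illustration.
::</p>

```yaml
# Purely hypothetical gefyra.yaml sketch -- the real schema is not yet
# released; all keys and values here are invented for illustration.
containers:
  - name: backend
    image: backend:dev
    bridge:
      target: deployment/backend   # automatically bridge this container
  - name: worker
    image: worker:dev              # run locally without a bridge
```

<p>::GlobalParagraph
Whatever the final shape, the point stands: a declarative file replaces long, hand-typed <code>run</code> and <code>bridge</code> invocations.
::</p>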
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
4. CI integration
::
::GlobalParagraph
Once the config file feature is available, Gefyra will also be adapted for CI workflows. The goal is to support short-lived sessions and reproducibility – so your CI pipelines can reflect real use cases more reliably.
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
The highlight: User Bridges
::
::GlobalParagraph
And now to the biggest planned feature of 2025: <strong>User Bridges</strong>!
::
::GlobalParagraph
So what exactly are User Bridges? As you may know, Gefyra lets you "bridge" a local container and all its network requests into the target cluster. This allows you to test real flows and interactions with other microservices, databases, or task runners – without needing to deploy the code.
::
::GlobalParagraph
The big limitation of global bridges so far: only one person could bridge a given workload. That made simultaneous feature development on the same service a no-go.
::
::GlobalParagraph
This is where User Bridges come in. The concept is to create a copy of the target workload in the cluster.
::</p>
<p><img src="/img/blog/Gefyra-User-Bridges.png" alt="Gefyra User Bridges">{.mx-auto .max-w-full}</p>
<p>::GlobalParagraph
Gefyra’s Carrier will continue to hook into the original workload. When a request matches certain rules (more on that in a second), it’s routed to the bridged local container. Otherwise, it’s passed on to the copy and processed by the original code.
::</p>
<p>::GlobalParagraph
To make all this work, a few things need to happen:
::</p>
<p>::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li>Carrier will get a version II, built with <a href="https://github.com/cloudflare/pingora">Pingora</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}, Cloudflare’s Rust-based networking library. This should give us <strong>faster startup times, improved reliability, and lower resource usage</strong> compared to the current nginx-based solution.</li>
<li>We’ll introduce <strong>“bridge mount objects”</strong> – Kubernetes CRDs that prepare User Bridges for action, allowing <strong>even faster bridging</strong>. User Bridges will become the new default.</li>
<li><strong>Header &#x26; URL matching</strong> will be introduced. This is a key enabler for User Bridges. You’ll be able to define rules for how requests are routed. Fancy something like a header with regex path matching on the 5th subroute? Totally doable!
::</li>
</ul>
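<p>::GlobalParagraph
Conceptually, the header and URL matching could work like the sketch below. Note that the rule shape here is our own guess for illustration – it is not Gefyra’s actual API, which hasn’t been published yet.
::</p>

```typescript
// Hypothetical sketch of header + path-regex matching for User Bridges;
// the Rule shape is an assumption, not Gefyra's actual API.
interface Rule {
  header?: { name: string; value: string }
  pathPattern?: RegExp
}

interface IncomingRequest {
  headers: Record<string, string>
  path: string
}

// Returns true if the request should be routed to the bridged (local)
// workload; otherwise it goes to the copy running the original code.
function matchesBridge(req: IncomingRequest, rule: Rule): boolean {
  if (rule.header && req.headers[rule.header.name] !== rule.header.value) {
    return false
  }
  if (rule.pathPattern && !rule.pathPattern.test(req.path)) {
    return false
  }
  return true
}
```

<p>::GlobalParagraph
With per-user rules like these, several developers can bridge the same workload at once – each only sees the traffic their rule matches.
::</p>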
<p>::GlobalTitle{:size="lg" .mb-5}
Gefyra 2025: A year full of developer optimizations
::
::GlobalParagraph
The 2025 roadmap clearly shows that Gefyra is evolving to make <strong>Kubernetes development more flexible and efficient</strong>. With features like Copy Sync, UDP-over-TCP support, and a new config file, the developer experience is about to get a serious upgrade. The real game-changer? <strong>User Bridges</strong> – enabling <strong>parallel feature development on the same service</strong> without deployment conflicts.
::
::GlobalParagraph
Exciting months ahead – and we can’t wait to roll out these features together with the community. Got feedback or ideas? We’d love to hear from you – Gefyra thrives on collaboration!
::</p>
<p>:::GlobalTitle{:size="lg" .mb-5 .mt-8}
Frequently Asked Questions
:::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
1. What’s the benefit of the new User Bridges in Gefyra?
::
::GlobalParagraph
User Bridges let multiple developers work on the same Kubernetes service at the same time – no deploys needed, no conflicts. Perfect for team collaboration in microservice setups.
::
::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
2. How does the Gefyra configuration file help in daily development?
::
::GlobalParagraph
The new gefyra.yaml file allows you to define your setup once and reuse it easily – great for teams, CI/CD pipelines, and structured workflows.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
3. Will Gefyra be usable in CI/CD pipelines in 2025?
::
::GlobalParagraph
Yes! With short-lived sessions, automatic bridging, and reproducible environments, Gefyra is becoming CI-ready – ideal for testing real use cases in your pipelines.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
4. Why is Copy &#x26; Sync an important feature for Kubernetes development?
::
::GlobalParagraph
Copy &#x26; Sync lets you automatically sync files between your local system and the cluster – for example, config files that aren’t yet deployed. That saves time and avoids misconfigurations.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
5. What are the benefits of UDP-over-TCP support?
::
::GlobalParagraph
It enables Gefyra connections even in clusters that only allow TCP traffic – such as behind restrictive load balancers. More compatibility, fewer limitations.
::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Gefyra</category>
            <category>Development</category>
            <category>Operations</category>
            <category>Team Blueshoe</category>
            <enclosure url="https://blueshoe.io/img/blog/gefyra-roadmap-2025.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Hands on: An Introduction to Flutter.]]></title>
            <link>https://blueshoe.io/blog/hands-on-flutter</link>
            <guid>https://blueshoe.io/blog/hands-on-flutter</guid>
            <pubDate>Mon, 09 Sep 2024 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Today we take you on an exciting journey into the world of mobile app development with Flutter. We present "blue_todo", a small but elegant Todo app that shows how powerful and versatile Flutter can be. In this article, you'll not only get an insight into the basics of Flutter but also a small Deep Dive into Best Practices. Let's take a look together at the technology and design behind "blue_todo"!</p>
<p><img src="/img/blog/flutter.svg" alt="Flutter at Blueshoe: Hands on.">{.object-cover .max-w-full .mb-5}</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Introduction
::
::GlobalParagraph
Have you ever had the urge to write code and carry it everywhere? If yes, then you've probably already tried frameworks like React Native and Flutter. And if not, then this article is exactly right for you!
::
::GlobalParagraph
In this article, we will explore the rapidly developing world of multi-platform frameworks (a space evolving almost as fast as the JS framework landscape). Today's showcase: Flutter.
::
::GlobalParagraph
I'll begin with a brief summary of Flutter's history and advantages. Then I'll transition to a small app example that runs natively on Linux and end the article with a small note on why you should come to us for your next app project.
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
History and Advantages
::
::GlobalParagraph
After initial explorations starting in 2015, Flutter was released in 2017 with support for Android and iOS. In 2019, the Flutter team began supporting web and desktop platforms. Since then, the Flutter framework itself and its ecosystem have grown into a truly unified experience across all platforms. Some of the most recent improvements include the new Impeller rendering engine and WebAssembly support for the web.
::
::GlobalParagraph{.mb-5}
Flutter is a novelty in the world of UI frameworks! It delivers applications with their own rendering engine that outputs pixel data directly on the screen. This stands in contrast to many other frameworks that rely on the target platform to provide a rendering engine. Native Android apps are dependent on the Android SDK on devices, while React Native dynamically uses the integrated UI stack of the target platform. Flutter's control over the rendering pipeline is a decisive factor in supporting multiple platforms. It allows developers to use identical UI code for all target platforms, making the creation of cross-platform applications easier than ever!
::</p>
<p><img src="/img/blog/Google-flutter-logo.svg" alt="Flutter logo">{.mx-auto .h-48 .max-w-full}</p>
<p>::GlobalParagraph{.mt-5}
Moreover, the framework offers JIT compilation in debug mode that applies code changes within seconds (hot reload), providing an excellent developer experience. This enables faster prototyping and easier testing of functionality.
::
::GlobalParagraph
Another advantage is that Flutter is built on the Dart language, which offers a type-safe environment. It uses a mix of static type checking and type inference, with type annotations being optional. The Dart compiler also enforces sound null safety, surfacing possible null errors directly in the IDE.
::
::GlobalParagraph
And last but not least: Flutter has a great community with a variety of plugins that can be easily used via the built-in package manager pub.
::</p>
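<p>:::GlobalParagraph
To illustrate the type inference and null safety mentioned above, here is a minimal standalone Dart sketch (not part of the Todo app):
:::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-dart">void main() {
  // Type inference: `name` is inferred as String.
  var name = 'Blueshoe';

  // Non-nullable by default; the following line would not compile:
  // String title = null;

  // Nullable types carry a `?` and must be handled before use.
  String? subtitle;
  print(subtitle?.length ?? name.length); // falls back to name.length
}
</code></pre>
<p>::</p>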
<p>:::GlobalTitle{:size="lg" .mb-5}
Flutter Hands-On
:::</p>
<p>:::GlobalParagraph
For this blog article, I want to build a small Todo app (what else?) that runs natively under Linux. Flutter has excellent integration with Firebase (Google's platform for easy app development) or other similar Cloud Infrastructure as a Service (CIaaS) products like Supabase. But for this example, I want the app to use a local database. Specifically one of my favorite NoSQL databases: ObjectBox. It supports Dart / Flutter and is built for speed.
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Flutter Setup
:::</p>
<p>:::GlobalParagraph
Before you can start developing your apps, the Flutter SDK must be installed. This varies from platform to platform, but a good starting point is the <a href="https://docs.flutter.dev/get-started/install">SDK Installation Page</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} of the Flutter project. If you're as lazy as I am, I recommend version-fox. This is an SDK version manager written in GoLang. After installing and configuring <a href="https://vfox.lhan.me/">version-fox</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}, you can execute the following commands to add the Flutter plugin and install the SDK:
:::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-bash">vfox add flutter
vfox search flutter # opens a selection of all available Flutter versions
vfox use -g flutter@{use your version here}

</code></pre>
<p>::</p>
<p>:::GlobalParagraph
After executing these commands, you can get started! To verify if everything worked, you can execute the following command:
:::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-bash">flutter doctor -v

</code></pre>
<p>::</p>
<p>:::GlobalParagraph
This command will provide some information about your current Flutter installation. Depending on your platform, you might need to install some development packages. Please consult the SDK installation site to find the right packages.
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Project Start
:::</p>
<p>:::GlobalParagraph
The entire code created for this project can be found in this <a href="https://github.com/Blueshow/blue_todo">GitHub repository</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.
:::</p>
<p>:::GlobalParagraph
Flutter has (like <em>Django</em>) management commands to quickly scaffold projects and other templates, such as plugins. The following command can be used to create a new project with the base template:
:::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-bash">flutter create --org de.blueshow blue_todo

</code></pre>
<p>::</p>
<p>:::GlobalParagraph
This command creates a new project with the following folder structure in blue_todo:
:::</p>
<p><img src="/img/blog/flutter1.png" alt="Folder structure created by Flutter">{.mx-auto .object-cover .max-w-full}</p>
<p>:::GlobalParagraph
As you can see, Flutter automatically created directories for all supported platforms. We will only use the <em>lib</em> and <em>linux</em> directories. <em>lib</em> contains the platform-independent Dart code, while <em>linux</em> contains the platform-specific Linux code. Flutter enables communication with native functions and libraries through so-called "platform channels". These are a powerful tool for tapping into native functionality and performance. An explanation of them would admittedly exceed the scope of this article. But maybe a new article will come soon?
:::</p>
<p>:::GlobalParagraph
Next, we'll install a few dependencies! You can search for these on <a href="https://pub.dev">pub.dev</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.
:::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-bash">flutter pub add objectbox objectbox_flutter_libs:any flutter_bloc gap intl
flutter pub add --dev build_runner objectbox_generator:any
</code></pre>
<p>::</p>
<p>:::GlobalParagraph
This adds the required <code>ObjectBox</code> dependencies and <code>build_runner</code> to generate the ObjectBox code. Additionally, <code>flutter_bloc</code> is installed, a <em>Business Logic Component (BLoC)</em> implementation for Flutter. BLoC enables easier and better state management (we'll talk later about what state management exactly is).
The <code>gap</code> and <code>intl</code> dependencies are also installed: gap is a very helpful widget for adding space between widgets in a column or row, and <code>intl</code> enables date formatting.
:::</p>
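<p>:::GlobalParagraph
The Cubit and UI code in the following sections operate on a <em>Todo</em> entity from the <em>models</em> directory. As a point of reference, a minimal ObjectBox model might look like this (a sketch only; the actual model lives in the repository, and the <code>build_runner</code> code-generation step is omitted here):
:::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-dart">// models/todo.dart - a minimal sketch; the actual model lives in the repository
import 'package:objectbox/objectbox.dart';

@Entity()
class Todo {
  // ObjectBox assigns the id on insert; 0 means "not yet persisted".
  @Id()
  int id = 0;

  String? title;
  String? description;

  // Stored by ObjectBox as a millisecond timestamp.
  @Property(type: PropertyType.date)
  DateTime? dueDate;
}
</code></pre>
<p>::</p>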
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .pt-10 .mb-5}
Project Implementation
:::</p>
<p>:::GlobalParagraph
Now let's dive into the code! The standard file <code>main.dart</code> contains the following implementation of a Counter-App:
:::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-dart">import 'package:flutter/material.dart';

void main() {
 runApp(const MyApp());
}

class MyApp extends StatelessWidget {
 const MyApp({super.key});

 // This widget is the root of your application.
 @override
 Widget build(BuildContext context) {
   return MaterialApp(
     title: 'Flutter Demo',
     theme: ThemeData(
       colorScheme: ColorScheme.fromSeed(seedColor: Colors.blue),
       useMaterial3: true,
     ),
     home: const MyHomePage(title: 'Flutter Demo Home Page'),
   );
 }
}

class MyHomePage extends StatefulWidget {
 const MyHomePage({super.key, required this.title});

 final String title;

 @override
 State&#x3C;MyHomePage> createState() => _MyHomePageState();
}

class _MyHomePageState extends State&#x3C;MyHomePage> {
 int _counter = 0;

 void _incrementCounter() {
   setState(() {
     _counter++;
   });
 }

 @override
 Widget build(BuildContext context) {
   return Scaffold(
     appBar: AppBar(
       backgroundColor: Theme.of(context).colorScheme.inversePrimary,
       title: Text(widget.title),
     ),
     body: Center(
        child: Column(
         mainAxisAlignment: MainAxisAlignment.center,
         children: &#x3C;Widget>[
           const Text(
             'You have pushed the button this many times:',
           ),
           Text(
             '$_counter',
             style: Theme.of(context).textTheme.headlineMedium,
           ),
         ],
       ),
     ),
     floatingActionButton: FloatingActionButton(
       onPressed: _incrementCounter,
       tooltip: 'Increment',
       child: const Icon(Icons.add),
     ),
   );
 }
}

</code></pre>
<p>::</p>
<p>:::GlobalParagraph
This code covers all the fundamental principles of how to create a Flutter application. Flutter is based on widgets. Widgets are the building blocks of the user interface, and everything else (your business logic) is controlled via state. State makes your application reactive and interactive. In the Flutter example, we have a FloatingActionButton (FAB) (a widget from the Material Design library). When pressed, it increments a counter variable. The framework rebuilds the widget whenever <em>setState</em> is called and shows the incremented counter.
:::</p>
<p>:::GlobalParagraph
Then let's run <code>flutter run -d linux</code> and see what the example looks like:
:::</p>
<p><img src="/img/blog/flutter2.png" alt="An App in Flutter">{.mx-auto .object-cover .max-w-full}</p>
<p>:::GlobalParagraph
As seen in the screenshot, I pressed the FAB 5 times and the user interface was updated to show this. Flutter also shows a nice debug banner to indicate that I started the application in debug mode.
:::</p>
<p>:::GlobalParagraph
Let's now start implementing our Todo app! First, I will create a new directory called <em>business_logic</em>. This directory will host our Flutter BLoC code. Two more directories will follow, named <em>models</em> and <em>components</em>. This is roughly what the structure would look like in a larger app, too.
:::</p>
<p>:::GlobalParagraph
Directory structures naturally differ from project to project, but for this small single-page application this is sufficient.
With the directories in place, we add the business logic code:
We will use a so-called Cubit in this example. You can think of a Cubit as a less complex BLoC. First, we must define the states of the Cubit:
:::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-dart">// todo_state.dart
part of 'todo_cubit.bloc.dart';

sealed class TodoState extends Equatable {
  const TodoState();

  @override
  List&#x3C;Object> get props => [];
}

final class TodoInitial extends TodoState {}

final class TodoLoading extends TodoState {}

final class TodoDone extends TodoState {
  final List&#x3C;Todo> todos;

  const TodoDone(this.todos);

  @override
  List&#x3C;Object> get props => [
        todos,
      ];
}

// in case anything goes wrong
final class TodoFailed extends TodoState {}

</code></pre>
<p>::</p>
<p>:::GlobalParagraph
States are classes a Cubit uses to represent its current condition. State changes in the Cubit are reflected in the user interface, where you should display different things for different states.
:::</p>
<p>:::GlobalParagraph
After that, we can add our Cubit:
:::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-dart">// todo_cubit.bloc.dart
import 'package:blue_todo/models/todo.dart';
import 'package:blue_todo/objectbox.g.dart';
import 'package:equatable/equatable.dart';
import 'package:flutter_bloc/flutter_bloc.dart';

part 'todo_state.dart';

class TodoCubit extends Cubit&#x3C;TodoState> {
  final Box todoBox;

  TodoCubit({required this.todoBox}) : super(TodoInitial()) {
    loadTodos();
  }

  loadTodos() async {
    emit(TodoLoading());

    try {
      final todos = List&#x3C;Todo>.from(await todoBox.getAllAsync());
      emit(TodoDone(todos));
    } catch (ex) {
      emit(TodoFailed());
    }
  }

  addTodo(Todo todo) async {
    final state = this.state;

    if (state is TodoDone) {
      final id = await todoBox.putAsync(todo);
      todo.id = id;
      emit(
        TodoDone(
          [
            todo,
            ...state.todos,
          ],
        ),
      );
    }
  }

  deleteTodo(Todo toDelete) async {
    final state = this.state;

    if (state is TodoDone) {
      await todoBox.removeAsync(toDelete.id);
      emit(
        TodoDone(
          state.todos.where((t) => t.id != toDelete.id).toList(),
        ),
      );
    }
  }
}

</code></pre>
<p>::</p>
<p>:::GlobalParagraph
There's a lot to see, so let's look at the code step by step, starting with the Cubit constructor. It is initialized with the database box and the initial state <em>TodoInitial</em> (this could be any state you define). It also calls <em>loadTodos</em> as soon as it is initialized.
:::</p>
<p>:::GlobalParagraph
<em>loadTodos</em> first "emits" <em>TodoLoading</em>, which should signal that we are waiting for data or doing something in the background. <em>emit</em> is the method to change states in a BLoC/Cubit. Then <em>loadTodos</em> fetches all Todos.
:::</p>
<p>:::GlobalParagraph
There are two more methods in the Cubit: <em>addTodo</em> and <em>deleteTodo</em>, which add or remove a todo. Both check whether we are in the correct state before performing their respective actions. If the state does not match, it would be wise to add error handling or give the user a hint that something went wrong.
:::
:::GlobalParagraph
With this, the entire logic that we need is complete!
:::
:::GlobalParagraph
Now we can add the UI code. The complete UI code can be viewed <a href="https://github.com/Blueshore/blue_todo/blob/main/lib/main.dart">here</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.
:::
:::GlobalParagraph
Otherwise, it is most important to understand how we can use <em>BlocProvider</em> and <em>BlocBuilder</em> to modify the UI, depending on the <em>TodoState</em>.
:::
:::GlobalParagraph
Let's look at the following code section:
:::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-dart">BlocBuilder&#x3C;TodoCubit, TodoState>(
  builder: (context, state) {
  if (state is TodoInitial || state is TodoLoading) {
    return const Center(
      child: CircularProgressIndicator.adaptive(),
    );
  }

  if (state is TodoFailed) {
    return const Center(
      child: Text("Failed to load ToDo's :("),
    );
  }

  final doneState = state as TodoDone;

  if (doneState.todos.isEmpty) {
    return const Text(
      "No ToDo's added yet. Add some via the bottom right FloatingActionButton!",
    );
  }

  ...
}
</code></pre>
<p>::
:::GlobalParagraph
As you can see, the <em>BlocBuilder</em> widget is used to distinguish between <em>TodoStates</em>. For each state, we return something different and contextually relevant. You can also check other things besides the state, e.g., whether the todos are empty.
:::
:::GlobalParagraph
If none of the previous checks are fulfilled, all todos will be displayed with the following code snippet:
:::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-dart">return Expanded(
        child: ListView.separated(
          itemCount: doneState.todos.length,
          itemBuilder: (context, index) => Card(
            child: Padding(
              padding: const EdgeInsets.all(16.0),
              child: ListTile(
                title: Text(doneState.todos[index].title!),
                subtitle: Text(doneState.todos[index].description!),
                trailing: Row(
                  mainAxisSize: MainAxisSize.min,
                  children: [
                    Text(
                      "due at: ${DateFormat('dd.MM.yyyy').format(doneState.todos[index].dueDate!)}",
                    ),
</code></pre>
<pre><code class="language-dart">// (continued)
    const Gap(8.0),
    IconButton(
        onPressed: () => _deleteTodo(
            context,
            doneState.todos[index],
        ),
        icon: Icon(MdiIcons.trashCanOutline))
    ],
  ),
),
    separatorBuilder: (context, index) => const Divider(),
  ),
);

</code></pre>
<p>::
::GlobalParagraph
<em>Expanded</em> is a widget that tells its child to take up the entire remaining space along the main axis of its parent. In this case, the <em>ListView</em> should take up the entire remaining vertical space in the surrounding <em>Column</em>.
::
::GlobalParagraph
The <em>ListView</em> has an <em>itemBuilder</em> property that is responsible for creating and displaying the children of the <em>ListView</em>. Each child essentially consists of a <em>ListTile</em>, a widget that imposes a specific structure on its children: you can add a title and a subtitle, which are automatically placed on the left side of the widget, while the <em>trailing</em> property adds widgets at the end of the <em>ListTile</em>. Here we have added the due date and a delete button. The delete button calls <em>_deleteTodo</em>, which in turn calls the <em>deleteTodo</em> method of the Cubit, thereby changing the state and redrawing the user interface.
::
::GlobalParagraph
Another interesting part of the code is adding todos. When you press the FloatingActionButton, a dialog appears that prompts for some input (e.g., title and description); when you click Save, the Todo is passed to the Cubit and stored in the database.
::
::GlobalParagraph
The special widgets <em>BlueShoeTextField</em> and <em>BlueShoeDateField</em> can be found in the <a href="https://github.com/Blueshow/blue_todo/tree/main/lib/components">Git repository on GitHub</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.
::
::GlobalParagraph
And what does it all look like? Here are some screenshots:
::</p>
<p><img src="/img/blog/flutter3.png" alt="An App in Flutter">{.mx-auto .object-cover .max-w-full}</p>
<p>::GlobalParagraph
Without Todos
::</p>
<p><img src="/img/blog/flutter4.png" alt="An App in Flutter">{.mx-auto .object-cover .max-w-full}</p>
<p>::GlobalParagraph
Adding a Todo
::</p>
<p><img src="/img/blog/flutter5.png" alt="An App in Flutter">{.mx-auto .object-cover .max-w-full}</p>
<p>:::GlobalParagraph
A todo is added
:::</p>
<p>:::GlobalParagraph
I hope this brief example has shown you that with Flutter you can quickly build versatile and beautiful apps!
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Blueshoe x Flutter
:::</p>
<p>:::GlobalParagraph
At Blueshoe, we strive for the highest performance and usability in our applications. With Flutter, we can keep this promise and meet all of our customers' expectations.
:::</p>
<p>:::GlobalParagraph
If you are looking for someone for your next app, then don't hesitate to contact us!
:::</p>]]></content:encoded>
            <category>API</category>
            <category>Development</category>
            <enclosure url="https://blueshoe.io/img/blog/flutter.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[How to: Custom configurations for ingress-nginx with kustomize]]></title>
            <link>https://blueshoe.io/blog/how-to-kustomize-ingress-nginx</link>
            <guid>https://blueshoe.io/blog/how-to-kustomize-ingress-nginx</guid>
            <pubDate>Tue, 01 Jul 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Ingress-nginx is one of the most popular ingress controllers for Kubernetes. In this blog post, we show how one can adjust the <em>ingress-nginx</em> K8s resources and easily persist them using kustomize. This makes installation and updates simpler and less error-prone.</p>
<p><img src="/img/blog/kustomize-ingress-nginx.svg" alt="Custom configurations for ingress-nginx with kustomize">{.object-cover .max-w-full .mb-5}</p>
<p>::GlobalBlogLevelInfo</p>
<ul>
<li>Kubernetes Basics, <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/">Ingress</a>{target="_blank"}</li>
<li><a href="https://github.com/kubernetes/ingress-nginx">Ingress-Nginx</a>{target="_blank"}</li>
<li>kustomize</li>
</ul>
<p>If you have any questions or if anything is unclear, you can use the comment function below the article.
::</p>
<h2>1. Problem</h2>
<p>Many <a href="https://github.com/kubernetes/ingress-nginx">ingress-nginx</a>{target="_blank"} installations require adjustments: a custom error backend for which the command args and the <em>ConfigMap</em> need to be modified, additional ports, or K8s service <em>annotations</em> that must be matched to a <em>load balancer</em>. These adjustments can be made manually, but this increases the risk of overlooking something or misconfiguring during installation and updates. Moreover, each update becomes challenging and requires additional documentation to ensure nothing is forgotten.</p>
<p>The question arises: how can these adjustments be persisted so that they are automatically applied and during updates only the version number needs to be changed?</p>
<h2>2. Approach</h2>
<p><em>kustomize</em> to the rescue! The requirements are clear: resource adjustments in K8s should be stored in code and automatically rolled out during installation and updates. kustomize allows specifying the K8s yaml manifests of <em>ingress-nginx</em> as a resource and applying the necessary patches. We have stored the following <code>kustomization.yaml</code> in <code>./base/ingress-nginx</code>:</p>
<pre><code class="language-yaml">apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - github.com/kubernetes/ingress-nginx/deploy/static/provider/cloud?ref=controller-v1.12.3

patches:
  - path: patch-configmap.yaml
  - path: patch-deployment.yaml
  - path: patch-service.yaml
</code></pre>
<p>Under <code>resources</code>, the K8s manifests in version 1.12.3 are referenced. Additionally, we have three patches for the <em>ConfigMap</em>, Deployment, and Service. We will look at these in more detail in the next sections.</p>
<h2>3. Adjusting ConfigMap</h2>
<p>The patch for the <em>ConfigMap</em> must be in the same directory as the <code>kustomization.yaml</code>, as specified there. Here is the content of <code>patch-configmap.yaml</code>:</p>
<pre><code class="language-yaml">apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  proxy-buffer-number: "4"
  proxy-buffer-size: 128k
  proxy-busy-buffers-size: 256k
  custom-http-errors: 404,503,502,504
</code></pre>
<p>In the <em>ConfigMap</em>, a few <em>proxy</em> values are set, and the HTTP status codes are specified for which the response should be served by the <em>custom errors backend</em>.</p>
<h2>4. Adjusting Command</h2>
<p>To adjust the command, the Deployment must be patched. <code>patch-deployment.yaml</code> looks like this:</p>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  template:
    spec:
      containers:
        - name: controller
          args:
          - /nginx-ingress-controller
          - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
          - --election-id=ingress-nginx-leader
          - --controller-class=k8s.io/ingress-nginx
          - --ingress-class=nginx
          - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
          - --validating-webhook=:8443
          - --validating-webhook-certificate=/usr/local/certificates/cert
          - --validating-webhook-key=/usr/local/certificates/key
          - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
          - --default-backend-service=default/nginx-errors
          ports:
            - containerPort: 21
              name: ftp
              protocol: TCP
</code></pre>
<p>In this example, the <code>args</code> are overwritten to set <code>--default-backend-service</code> to <code>default/nginx-errors</code>. Additionally, port 21 is specified to enable FTP requests.</p>
<h2>5. Adjusting the Service</h2>
<p>The service usually contains annotations that are relevant for the load balancer. <em>Ingress-nginx</em> also offers K8s manifests that are already optimized for AWS, GKE, or Azure. If one wants to supplement these or chooses a different cloud provider, this can also be solved via a patch. Here is the corresponding <code>patch-service.yaml</code>:</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    load-balancer.hetzner.cloud/location: fsn1
    load-balancer.hetzner.cloud/name: # ...
spec:
  ports:
    - name: ftp
      port: 21
      protocol: TCP
      targetPort: ftp
</code></pre>
<p>It contains two exemplary annotations that are relevant for Hetzner. One to specify the location of the load balancer and the other for the name of the load balancer. Furthermore, port 21 is specified on the service so that FTP requests reach the deployment.</p>
<p>::GlobalCallAnExperts{:bg="bg-bs-blue"}
#buttonBlockTitle
Let us optimize and manage your Kubernetes environments.
::</p>
<h2>6. Deployment and Updates</h2>
<p>The yaml examples from the previous sections are already everything needed for deployment and updates. When deploying manually with <code>kubectl</code>, the following command is used: <code>kubectl apply -k ./base/ingress-nginx</code>.</p>
<p>For updates, only the version in <code>./base/ingress-nginx/kustomization.yaml</code> needs to be adjusted and the <code>kubectl</code> command from above applied. Before each deployment or update, it's recommended to render the resources using <code>kubectl kustomize ./base/ingress-nginx</code> to verify that everything is correct.</p>
<h2>7. Conclusion</h2>
<p>As you have hopefully seen, it's simple to customize <em>ingress-nginx</em> using <em>kustomize</em>. The code examples in this blog post are already everything you need. The content naturally has to be adapted case by case. Similarly, it might be necessary to use multiple overlays because, for example, staging and production need different annotations. But this doesn't make it more complex: the changes are clearly and cleanly stored in the repository and thus documented. For updates, only the version needs to be changed - it couldn't be simpler.</p>
<p>From our perspective, there's no reason why ingress-nginx adjustments shouldn't be handled via <em>kustomize</em>. Do you have a counterargument? Then please let us know in the comments.</p>
<h2>Frequently Asked Questions</h2>
<h3>1. Why should I kustomize ingress-nginx at all?</h3>
<p>Many setups require custom configurations - such as a custom error backend, additional ports, or special load balancer bindings. Without customization, important features often remain missing.</p>
<h3>2. What advantages does kustomize offer compared to a Helm Chart?</h3>
<p>Kustomize works declaratively, directly on Kubernetes YAML. You don't need template rendering and can patch individual resources in a targeted way - simple, readable, and Git-friendly.</p>
<h3>3. Can I combine kustomize with Helm?</h3>
<p>Yes, but this blog post deliberately uses only kustomize to directly patch the <em>ingress-nginx</em> standard resources. Anyone using Helm can combine helm templates and kustomize.</p>
<h3>4. What risks exist during updates?</h3>
<p>When upstream resources change significantly, patches might fail. Therefore: before each update, run <code>kubectl kustomize</code> and examine whether everything renders cleanly.</p>
<h3>5. What if I have multiple environments like Staging and Production?</h3>
<p>kustomize supports overlays: you can use a common base directory and define environment-specific patches in separate overlays.</p>
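<p>A minimal overlay layout could look like this (a sketch; the directory and file names are assumptions):</p>
<pre><code class="language-yaml"># overlays/staging/kustomization.yaml (names are examples)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - ../../base/ingress-nginx

patches:
  # staging-specific adjustments, e.g. different load balancer annotations
  - path: patch-service.yaml
</code></pre>
<p>Such an overlay is then deployed with <code>kubectl apply -k ./overlays/staging</code>.</p>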
<h3>6. Do I have to adjust all patches with every new ingress-nginx version?</h3>
<p>Not necessarily. As long as names, structures, and resource paths do not change, patches often remain stable. Nevertheless, you should briefly review them during version changes.</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Operations</category>
            <category>Documentation</category>
            <enclosure url="https://blueshoe.io/img/blog/kustomize-ingress-nginx.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Our Roadmap for Your Software: How We Plan, Develop, and Deploy Updates]]></title>
            <link>https://blueshoe.io/blog/how-we-plan-software-updates</link>
            <guid>https://blueshoe.io/blog/how-we-plan-software-updates</guid>
            <pubDate>Fri, 24 Oct 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Software updates are crucial for the security, efficiency, and feature set of our products. But how do we ensure that every update is a real step forward for our customers?</p>
<p><img src="/img/blogs/agile-backlog.svg" alt="Blueshoe Update Planning">{.object-cover .max-w-full .mb-5}</p>
<p>In this post, we open the doors to our planning process and show you how we ensure, from the initial idea to the final deployment, that our software updates are precisely tailored to our customers' needs. Learn how customer feedback drives development and how we ensure a smooth transition with every update.</p>
<h2>1. Why Transparent Update Planning is Crucial</h2>
<p>Software is never finished. Security vulnerabilities need to be closed, new requirements met, and existing features improved. Regular updates are therefore not a "nice-to-have," but essential for stability, performance, and security.
Our standard: Transparency. We want our customers to know not only that an update is coming, but also why, when, and how it is being developed.
In this post, we give you a look behind the scenes – from the initial idea and the internal roadmap to the final release. This includes not only new features and bug fixes, but also regular updates of third-party packages that are essential for security and stability.</p>
<h2>2. Our Philosophy: Customer Feedback at the Core</h2>
<p>Many new features start with an observation or a request from our customers' daily work.</p>
<ul>
<li><strong>Our customers' voice matters</strong>: We collect feedback through tickets, direct conversations, or quarterly maintenance activity reports.</li>
<li><strong>From idea to feature</strong>: This feedback leads to concrete improvement suggestions that we discuss in the product team.</li>
<li><strong>Prioritization</strong>: Not every idea can be implemented immediately. We evaluate urgency, customer value, strategic relevance, and technical effort – this is how we ensure that the most important improvements are realized first.</li>
</ul>
<h2>3. The Update Roadmap: A Look at Our Plan</h2>
<h3>3.1 Update Categories</h3>
<p>Our internal roadmap ensures that we balance short-term improvements with long-term visions.
We adhere to the three types of updates defined by <a href="https://semver.org/">Semantic Versioning</a>{:target="_blank"}:</p>
<ol>
<li><strong>Major Updates</strong>: Major innovations, new core features, or a change to existing functionality.</li>
<li><strong>Minor Updates</strong>: Optimizations of existing workflows, smaller new features. Existing functionality can be used as before.</li>
<li><strong>Patches &#x26; Bugfixes</strong>: Quick bug fixes and security updates.</li>
</ol>
<p>We use these categories for both internal developments and regular updates of the external packages we use.</p>
<h3>3.2 Focus on External Packages</h3>
<p>Our software is based on a variety of proven open-source libraries and external packages.
To keep these components secure, performant, and compatible, we regularly check for new versions and update them promptly.
For our customers, this means:</p>
<ul>
<li><strong>Security Advantage</strong>: We close critical security vulnerabilities in external dependencies without delay.</li>
<li><strong>Less Coordination Effort</strong>: We usually implement minor and patch updates for such packages independently as part of the maintenance agreement – including testing and deployment.</li>
<li><strong>Plannable Major Updates</strong>: We coordinate larger version jumps that may require adjustments with our customers in advance and support them with staging tests.</li>
</ul>
<p>This way, we ensure that the "invisible" parts of our software – the external packages – always remain up-to-date and secure, without our customers having to worry about it.</p>
<h3>3.3 Prioritization &#x26; Timing</h3>
<p>Minor updates (category 2) and patches (category 3) usually have a small development scope; for external packages, often no adjustments are necessary at all. We therefore implement these updates at short notice and deploy them directly.
Major updates, on the other hand, require more coordination. Often, our customers want to conduct extensive tests on the staging system after such updates before the update is deployed to the production system.</p>
<h2>4. The Development Process: From Planning to Quality Assurance</h2>
<p>Our teams work according to agile principles. In short sprints, we develop new features, test them, and flexibly adapt them to new requirements.
Multi-stage tests – automated, internal, and by our customers on the staging system – ensure that new versions are stable and secure.
In parallel, we carefully document every change so that functions are clearly explained and available in our knowledge base.</p>
<p>However, it should be noted that not every idea makes it into implementation immediately – resources, technical dependencies, strategic direction, and our customers' available budget play an important role.
Over the (hopefully long) life of a software product, it is completely normal that some features or potential bug fixes are never implemented, for example because they turn out to be of low relevance.</p>
<h2>5. Communication is Everything: Announcing Updates Transparently</h2>
<p>We want our customers to never be surprised. Major updates, in particular, require more coordination. Both on our side, to be able to plan and reserve developer capacity, and of course on the customer's side, to have test capacity ready and to find a suitable deployment date.
Smaller developments are also closely coordinated with our customers.</p>
<p>Only minor and patch updates of external packages are planned, carried out, tested, and deployed by us independently as part of the maintenance agreement.</p>
<p>For every update, there is a ticket in our ticket system. This is updated after deployment so that our customers always have insight into completed deployments.</p>
<h2>6. Conclusion: Shaping the Future of Projects Together</h2>
<p>Our update strategy is based on transparency, feedback, and quality.
Our customers' input determines the direction in which the projects develop – and we ensure that every update arrives smoothly and well-documented for all customers.</p>]]></content:encoded>
            <category>Project Management</category>
            <category>Security</category>
            <category>Operations</category>
            <enclosure url="https://blueshoe.io/img/blogs/agile-backlog.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Keycloak for developers: how integration works]]></title>
            <link>https://blueshoe.io/blog/keycloak-integration-for-developers</link>
            <guid>https://blueshoe.io/blog/keycloak-integration-for-developers</guid>
            <pubDate>Mon, 21 Oct 2024 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>In today's digital world, secure Identity and Access Management (IAM) is essential. But implementation can be complicated - Keycloak makes it simple! In this article, you'll learn how to integrate Keycloak with SSO, OAuth2 &#x26; OpenID Connect into your project - whether Django, Kubernetes or Node.js.</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
What is Keycloak?
:::</p>
<p>::GlobalPodcastSection{:videoId="6b_DXrESzgY" :videoPosition="right" .mb-5}
:::GlobalParagraph
Keycloak is an open-source IAM solution that supports Single Sign-On (SSO), user management, and Multi-Factor Authentication (MFA).
:::
:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Benefits for developers:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li><strong>Easy integration</strong> with OAuth2, OpenID Connect &#x26; SAML</li>
<li><strong>Top-notch security</strong> with MFA &#x26; role-based access control (RBAC)</li>
<li><strong>Scalable</strong> – for small teams &#x26; large enterprises
:::
::</li>
</ul>
<p>:::GlobalTitle{:size="lg" .mb-5}
Keycloak Integration: How to get started
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Step 1: Set up your Keycloak server
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>Download Keycloak as a Docker container or install it manually</li>
<li>Create a realm for your project</li>
<li>Add users and roles in the Keycloak Admin Console
:::</li>
</ul>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Step 2: Connect Keycloak to your application
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li><strong><a href="/technologies/python-django-agency/">Django</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}:</strong> Use <code>pycloak</code>  for easy OAuth2 integration</li>
<li><strong>Spring Boot:</strong> Use the Keycloak adapter for Spring Security</li>
<li><strong>Node.js:</strong> Connect Keycloak via <code>passport-keycloak</code> for authentication
:::</li>
</ul>
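<p>:::GlobalParagraph{.mb-4}
Whichever adapter you use, the underlying OAuth2/OpenID Connect flow starts with a redirect to Keycloak's authorization endpoint. As a sketch of what these adapters do for you (hostname, realm, and client ID below are placeholders; the path applies to Keycloak 17+ without the legacy <code>/auth</code> prefix):
:::</p>
<p>:::BlogCode{.mb-4}</p>
<pre><code class="language-python">from urllib.parse import urlencode

def build_auth_url(base_url: str, realm: str, client_id: str, redirect_uri: str) -> str:
    """Build the Keycloak OpenID Connect authorization URL for the code flow."""
    params = urlencode({
        "client_id": client_id,
        "response_type": "code",          # authorization code flow
        "scope": "openid profile email",
        "redirect_uri": redirect_uri,
    })
    return f"{base_url}/realms/{realm}/protocol/openid-connect/auth?{params}"

print(build_auth_url("https://id.example.com", "my-realm", "my-app",
                     "https://app.example.com/callback"))
</code></pre>
<p>:::</p>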
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Step 3: Use SSO &#x26; token-based authentication
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>Implement <strong>Single Sign-On (SSO)</strong> via OpenID Connect</li>
<li>Use <strong>access tokens</strong> to secure your APIs</li>
<li>Enable <strong>Multi-Factor Authentication (MFA)</strong> for enhanced security
:::</li>
</ul>
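<p>:::GlobalParagraph{.mb-4}
Securing an API with access tokens means validating the JWT that Keycloak issues on every request. Keycloak signs tokens with RS256, so in production you should verify them with a JWT library against the realm's public keys; the self-contained HS256 sketch below (with a made-up secret) only illustrates the two essential checks, signature and expiry:
:::</p>
<p>:::BlogCode{.mb-4}</p>
<pre><code class="language-python">import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(seg: str) -> bytes:
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

def validate_token(token: str, secret: bytes) -> dict:
    """Verify an HS256 JWT signature and expiry; return the claims."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("invalid signature")
    claims = json.loads(b64url_decode(payload_b64))
    if time.time() > claims["exp"]:
        raise ValueError("token expired")
    return claims

# Forge a demo token (HS256 for illustration only; Keycloak uses RS256)
secret = b"demo-secret"
header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "alice", "exp": int(time.time()) + 300}).encode())
signature = b64url(hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest())
token = f"{header}.{payload}.{signature}"
print(validate_token(token, secret)["sub"])  # alice
</code></pre>
<p>:::</p>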
<p>:::GlobalTitle{:size="lg" .mb-5}
Best practices for secure Keycloak implementation
:::</p>
<p>:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li><strong>Enable SSL/TLS:</strong> All connections to Keycloak must be encrypted</li>
<li><strong>Use role-based access control (RBAC):</strong> Prevents unauthorized access</li>
<li><strong>Limit token lifetime:</strong> Shortens the window of abuse if a token is leaked or stolen</li>
<li><strong>Keep it up to date:</strong> Always use the latest version for max security
:::</li>
</ul>
<p>:::GlobalParagraph
<strong>Pro tip:</strong> With our RAPID Framework we implement Keycloak <strong>35% faster</strong>.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Real-world example: Keycloak in action
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Integration into existing systems
:::
:::GlobalParagraph
Integrating Keycloak into existing systems is easy thanks to its flexible API and broad compatibility with frameworks like Spring Boot, <a href="/technologies/python-django-agency/">Django</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}, Node.js, and more. Whether you're running in the cloud or on-premise, Keycloak adapts to your infrastructure. And since it's open source, it's fully customizable. At Blueshoe, for example, we built a handy <a href="https://github.com/Blueshoe/keycloak-theme-template">Cookie-Cutter Template</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} to make theme customization super simple.
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Practical Example: FondsFINANZ Broker Educational Platform
:::</p>
<p>:::GlobalParagraph
A concrete example of a successful Keycloak implementation is the backoffice world of the <strong>FondsFINANZ Broker Educational Platform</strong>, a Django-based <a href="/our-services/migration-to-kubernetes/">Kubernetes</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} application with various services. Here, a central Keycloak instance manages authentication for all services and ensures a uniform and secure user management. This setup enables developers to concentrate on business logic while Keycloak takes care of security.
:::</p>
<p>:::GlobalParagraph
In this context, <a href="https://github.com/Blueshoe/pycloak">pycloak</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} plays a significant role. Through <em>pycloak</em>, we could integrate Keycloak seamlessly into the Django application using the <a href="https://oauth2-proxy.github.io/oauth2-proxy/">oauth2-proxy project</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.
:::</p>
<p><img src="https://raw.githubusercontent.com/Blueshoe/pycloak/refs/heads/main/docs/static/img/pycloak-arch.png" alt="pycloak-flow"></p>
<p>:::GlobalParagraph
The diagram above shows how pycloak handles authentication in a Django app using <em>oauth2-proxy</em>. The client app first requests a resource and is intercepted by <em>oauth2-proxy</em>. If the user isn’t authenticated, they’re redirected to the Keycloak login page. Once authenticated, they get an access token to access the target app.
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Real-world example: LUMA Delikatessen
:::
:::GlobalParagraph
Another great example is the <a href="https://www.luma-delikatessen.ch/"><strong>LUMA Delikatessen</strong>-Website</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}, built with Nuxt.js and <a href="/python-django-agency/">Django</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}. Here, Keycloak is used as the central authentication solution to ensure both security and user-friendliness. Using our <a href="https://github.com/Blueshoe/keycloak-theme-template">keycloak-theme-template</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}, we customized the registration page and default Keycloak fields — for example, we added a birthdate field to enable birthday-based promotions. Here's how it looks:
:::
<img src="/img/blogs/luma-register.png" alt="LUMA Delikatessen">{height="70%"}</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Real-world example: LMU Munich Media Monitoring
:::
:::GlobalParagraph
Keycloak’s API also comes in handy — we use it for the <a href="https://m3.ifkw.lmu.de/">Munich Media Monitoring</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} project at LMU. After a defined time, users are asked to reverify their email. Using Keycloak's User Profile feature, we added custom fields to track when users need to reverify. A Kubernetes cronjob queries the Keycloak API to find these users and resets the verification state. Next time a user logs in, Keycloak automatically prompts for email verification.
:::</p>
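<p>:::GlobalParagraph{.mb-4}
Such a cronjob talks to the Keycloak Admin REST API, where users live under <code>/admin/realms/{realm}/users</code>. A minimal sketch of building the reset request (realm and user ID are placeholders; a real job would first obtain an admin token via the client-credentials flow, GET the user representation, and PUT the modified version back):
:::</p>
<p>:::BlogCode{.mb-4}</p>
<pre><code class="language-python">import json

def build_reset_request(base_url: str, realm: str, user_id: str):
    """URL and JSON body for the PUT that clears a user's email verification."""
    url = f"{base_url}/admin/realms/{realm}/users/{user_id}"
    body = json.dumps({"emailVerified": False})
    return url, body

url, body = build_reset_request("https://id.example.com", "m3", "1234-abcd")
print(url)  # https://id.example.com/admin/realms/m3/users/1234-abcd
# The cronjob sends this as a PUT request with an admin bearer token.
</code></pre>
<p>:::</p>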
<p>:::GlobalTitle{:size="lg" .my-5}
Why choose Blueshoe for your Keycloak integration?
:::
:::GlobalParagraph
We’ve been experts in open-source IAM solutions for over 10 years, helping businesses implement Keycloak securely. Plus, we offer <a href="https://github.com/Blueshoe/pycloak">pycloak</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}, our tailored Python package for easy Keycloak integration in Python projects. With pycloak, we’ve made Keycloak more accessible and developer-friendly than ever.
:::
:::GlobalParagraph
<strong>Our advantages:</strong>
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li><strong>35% faster implementation</strong> thanks to our RAPID Framework</li>
<li><strong>Custom Keycloak integrations</strong> for <a href="/python-django-agency/">Django</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}, <a href="/our-services/migration-to-kubernetes/">Kubernetes</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} &#x26; more</li>
<li><strong>Deep expertise</strong> in OAuth2, OpenID Connect &#x26; SAML
:::</li>
</ul>
<p>::GlobalCallAnExpert{:bg="bg-bs-blue"}
#buttonBlockTitle
Secure your IAM project with Keycloak
::</p>
<p>:::GlobalTitle{:size="lg" .mb-5 .mt-8}
Frequently Asked Questions
:::
::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
1. How do I integrate Keycloak into my existing app?
::</p>
<p>::GlobalParagraph
Keycloak offers several integration options:
::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li><strong>Web apps:</strong> Use Keycloak’s JS adapter or OAuth2 flow for secure logins</li>
<li><strong>Backend APIs:</strong> Implement OpenID Connect or OAuth2 for token validation</li>
<li><strong><a href="/our-services/microservice-architecture-consulting/">Microservices/Kubernetes</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}:</strong> Use oauth2-proxy or Keycloak as the cluster identity provider</li>
<li><strong>Tip:</strong> The Keycloak REST API enables fine-grained user management.
::</li>
</ul>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
2. Which authentication protocols does Keycloak support?
::</p>
<p>::GlobalParagraph
Keycloak supports all common protocols for secure authentication:
::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li><strong>OAuth2</strong> (for API auth)</li>
<li><strong>OpenID Connect</strong> (for web apps &#x26; SSO)</li>
<li><strong>SAML</strong> (for legacy enterprise systems)</li>
<li><strong>LDAP Integration</strong> (for existing user directories)</li>
<li><strong>Tip:</strong> Choose the appropriate protocol depending on the application – OpenID Connect is usually best for web/mobile
:::</li>
</ul>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
3. How do I implement Role-Based Access Control (RBAC) in Keycloak?
::</p>
<p>::GlobalParagraph
Keycloak offers a flexible <strong>role and permission system</strong>:
::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li><strong>Define roles and permissions</strong> in the Keycloak console</li>
<li><strong>Assign roles to users or groups</strong> (e.g. admin, editor, viewer)</li>
<li><strong>Validate roles in your code</strong> by parsing the Keycloak token</li>
<li><strong>Tip:</strong> Use Keycloak’s “Authorization Services” for complex access rules
:::</li>
</ul>
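<p>::GlobalParagraph
For the third point, validating roles in code: Keycloak's default token mapping puts realm roles into the <code>realm_access.roles</code> claim (client roles sit under <code>resource_access</code>). A role check on already-verified, decoded claims can be as small as:
::</p>
<p>:::BlogCode{.mb-4}</p>
<pre><code class="language-python">def has_role(claims: dict, role: str) -> bool:
    """Check a realm role in the decoded claims of a Keycloak access token."""
    return role in claims.get("realm_access", {}).get("roles", [])

claims = {"sub": "alice", "realm_access": {"roles": ["editor", "viewer"]}}
print(has_role(claims, "editor"))  # True
print(has_role(claims, "admin"))   # False
</code></pre>
<p>:::</p>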
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
4. How can I run Keycloak locally for dev and testing?
::</p>
<p>::GlobalParagraph
You can run Keycloak locally using Docker or in standalone mode:
::
:::BlogCode{.mb-4}</p>
<pre><code class="language-bash">docker run -p 8080:8080 --name keycloak -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak start-dev
</code></pre>
<p>:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li><strong>Advantages:</strong>  Fast setup for local development &#x26; tests</li>
<li><strong>Tip:</strong> If you use PostgreSQL or Kubernetes, configure your DB via env variables
:::</li>
</ul>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
5. How do I implement Single Sign-On (SSO) with Keycloak?
::</p>
<p>::GlobalParagraph
Keycloak enables SSO with OpenID Connect or SAML:
::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>Set up Keycloak as your identity provider</li>
<li>Connect your apps via OpenID Connect</li>
<li>Use one login across multiple apps — users log in once and access everything</li>
<li><strong>Tip:</strong> For migrating user accounts, use LDAP or a custom auth strategy
:::</li>
</ul>]]></content:encoded>
            <category>Keycloak</category>
            <category>SSO</category>
            <category>Security</category>
            <category>Development</category>
            <enclosure url="https://blueshoe.io/img/blogs/keycloak.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Efficient Runtimes with KEDA: Dynamic Autoscaling for Kubernetes Clusters]]></title>
            <link>https://blueshoe.io/blog/kubernetes-autoscaling-keda</link>
            <guid>https://blueshoe.io/blog/kubernetes-autoscaling-keda</guid>
            <pubDate>Fri, 20 Dec 2024 00:00:00 GMT</pubDate>
<content:encoded><![CDATA[<p>Kubernetes is powerful, but without optimized operating times it can consume resources, and money, unnecessarily. KEDA (Kubernetes Event-Driven Autoscaling) makes it possible to scale workloads dynamically and pause them outside defined operating hours. In this blog post, we show how to adapt your cluster to working hours – for more efficiency and lower hosting costs.</p>
<p><img src="/img/blogs/kubernetes-keda.svg" alt="Efficient Operating Times with KEDA">{.object-cover .max-w-full .mb-5}</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Introduction to Kubernetes Autoscaling
:::
:::GlobalParagraph
Kubernetes already brings a lot of automation and many levers for efficiency: assigning workloads to nodes, readiness and liveness probes that restart hanging containers, or the Cluster Autoscaler dynamically adding and removing nodes. Individual workloads can also be scaled dynamically based on resource utilization via the Horizontal Pod Autoscaler.
However, not all requirements are covered out of the box. What about a cluster that is only needed on workdays, during working hours? In this blog post, we examine how to scale Kubernetes workloads on a time-based schedule using KEDA.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
What is KEDA?
:::</p>
<p>:::GlobalParagraph
KEDA builds on the idea of Pod autoscaling and extends it: workloads can be scaled not only based on resource utilization, but also based on arbitrary events. Such events can be derived from database queries, metrics of a message broker, a cron schedule, and much more.
:::
:::GlobalParagraph
Currently, KEDA supports 71 so-called scalers that you can use as a basis for event-based autoscaling. You only need to create a <code>ScaledObject</code>, a <code>CustomResourceDefinition</code> from KEDA. The keda-operator, which is deployed in the cluster, then dynamically creates a corresponding <code>HorizontalPodAutoscaler</code> resource to scale your selected workloads based on the specified events.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Cron Scaler to Define Operating Times
:::</p>
<p>:::GlobalParagraph
To scale the cluster's workloads outside operating hours to 0, we use the <a href="https://keda.sh/docs/2.16/scalers/cron/">Cron Scaler</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}. This allows defining a cron schedule with start and end times, within which the workloads are scaled to a desired number of replicas. Outside the schedule, the workloads are scaled to the specified minimum number.
:::
:::GlobalParagraph{.mb-4}
Using an exemplary <code>ScaledObject</code> for the described scenario, you will see how simple the configuration for our use case is:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cron-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  minReplicaCount: 0
  maxReplicaCount: 3
  cooldownPeriod: 60
  triggers:
    - type: cron
      metadata:
        timezone: Europe/Berlin
        start: 0 6 * * 1-5
        end: 0 20 * * 1-5
        desiredReplicas: '3'
</code></pre>
<p>:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
What happens here?
:::</p>
<p>:::GlobalParagraph
The selected deployment is <code>my-deployment</code> in the <code>default</code> namespace. The <code>minReplicaCount</code> is 0, so the deployment scales to 0 outside of operating hours. Start and end of the operating time are specified with the cron schedules <code>0 6 * * 1-5</code> and <code>0 20 * * 1-5</code>. That is, from Monday to Friday between 6:00 and 20:00 the deployment is scaled to 3 replicas, as specified by the <code>desiredReplicas</code> parameter. This ensures more efficient resource usage.
:::</p>
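<p>:::GlobalParagraph{.mb-4}
The schedule logic itself is easy to reproduce. The following sketch (plain Python, not KEDA code) mirrors what the two cron expressions express: 3 replicas from Monday to Friday between 6:00 and 20:00 Berlin time, 0 otherwise:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-python">from datetime import datetime
from zoneinfo import ZoneInfo

BERLIN = ZoneInfo("Europe/Berlin")

def desired_replicas(now: datetime) -> int:
    """Mirror the ScaledObject: 3 replicas Mon-Fri 6:00-20:00, else minReplicaCount (0)."""
    local = now.astimezone(BERLIN)
    workday = local.weekday() in range(0, 5)   # Monday=0 ... Friday=4
    in_hours = local.hour in range(6, 20)      # active from 6:00 until 20:00
    return 3 if workday and in_hours else 0

print(desired_replicas(datetime(2024, 12, 20, 10, 0, tzinfo=BERLIN)))  # 3 (Friday, 10:00)
print(desired_replicas(datetime(2024, 12, 21, 10, 0, tzinfo=BERLIN)))  # 0 (Saturday)
</code></pre>
<p>:::</p>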
<p>::GlobalCallAnExpert{:bg="bg-bs-blue"}
#buttonBlockTitle
We can also scale your apps to your needs.
::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
What are the cost savings?
:::</p>
<p>:::GlobalParagraph
The really exciting question is: How big is the efficiency gain and thus the cost savings through the implementation of operating hours? This strongly depends on your setup.
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Prerequisite
:::</p>
<p>:::GlobalParagraph
The cluster autoscaler must be active so that unused nodes can be removed. The amount of savings then depends on how many workloads can be paused outside of operating hours.
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Example
:::</p>
<p>:::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li>If in your cluster 10 applications are running and only one is scaled outside of operating hours, the effect remains small.</li>
<li>If you can pause 9 workloads, the need for nodes reduces significantly - this saves noticeable costs.
:::</li>
</ul>
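<p>:::GlobalParagraph{.mb-4}
A back-of-the-envelope calculation shows the order of magnitude. With the schedule above, the paused workloads run 70 of 168 hours per week; the node count and hourly price below are made-up placeholders:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-python"># Workloads paused outside Mon-Fri 6:00-20:00
on_hours = 5 * 14                  # hours per week the workloads actually run
total_hours = 7 * 24               # hours in a week
saved_fraction = 1 - on_hours / total_hours
print(f"{saved_fraction:.0%} of those node hours saved")  # 58% of those node hours saved

# Assume the paused workloads occupied 4 nodes at $0.10 per node hour (placeholders)
nodes, price_per_node_hour = 4, 0.10
weekly_savings = saved_fraction * nodes * price_per_node_hour * total_hours
print(f"${weekly_savings:.2f} saved per week")  # $39.20 saved per week
</code></pre>
<p>:::</p>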
<p>:::GlobalTitle{:size="lg" .mb-5}
Conclusion
:::</p>
<p>:::GlobalParagraph
KEDA makes it very easy to dynamically scale Kubernetes workloads based on time-based events. The installation and specification of <code>ScaledObjects</code> are uncomplicated and take little time.
:::</p>
<p>:::GlobalParagraph
Even if the exact cost savings cannot be generally predicted, the use of KEDA is worthwhile long-term when your workloads have a time-based characteristic.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5 .mt-8}
Frequently Asked Questions
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
1. What is KEDA and how does it differ from Kubernetes horizontal Pod autoscaler (HPA)?
:::</p>
<p>:::GlobalParagraph
<strong>KEDA (Kubernetes Event-Driven Autoscaling)</strong> extends the classic Horizontal Pod Autoscaler (HPA) in Kubernetes. While HPA scales workloads based on resources like CPU or memory utilization, KEDA enables autoscaling based on external events. These include database queries, message broker metrics, or time-controlled triggers. KEDA works complementarily to HPA by feeding external metrics into it, creating far more flexible scaling options.
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
2. What advantages does KEDA offer for the dynamic autoscaling of Kubernetes workloads?
:::</p>
<p>:::GlobalParagraph
KEDA offers the following advantages:
:::</p>
<p>:::GlobalBlock{.ul-disc .mb-4}</p>
<ul>
<li><strong>Flexibility</strong>: Scaling based on external events (e.g. Kafka, Prometheus or Azure Event Hubs).</li>
<li><strong>Zero Scaling</strong>: Workloads can be reduced to 0 pods when no resources are needed.</li>
<li><strong>Simple Integration</strong>: KEDA works seamlessly with HPA and uses existing Kubernetes mechanisms.</li>
<li><strong>Cost Savings</strong>: Through needs-based scaling, unnecessary resources can be avoided.</li>
<li><strong>Broad Support</strong>: With over 70 scalers, KEDA is widely deployable.
:::</li>
</ul>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
3. How does the Cron Scaler of KEDA work for time-based autoscaling?
:::</p>
<p>:::GlobalParagraph
The <strong>Cron Scaler</strong> in KEDA enables time-controlled scaling of workloads. You define a Cron schedule with start and end times as well as the desired number of replicas. Outside this time window, workloads are scaled to the specified minimum number (e.g., 0 Pods).
:::</p>
<p>:::GlobalParagraph
<strong>Example:</strong> A deployment can be scaled up to 3 Pods from Monday to Friday between 6:00 and 20:00 and reduced to 0 outside these times.
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
4. Which event sources (Scalers) does KEDA support for autoscaling?
:::</p>
<p>:::GlobalParagraph
KEDA supports over <strong>70 Scalers</strong>, including:
:::</p>
<p>:::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li><strong>Message Broker</strong>: Kafka, RabbitMQ, Azure Event Hubs.</li>
<li><strong>Databases</strong>: MySQL, PostgreSQL, MongoDB.</li>
<li><strong>Metric Sources</strong>: Prometheus, AWS CloudWatch, Azure Monitor.</li>
<li><strong>Time-based Triggers</strong>: Cron Schedules.</li>
<li><strong>Others</strong>: Redis, GitHub Actions, AWS SQS, and more.
:::</li>
</ul>
<p>:::GlobalParagraph
KEDA can also be expanded through user-defined scalers to integrate almost any event source.
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
5. What does Zero Scaling mean and how does KEDA help?
:::</p>
<p>:::GlobalParagraph
<strong>Zero Scaling</strong> means that workloads are completely deactivated by setting their number of Pods to 0. KEDA enables this by using event sources that activate Pods when needed. This drastically reduces resource usage when no events are present and helps save costs while maximizing cluster efficiency.
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
6. Is KEDA suitable for all Kubernetes applications or are there limitations?
:::</p>
<p>:::GlobalParagraph
KEDA is suitable for many applications, especially for:
:::</p>
<p>:::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li><strong>Event-driven applications</strong> (e.g., Message Broker, database operations).</li>
<li><strong>Workloads with time-based requirements</strong> (e.g., specific business hours).</li>
<li><strong>Metric-based applications</strong> that react to external signals.
:::</li>
</ul>
<p>:::GlobalParagraph
<strong>Limitations:</strong>
:::</p>
<p>:::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li>KEDA is designed for applications using scalable architectures (e.g., Deployments, Jobs).</li>
<li>Workloads that must run continuously benefit less from KEDA, as Zero Scaling is not possible.</li>
<li>Your cluster needs an active Cluster Autoscaler to remove nodes during Zero Scaling.
:::</li>
</ul>
<p>:::GlobalParagraph{.mb-8}
<strong>Tip:</strong> For applications with fixed resource requirements or continuous availability, plain HPA may be sufficient.
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Development</category>
            <category>Operations</category>
            <category>Digitalization</category>
            <enclosure url="https://blueshoe.io/img/blogs/kubernetes-keda.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Kubernetes Backup in Storage Buckets]]></title>
            <link>https://blueshoe.io/blog/kubernetes-backup-storage-bucket</link>
            <guid>https://blueshoe.io/blog/kubernetes-backup-storage-bucket</guid>
            <pubDate>Thu, 03 Apr 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>We use not only Kubernetes, but also Cloud Storage, Cloud SQL, and GitHub Actions to run – and secure – our applications efficiently. In this article, we’ll show you how to implement a reliable Kubernetes backup strategy to protect your databases, media, and code from data loss.</p>
<p><img src="/img/blogs/k8s_backup_header1.png" alt="Kubernetes Backup in Storage Buckets">{.object-cover .max-w-full .mb-5}</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Why are backups so important?
::
::GlobalParagraph
I probably don’t need to explain why backups are essential – at least I hope not. But beyond the obvious business goals, there are entire areas of corporate strategy dedicated to topics like backup and recovery – most notably Business Continuity Planning  (<a href="https://en.wikipedia.org/wiki/Business_continuity_planning">BCP</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}). BCP introduces terms like <em>resilience</em> and <em>continuity</em>, which define two critical metrics when it comes to IT systems. But it’s not just business planning that drives the need for backups – legal requirements also play a major role, especially in Europe. For example, the <strong>GDPR</strong> demands technical and organizational measures to ensure <strong>availability</strong>, which is directly linked to <strong>confidentiality</strong> and <strong>integrity</strong>. Similar rules apply under national advertising and data retention laws.
::
::GlobalParagraph
Recent history has shown us that <a href="https://www.linkedin.com/pulse/faq-buchbinder-data-breach-from-data-protection-stefan-hessel/">backups and disaster recovery</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} still don’t get the attention they deserve. It’s understandable – strategies that mitigate risk often don’t have a direct, visible impact on revenue. That makes them easy to push down the priority list.</p>
<p>But companies that build resilience into their processes – whether it’s protection against human error or external threats like cyberattacks – will come out stronger in the long run. Because one thing is certain: <strong>the next outage is just a matter of time…</strong>
::</p>
<p>::GlobalButton{:url="/contact/" :label="Let’s figure out your backup strategy together – book a free call." :color="green" .mb-6}
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Overview of Backup Methods
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Snapshots – The Quick and Easy Solution
::</p>
<p>::GlobalParagraph
Let’s start with some background: the backup strategy must be adjusted to the specific requirements. The foundation in every case, however, is regular snapshots: these capture the entire dataset at the moment the snapshot is taken. Snapshots are generally easy to organize and provide the fastest and most reliable form of recovery – a snapshot lets you restart an IT service exactly as it was – but they consume the most storage space. The snapshot frequency directly determines the maximum amount of data that can be lost in the event of a disruption: everything since the last snapshot.
::</p>
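<p>::GlobalParagraph
The snapshot interval is therefore exactly your worst-case data loss, and because snapshots are storage-hungry, they are usually rotated. A small sketch of an illustrative rotation policy (the retention numbers are examples, not a recommendation):
::</p>
<p>:::BlogCode{.mb-4}</p>
<pre><code class="language-python">from datetime import date, timedelta

def rotate(snapshots, keep_daily=7, keep_weekly=4):
    """Illustrative rotation: keep the newest keep_daily snapshots,
    plus the newest snapshot of each of the last keep_weekly ISO weeks."""
    ordered = sorted(snapshots, reverse=True)
    keep = set(ordered[:keep_daily])
    newest_per_week = {}
    for day in ordered:
        newest_per_week.setdefault(day.isocalendar()[:2], day)  # key: (year, week)
    keep.update(sorted(newest_per_week.values(), reverse=True)[:keep_weekly])
    return sorted(keep)

today = date(2025, 4, 3)
daily_snapshots = [today - timedelta(days=i) for i in range(30)]
kept = rotate(daily_snapshots)
print(len(kept), "of 30 snapshots retained")  # 9 of 30 snapshots retained
</code></pre>
<p>:::</p>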
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Differential Backups – Space-Saving, but More Complex
::</p>
<p>::GlobalParagraph
In addition to snapshots, differential backups can be created. These only save the data between a snapshot (also known as the “baseline”) and the moment the differential backup is made. Depending on the IT service involved, a differential backup can require significantly more effort in creation, management, and recovery. However, it saves storage space compared to a snapshot and thus reduces long-term costs.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Point-in-Time Backups – Maximum Flexibility
::</p>
<p>::GlobalParagraph
Point-in-time backups allow you to restore the system to any moment in the past. This method is considered the gold standard, although it cannot always be fully achieved in complex (distributed) IT systems. Every transaction that changes the system in any way is additionally recorded at the time it is confirmed. In practice, this is the most demanding backup strategy: point-in-time backups are challenging to create, organize, and recover, and restoration is both time-consuming and labor-intensive. First a baseline (from a snapshot) must be restored, and then all transactions must be replayed in the correct order. If conditions outside the system boundaries have changed by the time of recovery (for example, data in another service), the restoration process can also be disrupted.
::</p>
<p>::GlobalParagraph
Since we always need a solid plan for snapshot backups, this article focuses on how to create reliable backups using our tech stack: Kubernetes, Google Cloud Storage, Cloud SQL, and GitHub Actions. The goal is to secure everything necessary to restore the operation of an IT service from a backup.
::</p>
<p>::GlobalButton{:url="/contact/" :label="Need hands-on help with your infrastructure? Reach out now." :color="green" .mb-6}
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Where to Store It All: Cloud Storage
::
::GlobalParagraph
Let’s start at the end: where should your backups actually live? Cloud Storage is a great fit for many use cases. You can store and serve production data (like assets for a website – images, documents, etc.), use it as a transit medium between source and target systems, or archive data for the long term. In most scenarios, we’d say: this is where the backup journey ends. But depending on your setup, it might make sense to move data even further – to on-prem or enterprise storage for additional redundancy.
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Setting Up All Necessary Backups
::
::GlobalParagraph
So, what exactly should be backed up? We usually talk about <em>assets</em> (images, videos, documents – all “movable” data outside of the database that isn’t part of the codebase), <em>databases</em> (persistent data stores), and your application’s source code. Beyond that, there are reproducible artifacts – things that can be generated based on existing data, like <em>cached content</em> or container images (e.g. Docker). These aren’t typically part of the backup strategy, either because they’re easy to rebuild or they’re already stored securely (container registries).
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Storage Location for Backups: Using Cloud Storage
::</p>
<p>::GlobalParagraph
It’s also a good idea to back up the cloud storage you use in production. Why? Because the <em>bucket</em> might not be configured for high availability (i.e. no redundant copies), or your <em>BCP</em> (yep, that again) requires protection against cloud provider outages. Luckily, most major cloud providers make this easy. In Google Cloud, for example, you can set up a one-click replication from one bucket to another using their <em>ClickOps</em> interface.
::</p>
<p><img src="/img/blog/Backup-Storage-Transfer.png" alt="Backup Storage Transfer">{.object-cover .max-w-full .mb-5}</p>
<p>::GlobalParagraph
With <a href="https://console.cloud.google.com/transfer/">Google Transfer</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}, you can easily configure sources, targets, and execution schedules. There are also additional settings available – for example, how to handle deleted files.
::</p>
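<p>::GlobalParagraph
If you prefer the command line over ClickOps, the same replication can be sketched with the gcloud CLI – a sketch under the assumption that the Storage Transfer API is enabled in your project; the bucket names are placeholders:
::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-bash"># create a transfer job that copies the production bucket into a backup bucket
gcloud transfer jobs create gs://prod-assets gs://prod-assets-backup
</code></pre>
<p>:::</p>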
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Database Backup in Kubernetes with Cloud SQL
::
::GlobalParagraph
Things get a bit more interesting when it comes to Cloud SQL backups. In our experience, they’re not exactly “plug and play”. For a PostgreSQL-compatible database instance, you can’t simply create an <a href="https://www.postgresql.org/docs/current/backup-dump.html">SQL dump</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} using built-in tools – not in Google Cloud, not in AWS. Why? We assume it’s to make switching providers less convenient. Instead of generating a portable SQL dump, you’re stuck with the provider’s proprietary backup features. Luckily for us, we’ve got a Kubernetes cluster that can access the database directly.
::</p>
<p>::GlobalParagraph
So what we need is a Kubernetes <em>CronJob</em> that connects to the database on a defined schedule and creates a snapshot. Good news: that part’s actually pretty straightforward.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Dockerfile for PostgreSQL Backup:
::</p>
<p>::GlobalParagraph
We use the official <em>postgres</em> image as a base. But the same approach works for most other (No)SQL databases – as long as there’s an official client available.
::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-docker"># --- Dockerfile ---
FROM postgres:16-alpine

# curl and python3 are needed to install the Google Cloud SDK, zip for encryption
RUN apk add --no-cache curl zip python3
RUN curl -sSL https://sdk.cloud.google.com | bash
ENV PATH="$PATH:/root/google-cloud-sdk/bin"

COPY backup.sh /
RUN chmod +x /backup.sh
</code></pre>
<p>:::</p>
<p>::GlobalParagraph
This container image already comes with the <code>psql</code> client preinstalled. On top of that, we need a few additional tools to upload data to Google Cloud Storage, as well as the <em>bash</em> script that triggers the backup and handles the upload. Once everything is in place, the container image needs to be built and made accessible – for example, by pushing it to a container registry.
::</p>
<p>::BlogCode{.mb-4}</p>
<pre><code class="language-bash">#!/bin/bash
# --- backup.sh ---

# fail fast on errors, unset variables, and broken pipes
set -euo pipefail

if [ -n "${DEBUG:-}" ]; then
    set -x
fi

# ENV variables for Postgres
HOSTNAME=$PG_HOSTNAME
PASSWORD=$PG_PASSWORD
USERNAME=$PG_USERNAME
DATABASE=$PG_DATABASE
OUTPUT_DIR="${PG_OUTPUT_DIR:-/pgbackup}"
NAME_PREFIX="${PG_PREFIX:-noprefix}"
ZIP_PASSWORD="${ZIP_PASSWORD:-setme}"
if [ "$ZIP_PASSWORD" = "setme" ]; then
    ZIP_PASSWORD=$(cat /etc/gcp/zip-password)
fi
GS_STORAGE_BUCKET="${GS_BUCKET:-nonpublic}"

gcloud auth activate-service-account --key-file /etc/gcp/sa_credentials.json

date1=$(date +%Y%m%d-%H%M)
mkdir -p "$OUTPUT_DIR"
filename="$OUTPUT_DIR/$date1-$NAME_PREFIX-$DATABASE.pg"

# custom-format dump (-Fc): compressed and restorable with pg_restore
PGPASSWORD="$PASSWORD" pg_dump -h "$HOSTNAME" -p 5432 -U "$USERNAME" "$DATABASE" -Fc > "$filename"

du -h "$filename"

# encrypt the dump with a password before it leaves the cluster
zip --encrypt -P "$ZIP_PASSWORD" "$filename.zip" "$filename"
du -h "$filename.zip"

gcloud storage cp "$filename.zip" "gs://$GS_STORAGE_BUCKET/database/$NAME_PREFIX/"
</code></pre>
<p>::</p>
<p>::GlobalParagraph
This example bash script is controlled entirely via environment variables. These are injected through the Kubernetes workload definition – typically using a <a href="https://kubernetes.io/docs/concepts/configuration/secret/">Kubernetes secret</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}. The script supports all the necessary parameters to:
::</p>
<p>::GlobalBlock{.ol-decimal .mb-4}</p>
<ol>
<li>connect to a PostgreSQL database,</li>
<li>request a SQL dump (in portable PostgreSQL format),</li>
<li>encrypt that dump with a ZIP password and give it a traceable filename,</li>
<li>and finally upload it to a Cloud Storage bucket.
::</li>
</ol>
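<p>::GlobalParagraph
The “traceable filename” from step 3 follows a simple date-stamp scheme. As a quick sketch (prefix and database name are example values):
::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-bash"># same naming scheme as in backup.sh: date stamp, prefix, database name
date1=$(date +%Y%m%d-%H%M)
filename="/pgbackup/${date1}-production-shopdb.pg"
echo "$filename"
</code></pre>
<p>:::</p>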
<p>::GlobalParagraph
<strong>Important</strong>: This example uses a <a href="https://cloud.google.com/iam/docs/service-account-overview"><em>service account</em></a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} to upload the backup to the -nonpublic Cloud Storage bucket. The setup of service accounts differs between cloud providers. In Google Kubernetes Engine, the service account token is stored in a Kubernetes secret and mounted into the Pod at <em>/etc/gcp/sa_credentials.json</em>.
::</p>
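<p>::GlobalParagraph
For reference, the Kubernetes secret providing these two files could look like this – a sketch; the name <em>gcsbackup</em> matches the CronJob in this article, and the values are placeholders for your own base64-encoded data:
::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Secret
metadata:
  name: gcsbackup
type: Opaque
data:
  # base64-encoded service account key and zip password (placeholders)
  sa_credentials.json: BASE64_ENCODED_KEY_JSON
  zip-password: BASE64_ENCODED_PASSWORD
</code></pre>
<p>:::</p>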
<p>::GlobalCallAnExpert{:bg="bg-bs-blue"}
#buttonBlockTitle
You want to set up or optimize your backup strategy in Kubernetes the right way?
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Automating Backups with a Kubernetes CronJob
::</p>
<p>::GlobalParagraph
With the following Kubernetes workload object, we can finally set up a scheduled job that reliably performs and stores the database backup.
The syntax and structure of Kubernetes CronJobs can be found in the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/">Kubernetes documentation</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.
::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">apiVersion: batch/v1
kind: CronJob
metadata:
  name: pg-backup
spec:
  timeZone: "Europe/Berlin"
  schedule: "45 1 * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 0
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          # this secret is manually created holding 1) the service account key, and 2) the zip-password string
          volumes:
            - name: gcs-service-account
              secret:
                secretName: gcsbackup
          containers:
            - name: pgbackup
              image: gcr.io/backup-images/pgbackup
              imagePullPolicy: Always
              command: ["/backup.sh"]
              volumeMounts:
                - mountPath: "/etc/gcp/"
                  name: gcs-service-account
                  readOnly: true
              env:
                - name: PG_DATABASE
                  valueFrom:
                    secretKeyRef:
                      name: shop-secret
                      key: DATABASE_NAME
                - name: PG_USERNAME
                  valueFrom:
                    secretKeyRef:
                      name: shop-secret
                      key: DATABASE_USER
                - name: PG_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: shop-secret
                      key: DATABASE_PASSWORD
                - name: PG_HOSTNAME
                  valueFrom:
                    secretKeyRef:
                      name: shop-secret
                      key: DATABASE_HOST
                - name: PG_PREFIX
                  value: production
                # - name: DEBUG
                #   value: "on"
          restartPolicy: OnFailure
</code></pre>
<p>:::
::GlobalParagraph
This way, all parameters – such as the backup cadence or database credentials – can be managed directly via Kubernetes. If a password rotation is needed, it’s just a matter of updating the corresponding Kubernetes secret.
::</p>
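<p>::GlobalParagraph
Rotating the database password, for example, boils down to patching a single key – a sketch; the secret and key names match the CronJob above:
::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-bash"># update only DATABASE_PASSWORD; values in stringData are base64-encoded server-side
kubectl patch secret shop-secret \
  -p '{"stringData": {"DATABASE_PASSWORD": "new-password"}}'
</code></pre>
<p>:::</p>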
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Code Backup with GitHub Actions
::</p>
<p>::GlobalParagraph
It’s not just dynamic data that needs to be backed up – your codebase matters, too. Why? Because some <em>static assets</em> (media files embedded in the code, logos, small images, etc.) might need to be preserved, or your <em>BCP</em> includes a scenario where GitHub goes down. A GitHub outage may not immediately impact your app’s availability, but if it lasts longer, you’ll want a fallback in place to ensure continuity.
::</p>
<p>::GlobalParagraph
Fortunately, setting up a backup action for a repository is quick and easy.
::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml"># --- .github/workflows/code_backup.yaml ---

name: Backup Repo to Google Storage Bucket
on:
  schedule:
    - cron: '0 0 * * 0'
  push:
    branches:
      - main
jobs:
  backup_repo:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write

    steps:
      - name: Checkout Repo
        uses: actions/checkout@v4

      - id: auth
        uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_CREDENTIALS }}

      - name: Set up Cloud SDK
        uses: google-github-actions/setup-gcloud@v2

      - name: Zip and Upload
        run: 'cd .. &#x26;&#x26; zip --encrypt -P ${{ secrets.ZIP_PASSWORD }} -r our-backend.zip our-backend &#x26;&#x26; gcloud storage cp our-backend.zip gs://-nonpublic/code/our-backend.zip'
</code></pre>
<p>:::</p>
<p>::GlobalParagraph
Most major cloud providers already offer their own GitHub <em>Actions</em> to automatically upload data to a Cloud Storage bucket as part of a GitHub pipeline. In this example, the repository is checked out, encrypted as a ZIP file, and then uploaded to the designated Cloud Storage bucket.
::</p>
<p><img src="/img/blog/Action-Secrets.png" alt="Action Secrets">{.object-cover .max-w-full .mb-5}</p>
<p>::GlobalParagraph
The environment variables used are injected into the context via <em>Action Secrets</em>.
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Conclusion: A Kubernetes Backup Strategy for Maximum Reliability
::
::GlobalParagraph
Backups matter. In this article, I’ve shown you a straightforward way to set up snapshot backups using Kubernetes, GitHub, and a Cloud Storage bucket. These backups can be downloaded from Cloud Storage and stored on other media – ideally offline – for long-term retention and disaster recovery.
::</p>
<p>::GlobalParagraph
We deliberately left out the restore process in this article, but we plan to cover it in a follow-up.
::</p>
<p>::GlobalParagraph
We also think it would be useful to dive deeper into differential and point-in-time backups in the future.
::</p>
<p>::GlobalParagraph
But what about you? Got feedback, ideas, or experience with a similar setup? Let us know in the comments below!
::</p>
<p>:::GlobalTitle{:size="lg" .mb-5 .mt-8}
Frequently Asked Questions
:::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
1. Why do I need a backup strategy for Kubernetes?
::
::GlobalParagraph
Kubernetes environments are dynamic and consist of many moving parts. Database entries, code changes, and media files can be lost at any time – due to misconfigurations, system failures, or cyberattacks. A well-thought-out backup strategy ensures you can quickly restore your applications and data.
::
::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
2. How often should I back up my database?
::
::GlobalParagraph
That depends on your specific needs. A snapshot-based strategy with daily backups is a good starting point for many applications. If your data changes frequently, you should consider differential or point-in-time backups. Kubernetes CronJobs help you automate these processes efficiently.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
3. Where should I store my backups?
::
::GlobalParagraph
Cloud storage is one of the best options for secure and scalable backups. Google Cloud Storage, AWS S3, and Azure Blob Storage offer high availability and built-in encryption. Alternatively, you can store backups on local servers or external media to add an extra layer of security.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
4. Can I use GitHub Actions to automatically back up my code?
::
::GlobalParagraph
Yes! GitHub Actions can be used to automatically back up your code to a cloud storage bucket. This protects you in case of a GitHub outage or accidental deletions. Our article includes a sample workflow file for your repository.
::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Docker</category>
            <category>Operations</category>
            <category>Security</category>
            <category>Documentation</category>
            <enclosure url="https://blueshoe.io/img/blog/k8s_backup_header1.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Kubernetes basics - simply explained]]></title>
            <link>https://blueshoe.io/blog/kubernetes-basics-simply-explained</link>
            <guid>https://blueshoe.io/blog/kubernetes-basics-simply-explained</guid>
            <pubDate>Tue, 08 Aug 2023 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Kubernetes is a complex technology that keeps challenging even the most experienced developers. That’s why numerous providers have developed tools designed to simplify the work with Kubernetes.</p>
<p>In this article, we’ll have a closer look at the technological toolbox available to us. We’ll outline the scope of application for various tools and provide examples of the solutions they offer.</p>
<p><img src="/img/blog/kubernetes_gefyra_getdeck.jpg" alt="kubernetes_gefyra_getdeck">{.object-cover .max-w-full .mb-6}</p>
<p>:::globalParagraph
Kubernetes is an open source technology for managing containerized software and helps developers keep individual applications highly available to end users.
:::
:::globalParagraph
Developers can use the Kubernetes technology entirely on its own – to manage “just” software. An application can then generally run independently on the cluster. However, if this application is not only to be executed in the cluster but should also be reachable “from outside”, e.g. by an end user, a technology such as Ingress is required in addition to Kubernetes. This is already the start of building an entire platform.
:::
:::globalParagraph
Kubernetes can be supplemented with a variety of standalone applications and other technologies to build a platform of any complexity, in which Kubernetes is then a central component. Such platforms can map very individual development processes, depending on how individual teams organize their processes or which business requirements shall be mapped.
:::
:::globalParagraph
In this blog post, we want to present and explain individual technologies and applications that come into question for building such a platform. Our target group is, on the one hand, developers who don’t yet have any experience with Kubernetes, and, on the other hand, people who work in software development but don’t write code themselves – project managers, for example.
:::
:::globalParagraph
The list of applications and technologies presented here represents only a selection of the possibilities for individual subareas and is not exhaustive.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
1 Managed cluster and Kubernetes distributions
:::
:::globalTitle{:size="md" :tag="h3" .mb-5}
1.1 Managed Kubernetes cluster
:::
:::globalTitle{:size="sm" :tag="h4" .mb-5}
What does it do?
:::
:::globalParagraph
As previously explained in our article about <a href="/blog/kubernetes-explained-for-non-developers">Kubernetes for non-developers</a>{.bs-link-blue}, Kubernetes (also shortened to k8s) is an open-source technology that can be freely used by anyone. On top of this, though, there are also providers of so-called managed Kubernetes clusters. These provide both the infrastructure and an initial user interface for the use of Kubernetes.
:::</p>
<p>:::globalTitle{:size="sm" :tag="h4" .mb-5}
Who are the providers?
:::
:::globalParagraph
The most well-known providers for managed Kubernetes clusters are Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE) and Amazon Elastic Kubernetes Service (EKS).
:::
:::globalParagraph
All providers supply the user with a user interface as well as additional services for the work with Kubernetes. Whether individual services are already included in the basic packages or whether they’ll have to be purchased as part of licences very much depends on the respective provider.
:::
:::globalParagraph
In each scenario, the developers should always check which services are supplied by individual providers and whether these are necessary and a worthwhile investment. AKS, for example, already provides good logging in their basic version, while EKS requires you to purchase such a feature in addition to the basic version.
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
1.2 Kubernetes distributions
:::
:::globalTitle{:size="sm" :tag="h4" .mb-5}
What does it do?
:::
:::globalParagraph
The differentiation between providers of managed Kubernetes clusters and those of Kubernetes distributions isn’t exactly easy.
:::
:::globalParagraph
A Kubernetes cluster with the necessary nodes and pods can be operated completely autonomously. To do this, you do not need to use a managed cluster.
:::
:::globalParagraph
However, the tools outlined in the following don’t necessarily have to be used when you operate a Kubernetes cluster. By using these tools, platforms of any desired complexity can be created, with the Kubernetes cluster as their key component. We’ll be describing these as ‘ecosystem tools’.
:::
:::globalParagraph
Providers of Kubernetes distributions provide an already preconfigured platform with individual ecosystem tools. Providers of managed Kubernetes clusters, however, ‘only’ offer the infrastructure and a first user interface as part of the Kubernetes usage. This includes a number of functions that can be expanded by any ecosystem tools.
:::
:::globalParagraph
The added value of Kubernetes distributions lies in the fact that developers don’t have to integrate these tools into the cluster themselves. It can generally be expected that the tools used in this platform have compatible configurations with one another and with the Kubernetes cluster. It can also be expected that they automatically receive regular updates and will therefore run without any issues.
:::
:::globalParagraph
Nevertheless, the developers do have the freedom to integrate these platforms into a managed Kubernetes cluster and some providers of managed Kubernetes clusters do in fact offer a Kubernetes distribution.
:::
:::globalParagraph
To sum up, you might say that managed Kubernetes clusters provide a pre-built basic framework for the work with Kubernetes. Kubernetes distributions take it a step further and offer additional tools that are already integrated as a package.
:::</p>
<p>:::globalTitle{:size="sm" :tag="h4" .mb-5}
Who are the providers?
:::
:::globalParagraph
The most well-known providers for a Kubernetes distribution are probably Red Hat OpenShift and Rancher Kubernetes Engine (RKE).
:::
:::globalParagraph
Here is an article about <a href="/blog/managed-vs-unmanaged-kubernetes">managed vs. unmanaged Kubernetes</a>{.bs-link-blue} that we wrote a while ago, if you like to take a deeper dive into the topic.
:::</p>
<p><img src="/img/blog/managedk8cluster-k8distribution.jpg" alt="managedk8cluster-k8distribution">{.object-cover .max-w-full .mb-6}</p>
<p>:::globalTitle{:size="lg" .mb-5}
2 Technologies that are installed in a Kubernetes cluster
:::
:::globalTitle{:size="md" :tag="h3" .mb-5}
2.1 Kubernetes Dashboard: The standard front end
:::
:::globalTitle{:size="sm" :tag="h4" .mb-5}
What does it do?
:::
:::globalParagraph
Kubernetes Dashboard is a web-based user interface. As part of Kubernetes, this user interface lets you deploy containerised software on a cluster, for example, or manage its resources. This allows you to have an overview of all applications and services that run in a cluster. It also enables you to modify individual Kubernetes resources like Jobs, deployments etc.
:::
:::globalParagraph
It should be noted, though, that Kubernetes Dashboard isn’t installed as the default user interface in a cluster – you’ll specifically have to select it.
:::
:::globalParagraph
A Kubernetes cluster can also be managed and monitored without Kubernetes Dashboard, via the command line. However, it’s a lot easier to operate the cluster using Dashboard.
:::</p>
<p>:::globalTitle{:size="sm" :tag="h4" .mb-5}
What tools are out there?
:::
:::globalParagraph
Kubernetes Dashboard is the standard front end for a Kubernetes cluster. Alternatively, there are other providers of managed clusters, like AKS and GKE, who offer their own dashboards.
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
2.2 Command line tool: Technologies that allow you to communicate with a cluster
:::
:::globalTitle{:size="sm" :tag="h4" .mb-5}
What does it do?
:::
:::globalParagraph
While we should imagine a cluster as an autonomous, self-contained entity, it does still depend on its environment. A cluster only does whatever a command instructs it to do. Effectively, there needs to be a way for developers to communicate with the cluster.
:::
:::globalParagraph
That’s where the user interface comes in: it can display the packaged information to the cluster and allows developers to communicate with the cluster. However, most developers prefer to work with so-called command line tools: information can be retrieved without any major graphic processing or relayed straight to the cluster.
:::
:::globalParagraph
Imagine command line tools like integrated development environments during software development: different tools each have a different range of functions and also focus on various different areas of application. You can find some examples in this article about integrated development environments. And along the same lines, there are also different command line tools that developers can use for their work with a Kubernetes cluster, if they prefer to do so.
:::</p>
<p>:::globalTitle{:size="sm" :tag="h4" .mb-5}
What tools are out there?
:::
:::globalParagraph
Examples of available tools are kubectl, kubectx and kube-shell.
:::
:::globalParagraph
Let’s use a car as an analogy: imagine Kubernetes contains the concept of the car door. Developers can use this car door to communicate with a Kubernetes cluster. Command line tools then implement the concept of the door in different ways: one tool opens the door forwards, the second tool opens the door backwards and yet another tool acts as a wing door. It ultimately doesn’t matter which tool is used, though, and the decision is completely up to the developer. Different developers who work with the same cluster can also use different command line tools.
:::</p>
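<p>:::globalParagraph
To make this tangible: with kubectl, the reference command line tool, a few typical commands look like this (the resource names are examples):
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-bash"># list all pods in the current namespace
kubectl get pods

# show the details of a single deployment
kubectl describe deployment shop-backend

# stream the logs of a pod
kubectl logs -f shop-backend-7d4f9c
</code></pre>
<p>:::</p>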
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
2.3 Service mesh: Technologies that allow you to manage the communication between cluster components
:::
:::globalTitle{:size="sm" :tag="h4" .mb-5}
What does it do?
:::
:::globalParagraph
‘Classic’ applications tend to be designed using a monolithic architecture. ‘Modern’ cloud-native application architectures, on the other hand, are all about individual microservices. In this case, the application can only develop via the intertwining and interaction of several microservices. Individual services are packaged in containers which are compiled in individual pods. These pods communicate with one another and exchange information. Check out our blog to find out more about the <a href="/blog/kubernetes-explained-for-non-developers/">difference between monoliths and microservices.</a>{.bs-link-blue}
:::
:::globalParagraph
The communication between individual pods (which contain the containerised code) takes place within a Kubernetes cluster and is defined by the developers. In addition to this, the developers can also use a so-called service mesh which allows for the communication between the pods to be specified even further.
:::
:::globalParagraph
To demonstrate this, let’s imagine an online shop as an example. At check-out, the customer has two payment options: payment by invoice and by credit card. The shop operators would like to introduce PayPal as an additional payment option. After the developer has written the code required for this and tested it in a test environment, they want to do a first trial in the live shop.
:::
:::globalParagraph
By using a service mesh, the newest development (choice of three payment options) can be made available on its own pod within the cluster. The original set-up (choice of only two payment options) can still remain in the cluster. Using the service mesh, developers can specify that 80% of the incoming requests should continue to be sent to the pod with the original set-up (option of only two payment options). Effectively, only 20% of the requests reach the pod with the newest set-up.
:::
:::globalParagraph
But service mesh can do even more than that: in the example above, service mesh is used to specify what is communicated to the individual pods. However, you can also use service mesh to specify how the communication takes place within the cluster. For example, the communication is generally not encrypted, but thanks to service mesh, an additional encryption can be specified.
:::</p>
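<p>:::globalParagraph
As an illustration, with Istio such an 80/20 split could be declared like this – a sketch; the host and subset names are made up for the shop example:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: checkout
spec:
  hosts:
    - checkout
  http:
    - route:
        # 80% of the requests still reach the original set-up
        - destination:
            host: checkout
            subset: two-payment-options
          weight: 80
        # 20% reach the pod with the new PayPal option
        - destination:
            host: checkout
            subset: three-payment-options
          weight: 20
</code></pre>
<p>:::</p>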
<p>:::globalTitle{:size="sm" :tag="h4" .mb-5}
What tools are out there?
:::
:::globalParagraph
Examples of tools that provide the service mesh technology are Istio, Linkerd and Cilium.
:::
:::globalParagraph
The tools all offer a different range of functions. Both Linkerd and Istio use so-called ‘sidecar proxies’, which enable encrypted (mTLS) communication between the pods inside the cluster. Istio offers the broader feature set, while Linkerd scores with simplicity: it’s less complex, has a leaner architecture, and doesn’t require any code changes in the Kubernetes application itself. Which tool is the best choice will ultimately have to be checked and evaluated by the developers. Check out online articles like this list by DevOpsCube that dive deeper into the individual tools.
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
2.4 Ingress controller: Technologies that allow you to control the requests to the cluster
:::
:::globalTitle{:size="sm" :tag="h4" .mb-5}
What does it do?
:::
:::globalParagraph
‘Ingress traffic’ refers to the data traffic that originates outside of a computer network and is directed at this network. With regard to a cluster, this means that a request from outside a cluster is directed at this cluster, i.e. a user calls up a website or service which is run in a cluster. The technology or resource ‘Ingress’ makes HTTP and HTTPS requests from outside the cluster available for services within the cluster.
:::
:::globalParagraph
Similar to the technical concept of Kubernetes, the technology ‘Ingress’ also is an abstract technical blueprint. What exactly the implementation of this technical blueprint looks like, also depends on the supplier. To use the car analogy again: it’s up to the individual car manufacturer to decide whether the motor should be a combustion engine or an electric engine.
:::
:::globalParagraph
So Ingress itself is the concept of how external requests to a Kubernetes cluster are handled. This might include balancing the number of external requests across the cluster, for example. Or it might simply ensure that an externally reachable URL is mapped to an available application in the cluster. The concept’s implementation is the task of a so-called Ingress controller – i.e. the provider’s respective implementation of Ingress.
:::
:::globalParagraph
But Ingress controllers are not only relevant with respect to the Kubernetes clusters, but also with regard to the usage of all services within a computer network that are to be accessible for external data traffic. And that includes services that are hosted on individual servers.
:::</p>
<p>:::globalTitle{:size="sm" :tag="h4" .mb-5}
What tools are out there?
:::
:::globalParagraph
There is a whole range of available Ingress controllers. Some of the most well-known ones that are often used in combination with Kubernetes clusters are Nginx and Traefik.
:::
:::globalParagraph
Both have various benefits and drawbacks, depending on how they’re used.
:::
:::globalParagraph
The decision of which Ingress controller to choose should not be underestimated and needs to be based on a thorough evaluation by specialised developers. This unfortunately exceeds the scope of this article. We can recommend these two articles about Ingress and Ingress controllers from the Kubernetes website, however, to help you continue your research.
:::</p>
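<p>:::globalParagraph
To make the concept concrete: a minimal Ingress resource that maps an externally reachable URL to a service in the cluster might look like this (the host and service names are examples):
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop
spec:
  # which Ingress controller handles this resource, e.g. Nginx
  ingressClassName: nginx
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: shop
                port:
                  number: 80
</code></pre>
<p>:::</p>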
<p><img src="/img/blog/technology-in-cluster.jpg" alt="technology_in_cluster">{.object-cover .max-w-full .mb-6}</p>
<p>:::globalTitle{:size="lg" .mb-5}
3 Technologies that are installed around a Kubernetes cluster
:::
:::globalTitle{:size="md" :tag="h3" .mb-5}
3.1 Technologies that containerise code
:::
:::globalTitle{:size="sm" :tag="h4" .mb-5}
Container Images
:::
:::globalParagraph
Kubernetes is a technology for the orchestration of containerised software. Head to our blog article <a href="/blog/kubernetes-explained-for-non-developers">‘Kubernetes explained for non-developers’</a>{.bs-link-blue} to find out more about this subject. Unless you have containerised software, it’s pretty pointless to use Kubernetes – because ultimately, Kubernetes can only work with containerised software.
:::
:::globalParagraph
After the code for an application has been written by a developer, a so-called ‘container image’ is created based on this code. In Kubernetes, the respective container image is then referenced later on. These container images are either managed in an autonomous container registry outside of the cluster or directly in the cluster itself. This referenced container image will technically only become an independent container once it’s run in Kubernetes.
:::
:::globalParagraph
A container image is a read-only template of an application’s code, including all necessary information that is relevant for running the code – for example, configuration data, environment variables, libraries etc. So you could imagine a container image as an unmodifiable digital snapshot of the code. The benefit of container images is the fact that they can be duplicated and used by several different developers simultaneously. This makes container images the ideal resource, as they can be shared in a cluster. It allows application code to be run on several pods within a cluster, for example, which also means you can scale it.
:::
:::globalParagraph
Another benefit of using container images is the fact that the image already contains the configuration for the container that will later be created. Unlike the running of software code on your own server, the containers already receive all the required configuration via the information from the images. All containers that are generated based on the images later on are subsequently all configured in the same way. If software code is run on individual servers, the configuration has to be carried out individually for every single server. A time-consuming process that is highly prone to error. You can find more information about this subject in this article about Docker and in this article about Docker images.
:::</p>
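<p>:::globalParagraph
As an illustration, a container image is typically defined in a Dockerfile. The following is a minimal sketch for a hypothetical Python web application – the file names, base image version and start command are placeholder assumptions.
:::</p>

```dockerfile
# Placeholder Dockerfile for a hypothetical Python web application.
FROM python:3.12-slim

WORKDIR /app

# Install the application's dependencies first (better layer caching)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself into the image
COPY . .

# The command every container created from this image will run
CMD ["python", "app.py"]
```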
<p>:::globalTitle{:size="sm" :tag="h4" .mb-5}
Tools for creating container images
:::
:::globalParagraph
The most well-known provider of tools that allow you to create a container image from software code is Docker. Using Docker or similar tools, like rkt by CoreOS or LXC, you can create a container image.
:::
:::globalParagraph
Docker specifically is a virtualised operating system for containers and behaves similarly to a virtual machine (VM): a VM virtualises server hardware, while containers virtualise a server’s operating system.
:::
:::globalParagraph
Docker is currently the market leader for this service. However, do check out Docker in comparison with other providers along with their pros and cons before making your decision. Which tool is ultimately selected to implement this technology mostly depends on the technical requirements and the respective developer’s preferences. For example, are they working with Windows, Linux or Mac?
:::
:::globalParagraph
To manage container images, you can use quay.io, for example. Or check out these alternatives to quay.io to compare the different tools – we’d particularly like to point out Harbor. While quay.io is a tool that provides a container registry outside of the cluster, Harbor can be installed directly in the cluster. This means that the management of images is effectively carried out directly in the cluster, too. The benefit? This way, no additional external service outside of the cluster will be necessary anymore. Whatever option is the best choice is still a decision each individual developer team has to make. It’ll depend on the individual requirements of the software that is to be created.
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
3.2 Technologies that allow you to manage apps and configurations in a cluster
:::
:::globalTitle{:size="sm" :tag="h4" .mb-5}
Configuration of a cluster with yaml files
:::
:::globalParagraph
Just because a cluster exists, doesn’t mean that it’s automatically ready for productive operation immediately. Every cluster has a specific configuration and with every new application within a cluster, this configuration has to be taken into account so that everything runs correctly.
:::
:::globalParagraph
A cluster’s configuration is defined in so-called yaml files. Yaml files contain specifications for the deployment.
:::
:::globalParagraph
These can be created manually by the developers. The drawback of this is that a manual creation of files will always be prone to errors. What’s more, a complex cluster also requires several yaml files which all have to follow the same standard. This also means that every developer in a team has to know and use this standard. If the standard is to be changed, this requires prior agreement.
:::
:::globalParagraph
In order to make this process more efficient, stable and effective, there are tools that provide templates which ensure that all yaml files ‘look the same’.
:::</p>
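<p>:::globalParagraph
For illustration, a typical yaml file describing a deployment might look like the following minimal sketch – the application name, image reference and port are placeholders.
:::</p>

```yaml
# Minimal Deployment manifest – names, image and port are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3               # run three identical pods of the application
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0   # placeholder container image
          ports:
            - containerPort: 8000
```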
<p>:::globalTitle{:size="sm" :tag="h4" .mb-5}
Using Helm for the creation of yaml files
:::
:::globalParagraph
The most well-known tool that is installed outside a cluster in order to create yaml files is Helm.
:::
:::globalParagraph
In so-called Helm charts, a number of things are defined: they specify the dependencies between individual applications within the cluster, which Kubernetes resources are required and whatever else is necessary in order to provide and run container applications.
:::
:::globalParagraph
A Helm chart can be used in the cluster as many times as you need in order to implement any number of application instances and thereby to easily scale the system. You can find more relevant information on the subject in this article about Helm charts.
:::
:::globalParagraph
Helm charts can also be shared with other people. This means they are the central instance for a one-off definition of the application and can subsequently be managed by many different people with minimal effort.
:::</p>
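<p>:::globalParagraph
In day-to-day use, working with a Helm chart boils down to a few commands. The chart and release names below are hypothetical – the point is that the same chart can be installed several times as independent instances.
:::</p>

```shell
# Create the skeleton of a new chart (generates templates and values.yaml)
helm create my-app

# Install the chart into the cluster as a release called 'staging'
helm install staging ./my-app

# Install a second, independent instance of the very same chart
helm install production ./my-app --values production-values.yaml
```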
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
3.3 Technologies that allow developers to run Kubernetes locally
:::
:::globalTitle{:size="sm" :tag="h4" .mb-5}
Development on local machines
:::
:::globalParagraph
Let’s do a quick dive into the world of the developers for this one. Usually, individual components of a complex codebase get developed separately – distinct features that will then be brought together at a later stage. Developers effectively write code on their own computers rather than directly in the complex production or test environments. Only later will the individual components be united.
:::
:::globalParagraph
For this to happen, the project-specific development environment needs to be available on the developers’ own computers, i.e. the framework configuration of the future testing and production environment. This is always required, no matter whether Kubernetes will eventually be used for the software operation or not.
:::
:::globalParagraph
The challenge for the developers will be to configure this environment correctly on their local computer. Only if the development is done against the right configuration, will the code be able to run flawlessly in the production and test environment later on. So far, the biggest challenge has been that every individual developer had to carry out the respective configuration themselves. Naturally, this requires close coordination between the teams that develop the software (development teams) and the teams responsible for the configuration of the servers that are to be used at a future stage (operations team). And let’s be honest – the communication between the two teams wouldn’t always be smooth.
:::
:::globalParagraph
If Kubernetes is used to run the software code, however, there is now a whole range of handy tools around. They enable the operations team to carry out and maintain the cluster’s configuration autonomously. The use of these specialist tools also allows the development teams to install the configurations on their own computers without having to take any configuration measures and without first having to coordinate the approach. Even if the cluster’s configurations change, these tools allow for the changed configurations to be transferred to the developers’ computers without any further communication between the teams.
:::</p>
<p>:::globalTitle{:size="sm" :tag="h4" .mb-5}
Tools for local development
:::
:::globalParagraph
Examples of tools that offer this technology are minikube, kind and K3s.
:::
:::globalParagraph
You can find a good overview of the different scope of each tool and their possible application here.
:::</p>
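<p>:::globalParagraph
To give a flavour of how lightweight these tools are, spinning up a local cluster typically takes a single command – the cluster name below is a placeholder.
:::</p>

```shell
# Start a local single-node cluster with minikube
minikube start

# ...or create one with kind (Kubernetes in Docker)
kind create cluster --name local-dev

# Verify that kubectl now points at the local cluster
kubectl get nodes
```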
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
3.4 Technologies that bridge containers
:::
:::globalTitle{:size="sm" :tag="h4" .mb-5}
What does "bridge" mean?
:::
:::globalParagraph
Just a little disclaimer: the term ‘bridge’ isn’t necessarily part of the official terminology to describe the tools we’re about to present. To be quite honest: we started using this term internally as it describes the purpose of these technologies pretty well. So if you’re searching the internet for the verb ‘to bridge’ in connection with Kubernetes, you’re unlikely to get many hits. So let us explain the term ‘to bridge’ and the respective tools in a little more detail.
:::
:::globalParagraph
In order to operate a cluster, you need a number of resources, especially processing power. And processing power costs money. Money is naturally a limited resource, and we don’t want to start a philosophical debate about how neither commercial nor non-commercial software development has access to an ever-growing money tree.
:::
:::globalParagraph
Even if a developer makes a cluster available for themselves locally in order to make the software development more effective, this will ultimately require resources and subsequently also money. Just imagine a large team of developers working on a variety of smaller, new features for a major hotel booking platform and every single developer has a locally accessible cluster. This gives you an idea of the scope of resources required to make this happen.
:::
:::globalParagraph
Thankfully, there are customised technologies to deal with this problem. If developers install a tool of this technology on their computers, it is possible for the code to be developed locally and to be packaged in containers. However, when running the code in the container on the developer’s own computer, the container is ‘led to believe’ it’s located in a cluster.
:::
:::globalParagraph
This ‘make believe’ isn’t quite yet what we mean by ‘bridging’. Technically, bridging only begins once developers start working on existing code – to fix a bug, for example. To do this, the developer can use one of the relevant tools (see below) to ‘clone’ a container onto their local computer and work on the existing code there. The edited code can then be connected to the cluster and tested, while still running exclusively on the developer’s local system. As long as this bridge exists, all users that have access to the URL can also access this code change.
:::
:::globalParagraph
It should be noted that all traffic on this container will then also take place via the local development environment of the respective developer. So this approach is particularly useful when carrying out work in a staging cluster: this way, bug fixes can be tested directly during staging. For work in a production cluster, however, this approach should not be used.
:::</p>
<p>:::globalTitle{:size="sm" :tag="h4" .mb-5}
What tools are available for bridging?
:::
:::globalParagraph
One of the most well-known tools for bridging is ‘Telepresence’.
:::
:::globalParagraph
For this purpose, Blueshoe has also developed their own tool called ‘Gefyra’. While Gefyra isn’t quite as comprehensive as Telepresence in terms of its functionalities, it is considerably more convenient for developers when creating a bridge. The reason: it focuses on the actual usage. If you want to find out more about this, you can find a detailed <a href="/blog/alternative-to-telepresence-2-gefyra/">comparison of Gefyra and Telepresence</a>{.bs-link-blue} in our blog.
:::</p>
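<p>:::globalParagraph
As a rough sketch of what such a workflow can look like with Telepresence – the deployment name and port are hypothetical, and the exact flags may differ between versions:
:::</p>

```shell
# Connect the local machine to the cluster's network
telepresence connect

# Intercept traffic for the 'my-app' deployment and route it
# to a process listening on local port 8000
telepresence intercept my-app --port 8000

# When done, remove the intercept and disconnect
telepresence leave my-app
telepresence quit
```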
<p><img src="/img/blog/bridge.jpg" alt="bridge">{.object-cover .max-w-full .mb-6}</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
3.5 Technologies that provide developers with a development environment that matches the production environment
:::
:::globalTitle{:size="sm" :tag="h4" .mb-5}
Requirements for development environments
:::
:::globalParagraph
If work needs to be done on software code that is run in a cluster, it’s necessary for the developers in the local development environment to also have a cluster at their disposal.
:::
:::globalParagraph
That’s why it makes sense for the local cluster’s configuration to closely match the cluster on which the software code will later be run. To ensure this, dedicated tools like minikube can be used (see above).
:::
:::globalParagraph
It also makes sense to ‘prefill’ the local development cluster, meaning it’s useful if the existing data in the cluster largely matches the live data. This might include databases, database entries, integrated third systems as well as tools for identity management etc.
:::
:::globalParagraph
So effectively, we have tools that allow for clusters to be created on the local development environment of individual developers. These tools ensure that all developers have the same configuration at their disposal. One example of this is minikube (see above).
:::
:::globalParagraph
Other tools allow the developers to edit existing code in the local development cluster and to then ‘try’ this code in the cluster before deployment. Examples of such tools are ‘Telepresence’ and ‘Gefyra’ (see above).
:::
:::globalParagraph
And then there are tools introduced in the following section which allow you to provision local clusters with data and/or third-party systems that are as close to the live system as possible.
:::</p>
<p>:::globalTitle{:size="sm" :tag="h4" .mb-5}
What tools are out there?
:::
:::globalParagraph
Here’s another Blueshoe original: we have developed a tool called ‘Getdeck’ which enables the fast and simple provision of virtual Kubernetes clusters to developers. With Getdeck's Shelf feature you can prepare fully configured virtual clusters that have all configurations of your production system available and can simply be picked off the shelf and used by developers. Want to find out more? Then head to the Getdeck website. We strongly believe in our tool and use it in our work on a daily basis. Feel free to book an appointment with us and we will be more than happy to tell you more about Getdeck.
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
3.6 Technologies that ensure the quality of the code
:::
:::globalTitle{:size="sm" :tag="h4" .mb-5}
Advantages of CI/CD
:::
:::globalParagraph
Back in the day when software was only made available via floppy disks or CD-ROMs, a developer’s focus used to be on the development of a permanent software version. When an update came around, the user had to go and get a new CD-ROM with the updated code status.
:::
:::globalParagraph
Today, this approach has thankfully been overhauled: software gets developed further all the time and new updates are released on the regular.
:::
:::globalParagraph
This also means that nowadays, individual programme components are regularly merged – so their compatibility has to constantly be checked, too.
:::
:::globalParagraph
If this process is structured in a linear fashion and the compatibility of individual software components only gets checked right at the end, significant problems can arise. For the developers, this can mean pure ‘integration hell’: while the code for a new or updated feature may be completed, it might not interact as desired with other code components due to unforeseen dependencies. The result: nothing works as planned.
:::
:::globalParagraph
A solution for this issue is the CI/CD method. The acronyms stand for:
:::</p>
<p>:::globalParagraph{.mt-4}
<strong>CI - Continuous integration</strong>
:::
:::globalParagraph
Automation process for developers
:::
:::globalParagraph
Individual developers’ code changes are regularly merged, which is particularly beneficial since it allows for a significantly earlier detection of incompatibilities. More on the subject can be found in this article about CIs and CDs.
:::</p>
<p>:::globalParagraph{.mt-4}
<strong>CD - Continuous Delivery</strong>
:::
:::globalParagraph
Code changes are tested automatically
:::
:::globalParagraph
<strong>CD - Continuous Deployment</strong>
:::
:::globalParagraph
Approval process of code changes when made available to the end users, i.e. the deployment
:::
:::globalParagraph
Code changes are subject to automated tests and are made available in repositories like GitHub. This also serves as a check to see how the new code interacts with the existing code in the live system.
:::
:::globalParagraph
There are now special tools that make the CD process available specifically to software that is to be run in a Kubernetes cluster. So these are testing tools that are explicitly designed for a Kubernetes environment.
:::</p>
<p>:::globalTitle{:size="sm" :tag="h4" .mb-5 .mt-4}
The Argo CD tool
:::
:::globalParagraph
At Blueshoe, we use the tool Argo CD. Argo CD is a Kubernetes controller that monitors a running application at all times. It constantly compares the live status of a code with a certain desired status, as specified in a Git repository (the new software code can be included here, too). Argo CD can then automatically rectify any deviations or visualise the deviations for the developers so that they can quickly be rectified manually. If you want to find out more about this subject, head over to the Argo CD website.
:::</p>
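<p>:::globalParagraph
For illustration, the desired status Argo CD watches is itself declared as a Kubernetes resource. The repository URL, paths and names below are placeholders.
:::</p>

```yaml
# Minimal Argo CD Application – repository URL and names are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-deployment.git
    targetRevision: main
    path: manifests          # directory containing the desired yaml state
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      selfHeal: true         # automatically rectify deviations from the Git state
```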
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
3.7 Technologies for secret management
:::
:::globalTitle{:size="sm" :tag="h4" .mb-5}
Keeping data safe and encrypted?
:::
:::globalParagraph
Even small projects require certain data that have to remain secret and should only be accessible to those people or apps that genuinely require this data. Such data might include passwords for the authorisation of other services (database, <a href="/our-services/api-development-company/">API</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} etc.) or the keys for the encryption of stored data. Since these should not end up in the wrong hands, they must not be written in the versioned Kubernetes resources (Kustomize manifests, Helm charts etc.) without encryption (plaintext).
:::</p>
<p>:::globalTitle{:size="sm" :tag="h4" .mb-5}
What tools are out there?
:::
:::globalParagraph
The Secrets plugin for Helm encrypts values in the Helm yaml files locally using a tool like Mozilla SOPS and a key that doesn’t live in the repository and is made available to the operator by other means. Only the encrypted secrets are then versioned. When using the charts, the plugin decrypts these values and thereby delivers the secret data to the cluster.
:::
:::globalParagraph
Bitnami Sealed Secrets acts in a similar way but encrypts the secret data within the cluster and generates its own objects, the type being SealedSecrets. These can be versioned and, when the resources are being used, they can be decrypted by an operator and turned into ‘real’ Kubernetes secrets.
:::
:::globalParagraph
Examples of other tools or technologies that can be used solo or in combination with the previously mentioned tools are HashiCorp Vault, Azure Key Vault and AWS Secrets Manager.
:::</p>
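<p>:::globalParagraph
Regardless of which tool is chosen, the end result in the cluster is a plain Kubernetes Secret. Creating one manually looks like this – the names and values are placeholders, and in a versioned setup the values would of course never appear in plaintext like here:
:::</p>

```shell
# Create a Secret directly in the cluster (placeholder values)
kubectl create secret generic db-credentials \
  --from-literal=username=my-app \
  --from-literal=password='not-a-real-password'

# Inspect it – values are stored base64-encoded, not encrypted
kubectl get secret db-credentials -o yaml
```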
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
3.8 Technologies for monitoring, logging and metrics collection
:::
:::globalParagraph
In a system with many different elements, you need to stay on top of things. To ensure this, it makes sense to gather all logs and other data that provide information about each individual component’s status in one central place and to structure this data in a clear manner. Examples of tools that are used in this context are <strong>Prometheus</strong>, <strong>Open Telemetry</strong>, <strong>Grafana</strong> and <strong>Logstash</strong>.
:::</p>
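<p>:::globalParagraph
As a small example of what such central collection looks like in practice, a minimal Prometheus scrape configuration might resemble the following sketch – the job name and target address are placeholders.
:::</p>

```yaml
# Minimal prometheus.yml fragment – job name and target are placeholders.
scrape_configs:
  - job_name: my-app
    scrape_interval: 15s     # pull metrics from the target every 15 seconds
    static_configs:
      - targets: ["my-app.default.svc:8000"]
```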
<p><img src="/img/blog/technology-ausserhalb-cluster.jpg" alt="technology_ausserhalb-_luster">{.object-cover .max-w-full .mb-6}</p>
<p>:::globalTitle{:size="lg" .mb-5}
4 Cloud-native development
:::
:::globalParagraph
Cloud-native development describes a software development approach that focuses on designing applications that are ready to be used in the cloud right from the start (Gitlab). It therefore makes sense for the actual development to take place in the future cloud environment as much as possible.
:::
:::globalParagraph
With our in-house Blueshoe tech stack consisting of Gefyra and Getdeck, we have made a significant contribution to making this process more efficient and effective for entire development teams.
:::
:::globalParagraph
With that being said, we cannot ignore that there are competitors to our own products that are definitely worth mentioning, such as Okteto and Skaffold.
:::
:::globalParagraph
And still, we think our products are the real deal. They offer exactly what development teams require, have been thoroughly tested and keep getting integrated into other tools: Gefyra, for example, has now become its own Docker Desktop extension.
:::
:::globalParagraph
Sparked your curiosity about our products? Keen to find out more? Go on then, give us a shout – we can’t wait to tell you more!
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Gefyra</category>
            <category>Project Management</category>
            <enclosure url="https://blueshoe.io/img/blog/kubernetes_gefyra_getdeck.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Kubernetes Community Days Munich 2023]]></title>
            <link>https://blueshoe.io/blog/kubernetes-community-days-munich-2023</link>
            <guid>https://blueshoe.io/blog/kubernetes-community-days-munich-2023</guid>
            <pubDate>Tue, 23 May 2023 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>In October 2022, the Kubernetes Community Days Munich (KCD Munich) made its debut, leaving an indelible mark on the vibrant Kubernetes and cloud native ecosystem. With its resounding success, KCD Munich established itself as a premier event for knowledge sharing, networking, and community building.</p>
<p><img src="/img/blog/bildschirmfoto.jpg" alt="my image">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalParagraph
The overwhelming response and enthusiasm from participants propelled the organizers to plan an even grander edition this year, scheduled to take place on July 17th and 18th, 2023. Let's delve into what makes KCD Munich 2023 an unmissable event for anyone interested in Kubernetes and the cloud native sphere.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Building on the Success of KCD Munich 2022
:::
:::GlobalParagraph
KCD Munich 2022 was a triumph, attracting more than 200 participants who immersed themselves in 19 engaging talk sessions. The event ended on a high note with the KCD Bowling community event, where the community bonded over strikes, spares, and shared passions. The success of the inaugural event laid the foundation for an even more remarkable KCD Munich in 2023.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Bigger, Better, and Packed with Opportunities
:::
:::GlobalParagraph
KCD Munich 2023 promises to be a significant step forward, featuring an expanded lineup of activities and a larger community of participants. With over 400 attendees expected, the event will be a melting pot of industry experts, enthusiasts, and curious minds, fostering valuable connections and meaningful conversations.
:::
:::GlobalParagraph
The conference will boast 36 sessions delivered by some of the brightest minds in the Kubernetes and cloud native space, providing attendees with deep insights and practical knowledge. You will also find our very own Robert Gutschale holding a session on: “Effortlessly providing ephemeral cloud-based K8s clusters with Getdeck”
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Hands-On Workshops and Expert Guidance
:::
:::GlobalParagraph
Recognizing the value of interactive learning, KCD Munich 2023 will feature a range of hands-on workshops. These workshops will give attendees the opportunity to dive deep into practical exercises, gain practical experience, and develop valuable skills. By engaging with industry-leading speakers and experts, participants can seek guidance on overcoming challenges faced in their day-to-day business operations. Whether you're a beginner or an experienced Kubernetes user, KCD Munich 2023 offers a unique chance to enhance your expertise.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
New Venue and Community-Building Activities
:::
:::GlobalParagraph
Responding to the growing demand, KCD Munich 2023 will be held at the spacious and suitable location of the SmartVillage in Bogenhausen, Munich. The new venue will accommodate the larger crowd, ensuring everyone can comfortably engage in sessions, workshops, and networking activities. Additionally, the organizers are once again renting out an entire bowling alley for a memorable community event, facilitating informal conversations, and strengthening connections among attendees.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Don't Miss Out on KCD Munich 2023
:::
:::GlobalParagraph
If you have even the slightest interest in the Kubernetes or cloud native sphere, KCD Munich 2023 is an event you cannot afford to miss. Seize the opportunity to connect with like-minded individuals, exchange ideas, and be at the forefront of industry trends. Whether you're looking to learn new techniques, discover innovative solutions, or seek hands-on advice, KCD Munich 2023 offers a rich ecosystem of knowledge and collaboration.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Get Your Tickets Today
:::
:::GlobalParagraph
Secure your spot at KCD Munich 2023 by visiting the official event page at <a href="https://community.cncf.io/events/details/cncf-kcd-munich-presents-kcd-munich-2023/">https://community.cncf.io/events/details/cncf-kcd-munich-presents-kcd-munich-2023/</a>{.bs-link-blue :target="_blank"}.
:::
:::GlobalParagraph
Don't delay, as tickets are expected to sell out quickly due to high demand. Join us at KCD Munich 2023, and let's shape the future of Kubernetes and cloud native technologies together!
:::
:::GlobalParagraph
The organizing team (Liquid Reply, white duck and blueshoe) is looking forward to welcoming you in Munich!
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Conclusion
:::
:::GlobalParagraph
The Kubernetes Community Days Munich is back, bigger and better than ever! With an expanded program, a larger venue, and a vibrant community, KCD Munich 2023 promises to be a blast!
:::</p>]]></content:encoded>
            <category>Team Blueshoe</category>
            <enclosure url="https://blueshoe.io/img/blog/bildschirmfoto.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Kubernetes explained for non-developers]]></title>
            <link>https://blueshoe.io/blog/kubernetes-explained-for-non-developers</link>
            <guid>https://blueshoe.io/blog/kubernetes-explained-for-non-developers</guid>
            <pubDate>Thu, 26 Jan 2023 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Kubernetes is currently the big thing in IT. But even developers struggle with it at times. And it’s infinitely more difficult for non-developers. But what can Kubernetes do exactly? What’s the difference between individual Kubernetes service providers? And what are the benefits of Kubernetes?</p>
<p>We’re going to look at these questions and provide a broad overview of Kubernetes and related subjects.</p>
<p><img src="/img/blog/kubernetes-explained-for-non-developers.jpg" alt="Kubernetes explained for non-developers">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="lg" .mb-5}
What is Kubernetes?
:::
:::globalParagraph
Kubernetes is not a service offered by individual providers. Instead, Kubernetes is an open-source technology that enables the management and orchestration of applications packaged in containers.
:::
:::globalParagraph
In theory, Kubernetes can be downloaded for free on GitHub and then be made available on local servers or publicly (e.g. for a client).
:::
:::globalParagraph
In addition to this, there are also fee-based services that use Kubernetes as an open-source technology and provide further services on top of this that simplify Kubernetes and expand its potential. The most well-known examples for this are Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE) and Amazon Elastic Kubernetes Service (EKS). Most developers probably make use of such services.
:::
:::globalParagraph
With all these services, Kubernetes is still free to use – but the cloud resources and management interfaces supplied by the providers are not. The management costs for EKS, AKS and GKE are usually pretty low. However, the computing and storage costs charged by these services for cloud resources can add up very quickly.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Back to the roots: monolith vs. microservices
:::
:::globalParagraph
Put very simply, software development can be done via two different approaches: the monolithic approach and the building of microservices.
:::
:::globalParagraph
In the case of monoliths, all relevant components are included in one application. With microservices, on the other hand, there is an independent application for every component which only works on one specific part of the task. If one microservice requires the input of another microservice in order to get its job done, the respective microservices communicate with each other via interfaces. One advantage of microservices compared with a monolithic architecture: if one microservice malfunctions, it doesn’t necessarily make the entire system inoperable. However, neither of the two options is superior. At Blueshoe, we mostly use microservices for our work as they’re simply more beneficial for our purposes.
:::
:::globalParagraph
You don’t need to be using microservices to use Kubernetes. You can just as well employ Kubernetes to operate monoliths.
:::
:::globalParagraph
However, the software application will inevitably have to be packaged in a container if you want to use Kubernetes – and that’s possible with both monoliths and microservices.
:::</p>
<p><img src="/img/blog/kubernetes-explained-for-non-developers-1.jpg" alt="monolith vs. microservices">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="md" .mb-5}
Back to the roots: container
:::
:::globalParagraph
In order to grant non-developers a brief insight into why containers are used, let’s have a quick look into this – we’ll keep it simple!
:::
:::globalParagraph
Software containers can literally be regarded as just that – containers. They form a predefined environment within which the code can be run. A container therefore does not only contain the software – it also provides the opportunity to preconfigure the environment in which the software will be run (i.e. the container). Before the dawn of containerised software, the software always had to be run in various different environments, for example on different computers. So you didn’t just face the challenge of the software having to be free of errors, but every environment also had to be configured exactly the same way. With software containerisation, the container itself is the environment in which the software is run. So the software can be run on different servers without every single server having to be configured individually.
:::
:::globalParagraph
The most well-known provider that allows for the software packaging in containers is Docker. That's why the term 'Docker container' is now commonly used – and in fact, it's often used synonymously with 'container'. In order to use Kubernetes, it's necessary for the software to be packaged in containers. It doesn't matter what technology is used to achieve this, though.
:::
:::globalParagraph
The size of the individual containerised applications is equally irrelevant (see above: monolithic architecture vs microservices) – Kubernetes can be used with both approaches.
:::
:::globalParagraph
The software that has been packaged in a container is stored in a specified place – typically a registry – as a so-called 'Docker image'. Whenever the software is run, this image is referenced. This means the software can be run in several instances at once, with each instance referencing the same image.
:::</p>
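To make the idea of an image a little more concrete, here is a sketch of a minimal container definition, as used with Docker. This is a hypothetical example – the base image, the file `app.py` and the paths are assumptions, not taken from any specific project:

```dockerfile
# Hypothetical minimal Dockerfile: it packages a small Python
# application together with its preconfigured environment.
FROM python:3.12-slim

# Copy the application code into the image
COPY app.py /app/app.py

# The command that runs when a container is started from this image
CMD ["python", "/app/app.py"]
```

Building this file produces an image; every container started from it runs in exactly the same environment, no matter which server it lands on.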
<p>:::GlobalPodcastSection{:videoId="tyvE9VlSWkE" :videoPosition="left" .mb-6}
::::GlobalPreTitle{:color="text-bs-green" .mb-3}
OUR KUBERNETES PODCASTS
::::
::::GlobalTitle{:tag="h3" .mb-6}
Tools for the Craft: Navigating the Kubernetes ecosystem
::::
::::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert are talking in depth about the ins and outs of local Kubernetes development and also provide some real coding examples.
::::
::::globalParagraph{:font-size="lg" }
More editions of our podcast can be found here:
::::
::::GlobalButton{:url="/podcast/" :label="Show more" :color="green"}
::::
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
What exactly Kubernetes does
:::
:::globalParagraph
The open-source technology Kubernetes allows the management of containers within a defined environment. This environment needs to be clearly separated from its surroundings: it forms a so-called cluster, which can run on a local system or be created in a public cloud, for example (client access can be restricted).
:::
:::globalParagraph
A cluster is a combination of different subcomponents which are required to run the Kubernetes technology – nodes and pods, for example. A node essentially acts as a server that runs the software application. In order for a software application to be run, the software needs to be packaged in a container and made available in the node (more about this later).
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
"Server vs. node" or: "pet vs. cattle"
:::
:::globalParagraph
Without Kubernetes, software is run directly on the server. In this case, the software is only available on this server. If the server can’t be reached, the software also cannot be run anymore.
:::
:::globalParagraph
Apart from the maintenance of the software itself, the maintenance of the server is also crucial with this solution. The server is like a pet: it needs looking after, requires attention and care, it’s meant to live as long as possible and hopefully, it doesn’t get ill. This is all to ensure that the server can deliver the software reliably.
:::
:::globalParagraph
In contrast, you could regard a node as an anonymous farm animal – much like cattle. Just a number, no name, no face. If a farm animal dies, it’s simply replaced by another one without much fuss. A large number of farm animals become a herd of anonymous individuals. Similarly, a group of nodes becomes a cluster.
:::
:::globalParagraph
If software is run on an individual server and the server cannot be reached anymore, the software effectively cannot be run either. The service can then no longer be used by the user. If the node, on which a containerised application is run, cannot be reached anymore, the container can be transferred to a different node in no time. This way, the software can continue to run.
:::
:::globalParagraph
This transfer of a container from one node to another is done automatically by Kubernetes. So going back to our example, Kubernetes is like the supervisor that makes sure there’s always one farm animal available to carry the application. Kubernetes is also often referred to as a service that orchestrates containers by assigning containers to available nodes – much like a conductor of an orchestra.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Kubernetes alternatives
:::
:::globalParagraph
Kubernetes isn’t the only service that orchestrates the execution of containerised software on virtual machines (the nodes). Apart from Kubernetes, there are a number of alternatives: Docker Swarm, Nomad or Kontena, among others. You can find comparisons of these and other applications with Kubernetes in various online resources.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Why use Kubernetes?
:::
:::globalParagraph
Kubernetes undeniably offers many benefits – but along with those come a few drawbacks, too. One ought to consider carefully whether or not to employ Kubernetes.
:::
:::globalParagraph
Here are some arguments for and against the use of Kubernetes:
:::</p>
<p><img src="/img/blog/kubernetes-explained-for-non-developers-2.jpg" alt="monolith vs. microservices">{.object-cover .w-full .mb-5}</p>
<p>:::globalParagraph
With every benefit of Kubernetes, you’ll also find a drawback. So there’s no easy answer to the question ‘Should I use Kubernetes?’. Every reply is as unique as every software or every one of Blueshoe’s clients.
:::
:::globalParagraph
At Blueshoe, we have decided to make Kubernetes our go-to solution for orchestrating applications in containers. And that’s for a number of reasons:
:::</p>
<p>:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>The projects we manage usually revolve around applications that are highly complex – the use of Kubernetes pays off in the case of such complexity.</li>
<li>Our developers simply cannot be equally proficient in all alternative software used for the orchestration of containerised software. We therefore decided to become Kubernetes experts. Our knowledge can now be transferred to many other projects. The ramp-up phase was extensive, but we’re now benefiting from it.</li>
<li>Kubernetes is supported by (almost) all cloud providers – unlike some alternative systems for the orchestration of containerised software. This means that there are no limitations when our clients choose their provider.
:::</li>
</ul>
<p>:::globalTitle{:size="md" .mb-5}
Should you host Kubernetes yourself or use a managed service?
:::
:::globalParagraph
As mentioned above, Kubernetes is an open-source technology. Its use is basically (licence-)free, and it can be hosted completely independently. Cloud providers offer managed services – AKS, GKE, EKS and others – that are designed to make the use of Kubernetes easier, and it is these services that the providers charge for.
:::
:::globalParagraph
So what do you do? Do you host yourself or pay money for a managed service?
:::
:::globalParagraph
When making your choice, two factors ought to be considered. Firstly, which ‘hardware’ and which services are supplied by the provider.
:::
:::globalParagraph
Secondly, one must not forget that there are staff costs to be paid if you host a Kubernetes cluster yourself – and these management costs aren’t exactly negligible. This article (date: 19/05/2022 – 18:05) clearly illustrates that if a cluster needs to be managed 24/7, at least 4 full-time developers have to be employed. This not only ensures round-the-clock management, but also covers staff shortages due to holidays and sick leave. When using a managed service, you still need a developer to oversee it – but in this case, it is assumed that one full-time position is sufficient.
:::
:::globalParagraph
What’s more, bear in mind that the individual providers’ cost structure for nodes, computing power etc. can sometimes be very opaque. The costs you can expect mostly depend on the amount of information to be processed and the computing power required for it. The pricing is often presented in a way that makes it hardly possible for people who aren’t familiar with the technical development and/or implementation of software (technical departments, for example) to estimate the necessary capacities and resulting costs. The costs may also vary significantly with changing user traffic, for example, as traffic peaks require considerably more computing capacity. You should therefore always do a rough cost estimate for a managed service, allow for some buffer in your budget, and include a developer who knows the subject matter in these calculations.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
What do you need to build Kubernetes?
:::
:::globalParagraph
Let’s have a closer look at Kubernetes itself in this second part. We’ll explain the key terms and clarify how exactly they relate to one another.
:::
:::globalParagraph
Developer or no developer: If you’re checking out Kubernetes for the first time, you can find a great initial overview of the specifics here (date: 07/07/2022 – 8.30). In the following, we will mostly refer to this source.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Cluster
:::
:::globalParagraph
Kubernetes can only be used within a specified environment. The cluster itself cannot be equated with Kubernetes. But as a technology, Kubernetes offers the possibility to create a cluster that sections off various elements from its environment.
:::
:::globalParagraph
So in the context of Kubernetes, a cluster is a combination of different components required for the use of Kubernetes, clearly separated from its environment. New components (like new nodes, for example) have to be explicitly added to the cluster by a developer and cannot join it automatically.
:::</p>
<p><img src="/img/blog/kubernetes-explained-for-non-developers-3.jpg" alt="Cluster">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="md" .mb-5}
Node
:::
:::globalParagraph
A node is a virtual machine or a physical computer. It’s a part of the cluster which runs the software that is packaged in containers. The containerised software itself is placed on the node via pods.
:::
:::globalParagraph
The node itself consists of smaller applications in order to carry out this work:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li><strong>Kubelet</strong>: Every node contains a so-called Kubelet which manages the node autonomously and communicates with the control plane via <a href="/our-services/api-development-company/">API</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.</li>
<li><strong>Tools to operate the containers within the node</strong>: The node provides space for the containerised software, but it may also become necessary for work to be carried out on the software (starting it, for example). For this, the node has a so-called container runtime – a tool that can access and manage the containers within the node (containerd or Docker, for example).
:::
:::globalParagraph
Additionally, nodes can also contain pods, which in turn contain containerised software. But more about that later.
:::</li>
</ul>
<p>:::globalTitle{:size="md" .mb-5}
Control Plane
:::
:::globalParagraph
The control plane is the core of Kubernetes – the executive Kubernetes instance (like a control centre) that coordinates all activity within the cluster. A control plane is also a node, but with the specific task to coordinate the cluster.
:::
:::globalParagraph
The control plane provides an <a href="/our-services/api-development-company/">API</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} to communicate with the other cluster components.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Pods
:::
:::globalParagraph
The term ‘pod’ is probably one of the most used words in the Kubernetes vocabulary. Sadly, though, pods have nothing to do with Star Wars – they are in fact the smallest units in the Kubernetes universe.
:::
:::globalParagraph
As the smallest autonomous units in the cluster, pods effectively combine several elements that are placed in the cluster during the deployment process. Essentially, they’re like pea pods, and they can enclose elements like the following:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>Separated storage units, for example volumes</li>
<li>Cluster-specific IP addresses</li>
<li>Information about how containers are operated, for example image versions of the application, information about ports etc.
:::
:::globalParagraph
Every pod has its own IP address which is only known within the cluster. This means that the individual pods can only be accessed within the cluster and cannot be controlled from the outside.
:::
:::globalParagraph
A node can contain a number of pods. Every pod is only ever allocated to one node and always remains on this node until the node is deleted or ‘dies’ due to errors. If this happens, the pod ‘dies’, too. In this case, however, the pod can be recreated on a different node via the deployment process (see below) and therefore enjoys ‘eternal life’.
:::</li>
</ul>
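The elements listed above all appear in a pod definition. The following manifest is a hedged sketch – the name `my-app` and the image reference are placeholders, not taken from a real project:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                # hypothetical pod name
  labels:
    app: my-app               # label that a service selector can match
spec:
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0   # referenced container image
      ports:
        - containerPort: 8080                  # port the container listens on
```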
<p><img src="/img/blog/kubernetes-explained-for-non-developers-41.jpg" alt="Cluster">{.object-cover .max-w-full .mb-5}</p>
<p><img src="/img/blog/kubernetes-explained-for-non-developers-4.jpg" alt="Node">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="md" .mb-5}
ReplicaSets and the deployment process
:::
:::globalParagraph
We’ve now covered the individual components of the cluster: the node, the control plane and pods. But how do these components interact with one another, and how do they help contribute to running the developers’ code correctly?
:::
:::globalParagraph
That’s where the deployment process comes in. This is the stage when the developers place the code in the cluster. During the deployment process, the developer specifies how often individual software components should be run.
:::
:::globalParagraph
It is also decided which software should be run in general. The software is packaged in a container image, and the image is placed in a specific location (see above). If it is decided during deployment that the software should only run once, the replica count of the ReplicaSet equals 1, which means one pod is established for this task. If the software should be run multiple times, the replica count might equal 3, for example. Three pods will then be established by the ReplicaSet, all referencing the same container image.
:::
:::globalParagraph
If the replica count equals 3 and the ReplicaSet is therefore linked to three pods, these three pods can even run on the same node. While all three pods run the same software via the reference to the same container image, they are indeed different pods. It is therefore still correct that every pod only exists once in the cluster and is only ever assigned to one node.
:::
:::globalParagraph
A deployment can cover multiple ReplicaSets which either refer to different or to one and the same software.
:::
:::globalParagraph
As mentioned previously, the deployment process offers the benefit that, should a node be deleted or malfunction, for example, the pod along with its contained software can be recreated on a different node. How a pod is to be run within the cluster is described during deployment. The step of assigning the pod to a node is then carried out automatically by the control plane’s scheduler.
:::
:::globalParagraph
With Kubernetes, the deployment is started and managed via the command-line interface kubectl. Kubectl is less relevant for non-developers, but it’s still worth mentioning the term at least once at this point.
:::</p>
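What such a deployment can look like as a manifest, sketched with a placeholder name and a placeholder image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # replica count: the ReplicaSet keeps 3 pods running
  selector:
    matchLabels:
      app: my-app
  template:                    # pod template: how each pod is to be run
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0
```

Applying this with kubectl creates one ReplicaSet, which in turn creates three pods that all reference the same container image.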
<p><img src="/img/blog/kubernetes-explained-for-non-developers-5.jpg" alt="ReplicaSets">{.object-cover .max-w-full .mb-5}
<img src="/img/blog/kubernetes-explained-for-non-developers-6.jpg" alt="ReplicaSets">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="md" .mb-5}
Service, label and selector
:::</p>
<p><img src="/img/blog/kubernetes-explained-for-non-developers-7.jpg" alt="ReplicaSets">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
A service in Kubernetes has nothing to do with a ‘managed Kubernetes service’ (see above); it is in fact a part of the cluster. A service can be thought of as an abstraction that groups pods together in a logical sense and defines how they can be reached. The pods do not interact with each other directly. Instead, pods are assigned labels, and each service includes a complementary selector. Pods can then reach one another via the labels on the pods and the selectors of the service.
:::</p>
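Sketched as a manifest with placeholder names, such a service matches every pod that carries the label `app: my-app`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # the 'key': selects pods labelled app=my-app
  ports:
    - port: 80         # port the service exposes inside the cluster
      targetPort: 8080 # port the selected pods listen on
```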
<p>:::globalTitle{:size="lg" .mb-5}
The Cluster: a summary
:::
:::globalParagraph
In a nutshell, a cluster consists of several nodes, with one of these nodes acting as the control plane.
:::
:::globalParagraph
The software that is to be run in the cluster is packaged in a container and placed in a specific location as a container image. So the software isn’t actually placed in the cluster itself – the cluster only ever references the container image.
:::
:::globalParagraph
During the deployment process, it is defined which software is to be run in what sort of manner (how often, for example). To do this, one or several ReplicaSets are created. The deployment process is carried out on the control plane.
:::
:::globalParagraph
After the deployment, the pods that were defined during the deployment process are created in the cluster and distributed to the different nodes.
:::
:::globalParagraph
Services in the cluster have selectors, and pods carry complementary labels. It’s possible for the pods to reach each other via this lock-and-key principle. A pod can only be reached through a service if it carries a matching label – if it doesn’t have one, it acts on its own.
:::</p>
<p><img src="/img/blog/kubernetes-explained-for-non-developers-8.jpg" alt="Cluster">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="md" .mb-5}
Cluster size
:::
:::globalParagraph
The advantage of Kubernetes is that the system can recognise when a node (= virtual machine) isn’t operational anymore. If the software was running on one individual server, you wouldn’t be able to execute it anymore at this stage. Kubernetes, however, can automatically assign the software within the container to another, functional node. For this reason, a cluster in the production system should contain at least 3 nodes. One of these would be the necessary control plane, while the other two nodes would contain the software to be run in pods. A pod would only run on one node – the remaining node would ‘only’ be on standby in case the other malfunctioned.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Conclusion
:::
:::globalParagraph
Kubernetes is open-source software that can be used by anyone, free of charge. It’s a system for running containerised software that offers a lot of benefits compared with operating autonomous servers.
:::
:::globalParagraph
What’s more, providers like AKS, GKE or EKS offer further services associated with Kubernetes which are designed to make the administration easier. These services come at a price, though, and it’s not always that easy to keep track of the various charges.
:::
:::globalParagraph
Kubernetes is a technology that consists of a set of individual components. A cluster is required for its operation – and this can only be created once nodes, pods and the control plane interact with one another.
:::
:::globalParagraph
Kubernetes is not a miracle cure that is equally suited to every kind of software. Whether Kubernetes is suitable for the operation of a specific software or for the entire organisation needs to be assessed on a case-by-case basis. This also applies to the decision of whether or not the services by providers like AKS, GKE or EKS should be purchased.
:::
:::globalParagraph
We hope we were able to give you a good overview of what Kubernetes is, explain how it differs from other technologies for running software and – tragically – to confirm that a Kubernetes pod has nothing to do with podracing from the Star Wars films.
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Docker</category>
            <category>Project Management</category>
            <enclosure url="https://blueshoe.io/img/blog/kubernetes-explained-for-non-developers.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Kubernetes Logging with Promtail, Loki and Grafana]]></title>
            <link>https://blueshoe.io/blog/kubernetes-logging-with-promtail-loki-and-grafana</link>
            <guid>https://blueshoe.io/blog/kubernetes-logging-with-promtail-loki-and-grafana</guid>
            <pubDate>Wed, 24 Aug 2022 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Reading logs from multiple Kubernetes Pods using kubectl can become cumbersome fast. What if there was a way to collect logs from across the cluster in a single place and make them easy to filter, query and analyze? Enter Promtail, Loki, and Grafana.</p>
<p><img src="/img/blog/kubernetes-logging-with-promtail-loki-and-grafana.jpg" alt="my image">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="lg" .mb-5}
Introduction (what is this, why do we need it)
:::
:::globalParagraph
By default, logs in Kubernetes only last a Pod’s lifetime. In order to keep logs for longer than a single Pod’s lifespan, we use log aggregation. This means we store logs from multiple sources in a single location, making it easy for us to analyze them even after something has gone wrong. While the ELK stack (short for Elasticsearch, Logstash, Kibana) is a popular solution for log aggregation, we opted for something a little more lightweight: Loki.
:::
:::globalParagraph
Developed by Grafana Labs, ‘Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus’. Loki allows for easy log collection from different sources with different formats, scalable persistence via object storage and some more cool features we’ll explain in detail later on. For now, let’s take a look at the setup we created.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Deck
:::
:::globalParagraph
If you just want to take a quick look around, you can use Deck to set up this stack on your machine with one command. After installing Deck, you can run:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-bash">$ deck get https://raw.githubusercontent.com/Getdeck/wharf/main/loki/deck.yaml
</code></pre>
<p>:::
:::globalParagraph
Follow the instructions that show up after the installation process is complete in order to log in to Grafana and start exploring.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Setup
:::
:::globalParagraph
In this article, we’ll focus on the Helm installation. Grafana Labs offers a bunch of other installation methods.
:::
:::globalParagraph
In Grafana’s Helm chart repository, you’ll find 5 charts related to Loki. <em>Loki-canary</em> deploys a small auditing tool that continuously writes test log entries and verifies they arrive in Loki. <em>Loki-distributed</em> installs the relevant components as microservices, giving you the usual advantages of microservices, like scalability, resilience etc., while allowing you to configure them independently of one another. <em>Loki-simple-scalable</em> is similar – however, some of the components are always on, taking away a number of the configuration possibilities. The chart named <em>Loki</em> deploys a single StatefulSet to your cluster containing everything you need to run Loki. The last of the bunch is <em>loki-stack</em>, which deploys the same StatefulSet as the Loki chart in addition to Promtail, Grafana and some others. For our use case, we chose the Loki chart. In addition to Loki itself, our cluster also runs Promtail and Grafana. In the following section, we’ll show you how to install this log aggregation stack to your cluster!
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Prerequisites
:::
:::globalParagraph
To follow along, you need a Kubernetes cluster that you have kubectl access to and Helm needs to be set up on your machine.
:::
:::globalParagraph
First of all, we need to add Grafana’s chart repository to our local helm installation and fetch the latest charts like so:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-bash">$ helm repo add grafana https://grafana.github.io/helm-charts
$ helm repo update
</code></pre>
<p>:::</p>
<p>:::globalParagraph
Once that’s done, we can start the actual installation process.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Loki Installation
:::
:::globalParagraph
Let’s start by getting Loki running in our cluster. To configure your installation, take a look at the values the Loki Chart accepts via the ‘helm show values’ command, and save that to a file.
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-bash">$ helm show values grafana/loki > loki-values.yaml
</code></pre>
<p>:::
:::globalParagraph
We won’t go over the settings in detail, as most values can be left at their defaults. However, you should take a look at the persistence key in order to configure Loki to actually store your logs in a PersistentVolume.
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">persistence:
    enabled: true
    accessModes:
    - ReadWriteOnce
    size: 10Gi
    annotations: {}
</code></pre>
<p>:::
:::globalParagraph
Once you’re done adapting the values to your preferences, go ahead and install Loki to your cluster via the following command:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-bash">$ helm upgrade --install loki grafana/loki -n loki -f loki-values.yaml
</code></pre>
<p>:::
:::globalParagraph
After that’s done, you can check whether everything worked using kubectl:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-bash">$ kubectl get pods -n loki
NAME                            READY   STATUS    RESTARTS   AGE
loki-0                          1/1     Running   0          1h
</code></pre>
<p>:::
:::globalParagraph
If the output looks similar to this, congratulations! That’s one out of three components up and running.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Promtail Installation
:::
:::globalParagraph
Next, let’s look at Promtail. Promtail has 3 main features that are important for our setup:
:::
:::GlobalBlock{.ol-decimal .mb-5}</p>
<ol>
<li>It discovers targets (Pods running in our cluster)</li>
<li>It labels log streams (attaching metadata like pod/filenames etc. for easier identification later on)</li>
<li>It ships them to our Loki instance
:::</li>
</ol>
<p>:::globalParagraph
To install it, we first need to get a values file, just like we did for Loki:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-bash">$ helm show values grafana/promtail > promtail-values.yaml
</code></pre>
<p>:::
:::globalParagraph
Like for Loki, most values can be left at their defaults to get Promtail working. However, we need to tell Promtail where it should push the logs it collects by doing the following:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-bash">$ kubectl get svc -n loki
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
loki            ClusterIP   10.101.163.181   &#x3C;none>        3100/TCP   1h
</code></pre>
<p>:::
:::globalParagraph
We ask kubectl about services in the Loki namespace, and we’re told that there is a service called Loki, exposing port 3100. To get Promtail to ship our logs to the correct destination, we point it to the Loki service via the ‘config’ key in our values file.
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">config:
 logLevel: info
 serverPort: 3101
 lokiAddress: http://loki:3100/loki/api/v1/push
</code></pre>
<p>:::
:::globalParagraph
Under ‘lokiAddress’, we specify that we want Promtail to send logs to ‘<a href="http://loki:3100/loki/api/v1/push">http://loki:3100/loki/api/v1/push</a>{.bs-link-blue :target="_blank"}’. Note that if Loki is not running in the same namespace as Promtail, you’ll have to use the full service address notation like so: ‘&#x3C;service>.&#x3C;namespace>.svc.cluster.local:&#x3C;port>’. Promtail runs as a DaemonSet and has the following tolerations in order to run on both master and worker nodes.
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
</code></pre>
<p>:::
:::globalParagraph
If you don’t want Promtail to run on your master/control plane nodes, you can change that here.
:::
:::globalParagraph
Now that we set the most important values, let’s get this thing installed!
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code>$ helm upgrade --install promtail grafana/promtail --namespace=loki -f promtail-values.yaml
</code></pre>
<p>:::
:::globalParagraph
Verify that everything worked as expected:
:::
:::BlogCode{.mb-5}</p>
<pre><code>$ kubectl get ds -n loki
NAME       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
promtail   8         8         8       8            8           &#x3C;none>          1h
</code></pre>
<p>:::
:::globalParagraph
You can also take a look at the Pods with the ‘-o wide’ flag to see what node they’re running on:
:::
:::BlogCode{.mb-5}</p>
<pre><code>$ kubectl get pods -n loki -o wide
NAME                            READY   STATUS    RESTARTS   AGE    IP         NODE                     NOMINATED NODE   READINESS GATES
promtail-2j9dj                  1/1     Running   0          1h    1.1.1.1     control-plane-3          &#x3C;none>           &#x3C;none>
promtail-5wjxl                  1/1     Running   0          1h    1.1.1.1     control-plane-1          &#x3C;none>           &#x3C;none>
promtail-9nvps                  1/1     Running   0          1h    1.1.1.1     worker-1                 &#x3C;none>           &#x3C;none>
promtail-brgj2                  1/1     Running   0          1h    1.1.1.1     worker-2                 &#x3C;none>           &#x3C;none>
promtail-cfnff                  1/1     Running   0          1h    1.1.1.1     control-plane-2          &#x3C;none>           &#x3C;none>
promtail-gtt6m                  1/1     Running   0          1h    1.1.1.1     worker-3                 &#x3C;none>           &#x3C;none>
promtail-hnh4z                  1/1     Running   0          1h    1.1.1.1     worker-4                 &#x3C;none>           &#x3C;none>
promtail-r4xsz                  1/1     Running   0          1h    1.1.1.1     worker-5                 &#x3C;none>           &#x3C;none>
</code></pre>
<p>:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Grafana Installation
:::
:::globalParagraph
Last but not least, let’s get an instance of Grafana running in our cluster.
:::
:::globalParagraph
The following values will enable persistence. If you want your Grafana instance to be able to send emails, you can also configure SMTP as shown below: add your SMTP host and ‘from_address’, and create a secret containing your credentials.
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">persistence:
 type: pvc
 enabled: true
 # storageClassName: default
 accessModes:
   - ReadWriteOnce
 size: 10Gi
grafana.ini:
 smtp:
   enabled: true
   host: smtp.smtpserver.io:465
   from_address: grafana@collectallthelogs.io
   skip_verify: true
smtp:
 # `existingSecret` is a reference to an existing secret containing the smtp configuration
 # for Grafana.
 existingSecret: "grafana-smtp"
 userKey: "user"
 passwordKey: "password"
</code></pre>
<p>:::
:::globalParagraph
Once you’re done configuring your values, you can go ahead and install Grafana to your cluster like so:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-bash">$ helm upgrade --install loki-grafana grafana/grafana --namespace=loki -f grafana-values.yaml
</code></pre>
<p>:::
:::globalParagraph
Verify everything went smoothly:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-bash">$ kubectl get pods -n loki
NAME                            READY   STATUS    RESTARTS   AGE
loki-grafana-64b4b79494-8bt7c   1/1     Running   0          1h
</code></pre>
<p>:::
:::globalParagraph
All three components are up and running, sweet! Now that we’re all set up, let’s look at how we can actually put this to use.
:::</p>
<p>:::GlobalPodcastSection{:videoId="tyvE9VlSWkE" :videoPosition="left" .mb-6}
::::GlobalPreTitle{:color="text-bs-green" .mb-3}
OUR KUBERNETES PODCAST
::::
::::GlobalTitle{:tag="h3" .mb-6}
Tools for the Craft: Navigating the Kubernetes ecosystem
::::
::::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert are talking in depth about the ins and outs of local Kubernetes development and also provide some real coding examples.
::::
::::globalParagraph{:font-size="lg" }
More editions of our podcast can be found here:
::::
::::GlobalButton{:url="/podcast/" :label="Show more" :color="green"}
::::
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Usage
:::
:::globalParagraph
Connecting your newly created Loki instance to Grafana is simple: all you need to do is create a data source in Grafana. Under Configuration → Data Sources, click ‘Add data source’ and pick Loki from the list. You’ll be presented with the settings panel shown below, where the only thing you need to configure is the URL of your Loki instance. Since Grafana is running in the same namespace as Loki, specifying ‘<a href="http://loki:3100">http://loki:3100</a>{.bs-link-blue :target="_blank"}’ (Loki’s default HTTP port) is sufficient.
:::
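:::globalParagraph
If you prefer configuration over clicking, the Grafana Helm chart also lets you provision the data source directly in your grafana-values.yaml. A sketch, assuming Loki’s default HTTP port 3100:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Loki
        type: loki
        access: proxy
        url: http://loki:3100
        isDefault: true
</code></pre>
<p>:::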
<img src="/img/blog/kubernetes-logging-with-promtail-loki-and-grafana-1.jpg" alt="Usage">{.object-cover .max-w-full .mb-5}
:::globalParagraph
When you’re done, hit ‘Save &#x26; test’ and voilà, you’re ready to run queries against Loki.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
LogQL
:::
:::globalParagraph
‘LogQL is Grafana Loki’s PromQL-inspired query language. Queries act as if they are a distributed grep to aggregate log sources. LogQL uses labels and operators for filtering.’
:::
:::globalParagraph
With LogQL, you can easily run queries against your logs. You can either run log queries to get the contents of actual log lines, or you can use metric queries to calculate values based on results.
:::
:::globalParagraph
LogQL is well-documented, so we won’t go into detail about every feature, but instead give you some queries you can run against your logs right now in order to get started. Go to the Explore panel in Grafana (${grafanaUrl}/explore), pick your Loki data source in the dropdown and check out what Loki collected for you so far.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Simple Log Query
:::
:::globalParagraph
If you just want logs from a single Pod, it’s as simple as running a query like this:
:::
:::BlogCode{.mb-5}</p>
<pre><code>{pod="loki-0"}
</code></pre>
<p>:::
:::globalParagraph
Grafana will automatically pick the correct panel for you and display whatever your Loki Pod logged.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Errors across a namespace
:::
:::globalParagraph
This query filters logs from a given namespace for lines containing the word ‘error’. It counts them over the range selected in the dashboard and returns the sum, giving you a simple overview of what’s going on in that namespace.
:::
:::BlogCode{.mb-5}</p>
<pre><code>sum(count_over_time({namespace="loki"} |= "error" [$__range]))
</code></pre>
<p>:::</p>
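<p>:::globalParagraph
To see which workloads are producing those errors, the same query can, for example, be grouped by pod:
:::
:::BlogCode{.mb-5}</p>
<pre><code>sum by (pod) (count_over_time({namespace="loki"} |= "error" [$__range]))
</code></pre>
<p>:::</p>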
<p>:::globalTitle{:size="md" .mb-5}
Average response time in a namespace, by path and app
:::
:::globalParagraph
This query is as complex as it will get in this article. It collects logs from a namespace and then applies several neat features LogQL offers, like pattern matching, regular expressions, line formatting and filtering. In the end, you’ll receive the average response time of the apps running in the given namespace within the selected interval, grouped by app label and path, while filtering out the log lines generated by Kubernetes liveness and readiness probes. Note: this exact query works for Django Hurricane’s log format, but you can tweak it by changing the pattern to match your own log format.
:::
:::BlogCode{.mb-5}</p>
<pre><code>avg_over_time({namespace="application"} | pattern "&#x3C;date> &#x3C;time> &#x3C;access> &#x3C;level>     &#x3C;code> &#x3C;method> &#x3C;path> &#x3C;ip> &#x3C;latency>" | regexp "(?P&#x3C;latencyDecimal>[0-9]+\\.[0-9]+ms)" | line_format "{{.latencyDecimal}}" | regexp "(?P&#x3C;latencyClean>[0-9]+\\.[0-9])" | unwrap latencyClean | __error__="" | path!="/alive" | path!="/ready" [$__interval]) by (path, app)
</code></pre>
<p>:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Other Features/Further Reading/Caveats
:::
:::globalParagraph
If you don’t want to store your logs in your cluster, Loki allows you to send whatever it collects to S3-compatible storage solutions like Amazon S3 or MinIO. The log analysing/viewing process stays the same.
:::
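:::globalParagraph
Conceptually, the object storage configuration in your Loki values looks like the sketch below. The exact keys depend on your chart and Loki version, and the bucket URL here is a placeholder:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">loki:
  config:
    storage_config:
      aws:
        # Placeholder credentials and bucket: s3://ACCESS_KEY:SECRET_KEY@REGION/BUCKET
        s3: s3://ACCESS_KEY:SECRET_KEY@eu-central-1/loki-logs
      boltdb_shipper:
        shared_store: s3
</code></pre>
<p>:::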
:::globalParagraph
File system storage does not work when using the distributed chart, as it would require multiple Pods to do read/write operations to the same PV. This is documented in the chart repo, but it’s sadly not mentioned in Loki’s official documentation.
:::
:::globalParagraph
LogCLI is Loki’s CLI tool, allowing you to easily browse your logs from the comfort of your terminal. This requires you to expose your Loki instance via HTTP or to use port forwarding from your cluster to your machine.
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Operations</category>
            <category>Documentation</category>
            <enclosure url="https://blueshoe.io/img/blog/kubernetes-logging-with-promtail-loki-and-grafana.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Kubernetes Scaling in the Google Cloud]]></title>
            <link>https://blueshoe.io/blog/kubernetes-scaling-gke</link>
            <guid>https://blueshoe.io/blog/kubernetes-scaling-gke</guid>
            <pubDate>Wed, 20 Nov 2024 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>In this article, we discuss the possibility of scaling Kubernetes workloads in the Google Cloud simply and sustainably.</p>
<p><img src="/img/blogs/gke-hpa.svg" alt="Kubernetes Scaling in the Google Cloud">{.object-cover .max-w-full .mb-5}</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Scaling the API - too much and too little
:::</p>
<p>:::GlobalParagraph
Kubernetes already makes it fairly simple to scale workloads. For this article, we assume a stateless application - a simple REST API.
:::</p>
<p>:::GlobalParagraph
It runs as a Deployment with 4 replicas (125 mCPU and 250 MiB of memory each) in a GKE Autopilot cluster.
:::</p>
<p>:::GlobalParagraph
<strong>Problem Statement:</strong> At night, these 4 replicas run almost idle. During the day, they sometimes aren't enough.
:::</p>
<p>:::GlobalParagraph
<strong>Solution:</strong> Automatic scaling of services based on their load.
:::</p>
<p>:::GlobalParagraph
<em>In the worst case, the applications are killed under high load because they consume too much memory.</em>
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
How much CPU or memory do my pods need?
:::</p>
<p>:::GlobalParagraph
Before scaling is configured, it is important to find out what the typical metrics for the application to be scaled actually look like. Does it consume a lot of CPU or a lot of memory? Is there another metric that provides insight into the load (e.g. the <em>request queue</em> length)?
:::</p>
<p><img src="/img/blogs/gke-hpa-1.png" alt="GKE Metrics">{.mx-auto .w-1/2}</p>
<p>:::GlobalParagraph{.mb-4}
A quick look at the Google Cloud dashboards of our REST API shows that memory usage fluctuates, while the CPU load is relatively constant from a certain point in time. The blue line shows the actual load, the red line the limits.
Memory usage is clearly more volatile and closer to its limits, so memory will be used as the basis for scaling our application in Kubernetes.
:::</p>
<p>:::GlobalParagraph
Now it is clear which metric to use, but not yet which parameters are needed. The memory load rarely falls below 250 MiB, which means at least 2 pods should always be available.
We rarely, but reliably, approach the capacity limit of the 4 currently available replicas, so with some buffer we set a maximum of 6 replicas for the highest load.
:::</p>
<p>:::GlobalParagraph
<em>Note: A strongly fluctuating memory load indicates problems in the application - in this case, a memory leak in a dependency that cannot be changed.</em>
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Scaling in Kubernetes: the Horizontal Pod Autoscaler
:::</p>
<p>:::GlobalParagraph
The Google Cloud allows both vertical and horizontal scaling of workloads. Vertical scaling means that the available resources (CPU, Memory) of Pods are scaled. Horizontal scaling creates and removes entire Pods of the same Deployment.
:::</p>
<p><img src="/img/blogs/gke-hpa-2.png" alt="Google Cloud Menu">{.mx-auto .w-1/2}</p>
<p>:::GlobalParagraph
The basic parameters are quickly set up - the minimum and maximum of the API scaling are handled with the following two fields:
:::</p>
<p><img src="/img/blogs/gke-hpa-3.png" alt="HPA Configure 1">{.mx-auto .w-1/2}</p>
<p>::GlobalCallAnExpert{:bg="bg-bs-blue"}
#buttonBlockTitle
We can also dynamically scale your apps.
::</p>
<p>:::GlobalParagraph
But what is the baseline for the scaling itself? It's simple - for our example, an "ideal load" per pod is defined. The limit of each pod is set at 250 MB. At a load of 80%, i.e. 200 MB (leaving some buffer), a pod reaches the load limit of the service and a new instance is needed.
:::</p>
<p><img src="/img/blogs/gke-hpa-4.png" alt="HPA Configure Metrics">{.mx-auto .w-1/2}</p>
<p>:::GlobalParagraph{.mb-4}
Since the defined minimum is 2 pods, an additional pod is scaled up as soon as the average memory load exceeds 400 MB in total. If the load falls below that again, the Horizontal Pod Autoscaler (HPA) removes the pod again.
:::</p>
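<p>:::GlobalParagraph
The rule the HPA applies can be sketched in a few lines of Python - the function name and example numbers are our own, the formula itself follows the Kubernetes documentation:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-python">import math

# desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue),
# clamped to the configured minimum and maximum of the autoscaler.
def desired_replicas(current_replicas, current_avg_mb, target_avg_mb=200,
                     min_replicas=2, max_replicas=6):
    desired = math.ceil(current_replicas * current_avg_mb / target_avg_mb)
    return max(min_replicas, min(max_replicas, desired))

print(desired_replicas(2, 260))  # 2 pods averaging 260 MB each -> scale up to 3
print(desired_replicas(3, 120))  # load dropped -> back down to the minimum of 2
</code></pre>
<p>:::</p>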
<p>:::GlobalParagraph
For all Kubernetes experts: of course, the HPA can also be defined as a Kubernetes resource and thus stored as configuration in the cluster - the typical DevOps approach.
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: store-autoscale
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: store-autoscale
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: AverageValue
          averageValue: 200Mi
</code></pre>
<p>:::</p>
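<p>:::GlobalParagraph
Saved as hpa.yaml (the file name is our own choice), the resource above can be applied and observed like this:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-bash">kubectl apply -f hpa.yaml
kubectl get hpa store-autoscale --watch
</code></pre>
<p>:::</p>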
<p>:::GlobalTitle{:size="lg" .mb-5}
Conclusion
:::</p>
<p>:::GlobalParagraph{.mb-8}
With a few clicks - or even a simple Kubernetes resource - costs can be saved and the load peaks of the REST API can be easily handled.
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Operations</category>
            <category>Digitalization</category>
            <enclosure url="https://blueshoe.io/img/blogs/gke-hpa.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[How does local Kubernetes development work?]]></title>
            <link>https://blueshoe.io/blog/local-kubernetes-development</link>
            <guid>https://blueshoe.io/blog/local-kubernetes-development</guid>
            <pubDate>Wed, 07 Jul 2021 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>'Local Kubernetes development' aka 'development of containerised microservices in a local Kubernetes cluster' means that applications are designed and developed for a Kubernetes architecture – i.e. a developer works with a Kubernetes architecture locally. In this blog post, we’ll show you how local Kubernetes development works.</p>
<p><img src="/img/blog/local-kubernetes-development.jpg" alt="How does local Kubernetes development work?">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="lg" .mb-5}
Prerequisites
:::
:::globalParagraph
‘Local Kubernetes development’ – or rather, ‘How can I stuff as <strong>many tech buzzwords</strong> into a short blog article as possible?’ One might also want to pop in the term ‘K8s’ somewhere, which is used as an abbreviation for ‘Kubernetes’ … but let’s not go crazy here. In order to understand this blog post, we’ll assume you have a <strong>basic understanding of Kubernetes</strong>. If that’s not the case, we can recommend this Comic by Google:
:::</p>
<p><img src="/img/blog/bildschirmfoto_3.jpg" alt="bildschirmfoto_3">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
Let’s assume you’re developing a new project. You’ve identified a few independent services along the way and have now decided that it would make sense to deploy these in <strong>separate containers</strong> and have them <strong>orchestrated by Kubernetes</strong>. As it’s a bigger project, several programmers are working on it – and they’re only working on one of the services each, either individually or in small teams.
:::</p>
<p>:::globalTitle{:size="lg" .font-normal .mb-5}
Status Quo
:::
:::globalParagraph
The <strong>project example</strong> described above has now become a pretty common scenario. How can we now ensure that our programmers are also able to use their own laptops to develop as closely to the Kubernetes architecture as possible? A common method of running Docker containers locally is <strong><em>docker-compose</em></strong>. While this one’s especially easy to manage, it does have one major drawback: a docker-compose set-up doesn’t display the eventual production environment, i.e. the Kubernetes set-up. The worst-case scenario would be that you’ve programmed something which functions locally in your docker-compose set-up but not in the production system due to the image being run differently here.
:::
:::globalParagraph
As an alternative, technologies have been developed which simulate Kubernetes clusters on local computers. <strong><em>Minikube</em></strong> is a pretty widespread solution, but there are also more and more other alternatives which have been gaining ground in recent times. Some of those worth mentioning are <strong><em>microk8s</em></strong> by Canonical, for example, or <strong><em>k3s</em></strong> and <strong><em>k3d</em></strong> by Rancher which are more resource-efficient. K3d uses k3s to simulate more <strong><em>Worker Nodes</em></strong> in the local Kubernetes cluster. Usually, <strong><em>kubectl</em></strong> is then used for the interaction with the cluster.
:::
:::globalParagraph
As a developer, you now simply have to build a Docker image of your service and make it available to your colleagues. They can deploy the image in their local cluster and will then have local access to the most up-to-date status of your service.
:::</p>
<p>:::globalTitle{:size="sm" :tag="h4" .font-normal .mb-5}
Two exciting challenges still remain at this point, however:
:::
:::GlobalBlock{.ol-decimal .mb-5}</p>
<ol>
<li>How can I work on my service and always have the up-to-date status available in my cluster without having to build and deploy a new image?</li>
<li>How can I use the debugger locally?
:::</li>
</ol>
<p>:::globalTitle{:size="md" .font-normal .mb-5}
How does local Kubernetes development work?
:::
:::globalParagraph
In the upcoming sections, we will have a look at how we overcome these challenges concerning local Kubernetes development. For this, we’ll be using <strong><em>k3d</em></strong> as a <strong>local Kubernetes cluster</strong> as well as <strong><em>PyCharm</em></strong> as our <strong><em>development environment</em></strong>. We’ll also be using <strong><em>Helm</em></strong> for the <strong>cluster management</strong> as well as <strong><em>Telepresence</em></strong> to manage the <strong>live code reloading</strong>. The following installation examples were all carried out on an up-to-date Ubuntu system.
:::</p>
<p>::GlobalPodcastSection{:videoId="FazRUwBWPUI" :videoPosition="left" :bg="bg-bs-gray" .mb-5}
:::GlobalTitle{:tag="h3" .mb-6}
Local Kubernetes Development using Gefyra
:::
:::globalParagraph{:font-size="lg" .mb-4}
Gefyra is an Open Source initiative that aims to make local Kubernetes development super fast and rock solid while providing you with all the Kubernetes features you love, locally. Check out my talk at Conf42 about how I locally debug containers with a sidecar using Gefyra.
:::
:::globalParagraph{:font-size="lg" .mb-4}
You can check out Gefyra here:
:::
::::GlobalButton{:url="https://gefyra.dev/" :label="Show me Gefyra" :color="green" target="_blank"}
::::
::</p>
<p>:::globalTitle{:size="sm" .font-normal .mb-5}
K3D/K3S – Lightweight Kubernetes in Docker
:::
:::globalParagraph
k3d can be installed very easily by using the installation script provided by Rancher:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash
</code></pre>
<p>:::</p>
<p>:::globalParagraph
The installation of k3s is just as simple:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">curl -sfL https://get.k3s.io | sh -
</code></pre>
<p>:::</p>
<p>:::globalParagraph
A new cluster can be created with the following command (we use k3d version 4 here):
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">k3d cluster create buzzword-counter --agents 1 -p 8080:80@agent[0] -p 31820:31820/UDP@agent[0] --registry-create
</code></pre>
<p>:::
:::globalParagraph
Here we have created a cluster called <strong>buzzword-counter</strong> and, amongst other things, have mapped the local port 8080 on the cluster’s internal port 80 so that we can access our cluster in the web browser via port 8080. Additionally we mapped port 31820, which we later need for Gefyra. We also enable local Docker images to be deployed in the cluster using the flag <strong>--registry-create</strong>, which creates a local registry named <em>k3d-buzzword-counter-registry</em>. We need an entry in our <strong>/etc/hosts</strong> file for the local registry:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">[...]
127.0.0.1     k3d-buzzword-counter-registry
[...]
</code></pre>
<p>:::
:::globalParagraph
The correct kubectl context has been set by k3d, which we can verify by running <strong><em>kubectl config current-context</em></strong>.
:::
:::globalParagraph
In order for us to be able to interact with our cluster using kubectl, we can either export the <strong>KUBECONFIG environment variable</strong> or integrate the content of the respective file in <strong><em>~/.kube/config:</em></strong>
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">export KUBECONFIG="$(k3d kubeconfig write buzzword-counter)"
</code></pre>
<p>:::</p>
<p>:::globalTitle{:size="lg" .font-normal .mb-5}
Helm – Kubernetes Package Manager
:::
:::globalParagraph
We often use Helm to manage our Kubernetes cluster. Helm describes itself as a package manager for Kubernetes and it also enables the <strong>mapping of complex Kubernetes applications in templates</strong>. Here, the buzzword is <strong>‘infrastructure as code’</strong>. Thanks to the templates, our application can be deployed into a new Kubernetes cluster at any time and without any major effort. To install Helm, you can simply download a binary file: to the download
:::</p>
<p>:::globalTitle{:size="sm" :tag="h4" .font-normal .mb-5}
Example deployment: Buzzword counter
:::
:::globalParagraph
To show you a practical example, we have created a simple deployment for this blog post and put it up on Github:
:::
:::globalParagraph
Buzzword counter
:::
:::globalParagraph
Buzzword charts
:::
:::globalParagraph
This deployment includes a simple Django application, a <strong><em>Celery distributed task queue</em></strong> with <strong><em>RabbitMQ</em></strong> as message broker to <strong>process asynchronous tasks</strong>, as well as a <strong>PostgreSQL database</strong>. With our application, we can count buzzwords and add new ones, too. The adding of buzzwords is implemented as a Celery task – in this example, it’s pretty pointless, but it demonstrates the functionality of our Celery distributed task queue perfectly.
:::
:::globalParagraph
The first step of the deployment is to deploy the application as a <strong>Docker image</strong>. As we want to push it to our local registry, we need to find out its mapped port by running <strong><em>docker ps -f name=k3d-buzzword-counter-registry</em></strong>. In our case it is <strong>port</strong> <strong><em>45319</em></strong>. Now we build the Docker image (from the Django application’s directory) and push it into our local registry:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">docker build -t k3d-buzzword-counter-registry:45319/buzzword-counter:0.1.0 .
docker push k3d-buzzword-counter-registry:45319/buzzword-counter:0.1.0
</code></pre>
<p>:::
:::globalParagraph
With the following commands (from the Helm charts’ directory), the application is installed and configured in the Kubernetes cluster with the PostgreSQL and RabbitMQ dependencies.
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">helm repo add bitnami https://charts.bitnami.com/bitnami
helm dep update buzzword-counter
helm install buzzword-counter buzzword-counter/
</code></pre>
<p>:::
:::globalParagraph
It <strong>might take a couple of minutes</strong> until everything is ready. Via kubectl, we can see whether the pods are available, for example, or we can have the log output displayed and verify that the runserver was started on the web pod and the celery worker on the worker pod:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">kubectl get deployments
kubectl get pods
kubectl logs -f buzzword-counter-web-XXXXX-XXXXXXXX
kubectl logs -f buzzword-counter-worker-XXXXX-XXXXXXXX
</code></pre>
<p>:::
:::globalParagraph
As we use nip.io, we can simply access our service under <em>http://buzzword-counter.127.0.0.1.nip.io:8080</em>. If DNS Rebinding doesn’t work or isn’t allowed on your local setup and therefore you can’t use nip.io, you need to add an entry to your /etc/hosts in order to access the url:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">[...]
127.0.0.1       buzzword-counter.127.0.0.1.nip.io
[...]
</code></pre>
<p>:::
:::globalParagraph
If we start a task, we can check out its output in the worker pod’s log via kubectl.
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">kubectl logs -f buzzword-counter-worker-XXXXX-XXXXXXXX
</code></pre>
<p>:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Gefyra - Fast, local development for Kubernetes
:::
:::globalParagraph
In order to get live code reloading, i.e. to make code changes done in our IDE immediately available in the cluster, we use Gefyra. Without live code reloading, we would have to build a new Docker image and deploy it in the cluster after every change – which is pretty inconvenient and can become very time-consuming. Please follow the instructions to install Gefyra.
:::
:::globalParagraph
To make our local code available in the cluster, we need to run the following commands. Adapt the path in LOCAL_DIR to the directory of your local buzzword-counter copy:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">docker build -t buzzword-counter:dev .
gefyra up
export LOCAL_DIR=/home/&#x3C;...>/buzzword-counter
gefyra run -i buzzword-counter:dev -N buzzword-dev -n default -v $LOCAL_DIR:/code --env-from buzzword-counter-web-69d98bb99d-62p4q/buzzword-counter -c "python manage.py serve --port 9000 --static --autoreload --debugger --debugger-port 5678"
gefyra bridge -N buzzword-dev -n default --deployment buzzword-counter-web --port 9000:9000 --container-name buzzword-counter -I buzzword-bridge
</code></pre>
<p>:::
:::globalParagraph
Let’s go over it in more detail. First, we built our development image.
:::
:::globalParagraph
Then we initialized Gefyra in the cluster and executed our image with <em>gefyra run</em> to make it part of the cluster. We can see it running with <em>docker ps</em>. Additionally, we mounted the local source code directory into the container and specified that the new container should copy the environment from the container <em>buzzword-counter</em> of pod <em>buzzword-counter-web-69d98bb99d-62p4q</em>. If you’re following these steps, you need to change this to the name of the pod that’s actually running in your cluster. We start the container with Hurricane’s serve command, using the <em>autoreload</em> flag and two flags for debugging, which we’ll need later on.
:::
:::globalParagraph
Finally, we created a bridge with <em>gefyra bridge</em> in order to intercept the traffic to the cluster application with the one running locally.
:::
:::globalParagraph
By visiting <strong>http://buzzword-counter.127.0.0.1.nip.io:8080/</strong>, we can now observe that code changes in our local IDE are available in the cluster. Keep in mind that only changes in a non-template file trigger the reload.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Attaching a Debugger with Gefyra
:::
:::globalParagraph
Now that we have our local code running in the Kubernetes cluster, we only need one more essential thing for a convenient developer workflow: attaching a debugger to interactively debug our application. We can achieve this with Gefyra, too – that’s why we added the debug flags to Hurricane’s serve command. We’re using VSCode with the Python extension installed.
:::
:::globalParagraph
In order to set it up, we need to find out the local IP address of the container we started with <em>gefyra run</em>:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' buzzword-dev
</code></pre>
<p>:::
:::globalParagraph
This will be part of the CLI soon (as of April 2022, maybe it already is). The command shown prints the IP address of the container, which VSCode needs. Next, we open buzzword-counter’s source code in VSCode and select the Remote Attach option to connect to the running debugpy instance at the given IP. The port is 5678, which we specified in the <em>gefyra run</em> command.
:::</p>
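<p>:::globalParagraph
A matching launch.json for VSCode’s Remote Attach could look like this (the host is whatever IP <em>docker inspect</em> returned, and /code is the mount path from our <em>gefyra run</em> command):
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-json">{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Gefyra: Remote Attach",
      "type": "python",
      "request": "attach",
      "connect": { "host": "172.18.0.4", "port": 5678 },
      "pathMappings": [
        { "localRoot": "${workspaceFolder}", "remoteRoot": "/code" }
      ]
    }
  ]
}
</code></pre>
<p>:::</p>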
<p><img src="/img/blog/usecases_oauth.jpg" alt="usecases_oauth">{.object-cover .w-full .mb-5}</p>
<p>:::globalParagraph
Upon starting the debugger, the output shows the following:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">2022-04-29 06:26:34,683 hurricane.server.general INFO     Tornado-powered Django web	 server
2022-04-29 06:26:34,688 hurricane.server.general INFO     Autoreload was performed
2022-04-29 06:26:34,688 hurricane.server.general INFO     Starting probe application running on port 9001 with route liveness-probe: /alive, readiness-probe: /ready, startup-probe: /startup
2022-04-29 06:26:34,695 hurricane.server.general INFO     Starting HTTP Server on port 9000
2022-04-29 06:26:34,695 hurricane.server.general INFO     Serving static files under /static/ from /code/buzzword_counter/../static
2022-04-29 06:26:34,697 hurricane.server.general INFO     Startup time is 0.014339923858642578 seconds
</code></pre>
<p>:::</p>
<p>:::globalParagraph
We can now set breakpoints and start debugging our code. And with that we have achieved not only cloud native development but debugging too!
:::</p>
<p><img src="/img/blog/pasted_image.jpg" alt="pasted_image">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="lg" .mb-5}
Telepresence: Another option for local development
:::
:::globalParagraph
[This section was originally written in March 2020 with Telepresence 1. In the meantime Telepresence 2 has been released. Please refer to their documentation or check out our <a href="/blog/alternative-to-telepresence-2-gefyra/">comparison of Telepresence 2 and Gefyra</a>{.bs-link-blue}.]
:::
:::globalParagraph
Another solution for local Kubernetes development is Telepresence. It is a so-called <strong>sandbox project of the CNCF</strong>, the Cloud Native Computing Foundation. With Telepresence (v1), you can run a locally built Docker image in a cluster by ‘swapping’ a deployment.
:::
:::globalParagraph
This is pretty spectacular from a technical point of view. However, for this post, it’ll be sufficient to use a command to swap the <strong>buzzword counter web deployment</strong> of our Kubernetes cluster and run the specified Docker image instead. First, though, we’ll have to create the Docker image. For both commands, we’ll have to be in the <strong>directory of our Django application’s source code</strong>:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">docker build -t buzzword-counter:local .
telepresence --swap-deployment buzzword-counter-web --expose 8080 --docker-run --rm -it -v $(pwd):/code buzzword-counter:local python manage.py runserver 0.0.0.0:8080
</code></pre>
<p>:::</p>
<p>:::globalParagraph
What’s more, we have <strong>mounted</strong> the current directory in the Docker container using the flag <strong><em>‘-v $(pwd):/code’</em></strong> so that code changes in <em>PyCharm</em> are also available in the Kubernetes cluster. However, as we’re using the Django runserver, live reloading will only work if <strong><em>DEBUG=True</em></strong> is set. We can either <strong>deploy this via the Helm charts or simply export it in our swapped deployment</strong>. Afterwards, we run the run script:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">export DJANGO_DEBUG=True
/usr/src/run_app.sh
</code></pre>
<p>:::
:::globalParagraph
If we swap the container, we have to run the three above-mentioned commands again for the <em>port-forward</em> of the pod. Afterwards, we can change the code in PyCharm and verify that the runserver was restarted – either in the log or by opening the page in a browser.
:::</p>
<p><img src="/img/blog/pasted_image_0_1.jpg" alt="pasted_image">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
Have a closer look and you’ll find that <strong>Telepresence</strong> is not limited to a local Kubernetes cluster. You can also swap <strong>deployments from remote clusters</strong> as long as they can be accessed via kubectl. This can be very useful for the debugging and tracing of bugs on test systems, for example. Caution is advised, however, as every <strong>deployment traffic</strong> is directed to the local laptop after the swap. That means that this approach is only really suited to <strong>test systems</strong> and should be avoided at all costs in the case of <strong>most production systems</strong>.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Python remote debug in PyCharm
:::</p>
<p>:::globalParagraph
So now we can deploy our application in the local Kubernetes cluster with live code reloading. We have accomplished our buzzword mission, the <strong>production environment (i.e. Kubernetes development environment) has been replicated locally</strong> and we can carry out <strong>Cloud Native</strong> developments on our service. The icing on the cake now is to configure the <em>PyCharm Debugger</em> in a way that we can also debug our application directly in <em>PyCharm</em>. To do this, we first have to configure <strong><em>Python remote debug</em></strong> in <em>PyCharm</em>:
:::</p>
<p><img src="/img/blog/pasted_image_0_2.jpg" alt="pasted_image">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
Do bear in mind that it’s crucial that an <strong>absolute path</strong> is specified in the path mapping (the ~ shortcut for the home directory doesn’t work). As you can see in the image above, the configuration also needs a specific version of the Python package <em>pydevd-pycharm</em>.
:::
:::globalParagraph
In order to avoid this package being unnecessarily part of our production deployment, we’re creating a second Dockerfile which installs upgraded <strong><em>pip requirements</em></strong>. Furthermore, we’ve added a simple view to our application (in urls.py) so that we can conveniently establish a connection between our cluster and the <em>PyCharm Debugger</em> via URL. What’s important here is that the IP address and the port match the configuration in <em>PyCharm</em>.
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">docker build -f Dockerfile.dev -t buzzword-counter:debug .
telepresence --swap-deployment buzzword-counter-web --expose 8080 --docker-run --rm -it -v $(pwd):/code buzzword-counter:debug bash
</code></pre>
<p>:::
:::globalParagraph
Afterwards, we browse the <em>Debug-URL</em>. Here, too, we have to make sure that <em>DEBUG=True</em> is set and that the port-forward is running. Now we can set a <strong>breakpoint</strong> in <em>PyCharm</em>. If we browse the respective view, the application is stopped by the debugger and we can inspect why decrementing the counter either resets it to 0 or even raises an <strong><em>IntegrityError</em></strong>:
:::</p>
<p><img src="/img/blog/debug_kubernetes.jpg" alt="pasted_image">{.object-cover .max-w-full .mb-5}</p>
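<p>:::globalParagraph
As a sketch, the debug view described above could look roughly like this (the URL path, host and port are assumptions and must match your <em>PyCharm</em> run configuration):
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-python"># urls.py – hypothetical debug endpoint, only part of the dev image
from django.http import HttpResponse
from django.urls import path

def start_debugger(request):
    # host/port must match PyCharm's "Python remote debug" configuration
    import pydevd_pycharm
    pydevd_pycharm.settrace("host.docker.internal", port=12345,
                            stdoutToServer=True, stderrToServer=True)
    return HttpResponse("Debugger attached")

urlpatterns = [
    path("start-debugger/", start_debugger),
]
</code></pre>
<p>:::</p>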
<p>::GlobalPodcastSection{:videoId="tyvE9VlSWkE" :videoPosition="left" :bg="bg-bs-gray" .mb-5}
:::GlobalTitle{:tag="h3" .mb-6}
Tools for the Craft - Edition 1: Kubernetes development environments
:::
:::globalParagraph{:font-size="lg" .mb-4}
You can also check out the first edition of our Kubernetes podcast "Tools for the Craft: Navigating the Kubernetes ecosystem" here. Michael and Robert are talking in depth about the ins and outs of local Kubernetes development and also provide some real coding examples.
:::
:::globalParagraph{:font-size="lg" .mb-4}
More editions of our podcast can be found here:
:::
::::GlobalButton{:url="/podcast/" :label="More podcast editions!" :color="green"}
::::
::</p>
<p>:::globalTitle{:size="lg" .font-normal .mb-5}
Conclusion
:::
:::globalParagraph
Thanks to the tools <strong>k3d/k3s, Helm, Gefyra</strong> and additional debugging courtesy of <strong>Hurricane</strong>, we’ve conquered the mountain called ‘<strong>local Kubernetes development</strong>’. Our developers can now develop in their <strong>own local Kubernetes cluster</strong>. A particularly practical solution for local Kubernetes development is Gefyra in combination with Hurricane’s debugging support.
:::
:::globalParagraph
Still, it has to be noted that <strong>handling these tools</strong> isn’t quite that simple and that it does <strong>take some time to get used to them</strong>. The hurdle is particularly high in comparison with <em>docker-compose</em>. If you are looking for that kind of convenience and a team-oriented workflow, check out Unikube, which combines all the tools needed for effortless local Kubernetes development.
:::
:::globalParagraph
And finally, let’s not forget the <strong>buzzword counter:</strong> I got to <strong>23 unique buzzwords</strong> in total. Did you count along and get to a different number? Go on then, let us know.
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Gefyra</category>
            <category>Team Blueshoe</category>
            <enclosure url="https://blueshoe.io/img/blog/local-kubernetes-development.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Managed vs. unmanaged Kubernetes]]></title>
            <link>https://blueshoe.io/blog/managed-vs-unmanaged-kubernetes</link>
            <guid>https://blueshoe.io/blog/managed-vs-unmanaged-kubernetes</guid>
            <pubDate>Wed, 23 Jun 2021 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>We’ve noticed an increasing interest in Kubernetes (K8s) when speaking to techies and even more so when speaking to our clients. So in this blog article, we’re asking the question of what managed and unmanaged Kubernetes actually is.</p>
<p><img src="/img/blog/managed-vs-unmanaged-kubernetes.jpg" alt="Managed vs. unmanaged Kubernetes">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
When it comes to Kubernetes (K8s), we’ve noticed that the biggest knowledge gap is often found in one of the fundamental questions – namely, whether you want to use managed or unmanaged Kubernetes. We want to change this!
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Kubernetes overview
:::
:::globalParagraph
For this blog post, we will assume that you have a <a href="/blog/local-kubernetes-development/">basic understanding of Kubernetes</a>{.bs-link-blue} (K8s). However, we do still want to have a quick look at how Kubernetes actually works and in particular, how it interacts:
:::
:::globalParagraph
At the bottom of the graphic, technically nothing changes: just like before, virtual servers are used to run the code. <strong>Kubernetes acts as a kind of abstraction layer above the virtual servers – in this context they’re called nodes and are combined into a cluster</strong> (see diagram below by Kubernetes.io). This means that as a developer or operations specialist, I don’t really need to worry about which of the nodes my application ends up running on. I only need to describe my application to Kubernetes in the form of manifest files and subsequently, K8s will ensure that a suitable node is used.
:::</p>
<p><img src="/img/blog/bildschirmfoto_1.jpg" alt="bildschirmfoto">{.object-cover .w-full .mb-5}</p>
<p>:::globalParagraph
So effectively, the direct <strong>interaction with virtual machines is handled by Kubernetes,</strong> so I don’t need to deal with it myself. The K8s manifest specifies a number of things, including the container image and its version as well as the command to be run. You’ll also find other settings here, such as the resource requests and limits – i.e. the minimum CPU and memory the application needs in order to run successfully, and the maximum it is allowed to consume.
:::</p>
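<p>:::globalParagraph
As a minimal sketch, such a manifest could look like this (the name, image, command and resource values are made up for illustration):
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:1.0.0      # container image and version
          command: ["gunicorn", "myapp.wsgi"]  # command to be run
          resources:
            requests:            # minimum CPU/memory needed to schedule the pod
              cpu: 100m
              memory: 256Mi
            limits:              # maximum the application may consume
              cpu: 500m
              memory: 512Mi
</code></pre>
<p>:::</p>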
<p>:::globalTitle{:size="lg" .mb-5}
Unmanaged Kubernetes
:::
:::globalParagraph
The word ‘unmanaged’ in the term ‘<strong>unmanaged Kubernetes’ refers to the fact that you have to deal with the installation and maintenance of Kubernetes yourself</strong> – essentially, you have to manage Kubernetes on your own. Primarily, this will be necessary if a hosting with a cloud provider is not possible or not wanted.
:::
:::globalParagraph
Below is an example of a standard, albeit shortened, <strong>procedure of the manual installation:</strong>
:::
:::globalParagraph
<strong>First, the K8s nodes – i.e. virtual machines – have to be created.</strong> In this example, we’ll assume there are 3 virtual servers: one for the so-called master node and two for the so-called worker nodes. The master node controls the worker nodes, which means that the K8s software installed on the master node ultimately decides on which worker node an application runs.
:::
:::globalParagraph
<strong>Next, the required software packages need to be installed and configured.</strong> Among others, kubeadm will be required to initialise the master node and to join the worker nodes to it. With this step, you essentially create the Kubernetes cluster, which is now technically ready to use.
:::
:::globalParagraph
<strong>Next, the K8s manifests can be installed directly or via applications – using helm, for example.</strong>
:::
:::globalParagraph
<strong>As an alternative to installing all the individual components required, it would also be possible to use microk8s by Canonical, for example.</strong> This is often used to simulate Kubernetes for the local development. For some years now, however, even microk8s has been deemed as production-ready by Canonical. The advantage of this method compared to the previous one is that you’d only need to install one package.
:::
:::globalParagraph
<strong>What must be considered, though, is that ‘unmanaged’ also refers to the maintenance.</strong> All maintenance tasks must be done manually – be it the set-up and configuration of further worker nodes because the available resources no longer suffice, or even just the regular updates of the Kubernetes version.
:::</p>
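<p>:::globalParagraph
Condensed into commands, the manual procedure sketched above looks roughly like this (a sketch only – the IP, token and hash are placeholders printed by <em>kubeadm init</em>, and the package installation is omitted):
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-shell"># on the master node (after installing kubeadm, kubelet and kubectl):
kubeadm init

# make kubectl talk to the new cluster
mkdir -p $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config

# on each worker node: join the cluster
kubeadm join &lt;master-ip&gt;:6443 --token &lt;token&gt; \
    --discovery-token-ca-cert-hash sha256:&lt;hash&gt;
</code></pre>
<p>:::</p>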
<p>::GlobalPodcastSection{:videoId="tyvE9VlSWkE" :videoPosition="left" :bg="bg-bs-gray" .mb-5}
:::GlobalPreTitle{:color="text-bs-green" .mb-3}
OUR KUBERNETES PODCAST
:::
:::GlobalTitle{:tag="h3" .mb-6}
TftC E1: Kubernetes development environments
:::
:::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert are talking about the various options developers have for running remote Kubernetes development environments.
:::
:::globalParagraph{:font-size="lg" .mb-4}
More editions of our podcast can be found here:
:::
::::GlobalButton{:url="/podcast/" :label="Show me more" :color="green"}
::::
::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Managed Kubernetes
:::
:::globalParagraph
In contrast, managed Kubernetes describes <strong>a Kubernetes installation which is made available by a provider or which can be configured via a provider.</strong> Providers and their services include the Google Kubernetes Engine (GKE), Amazon’s AWS Elastic Kubernetes Service (EKS) and IONOS’ managed Kubernetes.
:::
:::globalParagraph
<strong>The degree of the management can vary.</strong> In many cases, there’s ‘only’ the option to have a Kubernetes infrastructure provided – meaning, to have a cluster created as well as having the number and specification of the worker nodes configured. That’s already a lot more convenient than unmanaged Kubernetes. Usually, a web interface or a command line interface is provided for the interaction. On top of this, further worker nodes can be added, or the Kubernetes version can be conveniently updated.
:::
:::globalParagraph
Right at the top of the ‘managed’ range, you’ll find the managed Kubernetes offer by Canonical, among others. With this set-up, the entire Kubernetes infrastructure is installed and maintained by the provider. With such an offer, the operational part is reduced to the creation of the Kubernetes manifests or Helm charts.
:::
:::globalParagraph
In the upper range of the ‘managed’ services, Google offers GKE Autopilot. With this one, the nodes are automatically scaled according to the resources required by the application. Here, too, as a developer or operations specialist I can limit my responsibilities to the creation of the Kubernetes manifests or Helm charts. We will have a closer look at GKE Autopilot in a future blog post.
:::</p>
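<p>:::globalParagraph
To illustrate how little operational work a managed offering leaves you with: on GKE, for example, creating a cluster boils down to a single CLI call (cluster name and zone below are placeholders):
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-shell"># create a managed cluster with three worker nodes
gcloud container clusters create my-cluster --zone europe-west3-a --num-nodes 3

# fetch credentials so kubectl talks to the new cluster
gcloud container clusters get-credentials my-cluster --zone europe-west3-a
</code></pre>
<p>:::</p>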
<p>:::globalTitle{:size="lg" .mb-5}
Summary
:::
:::globalParagraph
<strong>In our experience, most projects and companies will be better off with managed Kubernetes.</strong> Thanks to the variety of providers, a lot of hosting requirements can be covered. Depending on how much it’s managed, you can benefit from various conveniences. One of the main benefits is undoubtedly that you don’t need to bother with the maintenance of the clusters – instead, some of this can be automated or you can conveniently deal with it via a web interface or CLI.
:::
:::globalParagraph
<strong>If there are explicit requirements of on-premise hosting</strong> or valid reasons to run Kubernetes in your own data centre, then there’s <strong>no getting around unmanaged Kubernetes.</strong>
:::
:::globalParagraph
Good to know: When it comes to <a href="/products">Unikube</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}, the issue of ‘managed vs. unmanaged Kubernetes’ isn’t even a problem in the first place. Whether fully managed, partially managed, unmanaged or even just simulated locally: with Unikube, it doesn’t matter how the K8s cluster was installed.
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Operations</category>
            <category>Performance</category>
            <category>Security</category>
            <category>Project Management</category>
            <enclosure url="https://blueshoe.io/img/blog/managed-vs-unmanaged-kubernetes.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Migration to cloud-native]]></title>
            <link>https://blueshoe.io/blog/migration-to-cloud-native</link>
            <guid>https://blueshoe.io/blog/migration-to-cloud-native</guid>
            <pubDate>Mon, 30 Aug 2021 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>The ‘cloud-native’ development process is an integral part of our projects. However, it can be quite a challenge to migrate existing projects into the cloud-native workflow. In this blog post, we’ll use three examples to show you the steps that a migration to cloud-native requires.</p>
<p><img src="/img/blog/migration-to-cloud-native.jpg" alt="Migration to cloud-native">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
The first obstacles have been dealt with. We know why we want to use cloud-native development, our developers have been trained to utilise the process, and new projects use cloud-native right from the start. However, migrating existing projects to cloud-native can still be a challenge. That’s why towards the end of this article, you’ll find out how Unikube can assist the migration.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Type of migration
:::
:::globalParagraph
With the migration, we ultimately want our production and staging/testing systems to be hosted via Kubernetes (K8s) and also for our developers to develop with private or individual Kubernetes clusters. If you want to check out what the local development with Kubernetes looks like, check out the blog post "<a href="/blog/local-kubernetes-development/">How does local Kubernetes development work?</a>{.bs-link-blue}".
:::
:::globalParagraph
Depending on the development process, an existing project includes different elements on which you can build during the <a href="/our-services/migration-to-kubernetes/">migration</a>{.bs-link-blue}, or which have to be prepared first. One of the main aspects is whether the project already uses container images or not. We will have a look at three example projects which differ from one another with regard to development procedure and hosting:
:::
:::globalParagraph
For one, we have a project which has been developed locally using Vagrant and which is hosted by a tech stack of uwsgi and nginx (a pretty common stack for Django projects). The other two projects already utilise <a href="/blog/strategies-for-slim-docker-images/">Docker images</a>{.bs-link-blue}. For the <a href="/blog/local-kubernetes-development/">local development</a>{.bs-link-blue}, both projects use Docker Compose while one project is hosted with Docker Compose and the other with Kubernetes.
:::
:::globalParagraph
In the following graphic, which illustrates the temporal development of development and hosting systems, we can clearly assign our example projects. For this blog post, we’ll ignore the first stage, 'Bare Metal'. In the development platform section, our three projects can be found under the stages ‘Virtualized’ and ‘Containerized’. In the operating platform section we can follow the arrows and find our projects under 'Virtual Machines', 'Container Runtimes &#x26; PaaS' as well as '<a href="/blog/container-orchestration-options-comparison/">Container-Orchestration</a>{.bs-link-blue}'. The goal is for all three projects to appear both in development and operating under the stage ‘Container-Orchestrated’.
:::</p>
<p><img src="/img/blog/migration-to-cloud-native-1.jpg" alt="Migration to cloud-native">{.object-cover .w-full .mb-5}</p>
<p>:::globalTitle{:size="md" :tag="h3" :color="text-bs-blue" .mb-5}
Example 1: Local development using Vagrant
:::</p>
<p>:::globalParagraph
Vagrant was developed to provide the development environment in a complete virtual machine (VM) in order to simulate the production environment as closely as possible. As no complete VMs are used in the cloud-native process anymore – instead, the application along with its environment is put into a container – the first step is a migration to Docker.
:::
:::globalParagraph
To do this, a Dockerfile has to be created for the scope of the application. At this point, it might also make sense to consider which parts of the application could be subdivided into individual services. Previously, there was only one Vagrant VM and in the worst-case scenario, the application would become one massive monolith. At the very least, systems like a database or a cache should not end up in the Dockerfile but should instead be configured as individual services.
:::
:::globalParagraph
The second step is to create the Kubernetes manifests – via Helm charts, for example. This means that for all required services, Helm charts have to be created which then generate the respective K8s resources.
:::
:::globalParagraph
Finally, you ‘only’ have to transition the development process to Kubernetes. This means that our developers have private or individual clusters at their disposal. On the one hand, these could be local clusters which are simulated via <a href="/blog/minikube-vs-k3d-vs-kind-vs-getdeck-beiboot/">k3d, microk8s or minikube</a>{.bs-link-blue}. On the other, these individual ‘developer clusters’ might also exist remotely – meaning it’s a real K8s cluster, but it’s only used by one developer.
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" :color="text-bs-blue" .mb-5}
Example 2: Local development and hosting with Docker Compose
:::
:::globalParagraph
Our second project employs Docker Compose in the development as well as in the hosting of the production system. This means that we have one or several Dockerfiles and that right at the start of the project, we have already thought thoroughly about the different services the application will need. These are displayed in the <em>Docker-Compose.yaml</em>.
:::
:::globalParagraph
The main part of the migration consists of the creation of the Kubernetes manifests. Just like with the previous project, this can be done using Helm charts, which make the Kubernetes manifests easier to maintain. This particularly makes a difference in larger projects. If the deployments are built quite similarly, for instance, this can be expressed more effectively in Helm templates (keyword: DRY). Should this not be a requirement, you could also create the manifest files directly from the <em>Docker-Compose.yaml</em>. For this purpose, the Kubernetes community provides the tool <em>kompose</em>. Using it is easy – a simple <em>kompose convert</em> will suffice to create the files.
:::
:::globalParagraph
Naturally, in this case the local development process will then have to be transitioned to Kubernetes, too.
:::</p>
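<p>:::globalParagraph
The kompose step mentioned above is essentially a one-liner, run in the directory containing the compose file (the file name below is an assumption):
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-shell"># generate Kubernetes manifests from an existing docker-compose file
kompose convert -f docker-compose.yaml

# the generated *-deployment.yaml / *-service.yaml files can then be applied
kubectl apply -f .
</code></pre>
<p>:::</p>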
<p>:::globalTitle{:size="md" :tag="h3" :color="text-bs-blue" .mb-5}
Example 3: Local development via Docker Compose, hosting via Kubernetes
:::</p>
<p>:::globalParagraph
With this project, we have already subdivided our services and we also have one or several Dockerfiles. For the local development, Docker Compose is used while the hosting takes place via Kubernetes. Thus, the only migration step is the transition of the local development process to Kubernetes. Until then, the production environment always has to be simulated in the <em>Docker-Compose.yaml</em> for local development. On the one hand, this requires a certain amount of extra effort; on the other, it creates the problem that the local environment doesn’t quite match the production environment. This means that unexpected problems or side effects can occur during deployment.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Challenges during migration
:::
:::globalParagraph
For the migration to cloud-native in our example projects, a few challenges need to be overcome. For one, these challenges concern the actual migration itself. But they also concern how the developers handle the transition to the new local development processes and any additional tools that the developers might have to master first. We’re also assuming that an operations specialist with extensive Kubernetes knowledge is part of the team and can develop the Helm charts.
:::
:::globalParagraph
This brings us to the next question: Which obstacles remain, from a developer’s point of view, when migrating all projects to cloud-native?
:::
:::GlobalBlock{.ol-decimal .mb-5}</p>
<ol>
<li>Learning how to use <em>kubectl</em> in order to enable the direct interaction with the cluster.</li>
<li>Live code-reloading: When the code is changed, it should be possible to test the alterations as quickly as possible – without having to first build a new Docker image and deploy it in the local cluster. This is possible via <a href="/products/">Gefyra</a>{.bs-link-blue}.</li>
<li>For most developers, the debugger is undoubtedly an important part of an optimal development process. With a local Kubernetes cluster, this debugger has to be explicitly configured again. In a Python environment, for example, you do this using <em>python remote debug</em>.</li>
</ol>
<p>:::
:::globalParagraph
So our developers have to learn at least the basics of three further tools in order to utilise the whole scope of features which the Docker Compose setup made available. This isn’t an impossible task, of course, but it does require additional effort.
:::</p>
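<p>:::globalParagraph
The <em>kubectl</em> basics from the first point boil down to a handful of commands (the pod and service names below are placeholders):
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-shell">kubectl get pods                             # what is running in the cluster?
kubectl logs buzzword-counter-web-abc123     # read a pod's logs
kubectl exec -it buzzword-counter-web-abc123 -- bash     # shell into a container
kubectl port-forward svc/buzzword-counter-web 8080:8080  # reach a service locally
</code></pre>
<p>:::</p>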
<p>:::globalTitle{:size="lg" .mb-5}
How does Unikube support the migration?
:::
:::globalParagraph
Finally, we’ll have a quick look at how <a href="/products/">Unikube</a>{.bs-link-blue} can come into play when migrating to cloud-native. Unikube essentially works like a kind of ‘wrapper’ for various tools or features. This means that a developer working with Unikube will only have to learn one tool, rather than several different ones. The developer will thereby also not have to acquire any detailed Kubernetes knowledge anymore and will also not have to directly interact with the Kubernetes cluster.
:::
:::globalParagraph
One of the key aspects when developing <a href="/products/">Unikube</a>{.bs-link-blue} was for it to be as easy and comfortable to use via command-line interface as possible. The goal here was to get to a certain convenience level that we’re used to from Docker Compose. And through this, it also just so happens that we eliminate all that Kubernetes complexity!
:::
:::globalParagraph
Stay tuned for upcoming <a href="/products/">Unikube</a>{.bs-link-blue} features and feel free to send us any feedback you might have – via GitHub, for example.
:::</p>
<p>:::GlobalPodcastSection{:videoId="FazRUwBWPUI" :videoPosition="left" .mb-6}</p>
<p>::::GlobalTitle{:tag="h3" .mb-6}
An alternative to telepresence
::::
::::globalParagraph{:font-size="lg" .mb-4}
We decided to create an alternative to Telepresence 2. Check it out if you like: <a href="https://gefyra.dev">https://gefyra.dev</a>{.bs-link-blue :target="_blank"}
::::
::::globalParagraph{:font-size="lg" }
Also, have a look at my talk at Conf42 about debugging a container with sidecar using Gefyra.
::::
::::GlobalButton{:url="https://gefyra.dev/" :label="Gefyra" :color="blue" target="_blank"}
::::
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Team Blueshoe</category>
            <category>Digitalization</category>
            <category>Operations</category>
            <enclosure url="https://blueshoe.io/img/blog/migration-to-cloud-native.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Minikube vs. k3d vs. kind vs. Getdeck]]></title>
            <link>https://blueshoe.io/blog/minikube-vs-k3d-vs-kind-vs-getdeck-beiboot</link>
            <guid>https://blueshoe.io/blog/minikube-vs-k3d-vs-kind-vs-getdeck-beiboot</guid>
            <pubDate>Mon, 27 Feb 2023 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>What is the best Kubernetes tool for development in 2023? This article compares three of the most popular solutions. Getdeck, created by Blueshoe, is a new alternative to local Kubernetes development entering the market.</p>
<p><img src="/img/blog/minikube-vs-k3d-vs-kind-vs-getdeck-beiboot.jpg" alt="minikube vs k3d vs kind vs getdeck">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Introduction
:::
:::globalParagraph
In this article, we’ll compare three popular local Kubernetes development tools. In addition, Getdeck Beiboot is added to the comparison as a remote Kubernetes-based development environment.
:::
:::globalParagraph
The main focus of this blog post is the evaluation of DX (“developer experience”) in actual development scenarios. This is particularly important to keep in mind as you could potentially use these tools for production deployments as well. However, the important dimensions for the evaluation of these tools are very different between development and production hosting.
:::
:::globalParagraph
The following aspects are considered relevant for software development use cases:
:::</p>
<p>:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>Ease of installation</li>
<li>Ease of use, complexity</li>
<li>Feature completeness (especially for development and production parity)</li>
<li>Resource consumption</li>
<li>Overall usability (the so-called developer experience, DX)
:::</li>
</ul>
<p>:::globalParagraph
This list of evaluation criteria is not exhaustive. There are other factors that can make working with these tools appealing, such as personal preference, but I won’t look at all of them in this article.
:::
:::globalParagraph
All of the tools are capable of providing the developer with a dedicated Kubernetes environment for learning Kubernetes, playing around, or solving development problems.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Minikube vs. k3d
:::
:::globalTitle{:size="lg" .mb-5 .mt-8}
Minikube
:::
:::globalParagraph
minikube is one of the most mature solutions on the market. When our team at Blueshoe started adopting Kubernetes in 2017, minikube was already available. The first version, 0.1.0, was released on May 30, 2016, shortly after the initial commit on GitHub on Apr 16, 2016.
:::
<img src="/img/blog/minikube-vs-k3d-vs-kind-vs-getdeck-beiboot-1.jpg" alt="minikube">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
minikube was started by a Kubernetes SIG, a special interest group, that recognized the need for local Kubernetes environments. Today, the SIG is very close to the Kubernetes development team and hence up-to-speed with the official Kubernetes code base.
:::</p>
<p>:::GlobalPodcastSection{:videoId="tyvE9VlSWkE" :videoPosition="left" .mb-6}
::::GlobalPreTitle{:color="text-bs-green" .mb-3}
TOOLS FOR THE CRAFT
::::
::::GlobalTitle{:tag="h3" .mb-6}
E1: Kubernetes Development Environments
::::
::::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert talk about how we got from docker compose to truly developing with Kubernetes. They discuss the different challenges at hand and which tools can help to move development environments closer to the production setup.
::::
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5 .mt-8}
Jack of all platform-trades
:::
:::globalParagraph
A very important difference between minikube and all the other contestants is that it can deploy Kubernetes clusters with one of multiple drivers. These drivers implement the way the Kubernetes cluster runs on a development machine: either inside a virtual machine (for example Hyper-V, KVM2, QEMU, or others) or in a container runtime (for example with Docker or Podman). When looking at minikube with the evaluation aspects from above, one can spot differences in the details between these drivers. Yet, in general, minikube abstracts the driver’s implementation away from the developer.
:::
:::globalParagraph
Hence, it’s more than likely that minikube can run Kubernetes for virtually any platform a developer is working on. Coming with a unified interface, minikube is a very platform-agnostic solution. If your team is working with Windows, macOS, Linux, or even more exotic platforms, it’s a great benefit to have all members use the same tool. They will be able to share knowledge more easily, provide scripts for automation and write documentation that covers all platforms equally.
:::
:::globalParagraph
A big plus for minikube is its comprehensive documentation. It not only contains technical references but also a long list of tutorials for many specific use cases and deployment scenarios.
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5 .mt-8}
Use all K8s features with minikube
:::
:::globalParagraph
With minikube a developer can use practically any required Kubernetes feature. Some of them must be enabled with the <em>–feature-gates</em> flag. This is a set of key-value pairs that describe feature gates for experimental features. Other features are controlled by the addons system of minikube. <em>Addons</em> can be integrated by 3rd party vendors. Here is a list of addons from my system.
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code>|-----------------------------|--------------------------------|
|         ADDON NAME          |           MAINTAINER           |
|-----------------------------|--------------------------------|
| ambassador                  | 3rd party (Ambassador)         |
| auto-pause                  | Google                         |
| csi-hostpath-driver         | Kubernetes                     |
| dashboard                   | Kubernetes                     |
| default-storageclass        | Kubernetes                     |
| efk                         | 3rd party (Elastic)            |
| freshpod                    | Google                         |
| gcp-auth                    | Google                         |
| gvisor                      | Google                         |
[...]
| nvidia-gpu-device-plugin    | 3rd party (Nvidia)             |
| olm                         | 3rd party (Operator Framework) |
| pod-security-policy         | 3rd party (unknown)            |
| portainer                   | 3rd party (Portainer.io)       |
| registry                    | Google                         |
| registry-aliases            | 3rd party (unknown)            |
| registry-creds              | 3rd party (UPMC Enterprises)   |
| storage-provisioner         | Google                         |
| storage-provisioner-gluster | 3rd party (Gluster)            |
| volumesnapshots             | Kubernetes                     |
|-----------------------------|--------------------------------|
</code></pre>
<p>:::</p>
<p>:::globalParagraph
These addons are enabled with...
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code>minikube addons enable [...]
</code></pre>
<p>:::</p>
<p>:::globalParagraph
...and this allows a minikube cluster to provision that particular feature in the local development cluster. For example, if you need volumesnapshots, like we did when building the Getdeck Beiboot shelf feature, just run:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code>minikube addons enable volumesnapshots
</code></pre>
<p>:::</p>
<p>:::globalParagraph
That makes it very convenient to use such a feature without bloating each development cluster instance from the start. In addition, the same set of addons will be available across the team, given they all use the same version of minikube.
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5 .mt-8}
Minikube Profiles: multiple logical clusters on one dev machine
:::
:::globalParagraph
When we started adopting Kubernetes, we were looking for a solution that allowed us to manage multiple logical clusters on one development machine. In 2016/2017, minikube did not put much focus on that particular feature. It was only possible to spin up one cluster per machine, and there was only a single-node cluster configuration possible. That is why we at Blueshoe decided to work with k3d. However, minikube caught up with this important developer feature and does now support multiple so-called minikube profiles.
:::
:::globalParagraph
minikube profiles are logical clusters that can be started and stopped separately from each other. This allows a developer to have more than one Kubernetes-based development environment. Just think of multiple disjoint projects that require different Kubernetes <a href="/our-services/api-development-company/">API</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} versions, features, or simply different workloads running in them. You can run:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code>minikube start -p myprofile1
</code></pre>
<p>:::
:::globalParagraph
and you will get a blank new cluster with a fresh profile that can exist along with other profiles.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
k3d
:::
:::globalParagraph
k3d is more limited when it comes to deploying it on a development machine. From the very beginning, k3d only supported a local container runtime for running the Kubernetes cluster. Yet, as I mentioned before, it was always possible to manage multiple separate clusters for development on one host. That was a real killer feature, especially for Blueshoe, as we are running multiple different Kubernetes projects for several clients. Especially with our maintenance work, it is a must to have an up-to-head (don’t worry, I created that term) development environment, as well as a close-to-production stable environment at the same time. As a developer, I need to provide bug fixes in no time and drive the development of new features.
:::
<img src="/img/blog/minikube-vs-k3d-vs-kind-vs-getdeck-beiboot-2.jpg" alt="k3d">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
k3d is based on k3s, a lightweight Kubernetes solution that is developed by Rancher. However, k3d is not deeply affiliated with k3s and is driven by a community of developers.
:::
:::globalParagraph
A defining characteristic of k3s is that it replaces some of the Kubernetes default components, such as etcd, with less scalable and less resource-intensive alternatives (e.g. SQLite). In addition, the whole system is compiled into one very small binary executable (less than 40 MiB), which keeps its storage requirements low, too. The base Kubernetes system k3s was originally developed for IoT and edge computing environments. I’d say that makes it perfect for development environments, too, as its low resource requirements are an ideal fit. We will see the comparison of resource consumption later in this article.
:::
:::globalParagraph
Since k3d is just a wrapper for k3s, it can focus on the developer’s experience. It comes with very good documentation, just like minikube, that also contains tutorials for certain use case scenarios. For example, you can find a development workflow example using Tilt and a build-push-test cycle using k3d’s container image-sharing capability.
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5 .mt-8}
Good for teams: Sharing k3d configurations
:::
:::globalParagraph
One great advantage that k3d provides (and that minikube currently misses) is a cluster configuration file (as of version 4.0.0). It allows development teams to persist the configuration of a k3d cluster to a local YAML file that can be shared across the team. This file contains almost all parameters that make up a cluster, for example the number of cluster nodes, the Kubernetes version, the locally mapped ports, registries, features, and many more. That file makes it very easy to spin up the same cluster configuration across the team without having developers follow along with a readme or a script to set up their local Kubernetes cluster. You can run <em>k3d cluster create --config mycluster1.yaml</em> and everything will be provisioned as specified. In my eyes, that is much simpler than what you can currently do with minikube.
:::</p>
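<p>:::globalParagraph
For illustration, a minimal k3d configuration file could look like the following sketch. All values (cluster name, node counts, image, port mapping) are example values, and the exact schema version depends on your k3d release:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code># mycluster1.yaml (example values)
apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
  name: mycluster1
servers: 1
agents: 2
image: rancher/k3s:v1.25.5-k3s1
ports:
  - port: 8080:80
    nodeFilters:
      - loadbalancer
</code></pre>
<p>:::</p>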
<p>:::globalTitle{:size="md" :tag="h3" .mb-5 .mt-8}
Don't worry about kubectl
:::
:::globalParagraph
With either solution, minikube or k3d, a developer will get their kubectl context automatically set to the newly created cluster. Both alternatives name their kube-context after the cluster name/profile name that was specified when creating the cluster. This way, it is very easy for the developer to start working without worrying about the kubectl configuration at all.
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5 .mt-8}
Less complexity, fewer CLI commands
:::
:::globalParagraph
As k3d does not carry the complexity of minikube, the CLI is much less comprehensive, yet straightforward. I’d say, for developers working with the CLI, this is a plus. Especially when using the k3d configuration file, I can spare myself most of the typing on the command line and reduce the surface of the CLI to the few required commands: starting, stopping, and deleting a cluster.
:::
:::globalParagraph
I suspect that only a few features are missing in k3d (because they are not supported in k3s), but for 95% of the development work, it should be totally sufficient. Even the snapshot-controller was recently added to k3s.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Minikube vs. kind
:::
:::globalParagraph
Kind is another project driven by a Kubernetes SIG. At first, I couldn’t find out why it is still maintained (I found a reason, but read on). Kind is an acronym for “Kubernetes in Docker” and was born from the idea to run Kubernetes on a container runtime (instead of a virtual machine). However, nowadays, minikube also prefers to use Docker as a deployment option, so there is no difference between minikube and kind anymore regarding this important point. However, they put up a neat page in their documentation explaining kind’s principles and target use cases. I’d say it all boils down to automation.
:::
<img src="/img/blog/minikube-vs-k3d-vs-kind-vs-getdeck-beiboot-3.jpg" alt="minikube vs. kind">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5 .mt-8}
Config files and K8s features
:::
:::globalParagraph
Just like k3d, kind also provides the possibility to use configuration files. Similar to k3d, you can run...
:::
:::BlogCode{.mb-5}</p>
<pre><code>kind create cluster --config mycluster1.yaml
</code></pre>
<p>:::</p>
<p>:::globalParagraph
...to create a local Kubernetes cluster based on the given configuration.
:::
:::globalParagraph
Kind also provides feature gates to enable experimental Kubernetes features, and plenty of other configuration options.
:::</p>
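<p>:::globalParagraph
A kind configuration file follows the same idea. The sketch below is illustrative: the node layout is an example, and the feature gate name is a placeholder for whichever experimental Kubernetes feature you want to enable:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code># mycluster1.yaml (example values)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  # placeholder: enable an experimental Kubernetes feature by name
  "SomeAlphaFeature": true
nodes:
  - role: control-plane
  - role: worker
  - role: worker
</code></pre>
<p>:::</p>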
<p>:::globalTitle{:size="md" :tag="h3" .mb-5 .mt-8}
Podman? Rootless? Kind!
:::
:::globalParagraph
Compared to minikube, where running on Podman is considered experimental, kind is one leap ahead and provides solid Podman support. The project team even invested some serious effort into running kind in rootless mode, too. For anyone to whom this is important, kind is currently the only solution. Yet, it comes with several limitations, of course.
:::
:::globalParagraph
Kind comes with a less complex CLI compared to minikube. The command line interface also does without the emojis, which can be a benefit. But that is a matter of taste.
:::
:::globalParagraph
Comparing the front page of both products, minikube claims to be “[...] focusing on helping application developers and new Kubernetes users.” whereas kind “was primarily designed for testing Kubernetes itself, but may be used for local development or CI”. I think this gives a certain foreshadowing of what it is all about.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Comparing minikube, k3d, and kind in 2023
:::
:::globalParagraph
Now let's have a head-to-head comparison of these three alternatives for local Kubernetes development in 2023.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Popularity
:::
:::globalParagraph
Popularity is an indicator of how secure the ongoing maintenance of a product is. A good currency to measure this is the number of GitHub stargazers:
:::
:::GlobalBlock{.ol-decimal .mb-5}</p>
<ol>
<li>minikube: >25.8k stars on GitHub</li>
<li>kind: >11.1k stars on GitHub</li>
<li>k3d: >4.1k stars on GitHub
:::
:::globalParagraph
As you can see, all three contestants already have significant popularity on GitHub. However, minikube is by far the most popular option. I would say that all three solutions will remain maintained for the foreseeable future because they currently have very vibrant communities.
:::</li>
</ol>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Performance evaluation
:::
:::globalParagraph
The following performance evaluation was conducted on Ubuntu, running on an Intel Core i7 (8th Gen) with 16 GiB of RAM. Although I am working on Linux, I used Docker Desktop on my machine and hope to collect results comparable with other platforms, too.
:::
:::globalParagraph
Remark: Docker Desktop runs a QEMU-based virtual machine on Linux, too, just like on Windows and macOS. The Kubernetes in Docker Desktop was deactivated.
:::
:::globalParagraph
I used the following versions of the tools:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>minikube: minikube version: v1.26.1</li>
<li>kind: kind v0.17.0 go1.19.2 linux/amd64</li>
<li>k3d: k3d version v5.4.1
:::</li>
</ul>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5 .mt-8}
Cluster startup time
:::
:::globalParagraph
In this test case I am measuring the time from requesting a new local cluster until it is started. I did not specify any special configurations, just using the defaults.
:::
:::globalParagraph
I ran this test five times and took the best of all results so that there are no downloading container images in the time measured.
:::
:::globalParagraph
The startup times are as follows:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>minikube (docker): 29.448 s</li>
<li>k3d: 15.576 s</li>
<li>kind: 19.691 s
:::
:::globalParagraph
The startup times of all contestants are quite close. If you have to download the required container images first, for example, this will probably impact the overall process more than the underlying bootstrapping. The exception is minikube’s kvm2 driver: this process is much more heavyweight and involves booting an entire virtual machine. I assume that VM-based drivers are not the first option for the majority of developers, anyway.
:::</li>
</ul>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5 .mt-8}
Cluster tear-down time
:::
:::globalParagraph
I measured the times for stopping and deleting a cluster. I ran this test multiple times and took the best of all results:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>minikube (docker): 2.616 s</li>
<li>k3d: 0.700 s</li>
<li>kind: 0.805 s
:::
:::globalParagraph
All tools stop and delete their clusters very swiftly. No big difference between them.
:::</li>
</ul>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5 .mt-8}
Cluster resource consumptions
:::
:::globalParagraph
I started a local Kubernetes cluster and, about 120 s after the startup finished, I inspected the resource consumption of the idling one-node cluster. I used the <em>docker stats</em> command for that.
:::
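:::globalParagraph
The measurement itself boils down to a one-shot snapshot of the container statistics:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code>docker stats --no-stream
</code></pre>
<p>:::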
:::globalParagraph
Please note that I disabled traefik on k3d to get a comparable setup. Since k3d runs at least two containers, I aggregated their consumption.
:::
:::globalParagraph
Here are the results:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>minikube with docker (CPUs=8, Memory=15681 MiB):
CPU: ~20% Memory Usage: ~680.8 MiB</li>
<li>K3d (CPUs=8, Memory=15681 MiB):
CPU: ~20% Memory Usage: ~502 MiB</li>
<li>kind (CPUs=8, Memory=15681 MiB):
CPU: ~20% Memory Usage: ~581 MiB
:::
:::globalParagraph
Looking at the results, you can spot some differences between minikube and k3d or kind. For a blank and idling cluster, minikube allocates about 35% more memory than k3d, and 17% more memory than kind. I suspect that with a growing number of workloads, the resource consumption of minikube will get to the limit of the development machine very fast.
:::
:::globalParagraph
In any case, I was very surprised by the CPU usage that went from 10% to 50% from time to time without anything going on in these clusters. That pattern occurred for any of the Kubernetes providers.
:::</li>
</ul>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Usability and Developer Experience (DX)
:::
:::globalParagraph
Usability and DX are very complex topics and it’s difficult to come up with quantitative metrics. However, I would like to point out a few of my findings that I like or do not like about the tools.
:::
:::globalParagraph
All tools are currently available as CLI (command line interface) only. That’s fine for me and probably a good chunk of developers on Linux and macOS. As far as I can tell, only a few developers on Windows like working with a terminal. From their perspective, a CLI probably does not provide the best possible DX. There are a couple of GUIs (graphical user interfaces) around. I added my findings in the next chapter.
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5 .mt-8}
minikube
:::
:::globalParagraph
minikube comes with a CLI that employs a lot of emojis. That’s a very individual preference, but I find them a bit annoying. However, they can be disabled.
:::
:::globalParagraph
The installation is very simple: you can get it via brew or a script, or download the binary and put it manually on your PATH.
:::
:::globalParagraph
minikube makes it very easy and swift to create a new cluster. It’s just one command with two words: <em>minikube start</em>. That’s simple enough. How would you expect to pass configuration options? Right! Directly as an argument to the start operation. One very important configuration is the Kubernetes API version. It doesn’t matter which version of minikube you have installed on your local machine; you can always select a different Kubernetes API version than the default. And that’s very simple and intuitive. Your production cluster runs on version 1.25.5? Then you want to run:
:::
:::BlogCode{.mb-5}</p>
<pre><code>minikube start --kubernetes-version=1.25.5
</code></pre>
<p>:::
:::globalParagraph
...and you will be provided with the correct <a href="/our-services/api-development-company/">API</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} version.
:::
:::globalParagraph
Other basic cluster operations are likewise simple: halting, stopping, or deleting the cluster is always only one command.
:::</p>
<p>:::globalParagraph{.mt-8 .text-xl}
<strong>Clean CLI, quick Kubernetes Dashboard</strong>
:::
:::globalParagraph
The command palette of the minikube CLI is clean and relatable. If you are working with multiple clusters in parallel, either started or sleeping, you can always add the <em>-p/--profile</em> argument to most of the actions and run the requested action on the specified cluster.
:::
:::globalParagraph
How do you list all existing clusters on the machine? That’s a
:::
:::BlogCode{.mb-5}</p>
<pre><code>minikube profile list
</code></pre>
<p>:::
:::globalParagraph
...and you will be presented with a list of created clusters.
:::
:::globalParagraph
When you have a cluster running, you can always open up the official Kubernetes dashboard with <em>minikube dashboard</em> (for the default profile). Of course, you can always install the Kubernetes dashboard to any cluster, but this command is really a shortcut to get a visual interface to this cluster after a few seconds.
:::</p>
<p><img src="/img/blog/minikube-vs-k3d-vs-kind-vs-getdeck-beiboot-4.jpg" alt="minikube dashboard">{.object-cover .w-full .mb-5}</p>
<p>:::globalParagraph
<strong>Add that ingress</strong>
:::
:::globalParagraph
If you need to expose a Kubernetes deployment or service to your local development machine, just use the networking and connectivity commands:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li><em>minikube service</em>: returns a URL to connect to a service</li>
<li><em>minikube tunnel</em>: connect to LoadBalancer services
:::
:::globalParagraph
One common component that needs to be enabled via addons is the ingress controller. Usually, that is the preferred way to expose an application. With minikube you don’t have an ingress controller available by default, instead, you have to provide it manually. Luckily, there is an addon with the well-known and widely adopted “nginx-ingress” available. Just run:
:::
:::BlogCode{.mb-5}</li>
</ul>
<pre><code>minikube addons enable ingress
</code></pre>
<p>:::</p>
<p>:::globalParagraph
and you can create ingress objects that will be served under http://192.168.49.2. Please note that the IP address of your cluster may differ. You can find it out with
:::
:::BlogCode{.mb-5}</p>
<pre><code>minikube ip
</code></pre>
<p>:::</p>
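<p>:::globalParagraph
With the addon enabled, a minimal ingress object could look like the following sketch (the host, service name, and port are example values):
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
</code></pre>
<p>:::</p>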
<p>:::globalParagraph{.mt-8}
<strong>Criticism</strong>
:::
:::globalParagraph
There is only one criticism I have about minikube: the poor automation options. There is no configuration file that I can simply feed into the command to set up a whole cluster as specified. Instead, I need to run all those commands sequentially. That is a pity and could be improved in the future.
:::
:::globalParagraph
A command to generate the tab-completion script is available for many terminals, too.
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
k3d
:::
:::globalParagraph
The installation of the k3d CLI is very simple. You can get it via brew or a script, or download the binary and put it manually on your PATH. However, the CLI takes more time to get used to. Compared to minikube, k3d does not provide as many features on the command line, yet you can realise almost all required setups with k3d just as well.
:::</p>
<p>:::globalTitle{:size="sm" :tag="h3" .mb-5}
Fewer CLI options but ingress out of the box
:::
:::globalParagraph
A developer will miss most of the handy features that the minikube CLI provides and the k3d CLI lacks. That’s not too much of an issue, though. If you are a more experienced developer, you probably work with kubectl very efficiently and know other tools from the ecosystem like Helm or Kustomize. For example, if you need the Kubernetes dashboard, you have to install it via Helm (or any other installation method). That’s no big deal, but it’s not as convenient as with minikube. Once you create a cluster, your global kubeconfig context is set to point to the new cluster.
:::
:::globalParagraph
k3d comes with traefik as an ingress controller. It’s always installed unless you explicitly deactivate it using a configuration flag. At Blueshoe, we found it very helpful to have it always available, as we didn’t have to handle that important feature during the development setup time.
:::</p>
<p>:::globalTitle{:size="sm" :tag="h3" .mb-5}
Port mapping meh
:::
:::globalParagraph
Setting up the port mapping to your local machine can be a bit cumbersome. For example, if you want to expose an application via ingress on port 8080 on your development machine, you have to specify this during cluster creation. And the notation is not super intuitive for developers; have a look at the documentation. Create a cluster with a fixed set of port mappings like so:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code>k3d cluster create -p "8080:80@loadbalancer" -p "8443:443@loadbalancer" …
</code></pre>
<p>:::
:::globalParagraph
Other port configurations are possible as well, but from a DX perspective it’s not very convenient to recreate the entire cluster just because you forgot to map the ports.
:::
:::globalParagraph
If you just want to work as a developer in a team, you probably get a cluster configuration file anyway. With it, and the correct specifications, you will have a very good time setting everything up. You just have to run:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code>k3d cluster create --config myconfig.yaml
</code></pre>
<p>:::
:::globalParagraph
...and within a few seconds you will be all set. That’s fast and very convenient. A big DX plus for k3d.
:::
:::globalParagraph
A command to generate the tab-completion script is available for many terminals, too.
:::</p>
<p>:::globalTitle{:size="sm" :tag="h3" .mb-5}
kind
:::
:::globalParagraph
kind is very similar to k3d in most aspects. Just like k3d and minikube, you can install it using popular package managers, scripts, or as a single executable.
:::
:::globalParagraph
If you know the handling of the k3d CLI already, you will probably be used to kind’s CLI very fast. The options are almost identical, and so are the limitations.
:::
:::globalParagraph
There is nothing much to add in this section.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Development Options
:::
:::globalParagraph
Getting your own code to run in one of the Kubernetes tools can be challenging and inconvenient. First, you need to build container images, as Kubernetes only allows running container instances. Usually, Kubernetes pulls these images from an external container registry (such as Dockerhub, quay.io, or a self-hosted registry). If a developer wanted to run some custom code, it would require a workload specification and a registry that serves the container image. This can lead to an enormous loss of efficiency.
:::
:::globalParagraph
Luckily, all tools provide some workarounds or shortcuts to remove this barrier (at least to some degree).
:::</p>
<p>:::globalTitle{:size="sm" :tag="h5" .mb-5 .mt-8}
Mounting local code
:::</p>
<p>:::globalParagraph
minikube and k3d provide the capability to mount code from the developer’s machine directly into the running Kubernetes node.
:::
:::globalParagraph
With k3d this is possible with the local path provisioner of k3s. A developer can create a PersistentVolumeClaim that points to a path on the host system. Subsequently, this PVC can be mounted into a container instance and used in the container process. This will allow you to either run a container process with the current code (restarting the container once the code changes) or start the container process with <em>hot reloading capabilities</em>. Of course, this is highly specific to the framework or process being run and has nothing to do with Kubernetes. However, this only works at cluster creation time like so:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code>k3d cluster create my-cluster --volume /my/home/go/src/github.com/nginx:/data
</code></pre>
<p>:::</p>
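<p>:::globalParagraph
For illustration, a PersistentVolumeClaim against the local path provisioner could look like this sketch (the name and size are example values; k3s ships the provisioner under the storage class <em>local-path</em>):
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: code-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
</code></pre>
<p>:::</p>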
<p>:::globalParagraph
Adding volumes after the cluster has been created and is running is still an open issue.
:::
:::globalParagraph
With the <em>minikube mount</em> command, exactly the same is possible. You can even mount storage volumes after creating the cluster. Instead of using a Kubernetes PVC, you can mount the code using the <em>hostPath</em> property of a Pod, which makes it a bit more convenient.
:::</p>
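<p>:::globalParagraph
As a sketch, assuming you ran <em>minikube mount /my/home/project:/mnt/src</em> beforehand, a Pod could access the mounted code via <em>hostPath</em> like this (all names and paths are example values):
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: dev-pod
spec:
  containers:
    - name: app
      image: myimage   # example image
      volumeMounts:
        - name: src
          mountPath: /app
  volumes:
    - name: src
      hostPath:
        path: /mnt/src
</code></pre>
<p>:::</p>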
<p>:::globalTitle{:size="sm" :tag="h5" .mb-5 .mt-8}
Loading a local container image
:::</p>
<p>:::globalParagraph
A more practical and less invasive approach to running local code in minikube, k3d, and kind is the load-image feature. Why less invasive? As a developer, you don’t need to adjust the Kubernetes objects (Pods, PVCs, etc.) for your local environment based on paths that are potentially unique to your system (e.g. mounted home directories usually differ between developers). Instead, you make a container image available to your local cluster without the need for a dedicated container registry. That means you build a local container image based on your current code (e.g. <em>docker build . -t myimage</em>) and transfer it directly into your local Kubernetes cluster to run it.
:::
:::globalParagraph
That approach is leveraged by almost all Kubernetes development toolkits, such as tilt.dev, devspace, and others. Watching for code changes, these development tools automatically run a build-load-execute cycle. This approach is slower than mounting local code with an adjusted container process, but at least it does not (always) modify the Kubernetes objects.
:::
:::globalParagraph
In order to do so with minikube, you run...
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code>minikube image load &#x3C;name>
</code></pre>
<p>:::
:::globalParagraph
In k3d you load an image with...
:::
:::BlogCode{.mb-5}</p>
<pre><code>k3d image import &#x3C;name>
</code></pre>
<p>:::
:::globalParagraph
and with kind it is...
:::
:::BlogCode{.mb-5}</p>
<pre><code>kind load docker-image &#x3C;name>
</code></pre>
<p>:::</p>
<p>:::globalParagraph
...from your terminal.
:::
:::globalParagraph
There are a few other tools available, like ksync, which copies code into containers running in Kubernetes with a more general technical approach. A great option for developers working with any kind of Kubernetes environment, either local or remote, is introduced in the next section.
:::</p>
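<p>:::globalParagraph
Put together, one iteration of the build-load cycle described above might look like this sketch (the image and deployment names are illustrative). Note that the workload should use <em>imagePullPolicy: IfNotPresent</em> or <em>Never</em>, so the cluster does not try to pull the locally loaded tag from a registry:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code># build the image from the current code
docker build . -t myimage:dev
# make it available inside the local cluster (minikube shown, see above for k3d/kind)
minikube image load myimage:dev
# point the (illustrative) deployment at the freshly loaded tag
kubectl set image deployment/myapp app=myimage:dev
</code></pre>
<p>:::</p>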
<p>:::globalTitle{:size="sm" :tag="h5" .mb-5 .mt-8}
The best alternative for local Kubernetes development
:::</p>
<p>:::globalParagraph
The options from above don’t make all required development features easily accessible. For example, overriding environment variables is not very easy, as they can come from different Kubernetes objects: ConfigMaps, Secrets, Pod specifications, the Downward API, and others. A developer who is not used to working with Kubernetes may have a hard time fiddling with environment variables.
:::
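:::globalParagraph
To illustrate why this can be confusing, the following Pod sketch (all names are illustrative) sets environment variables from four different sources:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["env"]
      env:
        - name: LOG_LEVEL            # literal value from the Pod spec
          value: "debug"
        - name: FEATURE_FLAG         # from a ConfigMap
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: feature-flag
        - name: API_TOKEN            # from a Secret
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: api-token
        - name: POD_NAME             # from the Downward API
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
</code></pre>
<p>:::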
:::globalParagraph
The almighty debugger, which is not easily attached to a process running in Kubernetes, is not very handy with the options from above either. The above-mentioned options have some other setbacks, too.
:::</p>
<p>:::globalParagraph
At this point, Blueshoe decided to build a more sophisticated development tool that spares the developer from spending time in the build-load-execute cycle or getting local directories to run in Kubernetes: Gefyra.
:::
:::globalParagraph
Gefyra not only connects to local Kubernetes clusters based on minikube, k3d, or kind. It connects to virtually any Kubernetes cluster running anywhere. This allows Gefyra users to create dedicated development clusters in the cloud while providing a local coding experience to developers.
:::</p>
<p>:::globalParagraph
Gefyra runs the code on a local Docker runtime (without Kubernetes at all) but applies some networking and process magic to connect the local container instance with a Kubernetes cluster. The process on a developer machine will feel as if it ran directly within a Kubernetes namespace (including networking features), with the upside of having all common development tools available locally. This can drastically improve development velocity while maintaining very good dev/prod parity.
:::
:::globalParagraph
If you have an opinion about Gefyra, miss a feature, or need to file a bug report, feel free to open an issue or discussion on GitHub.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Graphical User Interfaces and Docker Desktop
:::
:::globalParagraph
If you are looking for a graphical user interface for your local Kubernetes cluster, please have a look at K3x and minikube GUI. Both projects are in a very early stage of development as of the time of writing this article.
:::
:::globalParagraph
The main goals of these projects are to let the user create, start, stop, and destroy Kubernetes clusters with the click of a button. In addition, they allow developers to manage the most important operations with keyboard shortcuts and reduce the learning curve of using Kubernetes.
:::
:::globalParagraph
And then there is Docker Desktop which comes with its own Kubernetes solution. However, Kubernetes in Docker Desktop does not really provide any of the features that minikube, k3d, or kind provide. You can simply start and stop a Kubernetes cluster using a graphical user interface.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
A cloud-based Kubernetes development environment with Getdeck
:::
:::globalParagraph
At Blueshoe, we realised that local Kubernetes clusters become a challenge with growing workloads. Especially on Windows and macOS, even a few development workloads in Kubernetes turn the development machine into a slow-walking zombie. That was very impractical, hence we decided to look for other solutions for our development teams. For the complex Kubernetes-native software architecture we are developing, it was not possible to create a shared cluster setup. Splitting up one physical cluster using namespaces is something many development teams currently do. Instead, we wanted to provide dedicated, full-blown, on-demand Kubernetes clusters to our developers, but with all the features that a mature development organisation demands, such as lifecycle management, resource constraints, and so on.
:::
:::globalParagraph
We created Getdeck for that.
:::
:::globalParagraph
With Getdeck Beiboot, a team of developers only needs to operate one physical Kubernetes cluster. The Beiboot operator spins up “virtual” Kubernetes clusters within the host cluster and manages their lifecycles. The creation of an ad-hoc Kubernetes environment takes about 20 seconds and does not consume any resources on the development machine.
:::
:::globalParagraph
In addition, the Beiboot Shelf feature allows developers to create pre-provisioned Kubernetes clusters off the shelf. It only takes a few seconds longer, and developers have a dedicated Kubernetes cluster running all required workloads for their tasks, containing all the data required to match the production infrastructure. This is not only convenient for development purposes but also for automated tasks in CI/CD scenarios.
:::
:::globalParagraph
And the best part: these clusters are tunnelled to the local machine, so it feels like they run on the local host. That is very handy.
:::
:::globalParagraph
Getdeck now also comes with a graphical user interface: Getdeck Desktop.
:::
:::globalParagraph
It allows developers to manage Beiboot clusters in no time. They can establish a connection to one and work with it as if it ran on their local machine, but without the computer blasting off.
:::
:::globalParagraph
You can easily test how this works with our free Getdeck-as-a-Service offer. Just download the desktop app, enter some ports, create a cluster, and start developing in a virtual Kubernetes cluster hosted and paid for by us. The cluster comes with the following restrictions:
:::</p>
<p>:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>max. 4h cluster lifetime</li>
<li>no session timeout</li>
<li>max. 3 nodes (max. 2 cores, 6GB RAM, 50GB Storage)</li>
<li>max. 1 cluster at a time
:::</li>
</ul>
<p>:::globalTitle{:size="lg" .mb-5 .mt-8}
Closing Remarks
:::
:::globalParagraph
It is very difficult to pick a winner in this comparison. All three established solutions, minikube, k3d, and kind are very similar to each other. There are some pros and cons for any solution but nothing that really stands out. That’s good because it’s not really possible to choose the wrong tool, either. I like the overall usability of all of these tools, given they address a professional working environment. All of them are fast, easy to install, and quite easy to use.
:::
:::globalParagraph
I have a gut feeling that minikube is slightly ahead of all options and the closest to the official Kubernetes development roadmap. Especially for a single (potentially inexperienced) developer, the entry barrier seems quite low. Yet, it’s the option with the highest resource demands. I would recommend minikube to Kubernetes starters.
:::
:::globalParagraph
At Blueshoe, we have been very happy with k3d in the past. Especially if you run many different Kubernetes clusters, you will be happy about the lower resource consumption compared to minikube. If you are working in a team, the configuration files coming with k3d or kind will be a huge benefit for all.
:::
:::globalParagraph
For some of our automated test cases, we switched over to minikube because of the <em>--kubernetes-version</em> argument. It’s dead simple to set the requested Kubernetes version and voilà, it’s running. With k3d, you have to look up the corresponding k3s Docker image to use.
:::
:::globalParagraph
In the long run, we actually don’t see local Kubernetes development as a sustainable option. Remote development environments are the future! Getdeck Beiboot will run all Kubernetes-based resources, and with tools like Gefyra, we enable developers to work in a real Kubernetes-based development environment.
:::
:::globalParagraph
If you want to know more about Kubernetes-based development follow me on LinkedIn, join our discord or drop us a line at Blueshoe.
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Development</category>
            <enclosure url="https://blueshoe.io/img/blog/minikube-vs-k3d-vs-kind-vs-getdeck-beiboot.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Model Context Protocol Meets OpenAPI]]></title>
            <link>https://blueshoe.io/blog/model-context-protocol-with-openapi</link>
            <guid>https://blueshoe.io/blog/model-context-protocol-with-openapi</guid>
            <pubDate>Thu, 12 Jun 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Artificial Intelligence (AI), and Large Language Models (LLMs) in particular, is on everyone's mind. These models are now expected not only to generate text, images, video, and audio, but also to operate the software we humans work with, making us more efficient and human-machine interfaces more natural. In this blog post, we offer a simple introduction to the Model Context Protocol (MCP) and share our first experiences with it. We also examine how APIs described with the OpenAPI standard can be integrated via MCP.</p>
<p><img src="/img/blogs/llm-openapi.svg" alt="MCP Meets OpenAPI - An Experience Report">{.object-cover .max-w-full .mb-5}</p>
<h2>Model Context Protocol</h2>
<p>The Model Context Protocol (MCP) is a specification for providing context from applications to LLMs. In simple terms, the protocol describes how an LLM can operate a piece of software (mostly its interfaces).</p>
<p>It stands to reason that existing structures, such as the OpenAPI standard, are used to define an MCP for an application. However, before we bring MCP and OpenAPI together, let's first look at a few details of the MCP.</p>
<h3>Communication Model of the MCP</h3>
<p>Communication in MCP follows a client-server model. The client sits on the LLM side, and the server is an abstraction layer over the application to be operated. This layer describes how the client can operate the underlying application.</p>
<p><img src="/img/blogs/mcp_fastapi/architecture.png" alt="General Structure of MCP Architecture" title="Source: https://modelcontextprotocol.io/introduction (07.06.2025)"></p>
<p>The underlying application can be anything - a database/data source, a complex application, or an interface. The MCP host is the program that executes the MCP client (e.g., Claude Desktop, ChatGPT).</p>
<p><img src="/img/blogs/mcp_fastapi/communication.png" alt="MCP Communication Flow" title="Source: https://modelcontextprotocol.io/docs/concepts/architecture (07.06.2025)"></p>
<p><strong>Important</strong>: the graphic shows bidirectional communication between client and server. This means our MCP server must be reachable by the MCP client. Locally running MCP servers therefore need a setup that makes them available to the mostly internet-based MCP clients, for example via a tunnel such as Ngrok; a locally installed client like Claude Desktop can reach them directly.</p>
<h3>Resources and Tools in the MCP</h3>
<p>I don't want to delve too deeply into the complexities of the MCP; all details can be read in the <a href="https://modelcontextprotocol.io/">official documentation</a>{target="_blank"}. For the context of this blog post, <strong>Resources</strong> and <strong>Tools</strong> are what matters most.</p>
<p><a href="https://modelcontextprotocol.io/docs/concepts/resources">Resources</a>{target="_blank"} are simply data sources. They describe how the client (the LLM) can retrieve data from the server. All forms of data are possible: Text, binary data, images, sound, videos, structured or unstructured. For accessing specific resources, <em>Resource Templates</em> are used. These allow access to specific resources - for example, "Give me the purchase order with number <em>X341</em>".</p>
<p><a href="https://modelcontextprotocol.io/docs/concepts/tools">Tools</a>{target="_blank"} enable actions that allow interaction. This lets LLMs perform actions against running services: creating or manipulating a data record, computing a task, triggering an order, or manipulating an image. Everything is conceivable.</p>
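<p>To make both concepts more concrete, here is a minimal sketch with <code>fastmcp</code>; the URI scheme, function names, and order data are made up for illustration:</p>
<pre><code class="language-python">from fastmcp import FastMCP

mcp = FastMCP("orders-demo")

# Resource template: lets the client fetch a specific resource,
# e.g. "Give me the purchase order with number X341".
@mcp.resource("orders://{order_id}")
def get_order(order_id: str) -> str:
    return f"Purchase order {order_id}: 3 items, status open"

# Tool: an action the client may invoke against the running service.
@mcp.tool()
def create_order(customer: str, items: int) -> str:
    return f"Created an order for {customer} with {items} item(s)"

if __name__ == "__main__":
    mcp.run()
</code></pre>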
<h3>Connection of an LLM and an Application via MCP</h3>
<p>With the description of resources and tools, MCP promises that LLMs can operate applications in a meaningful way. They unlock efficiencies and effectively create a new interface for both machine-to-machine and human-machine communication. Looking at the details of resource and tool descriptions, it becomes apparent that much of the information used for MCP already exists in standard formats like OpenAPI. In both cases, structures are described in JSON. Naturally, the structure differs, and this is where <code>fastmcp</code> comes into play.</p>
<p>In the following sections, we explore the Python package <a href="https://gofastmcp.com/getting-started/welcome">fastmcp</a>{target="_blank"} and share our insights.</p>
<h2>fastmcp - The Pythonic MCP Package</h2>
<p><code>fastmcp</code> is a framework for implementing the MCP protocol, written in Python. Version 1 of the package was incorporated into the official Python SDK. The current version 2 (at the time of writing) focuses primarily on good usability and a feature set that is as complete as possible.</p>
<p>Upon reviewing the documentation, it's immediately apparent that there's a dedicated page for <a href="https://gofastmcp.com/servers/openapi">OpenAPI</a>{target="_blank"} integration. The example appears remarkably simple:</p>
<pre><code class="language-python">import httpx
from fastmcp import FastMCP

# Create an HTTP client for your API
client = httpx.AsyncClient(base_url="https://blueshoe.youtrack.cloud/api/", headers={
    "Authorization": "############",
    "Accept": "application/json",
})

# Load your OpenAPI spec 
openapi_spec = httpx.get("https://blueshoe.youtrack.cloud/api/openapi.json").json()

# Create the MCP server
mcp = FastMCP.from_openapi(
    openapi_spec=openapi_spec,
    client=client,
    name="blueshoe-youtrack",
)

if __name__ == "__main__":
    mcp.run()
</code></pre>
<p>The server receives a URL with the OpenAPI JSON and a client object for API usage.</p>
<p><strong>My idea:</strong> I want to make our ticket system <em>YouTrack</em>, which we use for project management at Blueshoe, controllable via LLM.</p>
<p>This works seamlessly. Now, how can LLMs communicate with my server? The documentation offers two possibilities, Claude Desktop or Ngrok. What are the differences?</p>
<h3>MCP Local or Remote?</h3>
<p><code>fastmcp</code> allows the written server to be simply installed in the locally installed Claude Desktop. Communication then occurs via <em>stdio</em>. Alternatively, the server can be bound to a port, with communication then happening via SSE (<a href="https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events">Server-Sent Events</a>{target="_blank"}). <a href="https://ngrok.com/">Ngrok</a>{target="_blank"} comes into play as an alternative. Ngrok offers simple options to expose local programs to the internet. However, my initial attempts started with Claude Desktop.</p>
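<p>In code, the difference is just the transport passed to <code>run()</code>. A minimal sketch of the server entry point; host and port below are examples, not requirements:</p>
<pre><code class="language-python">from fastmcp import FastMCP

mcp = FastMCP("blueshoe-youtrack")

if __name__ == "__main__":
    # Default: stdio transport, for a locally installed client like Claude Desktop
    # mcp.run()

    # Alternative: bind to a port and communicate via SSE,
    # e.g. to expose the server through Ngrok
    mcp.run(transport="sse", host="127.0.0.1", port=8000)
</code></pre>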
<h2>MCP via Claude Desktop</h2>
<p><code>fastmcp</code> comes with a command to install the MCP server in Claude Desktop. Accordingly, I mainly followed the <a href="https://gofastmcp.com/integrations/claude-desktop">documentation</a>{target="_blank"} for connecting to Claude Desktop. So let's go:</p>
<pre><code class="language-shell">fastmcp install server.py
</code></pre>
<p>When starting Claude Desktop, the following message appeared:</p>
<p><img src="/img/blogs/mcp_fastapi/claude_1.png" alt="Claude Desktop Error"></p>
<p>Hmm, okay, what's the problem? I started the server manually, and it worked. After a brief investigation, I found the culprit: fastmcp's installation command is not particularly smart:</p>
<p><img src="/img/blogs/mcp_fastapi/claude_2.png" alt="Claude Desktop fastmcp Installation"></p>
<p>Unfortunately, the installation command does not select the correct Python interpreter. This prevents the script from being executed.</p>
<p>Note: I've cut out a small piece of the path here.</p>
<p>Okay, this is easily resolved. In my project directory, I determine the correct Python interpreter:</p>
<pre><code class="language-shell">which python3
</code></pre>
<p>I then put this interpreter path in place of <a href="https://github.com/astral-sh/uv">uv</a>{target="_blank"} in the Claude Desktop configuration file, with the path to <code>server.py</code> as the only argument.</p>
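<p>The relevant entry in <code>claude_desktop_config.json</code> then looks roughly like this; the paths are placeholders for your own environment:</p>
<pre><code class="language-json">{
  "mcpServers": {
    "blueshoe-youtrack": {
      "command": "/path/to/project/.venv/bin/python",
      "args": ["/path/to/project/server.py"]
    }
  }
}
</code></pre>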
<p>Problem solved! Claude Desktop starts without problems. I ask a question: which tickets were processed today?</p>
<p><img src="/img/blogs/mcp_fastapi/claude_3.png" alt="Claude Desktop fastmcp Installation"></p>
<p>Ah yes. Crap! It turns out that such an extensive OpenAPI specification immediately exceeds the <em>length limits</em>. On the upside, an extensive spec usually means a pretty good description. I think highly of JetBrains and their products, so I had hoped the YouTrack OpenAPI spec would be ideal input for my experiment.</p>
<p>I tested further models, including paid variants. Unfortunately, I often ran into token or rate limits. My personal impression at this point was one of powerlessness: everything is easily wired together, but how do you get to the bottom of the problems? I ended up debugging my server script via <code>print</code>, which was not a particularly pleasant experience.</p>
<h2>MCP via Client Script &#x26; Ngrok</h2>
<p>Okay, enough of UIs. Let's script the client instead.</p>
<pre><code class="language-python">import anthropic
from rich import print

# Your server URL (replace with your actual URL)
url = 'https://############.ngrok-free.app'

client = anthropic.Anthropic()

response = client.beta.messages.create(
    model="claude-3-5-haiku-20241022",
    max_tokens=1000,
    messages=[{"role": "user", "content": "What issues have been edited on Blueshoe YouTrack today?"}],
    mcp_servers=[
        {
            "type": "url",
            "url": f"{url}/sse",
            "name": "youtrack",
        }
    ],
    extra_headers={
        "anthropic-beta": "mcp-client-2025-04-04"
    }
)

print(response.content)
</code></pre>
<p>With ngrok we make our server available on the internet:</p>
<pre><code class="language-shell">ngrok http 8000 
</code></pre>
<p>The client script gets the corresponding address and our message. What was processed today in our YouTrack?</p>
<p>Some interesting outputs in the log:</p>
<pre><code class="language-python">BetaMCPToolUseBlock(
      id='mcptool_01JqQroumE3gHkVnYFZzRc7C',
      input={'query': 'updated: today'},
      name='POST_searchassist',
      server_name='youtrack',
      type='mcp_tool_use'
  )
</code></pre>
<p>Aha - a query <code>updated: today</code> - that looks good, right?</p>
<p>Unfortunately no - the final answer looks roughly like this:</p>
<blockquote>
<p>I apologize for the persistent errors. It seems there might be an issue with the YouTrack API tools at the moment. Without being able to directly query the system, I can provide some general advice: To find issues edited today in YouTrack, you would typically use a search query like "updated: today" in the YouTrack interface.</p>
</blockquote>
<p>Just use the YouTrack UI! Unfortunately, the LLM did not come to a good conclusion when using the YouTrack API. After initially running into authentication problems, which I was able to resolve, the LLM actually had a clear path. Yet it was not able to formulate a sensible API query. Even seemingly simple queries like <code>How many tickets were processed today?</code> (we just want "only" a number) did not work.</p>
<h2>Conclusion</h2>
<p>The Model Context Protocol is a good systematization of the communication between LLMs and existing systems. However, that does not mean an integration with a database or application is complete after a simple handshake. Even excellent integration points such as OpenAPI schemas cannot be wired up without further ado; fine-tuning is required before an LLM delivers good results.</p>
<p><a href="https://modelcontextprotocol.io/docs/concepts/tools#best-practice">Best Practices</a>{target="_blank"} for creating MCP structures are already available. An OpenAPI schema can form the basis for integration - but this does not replace the implementation.</p>
<p>MCP is <strong>not</strong> yet plug'n'play. It also requires good engineering and clear goals for what an integration should achieve. I will continue to follow this topic, probably in a further blog post.</p>
<p>One question that I, as the author of software projects, keep asking myself: how does one actually test these integrations reliably? The fastmcp documentation appears somewhat brief in the <a href="https://gofastmcp.com/patterns/testing">Testing</a>{target="_blank"} area. If you know something about this or have insights, please share them in the comments.</p>]]></content:encoded>
            <category>FastAPI</category>
            <category>API</category>
            <category>Development</category>
            <category>AI</category>
            <enclosure url="https://blueshoe.io/img/blogs/llm-openapi.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Cool new features for Django-Hurricane]]></title>
            <link>https://blueshoe.io/blog/new-features-django-hurricane</link>
            <guid>https://blueshoe.io/blog/new-features-django-hurricane</guid>
            <pubDate>Thu, 24 Jun 2021 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>We are pushing the development in the areas of Django and Kubernetes with our open-source project <strong>Django-Hurricane</strong>. Today, we’d like to show you some <strong>new features</strong>.</p>
<p><img src="/img/blog/new-features-django-hurricane.jpg" alt="Cool new features for Django-Hurricane">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
Towards the end of 2020, we told you about our new <strong>open-source</strong> project Django-Hurricane. You can read our blog post about the launch here. With this project, we want to drive the <strong>Django and Kubernetes</strong> development forward and make our developers’ lives easier by eliminating a lot of routine jobs that pop up during the project set-up, transferring them to a robust framework.
:::
:::globalParagraph
We also worked on the to-do list in the GitHub Repository and are now able to show you some new features of Django-Hurricane.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Documentation
:::
:::globalParagraph
We have done extensive work on the documentation. Apart from the <strong>user guide</strong>, there’s also a low-level <a href="/our-services/api-development-company/">API</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} documentation. The user guide provides helpful information about the use of <strong>Django-Hurricane</strong>, such as the available command options, useful tips for the configuration as well as general information about the individual components of Django-Hurricane. In the <a href="/our-services/api-development-company/">API</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} documentation, you’ll be able to look up the features of those individual components.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Running management commands
:::
:::globalParagraph
We have also implemented a feature which allows for the <strong>management commands</strong> to be specified and run directly in the ‘serve’ command. The new <strong>command options</strong> are:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>no-metrics: deactivate the collection of metrics.</li>
<li>command: specifies a management command to run before the HTTP server is started. The option is repeatable, so it can be given several times with different management commands, each of which may have further options of its own. In that case, the command name and its options have to be passed together as a single string.
:::
:::globalParagraph
A command using the management commands could look as follows:
:::</li>
</ul>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-shell">python manage.py serve --command makemigrations --command "compilemessages --locale=de_DE"
</code></pre>
<p>:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Probe Endpoints
:::
:::globalParagraph
Probe endpoints can now be defined separately using the ‘serve’ command. The options for this are:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>startup-probe: the path for the startup endpoint (default: /startup).</li>
<li>readiness-probe: the path for the readiness endpoint (default: /ready).</li>
<li>liveness-probe: the path for the liveness endpoint (default: /alive).</li>
<li>req-queue-len: the threshold for the request queue. If this threshold is exceeded, the readiness probe responds with status code 400.
:::</li>
</ul>
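<p>:::globalParagraph
In a Kubernetes deployment, these endpoints map directly onto the pod's probe configuration. A sketch, assuming the default paths and that Hurricane serves the probes on a separate probe port (8001 when the application runs on the default port 8000; please verify this against your setup):
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">startupProbe:
  httpGet:
    path: /startup
    port: 8001
readinessProbe:
  httpGet:
    path: /ready
    port: 8001
livenessProbe:
  httpGet:
    path: /alive
    port: 8001
</code></pre>
<p>:::</p>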
<p>:::globalTitle{:size="lg" .mb-5}
Webhooks for probe events
:::
:::globalParagraph
In addition to this, we have implemented another new feature for Django-Hurricane: the ability to send <strong>webhooks</strong> to a specific address. Currently, there are three webhooks, corresponding to the three probes. The first one is the startup webhook, which is sent to the specified address after the HTTP server has started. Should the application fail to start, a startup webhook with the status ‘failed’ is sent instead, and the application is then stopped.
:::
:::globalParagraph
The liveness and readiness webhooks are initiated after the respective <strong>probe requests</strong>. These webhooks are only sent following a change in state, i.e. when the first probe request changes the state from ‘None’ to ‘Healthy/Unhealthy’, or when the state changes from ‘Healthy’ to ‘Unhealthy’ or vice versa. In the case of a webhook with the status ‘failed’, a corresponding error traceback is also sent.
:::
:::globalParagraph
In order for webhooks to be sent at all, a URL has to be provided via the <strong>‘webhook-url’</strong> command option. The entire command for running the application together with webhooks would look as follows:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-shell">python manage.py serve --webhook-url "http://&#x3C;address>"
</code></pre>
<p>:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Each webhook also contains the following data:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>Status: <em>‘failed’</em> or <em>‘succeeded’</em>, depending on whether the probe succeeds or fails.</li>
<li>Type: <em>‘startup’</em>, <em>‘readiness’</em> or <em>‘liveness’</em>.</li>
<li>Timestamp: the moment when the webhook was initiated.</li>
<li>Hostname: the name of the computer or server.</li>
<li>Version: the Hurricane version.</li>
<li>Error trace: if the webhook has the status <em>‘failed’</em>, the server sends the error message along with the traceback.
:::</li>
</ul>
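<p>:::globalParagraph
A webhook payload could then look roughly like this; the field names and values below are purely illustrative, see the Hurricane documentation for the exact format:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-json">{
  "status": "failed",
  "type": "liveness",
  "timestamp": "2021-06-24T12:00:00",
  "hostname": "my-app-pod-1",
  "version": "0.9.0",
  "traceback": "Traceback (most recent call last): ..."
}
</code></pre>
<p>:::</p>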
<p><img src="/img/blog/cyclone.jpg" alt="cyclone">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
Despite all our hard work on the Django-Hurricane to-do list over the last few months, we've still got some unchecked boxes left. We'd therefore be very grateful for any help with the further development of Django-Hurricane as well as any of our other <strong>open-source projects</strong>. We're excited about the new challenges ahead and are looking forward to ticking off some more boxes in our GitHub repository's to-do list.
:::</p>
<p>::GlobalPodcastSection{:videoId="KapP6ouEL3E" :videoPosition="left" :bg="bg-bs-gray" .mb-5}
:::GlobalPreTitle{:color="text-bs-green" .mb-3}
OUR KUBERNETES PODCAST
:::
:::GlobalTitle{:tag="h3" .mb-6}
Tools for the Craft: Navigating the Kubernetes ecosystem
:::
:::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert are talking about the various options developers have for running remote Kubernetes development environments.
:::
:::globalParagraph{:font-size="lg" .mb-4}
More editions of our podcast can be found here:
:::
::::GlobalButton{:url="/podcast/" :label="More podcasts!" :color="green"}
::::
::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Django</category>
            <category>Python</category>
            <category>Team Blueshoe</category>
            <enclosure url="https://blueshoe.io/img/blog/new-features-django-hurricane.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Nuxt in Production - Delivering Websites Statically vs. Dynamically]]></title>
            <link>https://blueshoe.io/blog/nuxt-generate-vs-server</link>
            <guid>https://blueshoe.io/blog/nuxt-generate-vs-server</guid>
            <pubDate>Sun, 12 Jan 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Nuxt is a modern framework that was specifically developed for creating SEO-optimized and high-performance web applications. In this article, you will learn how to optimally deploy Nuxt in production environments. We will explore the various rendering methods – static generation and server rendering – and provide tips on how you can get the best out of both worlds.</p>
<p><img src="/img/blogs/vuejsnuxt.svg" alt="Nuxt in Production - Static vs. Dynamic Delivery">{.object-cover .max-w-full .mb-5}</p>
<p>::GlobalTitle{:size="lg" .mb-4}
Introduction
::</p>
<p>::GlobalParagraph
Nuxt is a versatile framework that helps developers create modern web applications efficiently. A decisive aspect when developing with Nuxt is choosing the right rendering method. The two main options are:
::</p>
<p>::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li><strong>Static Generation (Nuxt Generate)</strong></li>
<li><strong>Server Rendering (Nuxt Server)</strong>
::</li>
</ul>
<p>::GlobalParagraph
But when should one use which method, and which best suits the requirements of your application? In this article, we will examine both approaches, their advantages and disadvantages, and provide practical recommendations. Hybrid approaches that combine both methods will also be presented.
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Static Generation (Nuxt Generate)
::</p>
<p>::GlobalParagraph
Static generation is an approach where HTML files are created during the build process and then directly served by a web server or CDN. This ensures extremely fast loading times and reduces dependency on backend systems. All pages of the website are generated during the build process and prepared as static HTML files.
::</p>
<p>::GlobalParagraph
A practical example of static generation is creating a complete website (blog etc.). Here, all pages can be generated during the build process. Additionally, a crawler can be used that also generates all internal links of the pages to ensure that all found content is indexed:
::</p>
<p>::BlogCode</p>
<pre><code class="language-typescript">// nuxt.config.ts

export default defineNuxtConfig({
  nitro: {
    prerender: {
      crawlLinks: true,
      routes: ['/'],
    },
  }
})
</code></pre>
<p>::</p>
<p>::GlobalParagraph
With this configuration, all specified routes are generated during the build and prepared as static HTML files. The crawler ensures that all internal links are also generated. This offloads the web server and makes pages load extremely quickly. The <code>routes</code> option in the <code>nuxt.config.ts</code> explicitly specifies the paths that should be generated during the build process.
::</p>
<p>::GlobalParagraph
Further configuration information can be found in the <a href="https://nuxt.com/docs/getting-started/configuration">official Nuxt documentation</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.
::</p>
<p>::GlobalParagraph
To use the static generation mode, you can run the following npm command:
::</p>
<p>::BlogCode</p>
<pre><code class="language-bash">npm run generate
</code></pre>
<p>::</p>
<p>::GlobalParagraph
After executing the command <code>npm run generate</code>, the static HTML files will be created in the <code>dist</code> directory. To make the page visible, you must upload the generated files to a web server or a CDN.
::</p>
<p>::GlobalParagraph
Summary:
::</p>
<p>:::GlobalBlock{.ul-disk .my-4}</p>
<ul>
<li><strong>Performance:</strong> Extremely fast loading times through pre-generated HTML files.</li>
<li><strong>Costs:</strong> Reduces dependency on server-side processes and lowers hosting costs.</li>
<li><strong>SEO Optimization:</strong> Better indexing through pre-rendered content.</li>
<li><strong>Limitations:</strong> No real-time content, long build times for extensive websites.
:::</li>
</ul>
<p>::GlobalTitle{:size="lg" .mb-5}
Server Rendering (Nuxt Server)
::</p>
<p>::GlobalParagraph
With server rendering, HTML output is dynamically generated on a server as soon as a user makes a request. This approach is particularly suitable for applications that need to provide personalized content or must frequently update data. A typical example is a dashboard that displays user data in real-time.
::</p>
<p>::BlogCode</p>
<pre><code class="language-vue">&#x3C;!-- pages/dashboard.vue -->

&#x3C;script setup lang="ts">
interface User {
  name: string
  lastLogin: string
}

const user = ref&#x3C;User | null>(null)

const { data } = await useFetch&#x3C;User>('/api/user')
user.value = data.value
&#x3C;/script>

&#x3C;template>
  &#x3C;div v-if="user">
    &#x3C;h1>Welcome, {{ user.name }}&#x3C;/h1>
    &#x3C;p>Last Login: {{ user.lastLogin }}&#x3C;/p>
  &#x3C;/div>
&#x3C;/template>
</code></pre>
<p>::</p>
<p>::GlobalParagraph
In this example, the <code>useFetch</code> method is used to load the required user data during server rendering and pass it to the component. This ensures that the contents are always current. Thanks to the auto-import functionality of Nuxt, system functions like <code>ref</code> and <code>useFetch</code> do not need to be explicitly imported.
::</p>
<p>::GlobalParagraph
To use the server rendering mode, you can run the following npm command:
::</p>
<p>::BlogCode</p>
<pre><code class="language-bash">npm run build
npm run start
</code></pre>
<p>::</p>
<p>::GlobalParagraph
Further information can be found in the <a href="https://nuxt.com/docs/getting-started/deployment#server-hosting">official Nuxt documentation on server rendering</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.
::</p>
<p>::GlobalParagraph
Summary:
::</p>
<p>:::GlobalBlock{.ul-disk .my-4}</p>
<ul>
<li><strong>Flexibility:</strong> Dynamically generated content enables real-time adjustments.</li>
<li><strong>SEO Benefits:</strong> Content is always current and optimized for search engines.</li>
<li><strong>Challenges:</strong> Higher server resource requirements and more complex infrastructure.
:::</li>
</ul>
<p>::GlobalTitle{:size="lg" .mb-5}
Hybrid Approaches with Nuxt
::</p>
<p>::GlobalParagraph
A hybrid approach combines the benefits of static generation and server rendering. You can statically generate selected pages and render others dynamically. An example would be an e-commerce website with statically generated product pages and a dynamic shopping cart function.
::</p>
<p>::BlogCode</p>
<pre><code class="language-vue">&#x3C;!-- pages/products/_id.vue -->

&#x3C;script setup lang="ts">
interface Product {
  id: number
  name: string
  description: string
}

const route = useRoute()
const product = ref&#x3C;Product | null>(null)

const { data } = await useFetch&#x3C;Product>(`/api/products/${route.params.id}`)
product.value = data.value

function addToCart(productId: number) {
  // Dynamic function to add product to shopping cart
  // ...
}
&#x3C;/script>

&#x3C;template>
  &#x3C;div v-if="product">
    &#x3C;h1>{{ product.name }}&#x3C;/h1>
    &#x3C;p>{{ product.description }}&#x3C;/p>
    &#x3C;button @click="addToCart(product.id)">
      Add to Cart
    &#x3C;/button>
  &#x3C;/div>
&#x3C;/template>
</code></pre>
<p>::</p>
<p>::GlobalParagraph
In this example, the product page is statically generated, while the shopping cart function is dynamically loaded when requested. This means that product information is generated as static HTML files during the build process, enabling faster page load times. The shopping cart function, however, is dynamically rendered to enable current user interactions.
::</p>
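<p>::GlobalParagraph
Such a split can also be declared centrally via Nuxt route rules instead of per component; a minimal sketch (the paths are examples):
::</p>
<p>::BlogCode</p>
<pre><code class="language-typescript">// nuxt.config.ts

export default defineNuxtConfig({
  routeRules: {
    // Product pages: pre-rendered as static HTML at build time
    '/products/**': { prerender: true },
    // Shopping cart: rendered on the server for every request
    '/cart': { ssr: true },
  },
})
</code></pre>
<p>::</p>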
<p>::GlobalParagraph
Summary:
::</p>
<p>::GlobalBlock{.ul-disk .my-4}</p>
<ul>
<li><strong>Combination:</strong> Static generation for performant delivery, dynamic rendering for current content and interactions.</li>
<li><strong>Flexibility:</strong> Ideal for applications with mixed requirements.</li>
<li><strong>Complexity:</strong> Requires careful planning and implementation.
::</li>
</ul>
<p>::GlobalParagraph
If you would like to learn more about Headless E-Commerce solutions, visit our <a href="/our-services/headless-e-commerce/">Headless E-Commerce page</a>{.bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Conclusion: Optimal Use of Nuxt in Production
::</p>
<p>::GlobalParagraph
The choice between <strong>static generation</strong>, <strong>server rendering</strong> or a <strong>hybrid approach</strong> depends on the requirements of your application. With the right strategies, you can maximize both performance and flexibility.
::</p>
<p>::GlobalParagraph
Static generation is ideal for websites with consistent content like blogs or marketing pages. Server rendering is better suited for applications that must offer frequently updated or personalized content. Hybrid approaches provide an optimal mix of both worlds and are suitable for complex applications like shops, where product pages are statically generated and functions are dynamically rendered.
::</p>
<p>::GlobalParagraph
Have questions or need support with the implementation? Contact us and we will help you successfully implement your Nuxt projects!
::</p>
<p>::GlobalParagraph{.mb-4}
Further information about rendering concepts can be found in the <a href="https://nuxt.com/docs/guide/concepts/rendering">official Nuxt documentation</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.
::</p>
<p>::GlobalTitle{:size="lg" .mb-5 .mt-8}
Frequently Asked Questions
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
1. What is the difference between Nuxt Generate and Nuxt Server?
::</p>
<p>::GlobalParagraph
Nuxt Generate creates static HTML files during the build process, which are delivered from a server or CDN. Nuxt Server, on the other hand, renders pages dynamically when a user requests them. Both methods have different use cases and advantages.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
2. When should I use Nuxt Generate or Nuxt Server?
::</p>
<p>::GlobalParagraph
Use Nuxt Generate for websites with consistent content like blogs or marketing pages. Nuxt Server is better suited for applications with personalized or frequently updated content, such as dashboards or e-commerce sites.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
3. What advantages does static generation offer for SEO and performance?
::</p>
<p>::GlobalParagraph
With Nuxt Generate, load times are extremely fast because HTML files are pre-rendered. This improves SEO through easier indexing and reduces hosting costs. However, real-time content is not possible, which can be a limitation depending on the use case.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
4. Can I combine static and dynamic content in Nuxt?
::</p>
<p>::GlobalParagraph
Yes, hybrid approaches are possible. For example, you can statically generate blog articles and dynamically render client-side elements like comments. This combines fast performance with flexibility and adapts perfectly to different requirements.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
5. How do I set the right Nuxt Production Mode?
::</p>
<p>::GlobalParagraph{.mb-4}
For an optimal production setup, use static generation when content rarely changes and server rendering when content is dynamic. With hybrid approaches, you can combine both.
::</p>]]></content:encoded>
            <category>Nuxt</category>
            <category>Vue.js</category>
            <category>Development</category>
            <category>Operations</category>
            <category>Performance</category>
            <category>Security</category>
            <category>Documentation</category>
            <enclosure url="https://blueshoe.io/img/blogs/vuejsnuxt.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Nuxt & Keycloak: A Simple Guide to SSR Integration]]></title>
            <link>https://blueshoe.io/blog/nuxt-keycloak-integration</link>
            <guid>https://blueshoe.io/blog/nuxt-keycloak-integration</guid>
            <pubDate>Fri, 10 Oct 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Integrating Keycloak into a Nuxt application for robust authentication can be a challenge, especially with regard to Server-Side Rendering (SSR). In this article, we compare two leading modules, nuxt-auth-utils and @sidebase/nuxt-auth, and provide a step-by-step guide to help you choose the right solution.</p>
<p><img src="/img/blogs/nuxtkeycloak.svg" alt="Nuxt and Keycloak">{.object-cover .max-w-full .mb-5}</p>
<p>::GlobalBlogLevelInfo</p>
<ul>
<li><a href="https://www.keycloak.org/">Keycloak</a>{target="_blank"} and <a href="https://en.wikipedia.org/wiki/OAuth">OAuth2</a>{target="_blank"}</li>
<li><a href="https://nuxt.com/docs/4.x/guide/concepts/rendering">Nuxt Rendering Modes</a>{target="_blank"}</li>
</ul>
<p>If you have any questions or if anything is unclear, feel free to use the comment function below the article.
::</p>
<h2>SSR, Hybrid, Generate - Authentication per Mode</h2>
<p>First, it's important to identify the mode in which the Nuxt application is running. This determines the different requirements for authentication. A blog article on the different <a href="/blog/nuxt-generate-vs-server/">modes can be found here</a>{.bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.</p>
<p><strong>Server-Side Rendering</strong> - If user information such as permissions or names is processed during server-side rendering, the Nitro server needs access to it. Accordingly, the Nitro/Nuxt server must be able to validate whether a session is authenticated and authorized.</p>
<p><strong>Client-Side Rendering</strong> - There is no server logic. Everything is processed on the client side (in the browser); accordingly, user information and session handling also run exclusively on the client.</p>
<p><strong>Hybrid Rendering</strong> - The Keycloak integration must be planned depending on whether the SSR routes require user session information or not.</p>
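<p>In a hybrid setup, the rendering mode is controlled per route. The sketch below shows such a rule set; <code>routeRules</code> is a standard Nuxt option, while the route patterns themselves are illustrative assumptions:</p>

```javascript
// Per-route rendering rules for a hybrid app. The route patterns are
// illustrative assumptions; routeRules itself is a standard Nuxt option.
const routeRules = {
  '/blog/**': { prerender: true },  // rendered at build time, no server session logic
  '/dashboard/**': { ssr: false }   // rendered only in the browser, session stays client-side
}

console.log(routeRules['/dashboard/**'].ssr) // → false
```

<p>In <code>nuxt.config.ts</code>, this object would be passed as <code>defineNuxtConfig({ routeRules })</code>.</p>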
<h2>Option 1: Integration with nuxt-auth-utils - New and improved</h2>
<p>Integrating Keycloak and Nuxt with <a href="https://github.com/atinux/nuxt-auth-utils"><code>nuxt-auth-utils</code></a>{target="_blank"} could hardly be simpler. A little configuration in <code>nuxt.config.js</code> and the basic framework is ready:</p>
<pre><code class="language-javascript">runtimeConfig: {
    oauth: {
      keycloak: {
        serverUrl: 'https://keycloak.blueshoe.io',
        realm: 'Blueshoe',
        clientId: 'blueshoe-website',
        // clientSecret: '',
        redirectURL: 'https://blueshoe.io/auth/keycloak',
      },
    },
  },
  modules: [
    'nuxt-auth-utils',
  ]
</code></pre>
<p>Now, convenient <a href="https://github.com/atinux/nuxt-auth-utils/?tab=readme-ov-file#user-session">Composables</a>{target="_blank"} are available:</p>
<pre><code class="language-javascript">const { loggedIn, user, session, fetch, clear, openInPopup } = useUserSession()
</code></pre>
<p>With this information, user-specific information can be rendered quickly and easily:</p>
<pre><code class="language-vue">&#x3C;template>
  &#x3C;span v-if="loggedIn">Hello, {{ user.firstName }} {{ user.lastName }}&#x3C;/span>
  &#x3C;span v-else>Hello Guest&#x3C;/span>
&#x3C;/template>
</code></pre>
<p>Likewise, <a href="https://github.com/atinux/nuxt-auth-utils/?tab=readme-ov-file#session-management">useful utils are available</a>{target="_blank"} on the SSR side. If a route absolutely requires a valid user session, this can be easily achieved with the following composable:</p>
<pre><code class="language-javascript">const session = await requireUserSession(event)
</code></pre>
<p>So far, so good - but how does the login work?</p>
<p>Through the environment variables, the nuxt-auth-utils module has all the information available to generate the login redirect and to use the session on the redirect back to the Nuxt application.</p>
<blockquote>
<p>💡 <strong>Note:</strong> The <code>runtimeConfig</code> is not an environment variable itself, but the way Nuxt provides access to environment variables. More on this in the <a href="https://nuxt.com/docs/4.x/guide/going-further/runtime-config#environment-variables">Nuxt documentation</a>{target="_blank"}.</p>
</blockquote>
<p>For this, a server-side route is simply created under <code>server/auth/keycloak.get.ts</code>:</p>
<pre><code class="language-javascript">export default defineOAuthKeycloakEventHandler({
  async onSuccess(event, { user }) {
    await setUserSession(event, {
      user: {
        keycloak: user.preferred_username,
      },
      loggedInAt: Date.now(),
    })

    return sendRedirect(event, '/')
  },
})
</code></pre>
<p>If you now call <code>/auth/keycloak</code>, you are automatically redirected to the running Keycloak instance and receive a session after a successful login.</p>
<p>The module is by <a href="https://github.com/atinux">Sébastien Chopin</a>{target="_blank"} - the creator of Nuxt himself - and has a solid standing: as of September 2025, it sees around 95,000 downloads per month, receives regular updates, and counts 1,400 stargazers. It can be recommended with confidence.</p>
<h2>Option 2: Integration with @sidebase/nuxt-auth - Battle-tested and ready to use</h2>
<p>Another way to bring Nuxt and Keycloak together is the <code>@sidebase/nuxt-auth</code> package. With a few adjustments in <code>nuxt.config.js</code>, the module can be easily configured:</p>
<pre><code class="language-javascript">runtimeConfig: {
    public: {
      authOrigin: 'http://localhost:3000',
    },
  },

  modules: [
    '@sidebase/nuxt-auth',
  ],

  auth: {
    isEnabled: true,
    disableServerSideAuth: false,
    provider: {
      type: 'authjs',
      trustHost: false,
      defaultProvider: 'keycloak',
      addDefaultCallbackUrl: true,
    },
    sessionRefresh: {
      enablePeriodically: true,
      enableOnWindowFocus: true,
    },
  },
</code></pre>
<p>As the configuration already shows, <code>@sidebase/nuxt-auth</code> takes care of refreshing the session automatically. Keycloak exists as a pre-configured provider. We create the following file at <code>server/api/auth/[...].ts</code> and configure the provider:</p>
<pre><code class="language-javascript">import KeycloakProvider from 'next-auth/providers/keycloak'
import { NuxtAuthHandler } from '#auth'

export default NuxtAuthHandler({
  secret: 'your-secret-here',
  providers: [
    KeycloakProvider.default({
      clientId: process.env.KEYCLOAK_ID,
      clientSecret: process.env.KEYCLOAK_SECRET,
      issuer: process.env.KEYCLOAK_ISSUER,
    })
  ]
})
</code></pre>
<p>It is important to note here that the URL of the Keycloak realm must be specified as the issuer: <code>https://my-keycloak-domain.com/realms/My_Realm</code>.</p>
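<p>Because a missing realm path is a common stumbling block, it can help to assemble the issuer URL from its parts. The following helper is a hypothetical sketch; the function name and inputs are our own, not part of <code>@sidebase/nuxt-auth</code>:</p>

```javascript
// Hypothetical helper (our own name, not part of the module): build the
// issuer URL the Keycloak provider expects.
function keycloakIssuer(baseUrl, realm) {
  // strip trailing slashes so we never produce "//realms"
  return `${baseUrl.replace(/\/+$/, '')}/realms/${encodeURIComponent(realm)}`
}

console.log(keycloakIssuer('https://my-keycloak-domain.com/', 'My_Realm'))
// → https://my-keycloak-domain.com/realms/My_Realm
```

<p>The result can then be supplied via the <code>KEYCLOAK_ISSUER</code> environment variable used in the provider configuration.</p>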
<p>Additionally, callbacks can be defined to react to various events:</p>
<pre><code class="language-javascript">export default NuxtAuthHandler({
  ...
  callbacks: {
    /* on before signin */
    async signIn({ user, account, profile, email, credentials }) {
      return true
    },
    /* on redirect to another url */
    async redirect({ url, baseUrl }) {
      return baseUrl
    },
    /* on session retrieval */
    async session({ session, user, token }) {
      return session
    },
    /* on JWT token creation or mutation */
    async jwt({ token, user, account, profile, isNewUser }) {
      return token
    }
  }
})
<p><code>@sidebase/nuxt-auth</code> automatically creates a page that provides the login and integrates with the given providers.</p>
<p>The user's data is then available in the application as follows:</p>
<pre><code class="language-vue">&#x3C;script setup>
const {
  status,
  data,
  lastRefreshedAt,
  getCsrfToken,
  getProviders,
  getSession,
  signIn,
  signOut
} = useAuth()
&#x3C;/script>
</code></pre>
<p>The status indicates whether the user is authenticated or not. <code>data</code> contains user-specific data.</p>
<p>The module is SSR compatible and allows rendering different information based on the session.</p>
<p><code>@sidebase/nuxt-auth</code> is developed by <a href="https://sidebase.io/">sidebase</a>{target="_blank"} and has established itself as a reliable solution for authentication in Nuxt applications. With a well-established company behind it and regular updates, it offers a solid foundation for production applications and can be recommended with confidence.</p>
<h2>Conclusion: nuxt-auth-utils or @sidebase/nuxt-auth</h2>
<p>As is often the case, the devil is in the details of the use case. 😉 Both modules have a good track record regarding further development, the integration is simple, and the composables are very good.</p>
<p>If you want to customize the typical authentication pages yourself, <code>@sidebase/nuxt-auth</code> is more suitable. The module also ships with simple, configurable refresh logic for the user session.
<code>nuxt-auth-utils</code>, on the other hand, "feels" a bit more lightweight. A <a href="https://github.com/atinux/nuxt-auth-utils/issues/356">request</a>{target="_blank"} for automatic session refresh is still pending. However, this can usually be retrofitted with little effort.</p>
<p>We can warmly recommend both modules and look forward to feedback on your experiences with Nuxt and Keycloak in the comments!</p>
<p>How did you "plug together" Nuxt and Keycloak? Are there better ways? What are your experiences with the modules?</p>]]></content:encoded>
            <category>Nuxt</category>
            <category>Vue.js</category>
            <category>Keycloak</category>
            <enclosure url="https://blueshoe.io/img/blogs/nuxtkeycloak.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[The Solution for SSR Sliders with Nuxt]]></title>
            <link>https://blueshoe.io/blog/nuxt-ssr-slider-top-performance</link>
            <guid>https://blueshoe.io/blog/nuxt-ssr-slider-top-performance</guid>
            <pubDate>Tue, 19 Aug 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>We spent a long time searching for a sensible solution for implementing sliders with Nuxt. Many developers struggle with performance issues, hydration mismatches, and poor SEO performance with conventional slider libraries. In this comprehensive guide, we show you how we use Embla Carousel in Nuxt to achieve top SSR performance.</p>
<p>In this article, we explain why traditional slider libraries often fail in Nuxt projects, what makes Embla Carousel so special, and how you can use it to implement high-performance, SEO-friendly carousels. From problem analysis to solutions to best practices for production environments—everything you need for top-performing sliders in Nuxt.</p>
<p><img src="/img/blogs/nuxt-embla-slider.svg" alt="Nuxt SSR slider with top performance"></p>
<h2>Introduction: The problem with sliders in Nuxt</h2>
<h3>Why are sliders in Nuxt a challenge?</h3>
<p>Sliders and carousels are among the most common UI components on the web. They are used for product galleries, testimonials, blog posts, and much more. However, in Nuxt projects with server-side rendering (SSR), they can quickly lead to performance issues and negatively impact the user experience.</p>
<p>This is because many traditional slider libraries such as Swiper.js, Slick Carousel, or Owl Carousel were not designed for modern SSR frameworks. They typically cause the following problems:</p>
<ul>
<li><strong>No SSR support</strong>: Many sliders only work on the client side and create hydration mismatches</li>
<li><strong>Hydration mismatches</strong>: Differences between server and client rendering lead to JavaScript errors</li>
<li><strong>Large JavaScript bundles</strong>: 50-100KB of additional code slows down loading times</li>
<li><strong>Layout shifts</strong>: Unpredictable size changes worsen Core Web Vitals</li>
<li><strong>Poor SEO performance</strong>: Search engines cannot index dynamic content properly</li>
<li><strong>Accessibility issues</strong>: Many sliders are not accessible and violate WCAG guidelines</li>
</ul>
<h3>The solution: Embla Carousel</h3>
<p>After extensive testing of various slider libraries in real Nuxt projects, we have identified <strong>Embla Carousel</strong> as the best solution for modern web applications. Embla was developed specifically for frameworks such as Nuxt, Next.js, and SvelteKit.</p>
<p><strong>What makes Embla Carousel so special:</strong></p>
<ul>
<li><strong>Native SSR support</strong>: Perfect integration with Nuxt's server-side rendering</li>
<li><strong>Minimal bundle size</strong>: Only ~7KB gzipped vs. 50-100KB for other libraries</li>
<li><strong>Excellent performance</strong>: Optimized rendering engine without virtual DOM overhead</li>
<li><strong>Touch gestures and accessibility</strong>: Full WCAG compliance and mobile optimization</li>
<li><strong>TypeScript support</strong>: Fully typed for secure development</li>
<li><strong>Flexible configuration</strong>: Adaptable to any design system and requirement</li>
</ul>
<h2>Why conventional slider libraries fail</h2>
<h3>Swiper.js: The classic with problems</h3>
<p><a href="https://swiperjs.com/">Swiper.js</a>{target="_blank"} is undoubtedly one of the most popular slider libraries. With over 35,000 GitHub stars and a large community, it seems to be the perfect choice. However, in Nuxt projects, Swiper quickly reaches its limits.</p>
<p>The problems with Swiper.js are manifold, ranging from the enormous bundle size of over 45KB gzipped for the basic version to fundamental SSR issues. Swiper does not offer native SSR support and requires client-only wrappers, which leads to frequent hydration issues. These mismatches between server and client rendering can significantly impair the user experience. In addition, performance suffers from virtual DOM overhead and unnecessary re-renders, while basic accessibility features are often missing.</p>
<h3>Slick Carousel: The legacy classic</h3>
<p><a href="https://kenwheeler.github.io/slick/">Slick Carousel</a>{target="_blank"} was long the standard for jQuery-based sliders. However, it is no longer up to date in modern Vue.js/Nuxt applications.</p>
<p>The jQuery dependency is an unnecessary burden in modern frameworks, while the lack of SSR support means that Slick only works on the client side. With a bundle size of over 30KB in addition to jQuery and an outdated architecture that was not designed for modern web standards, Slick Carousel is no longer practical today.</p>
<h3>Owl Carousel: The forgotten classic</h3>
<p><a href="https://owlcarousel2.github.io/OwlCarousel2/">Owl Carousel</a>{target="_blank"} was once popular, but has significant problems with modern frameworks.</p>
<p>There have been no active updates for years, leading to SSR incompatibility. The framework does not work with server-side rendering and suffers from performance issues due to inefficient DOM manipulation. In addition, it lacks accessibility features and WCAG compliance, making it unsuitable for modern web applications.</p>
<h2>Embla Carousel: The modern alternative</h2>
<h3>What is Embla Carousel?</h3>
<p><a href="https://www.embla-carousel.com/">Embla Carousel</a>{target="_blank"} is a modern, lightweight, and high-performance slider library designed specifically for modern web frameworks. Unlike traditional slider libraries, Embla was designed from the ground up for SSR, performance, and accessibility.</p>
<p>Embla Carousel's development philosophy is based on four fundamental principles: Framework agnosticism allows it to be used with Vue, React, Svelte, and Vanilla JavaScript. The performance-first approach guarantees minimal bundle size and an optimized rendering engine. Accessibility-by-default means that all WCAG guidelines are followed from the outset. Finally, Embla offers native support for server-side rendering, making it the ideal choice for Nuxt projects.</p>
<h3>Technical advantages of Embla Carousel</h3>
<p><strong>1. Minimal bundle size</strong>
Embla Carousel is extremely compact at only ~7KB gzipped. In comparison, Swiper.js requires over 45KB gzipped, Slick Carousel over 30KB gzipped plus jQuery, and Owl Carousel about 25KB gzipped. This significant reduction in bundle size results in faster loading times and better performance.</p>
<p><strong>2. Optimized performance</strong>
Embla Carousel deliberately avoids virtual DOM and uses direct DOM manipulation without overhead. Intelligent caching ensures efficient memory usage, while lazy loading loads images and content as needed. Touch optimization offers native touch gestures without additional libraries.</p>
<p><strong>3. Native SSR support</strong>
Embla Carousel is hydration-free and does not cause mismatches between server and client. The SEO-friendly structure allows search engines to index all content. Progressive enhancement means that the slider also works without JavaScript.</p>
<p><strong>4. Accessibility features</strong>
Full keyboard control enables navigation via keyboard, while screen reader support is ensured by ARIA labels and semantic structure. Intelligent focus management and WCAG 2.1 AA compliance meet all important accessibility standards.</p>
<p>For detailed performance benchmarks and comparisons, we recommend the <a href="https://www.embla-carousel.com/api/options/">official Embla Carousel API documentation</a>{target="_blank"}.</p>
<h2>Installation and setup</h2>
<h3>1. Installing Embla Carousel</h3>
<pre><code class="language-bash">npm install embla-carousel-vue
# or
yarn add embla-carousel-vue
</code></pre>
<h3>2. Basic component structure</h3>
<p>The <code>emblaCarouselVue</code> function offers seamless integration with Vue. A minimal setup requires an <strong>overflow wrapper</strong> and a <strong>scroll container</strong>. Here is the basic structure:</p>
<pre><code class="language-vue">&#x3C;script setup>
import emblaCarouselVue from 'embla-carousel-vue'

const [emblaRef] = emblaCarouselVue()
&#x3C;/script>

&#x3C;template>
  &#x3C;div class="embla" ref="emblaRef">
    &#x3C;div class="embla__container">
      &#x3C;div class="embla__slide">Slide 1&#x3C;/div>
      &#x3C;div class="embla__slide">Slide 2&#x3C;/div>
      &#x3C;div class="embla__slide">Slide 3&#x3C;/div>
    &#x3C;/div>
  &#x3C;/div>
&#x3C;/template>
</code></pre>
<h3>3. Styling the carousel</h3>
<p>The <code>emblaCarouselVue</code> function gives us an <strong>emblaRef</strong>, which we attach to our wrapper element with the class <code>embla</code> to hide the scroll overflow. The element with the <code>embla__container</code> class is the scroll body that scrolls the slides. Here is the required <strong>CSS</strong>:</p>
<pre><code class="language-vue">&#x3C;style scoped>
.embla {
  overflow: hidden;
}

.embla__container {
  display: flex;
}

.embla__slide {
  flex: 0 0 100%;
  min-width: 0;
}
&#x3C;/style>
</code></pre>
<h3>4. Accessing the Carousel API</h3>
<p>The <code>emblaCarouselVue</code> function takes the Embla Carousel options as its first parameter. You can also access the API using <code>onMounted</code>:</p>
<pre><code class="language-vue">&#x3C;script setup>
import { onMounted } from 'vue'
import emblaCarouselVue from 'embla-carousel-vue'

const [emblaRef, emblaApi] = emblaCarouselVue({ loop: false })

onMounted(() => {
  if (emblaApi.value) {
    console.log(emblaApi.value.slideNodes()) // API access
  }
})
&#x3C;/script>

&#x3C;template>
  &#x3C;div class="embla" ref="emblaRef">
    &#x3C;div class="embla__container">
      &#x3C;div class="embla__slide">Slide 1&#x3C;/div>
      &#x3C;div class="embla__slide">Slide 2&#x3C;/div>
      &#x3C;div class="embla__slide">Slide 3&#x3C;/div>
    &#x3C;/div>
  &#x3C;/div>
&#x3C;/template>
</code></pre>
<h3>5. Add plugins (optional)</h3>
<p>If you want to use plugins, first install the desired plugin. For example, the autoplay plugin:</p>
<pre><code class="language-bash">npm install embla-carousel-autoplay
# or
yarn add embla-carousel-autoplay
</code></pre>
<p>The <code>emblaCarouselVue</code> function accepts plugins as a second parameter. Plugins must be passed in an <strong>array</strong>:</p>
<pre><code class="language-vue">&#x3C;script setup>
import emblaCarouselVue from 'embla-carousel-vue'
import Autoplay from 'embla-carousel-autoplay'

const [emblaRef] = emblaCarouselVue({ loop: false }, [Autoplay()])
&#x3C;/script>

&#x3C;template>
  &#x3C;div class="embla" ref="emblaRef">
    &#x3C;div class="embla__container">
      &#x3C;div class="embla__slide">Slide 1&#x3C;/div>
      &#x3C;div class="embla__slide">Slide 2&#x3C;/div>
      &#x3C;div class="embla__slide">Slide 3&#x3C;/div>
      &#x3C;div class="embla__slide">Slide 4&#x3C;/div>
    &#x3C;/div>
  &#x3C;/div>
&#x3C;/template>
</code></pre>
<h2>Use cases</h2>
<h3>When should you use Embla Carousel?</h3>
<p>Embla Carousel is particularly well suited for various use cases, which we will examine in detail.</p>
<p><strong>1. Product galleries</strong>
High-performance product galleries are essential for e-commerce websites. Embla Carousel offers touch-optimized navigation for mobile users, lazy loading for fast loading times, zoom functionality for detailed product views, and an SEO-friendly structure for better search engine rankings. This combination makes Embla the ideal choice for online stores that focus on conversion and performance.</p>
<p><strong>2. Testimonials and reviews</strong>
Customer reviews are crucial for conversion rates. Embla Carousel supports this requirement with autoplay functionality for automatic rotation, pause on hover for better user control, comprehensive accessibility features for all users, and responsive design for all devices. These features ensure that customer reviews are presented in the best possible way.</p>
<p><strong>3. Blog Post Slider</strong>
For content marketing and blog websites, Embla Carousel offers infinite loop for endless navigation, keyboard navigation for power users, screen reader support for barrier-free access, and social media integration for sharing functions. This combination makes the slider ideal for content-driven websites.</p>
<p><strong>4. Team Member Presentation</strong>
For corporate websites and portfolios, Embla Carousel offers a grid layout for multiple slides at once, hover effects for interactive elements, smooth transitions for a professional look, and mobile-first design for optimal use. These features ensure a professional presentation of team members.</p>
<h3>Performance advantages in practice</h3>
<p><strong>SEO improvements</strong>
Native SSR support allows you to significantly improve SEO performance. Optimized Core Web Vitals through fast rendering lead to high Lighthouse scores and good Google PageSpeed ratings for mobile devices. Improved indexing of image content in Search Console rounds off the SEO advantages.</p>
<h2>Basic implementation</h2>
<h3>Simple slider</h3>
<pre><code class="language-vue">&#x3C;template>
  &#x3C;div class="embla" ref="emblaRef">
    &#x3C;div class="embla__container">
      &#x3C;div class="embla__slide" v-for="(slide, index) in slides" :key="index">
        &#x3C;img :src="slide.image" :alt="slide.title" class="w-full h-64 object-cover" />
        &#x3C;h3 class="text-xl font-bold mt-4">{{ slide.title }}&#x3C;/h3>
        &#x3C;p class="text-gray-600">{{ slide.description }}&#x3C;/p>
      &#x3C;/div>
    &#x3C;/div>
  &#x3C;/div>
&#x3C;/template>

&#x3C;script setup>
import { ref, onMounted } from 'vue'
import emblaCarouselVue from 'embla-carousel-vue'

const [emblaRef, emblaApi] = emblaCarouselVue()

const slides = ref([
  {
    image: '/img/blog/team/blue_shoes-47.jpg',
    title: 'Cloud Native Development',
    description: 'Modern applications with Docker, Kubernetes, and microservices',
    link: '/leistungen/cloud-native-entwicklung'
  },
  {
    image: '/img/blog/team/blue_shoes-3.jpg',
    title: 'Vue.js &#x26; Nuxt.js Expertise',
    description: 'High-performance frontend solutions with modern frameworks',
    link: '/technologien/vuejs-nuxt'
  },
  {
    image: '/img/blog/team/blue_shoes-28.jpg',
    title: 'DevOps &#x26; CI/CD',
    description: 'Automated deployment pipelines for maximum efficiency',
    link: '/leistungen/devops-consulting'
  },
  {
    image: '/img/blog/team/blue_shoes-61.jpg',
    title: 'API Development',
    description: 'RESTful APIs and GraphQL with Python, FastAPI, and Django',
    link: '/leistungen/api-entwicklung'
  }
])

onMounted(() => {
  if (emblaApi.value) {
    // Embla passes the API instance (not a DOM event) to event callbacks
    emblaApi.value.on('select', (api) => {
      const currentSlide = slides.value[api.selectedScrollSnap()]
      console.log(`Current slide: ${currentSlide.title}`)
    })
  }
})
&#x3C;/script>

&#x3C;style scoped>
.embla {
  overflow: hidden;
}

.embla__container {
  display: flex;
  gap: 1rem;
}

.embla__slide {
  flex: 0 0 100%;
  min-width: 0;
}
&#x3C;/style>
</code></pre>
<h2>Advanced Features and Configuration</h2>
<h3>Responsive Design and Breakpoints</h3>
<p>Embla Carousel offers excellent support for responsive design. With the integrated breakpoints, you can adapt the behavior of the slider to different screen sizes.</p>
<p>The advantages of responsive configuration include a mobile-first approach optimized for touch devices, tablet optimization with customized navigation for medium screens, desktop enhancement with advanced features for large screens, and performance optimization where only necessary features are loaded.</p>
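<p>As a concrete sketch, Embla's <code>breakpoints</code> option maps media queries to option overrides. The option keys below (<code>align</code>, <code>dragFree</code>, <code>breakpoints</code>) are part of Embla's API; the specific queries and values are illustrative assumptions:</p>

```javascript
// Options object for emblaCarouselVue(). Base options apply everywhere;
// entries under breakpoints override them while their media query matches.
const emblaOptions = {
  align: 'start',
  breakpoints: {
    '(min-width: 768px)': { align: 'center' },  // tablet and up
    '(min-width: 1280px)': { dragFree: true }   // desktop enhancement
  }
}

console.log(emblaOptions.breakpoints['(min-width: 768px)'].align) // → center
```

<p>The object is passed as the first argument: <code>emblaCarouselVue(emblaOptions)</code>.</p>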
<h3>Autoplay and interactivity</h3>
<p>The autoplay functionality of Embla Carousel is designed to be particularly user-friendly.</p>
<p>Intelligent autoplay features include pause on hover for automatic pausing when the user interacts, touch pause, which stops when touch gestures are made on mobile devices, keyboard pause for keyboard input, and visibility API, which pauses when the slider is not visible.</p>
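<p>Configured for the Autoplay plugin from the setup section, pause-on-hover behavior might look like this; <code>stopOnMouseEnter</code> and <code>stopOnInteraction</code> are documented options of <code>embla-carousel-autoplay</code>, while the delay value is an arbitrary choice:</p>

```javascript
// Options for the embla-carousel-autoplay plugin.
const autoplayOptions = {
  delay: 5000,              // milliseconds between slides (arbitrary choice)
  stopOnMouseEnter: true,   // pause while the pointer hovers the carousel
  stopOnInteraction: false  // resume autoplay after the user swipes or clicks
}

console.log(autoplayOptions.stopOnMouseEnter) // → true
```

<p>The object is passed to the plugin in the plugin array: <code>Autoplay(autoplayOptions)</code>.</p>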
<h3>Touch gestures and mobile optimization</h3>
<p>Embla Carousel offers native touch support without additional libraries.</p>
<p>Touch features include swipe gestures for natural swipe navigation, momentum scrolling with physics-based animations, touch resistance to prevent accidental navigation, and multi-touch support for pinch-to-zoom and other gestures.</p>
<h2>Performance optimizations</h2>
<h3>Lazy loading for images</h3>
<p>Embla Carousel natively supports lazy loading for optimal performance. Here is an example of the implementation:</p>
<pre><code class="language-vue">&#x3C;template>
  &#x3C;div class="embla" ref="emblaRef">
    &#x3C;div class="embla__container">
      &#x3C;div class="embla__slide" v-for="(slide, index) in slides" :key="index">
        &#x3C;img 
          :src="slide.image" 
          :alt="slide.title" 
          loading="lazy"
          class="w-full h-64 object-cover"
        />
        &#x3C;h3 class="text-xl font-bold mt-4">{{ slide.title }}&#x3C;/h3>
        &#x3C;p class="text-gray-600">{{ slide.description }}&#x3C;/p>
      &#x3C;/div>
    &#x3C;/div>
  &#x3C;/div>
&#x3C;/template>

&#x3C;script setup>
import emblaCarouselVue from 'embla-carousel-vue'

const [emblaRef] = emblaCarouselVue()
&#x3C;/script>
</code></pre>
<h3>Intersection Observer for performance</h3>
<p>For additional performance optimization, you can use Intersection Observer to initialize the slider only when it is visible:</p>
<pre><code class="language-vue">&#x3C;script setup>
import { ref, onMounted } from 'vue'
import emblaCarouselVue from 'embla-carousel-vue'

const isVisible = ref(false)
const sliderRef = ref(null)
const [emblaRef, emblaApi] = emblaCarouselVue()

onMounted(() => {
  const observer = new IntersectionObserver(
    ([entry]) => {
      isVisible.value = entry.isIntersecting
    },
    { threshold: 0.1 }
  )
  
  if (sliderRef.value) {
    observer.observe(sliderRef.value)
  }
})
&#x3C;/script>

&#x3C;template>
  &#x3C;div ref="sliderRef" class="embla" ref="emblaRef">
    &#x3C;div class="embla__container">
      &#x3C;div class="embla__slide" v-for="(slide, index) in slides" :key="index">
        &#x3C;img :src="slide.image" :alt="slide.title" class="w-full h-64 object-cover" />
      &#x3C;/div>
    &#x3C;/div>
  &#x3C;/div>
&#x3C;/template>
</code></pre>
<h3>Optimized bundle size</h3>
<p>For Nuxt projects, you can further reduce the bundle size through optimized configuration:</p>
<pre><code class="language-typescript">// nuxt.config.ts
export default defineNuxtConfig({
  build: {
    transpile: ['embla-carousel-vue']
  },
  vite: {
    optimizeDeps: {
      include: ['embla-carousel-vue']
    }
  }
})
</code></pre>
<h2>Accessibility and SEO optimization</h2>
<h3>Why accessibility is important for sliders</h3>
<p>Sliders are often critical UI components that must be accessible to all users. Embla Carousel was developed from the ground up with accessibility in mind and offers native support for screen readers, keyboard navigation, and WCAG 2.1 AA compliance.</p>
<p>The benefits for slider-specific accessibility range from full keyboard control for power users to screen reader support, which allows blind users to navigate through all slides. Embla's native ARIA labels and semantic structure ensure that each slide is correctly described.</p>
<h3>Embla Carousel Accessibility Features</h3>
<p>Embla Carousel offers comprehensive accessibility features designed specifically for slider implementations:</p>
<p><strong>1. Keyboard navigation for sliders</strong></p>
<pre><code class="language-vue">&#x3C;template>
  &#x3C;div 
    class="embla" 
    ref="emblaRef"
    @keydown="handleKeydown"
    tabindex="0"
    role="region"
    aria-label="Image gallery"
  >
    &#x3C;div class="embla__container">
      &#x3C;div class="embla__slide" v-for="(slide, index) in slides" :key="index">
        &#x3C;img :src="slide.image" :alt="slide.title" class="w-full h-64 object-cover" />
      &#x3C;/div>
    &#x3C;/div>
  &#x3C;/div>
&#x3C;/template>

&#x3C;script setup>
import { ref } from 'vue'
import emblaCarouselVue from 'embla-carousel-vue'

const [emblaRef, emblaApi] = emblaCarouselVue()

// Placeholder slide data referenced by the template and the End key handler
const slides = ref([
  { image: '/img/slide-1.jpg', title: 'Slide 1' },
  { image: '/img/slide-2.jpg', title: 'Slide 2' },
  { image: '/img/slide-3.jpg', title: 'Slide 3' }
])

const handleKeydown = (event) => {
  switch (event.key) {
    case 'ArrowLeft':
      emblaApi.value?.scrollPrev()
      break
    case 'ArrowRight':
      emblaApi.value?.scrollNext()
      break
    case 'Home':
      emblaApi.value?.scrollTo(0)
      break
    case 'End':
      emblaApi.value?.scrollTo(slides.value.length - 1)
      break
  }
}
&#x3C;/script>
</code></pre>
<p><strong>2. Screen reader support for slider content</strong></p>
<pre><code class="language-vue">&#x3C;template>
  &#x3C;div class="embla" ref="emblaRef">
    &#x3C;div class="embla__container">
      &#x3C;div 
        class="embla__slide" 
        v-for="(slide, index) in slides" 
        :key="index"
        :aria-label="`Slide ${index + 1} von ${slides.length}: ${slide.title}`"
        :aria-hidden="emblaApi?.selectedScrollSnap() !== index"
      >
        &#x3C;img 
          :src="slide.image" 
          :alt="slide.title" 
          class="w-full h-64 object-cover"
        />
      &#x3C;/div>
    &#x3C;/div>
    
    &#x3C;!-- Navigation with ARIA-Labels -->
    &#x3C;button 
      class="embla__prev" 
      @click="emblaApi?.scrollPrev()"
      aria-label="Previous slide"
      :aria-disabled="emblaApi?.canScrollPrev() === false"
    >
      ←
    &#x3C;/button>
    &#x3C;button 
      class="embla__next" 
      @click="emblaApi?.scrollNext()"
      aria-label="Next slide"
      :aria-disabled="emblaApi?.canScrollNext() === false"
    >
      →
    &#x3C;/button>
  &#x3C;/div>
&#x3C;/template>

&#x3C;script setup>
import { ref } from 'vue'
import emblaCarouselVue from 'embla-carousel-vue'

const [emblaRef, emblaApi] = emblaCarouselVue()

// Slide data would normally come from props or an API call; kept empty here
const slides = ref([])
&#x3C;/script>
</code></pre>
<h3>SEO optimization for slider content</h3>
<p>Sliders pose a particular challenge for SEO, as dynamic content is often overlooked by search engines. Embla Carousel solves this problem with native SSR support and SEO-friendly structures.</p>
<p><strong>Why SEO is critical for sliders:</strong></p>
<ul>
<li><strong>Image indexing</strong>: Search engines can index all images in sliders, not just the first one</li>
<li><strong>Content accessibility</strong>: All slide content is available to crawlers, not just visible content</li>
<li><strong>Structured data</strong>: Schema.org markup for carousel content improves snippets</li>
<li><strong>Core Web Vitals</strong>: Optimized performance improves rankings for pages with sliders</li>
</ul>
<p><strong>SEO best practices for Embla Slider:</strong></p>
<ul>
<li><strong>Semantic HTML structure</strong>: Use correct HTML tags for slider content</li>
<li><strong>Meaningful alt text</strong>: Every image in the slider needs descriptive alt text</li>
<li><strong>Structured data</strong>: Implement Schema.org markup for carousel content</li>
<li><strong>Meta descriptions</strong>: Optimized meta tags for each slide content</li>
</ul>
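<p>The structured-data recommendation above can be sketched as a small helper that builds Schema.org <code>ItemList</code> markup for the slides. This is only an illustration: the slide shape (<code>title</code>, <code>image</code>, <code>link</code>) is an assumption, and in a Nuxt app you would inject the result via <code>useHead</code> as a JSON-LD script tag.</p>

```typescript
// Hedged sketch: builds Schema.org ItemList JSON-LD for slider content.
// The slide fields (title, image, link) are illustrative assumptions.
interface Slide {
  title: string
  image: string
  link: string
}

function buildCarouselJsonLd(slides: Slide[]) {
  return {
    '@context': 'https://schema.org',
    '@type': 'ItemList',
    itemListElement: slides.map((slide, index) => ({
      '@type': 'ListItem',
      position: index + 1, // Schema.org positions are 1-based
      name: slide.title,
      image: slide.image,
      url: slide.link,
    })),
  }
}
```

<p>In a component you could then pass <code>buildCarouselJsonLd(slides)</code> to a JSON-LD script tag so crawlers see every slide, not just the visible one.</p>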
<h2>Best practices for slider implementations</h2>
<h3>Why slider-specific optimization is important</h3>
<p>Sliders are often the first elements users see on a page. A poorly optimized slider implementation can negatively impact the entire user experience. Embla Carousel offers special features for production environments.</p>
<p><strong>Slider-specific issues:</strong></p>
<ul>
<li><strong>Slow loading times</strong>: Large images in sliders slow down the initial page load</li>
<li><strong>Layout shifts</strong>: Unpredictable size changes worsen Core Web Vitals</li>
<li><strong>Touch issues</strong>: Poor touch optimization on mobile devices</li>
<li><strong>Accessibility issues</strong>: Many sliders are not accessible</li>
</ul>
<h3>Error handling for sliders</h3>
<p>Robust error handling is particularly important for sliders, as they often present critical content:</p>
<p><strong>1. Graceful degradation for sliders</strong></p>
<pre><code class="language-vue">&#x3C;script setup>
import { ref, onMounted } from 'vue'
import emblaCarouselVue from 'embla-carousel-vue'

const hasError = ref(false)
const errorMessage = ref('')
const isLoading = ref(true)

const [emblaRef, emblaApi] = emblaCarouselVue()

onMounted(() => {
  try {
    if (emblaApi.value) {
      // Embla has no built-in 'error' event; initialization problems
      // surface as exceptions, which the surrounding try/catch handles.
      emblaApi.value.on('init', () => {
        isLoading.value = false
        console.log('Slider successfully initialized')
      })
    }
  } catch (error) {
    hasError.value = true
    errorMessage.value = `Error initializing the image gallery: ${error.message}`
    isLoading.value = false
    console.error('Slider initialization failed:', error)
  }
})
&#x3C;/script>
</code></pre>
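<p>Beyond initialization errors, individual slide images can fail to load. A tiny helper like the following could decide on a replacement source; the fallback path is an illustrative assumption, and in the Vue template you would wire it to the <code>img</code> element's <code>@error</code> event.</p>

```typescript
// Hedged sketch: per-image fallback for slides whose image fails to load.
// The placeholder path is an illustrative assumption.
function fallbackSrc(
  currentSrc: string,
  fallback: string = '/img/slide-placeholder.svg'
): string {
  // If the fallback itself failed, keep it to avoid an infinite error loop
  return currentSrc === fallback ? currentSrc : fallback
}
```

<p>The guard against swapping the fallback for itself matters because a broken placeholder would otherwise retrigger <code>@error</code> forever.</p>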
<h3>Progressive enhancement for sliders</h3>
<p>Sliders must also work without JavaScript to ensure SEO and accessibility:</p>
<p><strong>1. Progressive enhancement for sliders</strong></p>
<pre><code class="language-vue">&#x3C;template>
  &#x3C;div class="embla">
    &#x3C;!-- Fallback for SEO and no JS -->
    &#x3C;div class="embla__fallback" v-if="!isHydrated">
      &#x3C;div v-for="(slide, index) in slides" :key="index" class="slide-fallback mb-8">
        &#x3C;img 
          :src="slide.image" 
          :alt="slide.title" 
          class="w-full h-64 object-cover rounded-lg"
        />
        &#x3C;h3 class="text-xl font-bold mt-4 text-gray-800">{{ slide.title }}&#x3C;/h3>
        &#x3C;p class="text-gray-600 mt-2">{{ slide.description }}&#x3C;/p>
        &#x3C;a 
          :href="slide.link" 
          class="inline-block mt-4 px-6 py-2 bg-blue-600 text-white rounded-lg hover:bg-blue-700 transition-colors"
        >
          Learn more
        &#x3C;/a>
      &#x3C;/div>
    &#x3C;/div>
    
    &#x3C;!-- Hydrated slider with full functionality -->
    &#x3C;div v-else class="embla" ref="emblaRef">
      &#x3C;div class="embla__container">
        &#x3C;div 
          class="embla__slide" 
          v-for="(slide, index) in slides" 
          :key="index"
        >
          &#x3C;div class="relative group">
            &#x3C;img 
              :src="slide.image" 
              :alt="slide.title" 
              class="w-full h-64 object-cover rounded-lg transition-transform group-hover:scale-105"
            />
            &#x3C;div class="absolute inset-0 bg-black bg-opacity-40 rounded-lg opacity-0 group-hover:opacity-100 transition-opacity">
              &#x3C;div class="absolute bottom-4 left-4 text-white">
                &#x3C;h3 class="text-xl font-bold">{{ slide.title }}&#x3C;/h3>
                &#x3C;p class="text-sm">{{ slide.description }}&#x3C;/p>
              &#x3C;/div>
            &#x3C;/div>
          &#x3C;/div>
        &#x3C;/div>
      &#x3C;/div>
    &#x3C;/div>
  &#x3C;/div>
&#x3C;/template>

&#x3C;script setup>
import { ref, onMounted } from 'vue'
import emblaCarouselVue from 'embla-carousel-vue'

const [emblaRef] = emblaCarouselVue()

// Slide data would normally come from props or an API call; kept empty here
const slides = ref([])

const isHydrated = ref(false)

onMounted(() => {
  isHydrated.value = true
})
&#x3C;/script>
</code></pre>
<h3>Performance metrics for sliders</h3>
<p>Certain metrics are particularly important for slider-specific performance optimization:</p>
<p><strong>Slider-specific metrics:</strong></p>
<ul>
<li><strong>Load time</strong>: Sliders should load in less than 1 second, as they are often above the fold</li>
<li><strong>Interaction delay</strong>: Slide changes should respond in less than 100 ms</li>
<li><strong>Memory usage</strong>: Monitoring memory leaks during long slider sessions</li>
<li><strong>Touch responsiveness</strong>: Touch gestures should respond immediately</li>
<li><strong>Slide transition performance</strong>: Smooth transitions between slides</li>
</ul>
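<p>The interaction-delay metric from the list above can be measured with a small tracker. The wiring is an assumption sketched in the comments: in an Embla setup you would call <code>markInteraction()</code> from an <code>on('pointerUp')</code> handler and <code>markSettled()</code> from <code>on('settle')</code>.</p>

```typescript
// Hedged sketch: measures the delay between a user interaction and the
// slider settling. The clock is injectable so the tracker is testable.
function createLatencyTracker(now: () => number = () => Date.now()) {
  let interactionAt: number | null = null
  return {
    markInteraction() {
      interactionAt = now()
    },
    // Returns elapsed ms since the last interaction, or null if none was recorded
    markSettled(): number | null {
      if (interactionAt === null) return null
      const elapsed = now() - interactionAt
      interactionAt = null
      return elapsed
    },
  }
}
```

<p>Logging the returned value (or sending it to your analytics) lets you verify the &#x3C; 100 ms target under real-world conditions.</p>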
<h2>Conclusion: Why Embla Carousel is the best slider solution</h2>
<p>After years of experience with various slider libraries in Nuxt projects, we can say with certainty: <strong>Embla Carousel is the best solution for modern slider implementations</strong>.</p>
<h3>Summary of slider advantages</h3>
<p><strong>Performance advantages for sliders:</strong>
Embla Carousel offers a small bundle size of only ~7KB gzipped, which is particularly important since sliders are often loaded above the fold. High FPS and smooth animations ensure professional slide transitions, while fast loading times thanks to optimized architecture improve the user experience. No hydration mismatches thanks to native SSR support make Embla the ideal choice for Nuxt projects.</p>
<p><strong>SEO and accessibility benefits for sliders:</strong>
Full WCAG 2.1 AA compliance is particularly important for sliders, as they often present critical content. Better Google rankings through optimized Core Web Vitals, screen reader support for barrier-free access to all slides, and keyboard navigation for power users make Embla the ideal choice for professional slider implementations.</p>
<p><strong>Developer experience for sliders:</strong>
TypeScript support for secure slider development, flexible configuration for all slider use cases, an active community and regular updates, as well as comprehensive documentation and examples ensure an excellent developer experience when implementing sliders.</p>
<h3>When to use Embla Carousel for sliders</h3>
<p><strong>Perfect for sliders in:</strong>
Embla Carousel is ideal for e-commerce websites with product galleries, corporate websites with team presentations, blog websites with content sliders, portfolio websites with project galleries, and marketing websites with testimonial sliders.</p>
<p><strong>Not suitable for:</strong>
Very simple image galleries without interactivity, legacy projects with jQuery dependencies, or projects with very specific requirements for other slider libraries are less suitable for Embla Carousel.</p>
<h3>Business impact for slider implementations</h3>
<p>Implementing Embla Carousel for sliders can bring significant benefits to your business:</p>
<p><strong>Performance impact for sliders:</strong>
Faster loading times improve the user experience for sliders, while better Core Web Vitals lead to higher Google rankings. Reduced bounce rates thanks to faster slider interactions round off the performance benefits.</p>
<p><strong>SEO impact for sliders:</strong>
Improved image indexing through SEO-friendly slider structure, better accessibility scores in Lighthouse, and higher conversion rates through optimized slider user experience are the most important SEO advantages.</p>
<p><strong>Development impact for sliders:</strong>
Faster development through simple slider integration, fewer bugs thanks to TypeScript and robust slider architecture, and easier maintenance through clear API and documentation make Embla Carousel the ideal choice for slider development teams.</p>
<hr>
<h2>FAQ – Frequently asked questions about Embla Carousel in Nuxt</h2>
<h3>1. Why is Embla Carousel the best choice for Nuxt SSR?</h3>
<p>Embla Carousel offers native SSR support, minimal bundle size, and excellent performance. It is specifically designed for modern web frameworks and works seamlessly with Nuxt's server-side rendering. Unlike other slider libraries, Embla does not cause hydration mismatches and offers optimal performance for Nuxt projects.</p>
<h3>2. How do I implement Embla Carousel in a Nuxt project?</h3>
<p>The implementation is very simple: Install the Embla Carousel Vue plugin with <code>npm install embla-carousel-vue</code> and use the <code>emblaCarouselVue</code> function in your components. Create a client-only component for the carousel functionality and use Nuxt's SSR features for optimal performance. We have explained the detailed implementation step by step in this article.</p>
<h3>3. What performance optimizations does Embla Carousel offer?</h3>
<p>Embla Carousel offers lazy loading, touch gestures, keyboard navigation, automatic resizing, and minimal JavaScript execution. These features ensure smooth animations and an optimal user experience. In addition, Embla supports Intersection Observer for on-demand initialization and offers native touch optimization without additional libraries.</p>
<h3>4. Can I use Embla Carousel with TypeScript in Nuxt?</h3>
<p>Yes, Embla Carousel offers full TypeScript support. The types are already included in the package and offer excellent IDE support for secure development. The <code>emblaCarouselVue</code> function is fully typed and offers IntelliSense for all options and methods.</p>
<h3>5. How do I handle images and media in Embla Carousel?</h3>
<p>Embla Carousel supports lazy loading for images and offers optimized media handling features. Combine it with Nuxt Image for additional performance optimizations. The <code>loading="lazy"</code> attribute is natively supported, and you can use Intersection Observer for additional performance optimizations.</p>
<h3>6. What alternatives to Embla Carousel are there for Nuxt?</h3>
<p>Alternatives include Swiper.js, Splide.js, or Slick Carousel. However, Embla Carousel stands out with better SSR support, smaller bundle size, and more modern architecture. As we have shown in this article, traditional slider libraries have significant disadvantages in Nuxt projects, while Embla Carousel was developed specifically for modern web frameworks.</p>]]></content:encoded>
            <category>Nuxt</category>
            <category>Vue.js</category>
            <category>Development</category>
            <category>Performance</category>
            <category>SEO</category>
            <enclosure url="https://blueshoe.io/img/blogs/nuxt-embla-slider.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Optimal Use of TailwindCSS in Nuxt 3]]></title>
            <link>https://blueshoe.io/blog/nuxt3-tailwindcss-best-practices</link>
            <guid>https://blueshoe.io/blog/nuxt3-tailwindcss-best-practices</guid>
            <pubDate>Mon, 14 Jul 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>TailwindCSS is a high-performance solution for styling in Nuxt 3 projects. Learn how you optimally integrate it, create design systems, and maximize your Nuxt app's performance. We'll show you best practices for efficient usage.</p>
<p>In this article, we'll explain how you optimally integrate TailwindCSS into your Nuxt 3 project, which performance optimizations you can use, and how you build a scalable design system.</p>
<p><img src="/img/blog/tailwind-nuxt3.svg" alt="Optimal Use of TailwindCSS in Nuxt 3"></p>
<p>For this article you should be familiar with:</p>
<ul>
<li><a href="https://tailwindcss.com/">TailwindCSS</a> and <a href="https://tailwindcss.nuxtjs.org/">Tailwind CSS for Nuxt</a></li>
<li><a href="https://nuxt.com/">Nuxt in Version 3</a></li>
<li>Clean Vue.js Components</li>
</ul>
<p>If you have any questions or if anything is unclear, you can use the comment function below the article.</p>
<h2>Introduction: TailwindCSS meets Nuxt 3</h2>
<h3>What is TailwindCSS?</h3>
<p>TailwindCSS is a Utility-First CSS framework that follows a completely different approach compared to traditional CSS frameworks. Instead of predefined components, it offers a comprehensive collection of utility classes that can be directly used in HTML. These classes enable creating designs directly in the markup without having to switch between HTML and CSS files.</p>
<h3>Why TailwindCSS with Nuxt 3?</h3>
<p>The combination of Nuxt 3 and TailwindCSS offers a high-performance foundation for modern web applications. Nuxt 3's Server-Side Rendering (SSR) and Static Site Generation (SSG) capabilities perfectly harmonize with TailwindCSS's Utility-First approach.</p>
<h4>Advantages of the Combination:</h4>
<ol>
<li>
<p><strong>Rapid Development</strong></p>
<ul>
<li>Direct styling in the template</li>
<li>No context switching between files</li>
<li>Consistent design language</li>
</ul>
</li>
<li>
<p><strong>Optimized Performance</strong></p>
<ul>
<li>Automatic PurgeCSS integration</li>
<li>Minimal bundle size</li>
<li>Efficient caching</li>
</ul>
</li>
<li>
<p><strong>Flexibility</strong></p>
<ul>
<li>Easy adaptation to design systems</li>
<li>Responsive design without media queries</li>
<li>Dark Mode support</li>
</ul>
</li>
<li>
<p><strong>Developer Experience</strong></p>
<ul>
<li>Intuitive API</li>
<li>Good IDE support</li>
<li>Comprehensive documentation</li>
</ul>
</li>
</ol>
<h4>Drawbacks and Challenges:</h4>
<ol>
<li>
<p><strong>Learning Curve</strong></p>
<ul>
<li>New styling approach</li>
<li>Many utility classes to learn</li>
<li>More complex templates</li>
</ul>
</li>
<li>
<p><strong>Team Coordination</strong></p>
<ul>
<li>Consistent naming conventions needed</li>
<li>Code reviews require Tailwind knowledge</li>
<li>Potential template complexity</li>
</ul>
</li>
<li>
<p><strong>Build Time</strong></p>
<ul>
<li>Longer build times for large projects</li>
<li>Higher memory usage during development</li>
<li>Complex PostCSS configuration</li>
</ul>
</li>
</ol>
<h3>Setting Up the Development Environment</h3>
<p>Before we begin installation, we should optimize the development environment. Here are the most important tools and settings for VSCode:</p>
<h4>1. VSCode Extensions</h4>
<p>Install the following extensions for the best developer experience:</p>
<ul>
<li><a href="https://marketplace.visualstudio.com/items?itemName=bradlc.vscode-tailwindcss"><strong>Tailwind CSS IntelliSense</strong></a>
<ul>
<li>Autocomplete for Tailwind classes</li>
<li>Hover preview</li>
<li>Syntax highlighting</li>
</ul>
</li>
</ul>
<h4>2. VSCode Settings</h4>
<p>Add these settings to your <code>settings.json</code>:</p>
<pre><code class="language-json">{
  "editor.quickSuggestions": {
    "strings": true
  },
  "files.associations": {
    "*.css": "tailwindcss"
  }
}
</code></pre>
<h4>3. Debug Tools</h4>
<p>We recommend the following tools for development:</p>
<ul>
<li>
<p><strong>Tailwind CSS Debug Tools</strong></p>
<ul>
<li>Browser DevTools for Tailwind</li>
<li>Component inspection</li>
<li>Responsive design testing</li>
</ul>
</li>
<li>
<p><strong>Nuxt DevTools</strong></p>
<ul>
<li>Performance monitoring</li>
<li>Component hierarchy</li>
<li>State management</li>
</ul>
</li>
</ul>
<h2>Installation and Basic Configuration</h2>
<p>Integrating TailwindCSS in Nuxt 3 is very simple thanks to the official @nuxtjs/tailwindcss module. The module automatically handles the basic configuration and optimizes TailwindCSS for use with Nuxt 3.</p>
<h4>1. Prerequisites</h4>
<p>Ensure you have a functioning <a href="https://nuxt.com/docs/getting-started/installation">Nuxt 3 Project</a>:</p>
<pre><code class="language-bash"># Create new Nuxt 3 Project
npm create nuxt &#x3C;project-name>
cd &#x3C;project-name>

# Install dependencies
npm install
</code></pre>
<h4>2. Installation</h4>
<p>Installation is done with a single command:</p>
<pre><code class="language-bash">npx nuxi@latest module add tailwindcss
</code></pre>
<p>You can also use npm or yarn:</p>
<pre><code class="language-bash"># With npm
npm install -D @nuxtjs/tailwindcss

# With yarn
yarn add -D @nuxtjs/tailwindcss
</code></pre>
<h4>3. Nuxt Configuration</h4>
<p>After installation, the module will be automatically activated in your <code>nuxt.config.ts</code>. You can adjust the configuration as needed:</p>
<pre><code class="language-typescript">// nuxt.config.ts
export default defineNuxtConfig({
  modules: ['@nuxtjs/tailwindcss']
})
</code></pre>
<h4>4. Tailwind Configuration</h4>
<p>The module automatically searches for the following files:</p>
<ul>
<li><code>./assets/css/tailwind.css</code></li>
<li><code>./tailwind.config.{js,cjs,mjs,ts}</code></li>
</ul>
<p>If these files do not exist, they will be automatically created with a base configuration. You can also manually adjust the configuration:</p>
<pre><code class="language-typescript">// tailwind.config.ts
import type { Config } from 'tailwindcss'

export default {
  content: [
    './components/**/*.{js,vue,ts}',
    './layouts/**/*.vue',
    './pages/**/*.vue',
    './plugins/**/*.{js,ts}',
    './app.vue',
  ],
  theme: {
    extend: {
      colors: {
        'primary': {
          50: '#f0f9ff',
          100: '#e0f2fe',
          // ... further gradations
          900: '#0c4a6e',
        }
      },
      fontFamily: {
        sans: ['Inter var', 'sans-serif'],
      }
    }
  },
  plugins: [
    require('@tailwindcss/typography'),
  ],
} satisfies Config
</code></pre>
<h4>5. CSS Directives</h4>
<p>In your <code>assets/css/tailwind.css</code> file, the following directives should be included:</p>
<pre><code class="language-css">@tailwind base;
@tailwind components;
@tailwind utilities;
</code></pre>
<h4>6. Start Development</h4>
<p>Start the development server and verify the installation:</p>
<pre><code class="language-bash">npm run dev
# or
yarn dev
</code></pre>
<p>Open the Tailwind viewer at <code>http://localhost:3000/_tailwind/</code> to check your configuration.</p>
<h2>From Utility Chaos to Scalable Design System</h2>
<p>The concern is understandable: Doesn't a utility-first approach lead to unwieldy, unreadable HTML templates? The answer is a clear no. Without structure, that can happen, but with the right strategy, TailwindCSS becomes the foundation for a <em>robust and scalable design system</em>.</p>
<p>Such a system is key for sustainable projects. It ensures that the design remains consistent, even as the project grows and new developers join. Changes can be made centrally instead of in hundreds of code locations.</p>
<p>Let's see how you can implement this in Nuxt 3 with a strategy based on two pillars.</p>
<h3>1. The <code>tailwind.config.ts</code> as Your Foundation</h3>
<p>Consider the <code>tailwind.config.ts</code> as the "Single Source of Truth" for your design. Here you define all fundamental visual aspects of your application — the so-called design tokens.</p>
<p>Instead of scattering hardcoded values like <code>#3b82f6</code> throughout the code, give them semantic names. This makes your intention clearer and maintenance easier.</p>
<p>Look at this extended configuration:</p>
<pre><code class="language-typescript">// tailwind.config.ts
import type { Config } from 'tailwindcss'
import defaultTheme from 'tailwindcss/defaultTheme'

export default {
  content: [
    // ... your content paths
  ],
  theme: {
    extend: {
      colors: {
        'primary': {
          '50': '#eff6ff',
          '100': '#dbeafe',
          '200': '#bfdbfe',
          // ... further colors
        }
      },
      fontFamily: {
        // ... font definitions
      }
    }
  },
  plugins: [
    // ... plugins
  ],
} satisfies Config
</code></pre>
<p>You no longer use <code>text-blue-500</code>, but <code>text-primary-500</code>. When your brand color changes, you only need to adjust it at <em>one single location</em> in the configuration. All occurrences will be automatically updated.</p>
<h3>2. Vue Components as Reusable Building Blocks</h3>
<p>After you've established the design tokens as a foundation, comes the second rule: Abstraction through components. Instead of using long and repetitive class lists in the markup, you encapsulate them in reusable Vue components.</p>
<p>Let's take the example of a simple button. Without a component, your code could look like this:</p>
<pre><code class="language-html">&#x3C;!-- Scattered throughout your application -->
&#x3C;button class="inline-flex items-center justify-center rounded-md px-4 py-2 text-base font-medium text-white transition-colors bg-primary-600 hover:bg-primary-700 focus:outline-none focus:ring-2 focus:ring-primary-500 focus:ring-offset-2">
  Execute Action
&#x3C;/button>
</code></pre>
<p>This is not only cluttered but also a nightmare for changes.</p>
<p>The solution is to create a <code>Button.vue</code> component, as will be detailed in the next section. The usage then reduces to a clean, readable line:</p>
<pre><code class="language-html">&#x3C;Button variant="primary">Execute Action&#x3C;/Button>
</code></pre>
<p>The added value is enormous:</p>
<ul>
<li><strong>Readability:</strong> The code expresses <em>intent</em>, not implementation details.</li>
<li><strong>Maintainability:</strong> Button styling is changed only in <code>Button.vue</code>.</li>
<li><strong>Consistency:</strong> All buttons are guaranteed to look identical.</li>
</ul>
<p>When you consistently apply these two principles - a central configuration for design tokens and encapsulation of UI logic in components - you build a system that can grow with your project. It remains maintainable, consistent, and makes onboarding for new team members significantly easier.</p>
<h2>Best Practices for Components</h2>
<h3>1. Component Structure</h3>
<p>Here's an example of an optimized Button component with TailwindCSS:</p>
<pre><code class="language-vue">&#x3C;!-- components/Button.vue -->
&#x3C;script setup lang="ts">
interface Props {
  variant?: 'primary' | 'secondary' | 'outline'
  size?: 'sm' | 'md' | 'lg'
}

const props = withDefaults(defineProps&#x3C;Props>(), {
  variant: 'primary',
  size: 'md'
})

const buttonClasses = computed(() => {
  const baseClasses = 'inline-flex items-center justify-center rounded-md font-medium transition-colors focus:outline-none focus:ring-2 focus:ring-offset-2'
  
  const variants = {
    primary: 'bg-primary-600 text-white hover:bg-primary-700 focus:ring-primary-500',
    secondary: 'bg-gray-100 text-gray-900 hover:bg-gray-200 focus:ring-gray-500',
    outline: 'border border-gray-300 bg-white text-gray-700 hover:bg-gray-50 focus:ring-primary-500'
  }
  
  const sizes = {
    sm: 'px-3 py-1.5 text-sm',
    md: 'px-4 py-2 text-base',
    lg: 'px-6 py-3 text-lg'
  }
  
  return `${baseClasses} ${variants[props.variant]} ${sizes[props.size]}`
})
&#x3C;/script>

&#x3C;template>
  &#x3C;button :class="buttonClasses">
    &#x3C;slot />
  &#x3C;/button>
&#x3C;/template>
</code></pre>
<h3>2. Dark Mode Support</h3>
<p>TailwindCSS offers an elegant solution for implementing a dark mode in Nuxt 3 applications. The dark mode can either be system-dependent or manually controlled. Here we explain both approaches:</p>
<h4>Use in components</h4>
<p>To use dark mode in your components, you can use the <code>dark:</code> variant of tailwind classes. Here is an example of a map component:</p>
<pre><code class="language-vue">&#x3C;template>
  &#x3C;div class="bg-white dark:bg-gray-800 text-gray-900 dark:text-gray-100 p-4 rounded-lg shadow">
    &#x3C;h2 class="text-xl font-bold mb-2">Titel&#x3C;/h2>
    &#x3C;p class="text-gray-600 dark:text-gray-300">
      Dieser Text passt sich automatisch dem Dark Mode an.
    &#x3C;/p>
  &#x3C;/div>
&#x3C;/template>
</code></pre>
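<p>The manual (class-based) approach can be sketched as two small helpers. This assumes <code>darkMode: 'class'</code> in your <code>tailwind.config.ts</code>; the storage key <code>'theme'</code> and the element shape are illustrative assumptions — in a real app you would call <code>applyTheme(document.documentElement, …)</code> and read the preference from <code>localStorage</code> and <code>matchMedia('(prefers-color-scheme: dark)')</code>.</p>

```typescript
// Hedged sketch of class-based dark mode (assumes darkMode: 'class').
type Theme = 'light' | 'dark'

interface ClassListLike {
  add(cls: string): void
  remove(cls: string): void
}

// Tailwind's dark: variants match when the 'dark' class is on the root element
function applyTheme(root: { classList: ClassListLike }, theme: Theme): void {
  if (theme === 'dark') {
    root.classList.add('dark')
  } else {
    root.classList.remove('dark')
  }
}

// Saved user preference wins; the system setting is the fallback
function resolveTheme(stored: string | null, systemPrefersDark: boolean): Theme {
  if (stored === 'dark' || stored === 'light') return stored
  return systemPrefersDark ? 'dark' : 'light'
}
```

<p>This split keeps the persistence rule ("user choice first, system second") testable and separate from the DOM mutation.</p>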
<h4>Best Practices for Dark Mode</h4>
<ol>
<li>
<p><strong>Consistent colour palette</strong></p>
<ul>
<li>Define a clear colour palette for both modes</li>
<li>Use semantic colour names (e.g. <code>primary</code>, <code>secondary</code>)</li>
<li>Consider contrast ratios</li>
</ul>
</li>
<li>
<p><strong>Performance</strong></p>
<ul>
<li>Avoid unnecessary reflows when changing themes</li>
<li>Use CSS variables for dynamic values</li>
<li>Implement smooth transitions</li>
</ul>
</li>
</ol>
<ol start="3">
<li>
<p><strong>Accessibility</strong></p>
<ul>
<li>Ensure that the contrast is sufficient in both modes</li>
<li>Test the readability of all texts</li>
<li>Take colour blindness into account</li>
</ul>
</li>
</ol>
<ol start="4">
<li><strong>Persistence</strong>
<ul>
<li>Save the user preference</li>
<li>Respect system settings as fallback</li>
<li>Implement a smooth transition</li>
</ul>
</li>
</ol>
<p>With Tailwind's default <code>darkMode: 'media'</code> strategy, dark mode is activated automatically whenever the operating system is in dark mode; set <code>darkMode: 'class'</code> if you want to control it manually.</p>
<h2>Performance Optimizations for TailwindCSS in Nuxt 3</h2>
<p>TailwindCSS is not only flexible but also highly performant. Without optimizations, however, it can inflate your bundle size. Nuxt 3 and Tailwind offer smart features that you should use to minimize loading times and keep the app fluid.</p>
<h3>1. PurgeCSS and Just-in-Time (JIT) Mode</h3>
<p>Tailwind's JIT compiler generates CSS only for the classes you actually use. In Nuxt 3, this is activated by default. To optimize it, ensure that your <code>tailwind.config.ts</code> covers all relevant files in the <code>content</code> array:</p>
<pre><code class="language-typescript">// tailwind.config.ts
export default {
  content: [
    './components/**/*.{vue,js,ts}',
    './layouts/**/*.vue',
    './pages/**/*.vue',
    './plugins/**/*.{js,ts}',
    './nuxt.config.{js,ts}',
    './app.vue',
  ],
  // ...
}
</code></pre>
<p>This automatically reduces unused CSS during the build. Optionally, for manual tests or debugging:</p>
<pre><code class="language-bash">npx tailwindcss -i assets/css/tailwind.css -o .output/public/_nuxt/tailwind.css --minify
</code></pre>
<h3>2. Caching and Lazy Loading</h3>
<p>In large projects, the build process can become slow. Activate caching in Nuxt by adding the following to <code>nuxt.config.ts</code>:</p>
<pre><code class="language-typescript">export default defineNuxtConfig({
  // ...
  nitro: {
    compressPublicAssets: true,
  },
})
</code></pre>
<p>For dynamic components, load Tailwind styles lazily, e.g., with Nuxt's <code>&#x3C;NuxtLazyHydrate></code>.</p>
<h3>3. Bundle Analysis</h3>
<p>Use tools like:</p>
<pre><code class="language-bash">npx nuxi analyze
</code></pre>
<p>Or the Webpack Bundle Analyzer to see how much Tailwind contributes to your bundle size. Goal: Keep final CSS under 10-20 KB.</p>
<p>These measures save bandwidth and improve the Core Web Vitals of your app.</p>
<h2>Common Errors and How to Avoid Them</h2>
<p>Even experienced developers sometimes stumble with Tailwind in Nuxt. Here are common pitfalls and how to handle them:</p>
<h3>1. Overloaded Templates</h3>
<p><strong>Error:</strong> Too many utility classes directly in the HTML make the code unreadable.</p>
<p><strong>Solution:</strong> As described in the section on components, encapsulate them in Vue components. Add props for flexibility, without complicating the markup.</p>
<h3>2. Inconsistent Configuration</h3>
<p><strong>Error:</strong> Forgotten paths in the <code>content</code> array lead to classes not being generated.</p>
<p><strong>Solution:</strong> Check with the Tailwind viewer (<code>/_tailwind/</code>) and expand the array as needed. Note that the scanner only matches complete class names written out in your source files.</p>
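<p>A related pitfall: even with correct <code>content</code> paths, class names that are assembled at runtime (like <code>bg-${color}-500</code>) cannot be detected by Tailwind's scanner and are silently dropped from the generated CSS. Map props to complete class names instead; the color names below are illustrative assumptions.</p>

```typescript
// Hedged sketch: always write out complete Tailwind class names so the
// scanner can find them. Runtime string concatenation would be purged.
const badgeColorClasses: { [color: string]: string } = {
  blue: 'bg-blue-500',
  green: 'bg-green-500',
  red: 'bg-red-500',
}

function badgeClass(color: string): string {
  // Fall back to a neutral style for unknown colors
  return badgeColorClasses[color] ?? 'bg-gray-500'
}
```

<p>If you really need dynamic names, Tailwind's <code>safelist</code> option in the config is the supported escape hatch.</p>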
<h3>3. Performance Problems in Development</h3>
<p><strong>Error:</strong> Slow rebuilds in large projects, usually caused by overly broad <code>content</code> globs (for example, patterns that scan <code>node_modules</code>).</p>
<p><strong>Solution:</strong> Since Tailwind v3 the JIT engine is always enabled; keep the <code>content</code> globs as narrow as possible and use <code>npx tailwindcss --watch</code> for fast incremental rebuilds during development.</p>
<h3>4. Accessibility Pitfalls</h3>
<p><strong>Error:</strong> Forgotten focus styles or insufficient contrast.</p>
<p><strong>Solution:</strong> Always integrate <code>focus:</code>-variants and test with tools like Lighthouse.</p>
<p>By avoiding these errors, you save time and frustration in the project process.</p>
<h2>FAQ</h2>
<h3>How do I integrate TailwindCSS into an existing Nuxt 3 Project?</h3>
<p>Integrating TailwindCSS into an existing Nuxt 3 project is simple. Install the official Nuxt module and configure it in your nuxt.config.ts. Detailed instructions can be found in the <a href="https://tailwindcss.nuxtjs.org/getting-started/installation">official documentation</a>.</p>
<h3>What Performance Optimizations Does TailwindCSS Offer in Nuxt 3?</h3>
<p>TailwindCSS in Nuxt 3 offers optimizations like PurgeCSS integration, JIT compiler, caching, and bundle analysis. These features reduce bundle size and improve load times. More details can be found in the performance optimization section.</p>
<h3>How Do I Create a Scalable Design System with TailwindCSS in Nuxt 3?</h3>
<p>Through central configuration in tailwind.config.ts and reusable Vue components, you build a scalable system. Define design tokens and encapsulate styles in components for consistency and maintainability.</p>
<h3>What Are the Advantages of Combining TailwindCSS with Nuxt 3?</h3>
<p>The combination offers fast development, optimized performance, consistent designs, and high maintainability. It enables modern, performant web applications with utility-first styling.</p>
<h3>How Do I Implement Dark Mode in TailwindCSS with Nuxt 3?</h3>
<p>Activate Dark Mode in the tailwind.config.ts with darkMode: 'class'. Use 'dark:' prefixes in classes and consider best practices for consistency and accessibility.</p>
<h3>What Common Errors Should I Avoid with TailwindCSS in Nuxt 3?</h3>
<p>Avoid overloaded templates, inconsistent configurations, performance problems, and accessibility pitfalls. Encapsulate styles in components and test regularly.</p>]]></content:encoded>
            <category>Nuxt</category>
            <category>Vue.js</category>
            <category>Tailwind CSS</category>
            <category>Development</category>
            <category>Performance</category>
            <enclosure url="https://blueshoe.io/img/blog/tailwind-nuxt3.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Nuxt 4 is here: A Deep Dive into the New Features and What They Mean for Developers]]></title>
            <link>https://blueshoe.io/blog/nuxt4-new-features</link>
            <guid>https://blueshoe.io/blog/nuxt4-new-features</guid>
            <pubDate>Mon, 22 Sep 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>After months of speculation and intensive beta testing, the Nuxt team has officially released Nuxt 4.</p>
<p><img src="/img/blog/nuxt4.svg" alt="Nuxt 4 is here, what&#x27;s new?">{.object-cover .max-w-full .mb-5}</p>
<p>Unlike the jump from Nuxt 2 to 3, this time the team is focusing on evolution instead of revolution – an approach that is much more developer-friendly.<br>
In this article, we take a detailed look at the most important new features and show how Nuxt 4 noticeably improves the daily life of a developer. From the new directory structure to smarter data fetching features – here you'll learn everything you need to know about the framework's evolution.</p>
<h2>Nuxt 4 – Evolution instead of Revolution</h2>
<h3>The Success of Nuxt 3</h3>
<p>Nuxt 3 has fundamentally changed Vue development. While previous approaches often struggled with complex build configurations and tedious SSR setups, Nuxt 3 brought a new ease to development with the Nitro server, automatic imports, and seamless TypeScript integration.</p>
<h3>The New Strategy: Continuous Improvement</h3>
<p>The Nuxt team has learned from past experiences. Instead of another major rewrite, they are focusing on the continuous improvement of existing concepts. This approach significantly reduces the migration effort and allows developers to benefit from the new features step by step.</p>
<h3>Practical Implications</h3>
<p>The result is a framework update that allows existing Nuxt 3 projects to be upgraded without major disruptions. The improvements are immediately noticeable, while the migration effort remains manageable – a welcome contrast to previous major updates.</p>
<h2>The Top New Features in Nuxt 4 in Detail</h2>
<h3>New Project Structure: Better Organization with the app Directory</h3>
<p>One of the most noticeable new features in Nuxt 4 is the optional <code>app/</code> directory structure. This solves a well-known problem: in larger projects, numerous folders quickly accumulate in the root directory, which affects clarity.</p>
<p>The new structure clearly separates app-specific code from configuration files. All application files are organized under <code>app/</code>, while configuration files remain in the root directory.</p>
<p><strong>Advantages of the new structure:</strong></p>
<ul>
<li><strong>Clearer separation</strong> between app code and configuration</li>
<li><strong>Better clarity</strong> in larger projects</li>
<li><strong>Consistency</strong> with other modern frameworks like Next.js</li>
<li><strong>Improved IDE support</strong> through a clearer directory structure</li>
</ul>
<p><strong>Comparison of structures:</strong></p>
<pre><code class="language-bash"># Old structure (Nuxt 3)
/
├── components/
├── pages/
├── composables/
├── layouts/
├── middleware/
├── plugins/
├── assets/
├── public/
├── server/
├── nuxt.config.ts
└── package.json

# New structure (Nuxt 4)
/
├── app/
│   ├── components/
│   ├── pages/
│   ├── composables/
│   ├── layouts/
│   ├── middleware/
│   ├── plugins/
│   ├── assets/
│   ├── utils/
│   ├── app.vue
│   └── app.config.ts
├── content/
├── layers/
├── modules/
├── public/
├── server/
├── shared/
├── nuxt.config.ts
└── package.json
</code></pre>
<p>The new structure makes it much easier to distinguish between app-specific code and configuration files. Especially in larger teams and enterprise projects, this leads to significantly better code organization and reduces the time needed to search for specific files.</p>
<h3>Singleton Data Fetching Layer: Revolutionary Caching System</h3>
<p>Data fetching in Nuxt 4 has been fundamentally revised and introduces the "Singleton Data Fetching Layer". This innovation brings significant improvements in performance and consistency.</p>
<p><strong>What has changed:</strong></p>
<p><strong>1. Shared Refs for the same Key:</strong>
All calls to <code>useAsyncData</code> or <code>useFetch</code> with the same key now share the same <code>data</code>, <code>error</code>, and <code>status</code> refs. Consequently, calls that use the same explicit key must not pass conflicting <code>deep</code>, <code>transform</code>, <code>pick</code>, <code>getCachedData</code>, or <code>default</code> options.</p>
<p><strong>2. Extended <code>getCachedData</code> Control:</strong>
The <code>getCachedData</code> function is now called on every data fetch, even if it is caused by a watcher or <code>refreshNuxtData</code>. The function receives a context object with the cause of the request, which allows more control over the use of cached data.</p>
<p><strong>3. Reactive Key Support:</strong>
You can now use computed refs, plain refs, or getter functions as keys. When the key changes, the data is refetched automatically and stored under a separate cache entry per key.</p>
<p><strong>4. Automatic Data Cleanup:</strong>
When the last component that fetches data with <code>useAsyncData</code> is unmounted, Nuxt automatically cleans up the corresponding data from the cache.</p>
<p><strong>Practical Example: E-commerce Product Page</strong></p>
<p>Imagine you are building an e-commerce site with product details and related products:</p>
<pre><code class="language-javascript">// Product page - loads product data
const { data: product } = await useFetch(`/api/products/${productId}`, {
  key: `product-${productId}`
})

// Related products component - shares the same data
const { data: relatedProducts } = await useFetch(`/api/products/${productId}/related`, {
  key: `related-${productId}`
})

// Shopping cart component - loads product data again
const { data: cartProduct } = await useFetch(`/api/products/${productId}`, {
  key: `product-${productId}` // Same key = same data!
})
// → No additional API call, uses cached data
</code></pre>
<p><strong>What's happening here:</strong></p>
<ul>
<li><strong>One API call</strong> for product data, but <strong>three components</strong> use the data</li>
<li><strong>Automatic caching</strong> prevents duplicate requests</li>
<li><strong>Shared refs</strong> ensure that all components stay in sync</li>
</ul>
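<p>The sharing behavior can be illustrated with a small plain-JavaScript model. This is a conceptual sketch, not Nuxt's actual implementation: calls with the same key reuse one cache entry, so the underlying request runs only once.</p>
<pre><code class="language-javascript">// Conceptual model of key-based deduplication (NOT Nuxt's real internals)
const cache = new Map()
let apiCalls = 0

async function fakeFetch(url) {
  apiCalls += 1 // counted synchronously when the request starts
  return { url, price: 42 }
}

function useFetchLike(url, key) {
  // calls with the same key share the same stored promise
  if (!cache.has(key)) {
    cache.set(key, fakeFetch(url))
  }
  return cache.get(key)
}

// three "components" on the same page
const product     = useFetchLike('/api/products/1', 'product-1')
const cartProduct = useFetchLike('/api/products/1', 'product-1')
const related     = useFetchLike('/api/products/1/related', 'related-1')

console.log(apiCalls)                // 2 - the shared key caused only one request
console.log(product === cartProduct) // true - both use the same cache entry
</code></pre>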
<p><strong>Real-world Scenario: Blog with Comments</strong></p>
<pre><code class="language-javascript">// Blog post page
const { data: post } = await useFetch(`/api/posts/${postId}`, {
  key: `post-${postId}`
})

// Comments component (same page)
const { data: comments } = await useFetch(`/api/posts/${postId}/comments`, {
  key: `comments-${postId}`
})

// User navigates to another page → components are unmounted
// → Cache is automatically cleared (no memory leaks!)

// User returns → data is reloaded
// → Fresh data, optimal memory consumption
</code></pre>
<p><strong>Advantages in practice:</strong></p>
<ul>
<li><strong>Fewer API calls</strong> = faster loading times</li>
<li><strong>Synchronized data</strong> = no inconsistencies between components</li>
<li><strong>Automatic cleanup</strong> = no memory leaks during navigation</li>
<li><strong>Intelligent caching</strong> = better user experience</li>
</ul>
<p><strong>Pro-Tip: Composable for reusable Data Fetching Logic</strong></p>
<p>Since all components with the same key automatically share the same data, it makes sense to extract the <code>useAsyncData</code>/<code>useFetch</code> calls into their own composables:</p>
<pre><code class="language-typescript">// composables/useProduct.ts
export function useProduct(productId: string) {
  return useAsyncData(
    `product-${productId}`,
    () => $fetch(`/api/products/${productId}`),
    { 
      deep: true,
      transform: (product) => ({ 
        ...product, 
        formattedPrice: `€${product.price.toFixed(2)}`,
        lastViewed: new Date()
      })
    }
  )
}

// In different components
const { data: product } = await useProduct(productId)
// → All use the same data and transformation logic
</code></pre>
<p><strong>Advantages:</strong></p>
<ul>
<li><strong>Consistent keys</strong> across all components</li>
<li><strong>Centralized transformation logic</strong> (price formatting, etc.)</li>
<li><strong>Easy maintenance</strong> and updates</li>
<li><strong>TypeScript support</strong> through central types</li>
</ul>
<p><strong>Important Notes:</strong></p>
<ul>
<li><strong>Consistent options:</strong> All calls with the same key must have identical <code>deep</code>, <code>transform</code>, <code>pick</code>, <code>getCachedData</code>, or <code>default</code> options</li>
<li><strong>Reactive keys:</strong> Computed refs, plain refs, or getter functions as keys enable automatic refetching</li>
<li><strong>Memory management:</strong> Automatic cleanup prevents memory leaks in large applications</li>
</ul>
<p><strong>Configuration: Adjusting Cache Behavior</strong></p>
<p>If you want to disable the new cache behavior, you can configure it in <code>nuxt.config.ts</code>:</p>
<pre><code class="language-javascript">export default defineNuxtConfig({
  experimental: {
    granularCachedData: false,  // Disables granular caching
    purgeCachedData: false      // Disables automatic cleanup
  }
})
</code></pre>
<p><strong>When this makes sense:</strong></p>
<ul>
<li><strong>Legacy projects</strong> with complex caching requirements</li>
<li><strong>Step-by-step migration</strong> of existing caching strategies</li>
<li><strong>Debugging</strong> cache problems during development</li>
</ul>
<p>These improvements ensure that unnecessary API calls are avoided and the application becomes significantly more performant. The difference is particularly noticeable in complex applications with many data sources.</p>
<h3>Developer Experience: Improved Productivity and Efficiency</h3>
<p>The developer experience has been significantly improved in Nuxt 4, which has a direct impact on developer productivity. The optimizations focus on faster workflows and better development tools.</p>
<p><strong>Performance Improvements:</strong></p>
<ul>
<li><strong>Significantly faster start times</strong> for the development server</li>
<li><strong>Improved HMR reliability</strong> (Hot Module Replacement)</li>
<li><strong>Optimized TypeScript integration</strong> for better type safety and autocompletion</li>
<li><strong>Intelligent build optimizations</strong> for faster compile times</li>
</ul>
<p><strong>Practical Improvements:</strong></p>
<ul>
<li>Seamless integration with modern IDEs</li>
<li>Better error messages and debugging information</li>
<li>Optimized bundle size through intelligent tree-shaking</li>
<li>Improved source maps for more efficient debugging</li>
</ul>
<p>These improvements ensure that developers spend less time waiting and debugging and can concentrate more on the actual development.</p>
<h2>The Upgrade Process: How to Successfully Switch from Nuxt 3 to 4</h2>
<p>The switch from Nuxt 3 to Nuxt 4 has been deliberately designed to be straightforward. The Nuxt team has placed great emphasis on ensuring that existing projects can be migrated without major changes. After the upgrade, most Nuxt 4 behaviors are already standard, but some features can still be configured to ensure backward compatibility during migration.</p>
<h3>Step 1: Update Nuxt to Version 4</h3>
<p>The first step is to update the Nuxt package to version 4:</p>
<pre><code class="language-bash"># With npm
npm install nuxt@^4.0.0

# With yarn
yarn add nuxt@^4.0.0

# With pnpm
pnpm add nuxt@^4.0.0

# With bun
bun add nuxt@^4.0.0
</code></pre>
<p><strong>What's happening here?</strong></p>
<ul>
<li>The <code>nuxt</code> package is updated to version 4</li>
<li>All Nuxt 4 behaviors are activated</li>
<li>Most existing configurations remain functional</li>
</ul>
<h3>Step 2: Perform Migration</h3>
<p>You have two options for migration:</p>
<h4>Option A: Automatic Migration with Codemods (recommended)</h4>
<p>The Nuxt team has collaborated with the Codemod team to automate many migration steps:</p>
<pre><code class="language-bash"># Run all migration codemods

# With npm
npx codemod@latest nuxt/4/migration-recipe

# With yarn
yarn dlx codemod@latest nuxt/4/migration-recipe

# With pnpm
pnpm dlx codemod@latest nuxt/4/migration-recipe

# With bun
bun x codemod@latest nuxt/4/migration-recipe
</code></pre>
<p><strong>What do the codemods do?</strong></p>
<ul>
<li>Automatic adjustment of the directory structure</li>
<li>Migration of outdated configuration options</li>
<li>Adjustment of TypeScript configurations</li>
<li>Update of import paths and aliases</li>
</ul>
<h4>Option B: Manual Migration</h4>
<p>If you want to perform the migration manually or the codemods do not cover all aspects:</p>
<p><strong>2.1 Create New Directory Structure</strong></p>
<p>The new directory structure offers better performance and IDE support:</p>
<pre><code class="language-bash"># Create new app/ directory
mkdir app

# Move existing directories into app/
mv assets app/
mv components app/
mv composables app/
mv layouts app/
mv middleware app/
mv pages app/
mv plugins app/
mv utils app/

# Move app-specific files
mv app.vue app/
mv error.vue app/
mv app.config.ts app/
</code></pre>
<p><strong>2.2 Clean up Root Directory</strong></p>
<p>Make sure these directories remain in the root:</p>
<ul>
<li><code>nuxt.config.ts</code></li>
<li><code>content/</code></li>
<li><code>layers/</code></li>
<li><code>modules/</code></li>
<li><code>public/</code></li>
<li><code>server/</code></li>
</ul>
<p><strong>2.3 Adjust TypeScript Configuration</strong></p>
<p>Nuxt 4 uses new TypeScript project references for better type safety:</p>
<pre><code class="language-json">// tsconfig.json
{
  "files": [],
  "references": [
    { "path": "./.nuxt/tsconfig.app.json" },
    { "path": "./.nuxt/tsconfig.server.json" },
    { "path": "./.nuxt/tsconfig.shared.json" },
    { "path": "./.nuxt/tsconfig.node.json" }
  ]
}
</code></pre>
<pre><code class="language-json">// package.json - Update type-checking scripts
{
  "scripts": {
    "typecheck": "nuxt prepare &#x26;&#x26; vue-tsc -b --noEmit"
  }
}
</code></pre>
<p><strong>Move Type Augmentations:</strong></p>
<ul>
<li><strong>App context:</strong> Move files to <code>app/</code></li>
<li><strong>Server context:</strong> Move files to <code>server/</code></li>
<li><strong>Shared:</strong> Move files to <code>shared/</code></li>
</ul>
<p><strong>2.4 Adjust Configuration</strong></p>
<p><strong>Migrate outdated generate configuration:</strong></p>
<pre><code class="language-typescript">// Old configuration (Nuxt 3)
export default defineNuxtConfig({
  generate: {
    exclude: ['/admin', '/private'],
    routes: ['/sitemap.xml', '/robots.txt']
  }
})

// New configuration (Nuxt 4)
export default defineNuxtConfig({
  nitro: {
    prerender: {
      ignore: ['/admin', '/private'],
      routes: ['/sitemap.xml', '/robots.txt']
    }
  }
})
</code></pre>
<p><strong>Remove Experimental Features:</strong></p>
<p>These features are no longer configurable in Nuxt 4:</p>
<ul>
<li><code>experimental.treeshakeClientOnly</code> (always <code>true</code>)</li>
<li><code>experimental.configSchema</code> (always <code>true</code>)</li>
<li><code>experimental.polyfillVueUseHead</code> (always <code>false</code>)</li>
<li><code>experimental.respectNoSSRHeader</code> (always <code>false</code>)</li>
</ul>
<h3>Step 3: Testing and Validation</h3>
<pre><code class="language-bash"># Start development server
npm run dev

# Run type-checking
npm run typecheck

# Test build
npm run build

# Run all tests
npm run test
</code></pre>
<p><strong>What to test:</strong></p>
<ul>
<li>All pages load correctly</li>
<li>Data fetching works</li>
<li>TypeScript errors are resolved</li>
<li>Performance improvements are visible</li>
<li>All modules and plugins work</li>
</ul>
<h3>Step 4: Backward Compatibility (if necessary)</h3>
<p>If you want to keep the old directory structure:</p>
<pre><code class="language-typescript">// nuxt.config.ts
export default defineNuxtConfig({
  // Keep V3 structure
  srcDir: '.',
  dir: {
    app: 'app'
  }
})
</code></pre>
<h3>Important Breaking Changes</h3>
<p>Nuxt 4 brings some significant changes that need to be considered during migration:</p>
<p><strong>1. New Directory Structure:</strong></p>
<ul>
<li>Default <code>srcDir</code> is now <code>app/</code> instead of the root directory</li>
<li><code>serverDir</code> is now <code>&#x3C;rootDir>/server</code> instead of <code>&#x3C;srcDir>/server</code></li>
<li><code>layers/</code>, <code>modules/</code>, and <code>public/</code> are resolved relative to <code>&#x3C;rootDir></code></li>
</ul>
<p><strong>2. Singleton Data Fetching Layer:</strong></p>
<ul>
<li>Shared refs for the same key in <code>useAsyncData</code> and <code>useFetch</code></li>
<li>Extended <code>getCachedData</code> control with context object</li>
<li>Reactive key support for automatic refetching</li>
<li>Automatic data cleanup on unmount</li>
</ul>
<p><strong>3. TypeScript Configuration:</strong></p>
<ul>
<li>New project references for better type safety</li>
<li>Separate configurations for app, server, and build time</li>
<li>Type augmentations must be in corresponding directories (<code>app/</code>, <code>server/</code>, <code>shared/</code>)</li>
</ul>
<p><strong>Migration Guide:</strong> For detailed information, we recommend consulting the <a href="https://nuxt.com/docs/4.x/getting-started/upgrade">official Nuxt 4 Upgrade Guide</a>.</p>
<h2>What Does Nuxt 4 Mean for the Future? A Look Ahead</h2>
<h3>The Roadmap: What Comes After Nuxt 4?</h3>
<p>The Nuxt team is already working on <strong>Nuxt 5</strong> and <strong>Nitro v3</strong>, which will bring more revolutionary features. Nuxt 4 forms the solid foundation for these future developments.</p>
<p><strong>Planned Features for the Future:</strong></p>
<ul>
<li>Advanced Server-Side Rendering optimizations</li>
<li>Improved edge computing support</li>
<li>Enhanced TypeScript integration</li>
<li>New performance metrics and monitoring tools</li>
</ul>
<h3>The Importance for the Ecosystem</h3>
<p><strong>Module Developers:</strong> Can gradually adapt their modules to the new features and benefit from the improved API.</p>
<p><strong>Community:</strong> Benefit from the improved developer experience and new opportunities for performance optimizations.</p>
<p><strong>Enterprise:</strong> Have a stable basis for long-term projects with predictable upgrade paths.</p>
<h3>Classification: How Nuxt Continues to Expand its Position as a Leading Vue Framework</h3>
<p>Nuxt 4 solidifies its position as one of the most modern and developer-friendly Vue frameworks and sets new standards in web development. The focus on stability and continuous improvement makes it the ideal choice for professional projects.</p>
<p><strong>Competitive Advantages:</strong></p>
<ul>
<li>Superior developer experience compared to other Vue frameworks</li>
<li>Better performance through intelligent optimizations</li>
<li>Strong community and extensive ecosystem</li>
<li>Enterprise-ready with long-term support</li>
</ul>
<h2>Conclusion: Why Upgrading to Nuxt 4 is Worth it Now</h2>
<h3>Summary of the Main Advantages:</h3>
<p><strong>Performance:</strong> Significantly faster start times and better HMR reliability ensure smoother development.</p>
<p><strong>Code Quality:</strong> Better project structure through the new <code>app/</code> directory organization and optimized TypeScript integration.</p>
<p><strong>Productivity:</strong> Smarter data fetching and an improved developer experience lead to less waiting time and more efficient development.</p>
<p><strong>Future-Proof:</strong> Stable basis for long-term projects with predictable upgrade paths.</p>
<h3>Clear Recommendation</h3>
<p>Upgrading to Nuxt 4 is recommended for all developers and teams already working with Nuxt 3. The improvements are immediately noticeable, while the migration effort remains minimal.</p>
<p><strong>For Teams:</strong> The improved developer experience and performance optimizations justify the upgrade after a short time.</p>
<p><strong>For Enterprise:</strong> The focus on stability and long-term support makes Nuxt 4 the ideal choice for professional projects.</p>
<p>Check out the <a href="https://nuxt.com">official Nuxt 4 documentation</a> now and start your first project with Nuxt 4!</p>
<hr>
<p><em>Have you already had experience with Nuxt 4? Share your insights in the comments and let us know which features excite you the most!</em></p>
<hr>
<h2>FAQ – Frequently Asked Questions about Nuxt 4</h2>
<h3>1. What are the most important new features in Nuxt 4?</h3>
<p>The most important new features in Nuxt 4 are the optional <code>app/</code> directory structure for better project organization, smarter data fetching with automatic caching and deduplication, and an improved developer experience with a faster CLI and optimized TypeScript integration. The focus is on stability and a smooth upgrade path from Nuxt 3.</p>
<h3>2. How is Nuxt 4 different from Nuxt 3?</h3>
<p>Nuxt 4 builds on Nuxt 3 and focuses on evolution rather than revolution. While Nuxt 3 was a complete rewrite, Nuxt 4 concentrates on refining existing concepts. The new <code>app/</code> directory structure offers better organization, data fetching has become smarter, and the developer experience has been improved with faster start times and better HMR reliability.</p>
<h3>3. Is the upgrade from Nuxt 3 to Nuxt 4 complicated?</h3>
<p>No, the upgrade from Nuxt 3 to Nuxt 4 has been deliberately designed to be straightforward. The Nuxt team has placed great emphasis on ensuring that existing projects can be migrated without major changes. The breaking changes are minimal, and most existing projects should work without major adjustments.</p>
<h3>4. What are the benefits of the new app directory structure in Nuxt 4?</h3>
<p>The new optional <code>app/</code> directory structure provides a clearer separation between configuration files and the actual app code. This leads to a better overview in large projects, a structure consistent with other modern frameworks, and cleaner project organization. All app-specific files are organized under <code>app/</code>, while configuration files remain in the root directory.</p>
<h3>5. How does Nuxt 4 improve data fetching?</h3>
<p>Nuxt 4 introduces smarter data fetching with automatic caching and deduplication of requests with the same key. Additionally, there is automatic cleanup when components are unmounted, which prevents memory leaks. The improved error handling and retry mechanisms ensure more robust applications.</p>
<h3>6. What performance improvements does Nuxt 4 offer?</h3>
<p>Nuxt 4 offers significantly faster development server start times, improved HMR reliability, and optimized TypeScript integration. In addition, intelligent build optimizations ensure faster compile times and better bundle sizes through improved tree-shaking.</p>
<h3>7. Can I test Nuxt 4 features in Nuxt 3 already?</h3>
<p>Yes, recent Nuxt 3 releases let you opt into Nuxt 4 behavior ahead of the upgrade. This allows for a gradual migration and early detection of compatibility issues. The official migration guide provides detailed instructions for the transition.</p>
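<p>One documented way to opt into Nuxt 4 behavior from within a Nuxt 3 project (version 3.12 and later) is the compatibility flag:</p>
<pre><code class="language-typescript">// nuxt.config.ts (in a Nuxt 3 project)
export default defineNuxtConfig({
  future: {
    compatibilityVersion: 4
  }
})
</code></pre>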
<h3>8. What are the breaking changes in Nuxt 4?</h3>
<p>Since Nuxt 4 focuses on stability, the breaking changes are minimal. Most existing projects should work without major adjustments. Important changes mainly concern the optional <code>app/</code> structure and some outdated API functions that have been replaced by more modern alternatives.</p>
            <category>Nuxt</category>
            <category>Vue.js</category>
            <category>Development</category>
            <category>Performance</category>
            <category>Documentation</category>
            <enclosure url="https://blueshoe.io/img/blog/nuxt4.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Performance comparison: GKE vs. EKS]]></title>
            <link>https://blueshoe.io/blog/performance-comparison-gke-vs-eks</link>
            <guid>https://blueshoe.io/blog/performance-comparison-gke-vs-eks</guid>
            <pubDate>Thu, 09 Feb 2023 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>The solid performance of managed Kubernetes platforms is generally regarded as a given and is hardly ever put into question. However, maybe there is a difference in how containers perform on different popular managed Kubernetes platforms. I wanted to take a deeper look and selected the two most popular Kubernetes services we use at Blueshoe for our clients: Amazon Elastic Kubernetes Service (EKS) and the Google Kubernetes Engine (GKE).</p>
<p><img src="/img/blog/performance-comparison-gke-vs-eks.jpg" alt="Performance comparison: GKE vs. EKS">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="lg" .mb-5}
EKS vs. GKE – and why does it matter?
:::
:::globalParagraph
According to this statistic from February 2020, 540 respondents answered the question ‘Which of the following container orchestrators do you use?’ with:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>37% of all respondents use EKS</li>
<li>21% of the respondents use GKE
:::
:::globalParagraph
Please bear in mind that the selection of multiple answers was possible, hence why the groups are not exclusive. The numbers have probably changed a bit since then, but it’s obvious that these two are very popular choices in the world of managed Kubernetes. The numbers also match the distribution of Kubernetes platforms that are under Blueshoe’s management to date.
:::
:::globalParagraph
Naturally, we should start the container runtime performance analysis with these two solutions.
:::</li>
</ul>
<p>:::globalTitle{:size="lg" .mb-5}
But why?
:::
:::globalParagraph
For one thing, it’s simply interesting to establish how these two big players perform against each other. On the one hand, you’ve got Amazon Web Services – the giant in the market of hyperscalers. And on the other hand, there is Google – the tech titan and pioneer of Kubernetes.
:::
:::globalParagraph
But more importantly, it always boils down to the costs. If you can get 10% more performance at comparable pricing, some might want to take advantage of Kubernetes’ portability. This is not about the ecosystem or potentially attached services (such as managed databases or storage), but rather the pure container runtime performance. I wanted to answer the question: ‘At which speed runs my code in a very standard Kubernetes cluster?’. And this is what I found:
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
The Benchmark Setup
:::
:::globalParagraph
On EKS, I created a Kubernetes cluster with the following specs:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>Instance type: t3.xlarge</li>
<li>Region: eu-west-1</li>
<li>K8s version: 1.23</li>
<li>OS: Amazon Linux 2</li>
<li>Container runtime: docker</li>
<li>Node VM pricing: 0.1824 USD per hour
:::
:::globalParagraph
To match these parameters as accurately as possible, I created a Google Kubernetes Engine cluster with the following specs:
:::
:::GlobalBlock{.ul-disk .mb-5}</li>
<li>Instance type: e2-standard-4</li>
<li>Region: europe-north1-a</li>
<li>K8s version: 1.23.14-gke.401</li>
<li>OS: Container-Optimized OS with containerd (cos_containerd)</li>
<li>Container runtime: containerd</li>
<li>Node VM pricing: 0.147552 USD per hour
:::
:::globalParagraph
Both machine types incorporate a 4 vCPU machine with 16 GB of RAM based on an Intel processor. The Kubernetes node running the test was dedicated to the test pod and only filled with other ‘default’ pods of that managed Kubernetes offering. I did not use any special configurations, I simply ordered a cluster with the defaults set.
:::</li>
</ul>
<p>:::globalTitle{:size="lg" .mb-5}
How to benchmark the container runtime
:::
:::globalParagraph
One of the main goals of running a performance analysis is to enable very easy replication. Luckily, we’re talking about Kubernetes, which means that it’s just a matter of writing Kubernetes configs and applying them to the cluster. Yet, a few things are still important:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>choose comparable Kubernetes node instance types to make the comparison as fair as possible</li>
<li>do not deploy the benchmark workload next to other containers</li>
<li>use the same Kubernetes version</li>
<li>note down any important differences between the contestants
:::
:::globalParagraph
Unfortunately, it wasn’t exactly easy to find a good benchmark tool that covers all of the following components:
:::
:::GlobalBlock{.ul-disk .mb-5}</li>
<li>the CPU</li>
<li>the memory (RAM)</li>
<li>the container filesystem (not the attached volumes, this is about the native filesystem)
:::</li>
</ul>
<p>:::globalParagraph
A quite commonly used tool with only a few known weaknesses is sysbench. With about 5k stars on GitHub and a rather large and active community, it seemed suitable for my requirements. A big plus is the extensibility and the many built-in complex benchmark types, such as database benchmarks etc.
:::
:::globalParagraph
Luckily, someone at Severalnines has created a container image for sysbench already and made it public. So the benchmarking tool is ready.
:::
:::globalParagraph
To simplify this process and make it easily reproducible, I started a little test runner for sysbench. This tool schedules the benchmark in the cluster (with a node selector), waits for the job to complete, parses the result and compiles a file with the test results.
:::
:::globalParagraph
I made the code public here. It is based on Python and Poetry. If you have Poetry installed, you can simply run <code>poetry run benchmark</code> and it will benchmark the capacity of the CPU, memory, and file system.
:::</p>
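<p>:::globalParagraph
Stripped of the runner's convenience features, the scheduled benchmark boils down to a Kubernetes Job along these lines. This is a sketch: the node label is illustrative, and the image is the public Severalnines sysbench image mentioned above:
:::</p>
<pre><code class="language-yaml">apiVersion: batch/v1
kind: Job
metadata:
  name: sysbench-cpu
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      nodeSelector:
        benchmark: "true"   # illustrative label of the dedicated benchmark node
      containers:
        - name: sysbench
          image: severalnines/sysbench
          command: ["sysbench", "--test=cpu", "--time=60", "run"]
</code></pre>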
<p>:::GlobalPodcastSection{:videoId="tyvE9VlSWkE" :videoPosition="left" .mb-6}
::::GlobalPreTitle{:color="text-bs-green" .mb-3}
OUR KUBERNETES PODCAST
::::
::::GlobalTitle{:tag="h3" .mb-6}
TftC E1: Kubernetes development environments
::::
::::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert are talking in depth about the ins and outs of local Kubernetes development and also provide some real coding examples.
::::
::::globalParagraph{:font-size="lg" }
More editions of our podcast can be found here:
::::
::::GlobalButton{:url="/podcast/" :label="Show more" :color="green"}
::::
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
The Results
:::
:::globalParagraph
To start with the summary, it turned out that EKS provides higher performance across all metrics. The file IO performance in particular is, frankly, poor with GKE. We are talking about 10% less performance on the CPU, 9% less on memory and a huge gap in file operations for a default Kubernetes cluster. Let’s take a deeper look at the results.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
CPU Performance
:::
:::globalParagraph
The sysbench command for running the CPU test is: sysbench --test=cpu --time=60 run
:::
:::globalParagraph
This command executes the CPU benchmark for one minute.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
1) GKE vs EKS: CPU events per second
:::
:::globalParagraph
sysbench captures the executed loops (aka events) calculating all prime numbers up to a certain parameter in a given timeframe. It indicates how much CPU time was granted to the process and how fast the calculation was in general.
:::
<img src="/img/blog/performance-comparison-gke-vs-eks-1.jpg" alt="kubernetes">{.object-cover .w-max-full .mb-5}
:::globalParagraph
The result shows a striking difference: about 11% more events on EKS than on GKE. Since you do pay for the time of your Kubernetes node, getting more calculations done in that time is essential.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
2) GKE vs. EKS: CPU latency
:::
:::globalParagraph
Sysbench records the CPU latency for a requested event. It aggregates the results and returns the minimum, maximum, average and 95th percentile values.
:::
<img src="/img/blog/performance-comparison-gke-vs-eks-2.jpg" alt="kubernetes">{.object-cover .w-max-full .mb-5}</p>
<p>:::globalParagraph
As you can see, GKE’s containerd-based runtime is not significantly slower than EKS’s Docker-based runtime. Still, the difference at the 95th percentile is about 2%, which can be attributed to the rather short runtime of the benchmark and other factors.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Memory Performance
:::
:::globalParagraph
The sysbench command for running the memory (RAM) test is: sysbench --test=memory --memory-total-size=500G run
:::
:::globalParagraph
This command writes 500 Gigabytes to the main memory and captures the speed of writing.
:::
<img src="/img/blog/performance-comparison-gke-vs-eks-3.jpg" alt="kubernetes">{.object-cover .w-max-full .mb-5}</p>
<p>:::globalParagraph
Again, GKE is roughly 9% slower than the container runtime of EKS when it comes to writing large amounts of data into main memory. On an EKS cluster, your code can potentially write to RAM at 4.25 Gigabytes per second, while on GKE, your container only manages 3.87 Gigabytes per second. However, compared to my laptop, which runs at about 6.36 Gigabytes per second, neither result is overwhelming.
:::</p>
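The principle behind the memory test can be illustrated in a few lines of Python. This is a rough, hypothetical approximation: sysbench writes fixed-size blocks in native code, so the absolute numbers will not match:

```python
import time

def memory_write_throughput(total_mb: int = 256, block_kb: int = 1024) -> float:
    """Write `total_mb` of data into RAM block by block; return GB/s."""
    block = bytearray(block_kb * 1024)
    buf = bytearray(total_mb * 1024 * 1024)
    start = time.monotonic()
    # Copy the block across the whole buffer, like sysbench's sequential write test.
    for offset in range(0, len(buf), len(block)):
        buf[offset:offset + len(block)] = block
    elapsed = time.monotonic() - start
    return (total_mb / 1024) / elapsed

print(f"{memory_write_throughput():.2f} GB/s")
```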
<p>:::globalTitle{:size="md" .mb-5}
GKE vs EKS: File I/O Performance
:::
:::globalParagraph
The filesystem performance results paint a particularly dramatic picture. The sysbench command for running the file test is:
:::</p>
<p>:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>sysbench --test=fileio --file-num=5 --file-total-size=5G prepare</li>
<li>sysbench --test=fileio --file-total-size=5G --file-num=5 --file-test-mode=rndrw --time=100 --max-requests=0 run
:::</li>
</ul>
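A heavily simplified, hypothetical version of what the fileio test does (a prepare phase that creates test files, then random combined reads and writes) could look like this in Python:

```python
import os
import random
import shutil
import tempfile
import time

def fileio_benchmark(file_size=1024 * 1024, num_files=2, duration=1.0, block=4096):
    """Prepare test files, then run random 4 KiB reads/writes; return ops/sec."""
    tmpdir = tempfile.mkdtemp()
    try:
        # "prepare" phase: create the test files up front
        paths = []
        for i in range(num_files):
            path = os.path.join(tmpdir, f"test_file.{i}")
            with open(path, "wb") as f:
                f.write(os.urandom(file_size))
            paths.append(path)
        # "run" phase: random combined read/write (rndrw) until the deadline
        handles = [open(p, "r+b") for p in paths]
        ops = 0
        deadline = time.monotonic() + duration
        while time.monotonic() < deadline:
            f = random.choice(handles)
            f.seek(random.randrange(0, file_size - block))
            if random.random() < 0.5:
                f.read(block)
            else:
                f.write(os.urandom(block))
            ops += 1
        for f in handles:
            f.close()
        return ops / duration
    finally:
        shutil.rmtree(tmpdir)

print(f"{fileio_benchmark():.0f} ops/sec")
```

Note that, like the benchmark in this article, this hits the container's native filesystem, not an attached volume.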
<p>:::globalTitle{:size="md" .mb-5}
1) GKE vs EKS: File throughput
:::
:::globalParagraph
The file throughput benchmark simply writes artificial files to the filesystem and reads them back.
:::
<img src="/img/blog/performance-comparison-gke-vs-eks-4.jpg" alt="kubernetes">{.object-cover .w-max-full .mb-5}</p>
<p>:::globalParagraph
A container running on EKS performs about 95% better on read operations and 94% better on write operations. This metric becomes relevant when an application writes and reads files from temporary storage in the container.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
2) File input/output latency
:::
<img src="/img/blog/performance-comparison-gke-vs-eks-5.jpg" alt="kubernetes">{.object-cover .w-max-full .mb-5}</p>
<p>:::globalParagraph
The file latency is almost on par for both platforms. Personally, I wouldn’t put too much weight on the maximum latency (it can vary a lot per run), but rather look at the 95th percentile. On this metric, EKS outperforms GKE by an order of magnitude.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
3) File operations per second
:::
<img src="/img/blog/performance-comparison-gke-vs-eks-6.jpg" alt="kubernetes">{.object-cover .w-max-full .mb-5}</p>
<p>:::globalParagraph
The poor file operations per second on GKE are just a consequence of the results before. Please keep in mind that these filesystem performance evaluations are executed on the container’s native filesystem. There is no additional storage class attached to the pod running the benchmark.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Closing Remarks
:::</p>
<p>:::globalParagraph
I was a bit shocked by the results after comparing these two managed Kubernetes platforms and their container runtime performance. The price of a GKE node is about 22% lower (in these regions) than its counterpart at AWS EKS. That compensates at least a bit for the difference in performance, but having these facts at hand may influence the decision of where to place a containerized workload in the future.
:::
:::globalParagraph
When trying to make sense of the results, I came across Amazon’s Nitro System, a hardware technology that Amazon Web Services developed for its own cloud computing. Are these results proof of the promised performance gains? Does the Docker-based container runtime on AWS play a part in this?
:::
:::globalParagraph
At Blueshoe, we love to work with the Google Cloud Platform, as we generally consider it more user-friendly and clear compared with the AWS console. Performance considerations are indeed very important, but there are other essential criteria, too, when it comes to selecting a managed Kubernetes offering. Also, please take this benchmark with a grain of salt, as there are plenty of configurations one may choose that can have a huge impact on overall system performance.
:::
:::globalParagraph
Feel free to follow me on LinkedIn or join our Discord.
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Performance</category>
            <category>Operations</category>
            <enclosure url="https://blueshoe.io/img/blog/performance-comparison-gke-vs-eks.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[PHP or Python? Let's compare!]]></title>
            <link>https://blueshoe.io/blog/php-vs-python</link>
            <guid>https://blueshoe.io/blog/php-vs-python</guid>
            <pubDate>Mon, 30 Jan 2017 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Why does Blueshoe trust in Django and Python even though PHP is much more widespread? We’ve certainly got our reasons and don’t want to keep them from you.</p>
<p>In this article, we’ll compare Python and Django with PHP.</p>
<p><img src="/img/blog/php-vs-python.jpg" alt="PHP or Python">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="lg" .mb-5}
Why does Blueshoe trust in Django?
:::
:::globalParagraph
We use Django for many of our projects. During the first introduction with a potential new client, we’re often asked: ‘Why does Blueshoe use Django? Most of your competitors primarily trust in PHP systems like WordPress, Drupal, Magento, Shopware or WooCommerce.’
:::
:::globalParagraph
This is the moment where we usually have to explain that a comparison between PHP systems like those above and Django isn’t really fair.
:::
:::globalParagraph
PHP is a programming language, Django is a web framework, WordPress and Drupal are content management systems, and Magento and WooCommerce are e-commerce systems.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
PHP is a burden
:::
:::globalParagraph
PHP is a popular programming language used in a great many projects. Having said that, the modern world of web applications is slowly but steadily moving away from PHP:
:::
:::globalParagraph
Search queries for PHP:
:::</p>
<p><img src="/img/blog/php-blog-graph.jpg" alt="php_blog_graph">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
Search queries for Zend, the widespread PHP-based framework:
:::</p>
<p><img src="/img/blog/zend.jpg" alt="zend">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
In his article ‘The PHP Singularity’[1], Jeff Atwood describes PHP as follows: ‘PHP isn't so much a language as a random collection of arbitrary stuff, a virtual explosion at the keyword and function factory.’
:::
:::globalParagraph
Reading Jeff Atwood’s article, you’ll quickly realise that PHP is a badly designed programming language. But then, why are there so many projects based on PHP? There could be a number of reasons:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>In its early days (PHP 1 was released in 1995[2]), PHP offered the possibility to implement server-side logic.</li>
<li>PHP is cheap. Almost every web hosting provider supports PHP, and deployment couldn’t be easier: simply upload the files to the server and it’s ready to go.</li>
<li>Due to the early growing demand, a large job market naturally developed around PHP. Consequently, a lot of projects are implemented using PHP.
:::
:::globalParagraph
Systems built on this basis appear cheap and easy to use at first glance. However, things can get out of hand just as easily, with security gaps, poor maintainability and bad code quality[3]. Poor maintainability is notoriously expensive: either developers need more time to familiarise themselves with the code, or they require such specific knowledge of the system that they charge more and more money as time goes by. The result is often that the client becomes dependent on the contractor – a vendor lock-in.
:::</li>
</ul>
<p>:::globalTitle{:size="lg" .mb-5}
PYTHON as foundation
:::
:::globalParagraph
Clean structures, a solid code quality and easy maintainability are second nature to Python. Unlike other programming languages, Python bases its structure on indentation. This not only forces the developer to use a consistent structure when indenting and structuring their code but it also makes it much easier for other developers to familiarise themselves with a new project.
:::
:::globalParagraph
A quick example:
:::
:::globalTitle{:size="md" .mb-5}
PHP
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-php">for ($i = 1; $i &#x3C; 10; $i++) {
  if ($i % 2 == 0) {
    echo $i;
  }
}
</code></pre>
<p>:::
:::globalTitle{:size="md" .mb-5}
Python
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-py">for i in range(2, 10, 2):
    print(i)
</code></pre>
<p>:::
:::globalParagraph
Obviously, this example isn’t a comprehensive comparison of both languages. It’s simply meant to give you a rough idea of both languages’ readability.
:::
:::globalParagraph
Furthermore, PEP8[4] provides clear guidelines on how to format Python code. At Blueshoe, our development environments are configured in such a way that they check for conformity with PEP8.
:::
:::globalParagraph
Alright, so we’ve shown that Python ‘trains’ its developers to use a fair bit of structuring. But aren’t professional PHP developers able to do the same?
:::
:::globalParagraph
Of course, there are PHP developers who manage to structure their code in a clear and readable manner. In reality, though, we still see so-called spaghetti code far too often in PHP projects. In the long term, therefore, Python shouldn’t be more expensive than PHP, as the maintenance costs can generally be kept low.
:::</p>
<p>::GlobalPodcastSection{:videoId="KapP6ouEL3E" :videoPosition="left" :bg="bg-bs-gray" .mb-5}
:::GlobalPreTitle{:color="text-bs-green" .mb-3}
OUR KUBERNETES PODCAST
:::
:::GlobalTitle{:tag="h3" .mb-6}
Tools for the Craft: Navigating the Kubernetes ecosystem
:::
:::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert are talking about the various options developers have for running remote Kubernetes development environments.
:::
:::globalParagraph{:font-size="lg" .mb-4}
More editions of our podcast can be found here:
:::
::::GlobalButton{:url="/podcast/" :label="More podcasts!" :color="green"}
::::
::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Django - Our Web Applications Framework
:::
:::globalParagraph
So why does Blueshoe trust in Django and what exactly is Django, anyway?
:::
:::globalParagraph
Django is a framework for the development of web applications. This means Django specifies a certain workflow – a structure in the development of web applications. If you comply with this structure, it’s subsequently easier for new developers to familiarise themselves with the code of a new project. This results not only in the project being structured better, but also in time saved, which in turn is reflected in the costs.
:::
:::globalParagraph
Django automatically comes with a number of security mechanisms, which are enabled by default. For example, cross-site request forgery (CSRF) is prevented by the use of CSRF tokens. Cross-site scripting is also automatically prevented by the Django template engine; a developer has to make the active choice to do without this protection and deactivate it manually. Other attack vectors, which Django limits as much as possible, include SQL injection. In 2013, the Open Web Application Security Project (OWASP) categorised SQL injection as the number one weakness and potentially the biggest attack vector[5].
:::
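The SQL injection protection boils down to parameterized queries, which Django’s ORM issues for you. A minimal illustration of the difference, using Python’s sqlite3 module rather than Django’s actual internals:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

malicious = "x' OR '1'='1"

# Vulnerable: user input concatenated into SQL -- the OR clause matches every row.
rows = conn.execute(
    f"SELECT name FROM users WHERE name = '{malicious}'"
).fetchall()
print(len(rows))  # 2 -- the injection succeeded

# Parameterized: input is bound as data and never parsed as SQL.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
print(len(rows))  # 0 -- no user is literally named "x' OR '1'='1"
```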
:::globalParagraph
Django describes itself as ‘the web framework for perfectionists with deadlines’[6]. Django’s development team decided on fundamental philosophies for the development of the framework[7]. Naturally, these aren’t total reinventions in the world of tech – they also apply to other projects and software systems. It’s worth noting, however, that they are adhered to much more strictly in Django projects. This is generally perceived very positively and at Blueshoe, too, we feel very passionately about this. The results are the dream of every (software) client:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>Reusable components = less time, lower costs</li>
<li>Easy system extensibility = project as the foundation of further development</li>
<li>Simplicity, readability = no dependency on contractor</li>
<li>Up-to-date security mechanisms (for web applications)
:::
:::globalParagraph
Okay, so Django is secure, it’s well-structured, has very good maintainability and allows for many functionalities to be implemented within a short timeframe. But who uses Django? Are there examples of other well-known websites that count on Python/Django?
:::</li>
</ul>
<p>:::globalTitle{:size="md" .mb-5}
Who else puts their faith in Python/Django?
:::</p>
<p><img src="/img/blog/platforms.jpg" alt="platforms">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
Dropbox:
:::
:::globalParagraph
Many instances of the Dropbox infrastructure run Python code. Dropbox’s main goal was the fast development of their features – Python made this possible.
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
YouTube[8]:
:::
:::globalParagraph
In 2009, YouTube smashed the limit of one billion views per day. YouTube uses an application server written in Python. To cope with varying levels of usage, YouTube can simply add or remove machines. Even during intense usage of the YouTube infrastructure, Python usually isn’t the bottleneck. What’s more, YouTube primarily uses Python because it enables fast and flexible development.
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
Pinterest[9]:
:::
:::globalParagraph
At peak times, Pinterest pays $52 per hour for its servers. Most of the traffic occurs in the afternoon and evening. At night, the overall number of instances can be reduced to 40%, which results in costs as low as $15 per hour. For the delivery of content, Pinterest mostly uses Django and Tornado (a Python framework and asynchronous networking library)[10].
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
Spotify[11]:
:::
:::globalParagraph
Spotify’s back end consists of many different services. 80% of these services are written in Python. One of the main reasons is the fast development Python offers. They also use Python to work through asynchronous tasks as well as (about 90% of) the MapReduce tasks in Spotify’s Hadoop cluster.
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
Instagram[12]:
:::
:::globalParagraph
With several million requests per second, Instagram successfully shows how well-suited Django is for building web applications. The entire architecture is, of course, complex, but the core of the Instagram server is Django as the application server.
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
Disqus[13]:
:::
:::globalParagraph
Disqus is used as a comment plug-in on numerous websites. With 45,000 connections per second (in 2013), Disqus is a web application that scales to an extreme degree. Django is used almost exclusively to handle all these connections at Disqus. There are, of course, other technologies used to ‘make life easier’ for Django (caches, for example). But here, too, a solid foundation like Django is needed to develop stable, secure and scalable web applications.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
BLUESHOE &#x3C;3 PYTHON/DJANGO
:::
:::globalParagraph
Python and Django have active communities, they are constantly being developed further and enable us to write high-quality software in no time. That’s why we love Python and Django. Shorter implementation periods following change requests, higher security standards and lower costs when it comes to maintenance – that’s what our clients love too. At Blueshoe, it’s important to us that our clients and partners understand why we have chosen to use this technology as the foundation for many of our projects.
:::
:::globalParagraph
To cut a long story short: Python/Django are a terrific basis for various different projects. We trust in them and our experience speaks for itself. We encounter new challenges on a daily basis and have never been let down by Django. It simply matches the core of our mentality – after all, we are perfectionists with deadlines.
:::</p>
<p>:::globalParagraph
[1]<a href="http://blog.codinghorror.com/the-php-singularity/">http://blog.codinghorror.com/the-php-singularity/</a>{.bs-link-blue :target="_blank"}</p>
<p>[2]<a href="http://php.net/manual/de/history.php.php">http://php.net/manual/de/history.php.php</a>{.bs-link-blue :target="_blank"}</p>
<p>[3]<a href="https://eev.ee/blog/2012/04/09/php-a-fractal-of-bad-design/#stance">https://eev.ee/blog/2012/04/09/php-a-fractal-of-bad-design/#stance</a>{.bs-link-blue :target="_blank"}</p>
<p>[4]<a href="https://www.python.org/dev/peps/pep-0008">https://www.python.org/dev/peps/pep-0008</a>{.bs-link-blue :target="_blank"}</p>
<p>[5]<a href="https://www.owasp.org/index.php/Top_10_2013-Top_10">https://www.owasp.org/index.php/Top_10_2013-Top_10</a>{.bs-link-blue :target="_blank"}</p>
<p>[6]<a href="https://www.djangoproject.com/">https://www.djangoproject.com/</a>{.bs-link-blue :target="_blank"}</p>
<p>[7]<a href="https://docs.djangoproject.com/en/stable/misc/design-philosophies/">https://docs.djangoproject.com/en/stable/misc/design-philosophies/</a>{.bs-link-blue :target="_blank"}</p>
<p>[8]<a href="http://highscalability.com/youtube-architecture">http://highscalability.com/youtube-architecture</a>{.bs-link-blue :target="_blank"}</p>
<p>[9]<a href="http://highscalability.com/blog/2012/5/21/pinterest-architecture-update-18-million-visitors-10x-growth.html">http://highscalability.com/blog/2012/5/21/pinterest-architecture-update-18-million-visitors-10x-growth.html</a>{.bs-link-blue :target="_blank"}</p>
<p>[10]<a href="http://www.tornadoweb.org/en/stable/">http://www.tornadoweb.org/en/stable/</a>{.bs-link-blue :target="_blank"}</p>
<p>[11]<a href="https://labs.spotify.com/2013/03/20/how-we-use-python-at-spotify/">https://labs.spotify.com/2013/03/20/how-we-use-python-at-spotify/</a>{.bs-link-blue :target="_blank"}</p>
<p>[12]<a href="http://instagram-engineering.tumblr.com/post/13649370142/what-powers-instagram-hundreds-of-instances">http://instagram-engineering.tumblr.com/post/13649370142/what-powers-instagram-hundreds-of-instances</a>{.bs-link-blue :target="_blank"}</p>
<p>[13]<a href="http://blog.disqus.com/post/62187806135/scaling-django-to-8-billion-page-views">http://blog.disqus.com/post/62187806135/scaling-django-to-8-billion-page-views</a>{.bs-link-blue :target="_blank"}
:::</p>]]></content:encoded>
            <category>Django</category>
            <category>Python</category>
            <category>Development</category>
            <enclosure url="https://blueshoe.io/img/blog/php-vs-python.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Time estimates at Blueshoe: How we plan your projects realistically]]></title>
            <link>https://blueshoe.io/blog/project-management-time-estimation-at-blueshoe</link>
            <guid>https://blueshoe.io/blog/project-management-time-estimation-at-blueshoe</guid>
            <pubDate>Sat, 09 Aug 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Time estimates are an absolute must for every project. They help us to realistically assess the effort required for individual tasks or the entire project – and ensure that we can plan resources and deadlines sensibly. Only with a good time estimate can expectations be clearly communicated to customers and the team, and unpleasant surprises avoided.</p>
<p><img src="/img/blogs/agile-backlog.svg" alt="Blueshoe time estimate">{.object-cover .max-w-full .mb-5}</p>
<p>Of course, time estimates are never 100% accurate. Projects often bring unexpected challenges: technical hurdles, missing information, or changing requirements. That's why we always plan for a buffer. This buffer gives us the necessary leeway to react flexibly to such situations without disrupting the entire schedule.</p>
<p>It is important that the buffer is neither too small nor too generous. It is based on experience and the complexity of the task at hand. And just as important: we communicate transparently how the time estimate is calculated – including the buffer. This way, everyone involved knows where they stand at all times and can plan better.</p>
<p>At Blueshoe, we therefore rely on a clear, structured workflow that allows us to record time estimates directly in the ticket system and make them traceable. In this article, we provide an insight into exactly how we do this and why it works so well for us.</p>
<p><img src="/img/blogs/blueshoe-81.webp" alt="Blueshoe">{.object-cover .max-w-full .mb-5}</p>
<h2>Our time estimation process in detail</h2>
<p>When a customer approaches us with a new requirement or problem that requires a time estimate, we initiate a clearly structured process. Our project manager creates a separate time estimation ticket as a sub-ticket to the main ticket and assigns it to a developer. The team then clarifies internally which tasks need to be done, which prerequisites must be met, and whether there are any dependencies. If questions arise, we naturally involve the customer directly - this way, we avoid misunderstandings from the outset.</p>
<p>The developer who receives the ticket is responsible for providing a realistic and careful estimate. If necessary, they seek support from colleagues to ensure that the estimate is as accurate as possible. To be on the safe side, we have every time estimate checked by a second developer.</p>
<p><img src="/img/blog/timeestimate.webp" alt="estimate ticket">{.object-cover .max-w-full .mb-5}</p>
<p>At the end, the customer receives a transparent overview from us with the final time estimate and the breakdown of the individual tasks. This allows them to decide in a relaxed manner whether to approve the estimate or whether any outstanding questions need to be clarified.</p>
<h3>Important framework conditions for time estimates</h3>
<ul>
<li><strong>Maximum time expenditure</strong>: We keep time estimates as lean as possible: ideally, they should not exceed 1–2 hours of work.</li>
<li><strong>Analysis budget</strong>: If it becomes apparent that a task is more complex and the time estimate will take longer, we discuss this with the customer at an early stage and request an analysis budget.</li>
<li><strong>Second opinion</strong>: We also rely on the perspective of two developers: each time estimate is checked and confirmed by a second developer. This ensures that our estimates are realistic and of high quality.</li>
</ul>
<h2>Structure of a time estimation ticket</h2>
<p>The time estimation ticket follows a clear structure:</p>
<ul>
<li><strong>Task packages and steps</strong>
A list of tasks 1-X, with optional notes and detailed descriptions.</li>
<li><strong>Assumptions and prerequisites</strong><br>
What dependencies exist? Are there any prerequisites that must be met by the customer or other teams?</li>
<li><strong>Technical limitations &#x26; risks</strong><br>
Possible technical challenges or risks that may affect the effort required.</li>
<li><strong>Testing effort</strong><br>
Estimation of the necessary testing effort.</li>
<li><strong>Buffer</strong><br>
The project manager adds a time buffer for unforeseen events.</li>
<li><strong>Confidence level</strong><br>
If the estimate is less than 100% certain, the unknowns are clearly listed.</li>
</ul>
<h2>Conclusion</h2>
<p>Our clearly structured time estimation workflow ensures that everyone involved maintains an overview at all times. Transparency, realistic planning, and open communication between customers, project management, and development are a matter of course for us. This enables us to assess and implement new requirements quickly and efficiently—to the benefit of everyone.</p>
<p>If you would like to know more about how we work or need support with your project, simply get in touch with us! We look forward to hearing from you.</p>]]></content:encoded>
            <category>Project Management</category>
            <category>Team Blueshoe</category>
            <enclosure url="https://blueshoe.io/img/blogs/agile-backlog.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Agile Product Backlog Instead of Requirements Catalog]]></title>
            <link>https://blueshoe.io/blog/project-start-with-agile-backlog</link>
            <guid>https://blueshoe.io/blog/project-start-with-agile-backlog</guid>
            <pubDate>Sun, 15 Dec 2024 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>In agile project management, the product backlog replaces the rigid requirements catalog. Requirements are formulated as flexible user stories, purposefully prioritized, and iteratively further developed. This keeps your project dynamic, adaptable, and always close to user feedback.</p>
<p><img src="/img/blogs/agile-backlog.svg" alt="Agile product backlog">{.object-cover .max-w-full .mb-5}</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Agile Project Kickoff: Goodbye Rigidity, Hello Flexibility
:::
:::GlobalParagraph
Starting a software project is always exciting. Ideas are flowing, visions are taking shape—and this is often the point where success is determined. Many teams cling too long to outdated methods, like the classic requirements catalog, which may suggest structure but often slows down agility.
:::</p>
<p>:::GlobalParagraph
At BLUESHOE, we've learned that flexibility from the very start is the key to success. That’s why we rely on the agile product backlog, which replaces the rigid requirements catalog. Curious how it works? Keep reading—and discover how our RAPID system can make your projects even more efficient.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
The Requirements Catalog: A Relic from the Waterfall Era
:::</p>
<p>:::GlobalParagraph
The classic requirements catalog lists every function down to the smallest detail. Sounds sensible at first, doesn't it? After all, it provides clear guidelines and a sense of planning security. But here’s where the problems start:
:::</p>
<p>:::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li><strong>Inflexible</strong>: Changes are complex, expensive, and often not accounted for.</li>
<li><strong>No Dynamics</strong>: New insights are difficult to integrate.</li>
<li><strong>Risk of Building the Wrong Thing</strong>: What’s planned at the start rarely matches the project’s goals by the end.
:::</li>
</ul>
<p>:::GlobalParagraph
In short: The requirements catalog is designed for a world that no longer exists.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
The Product Backlog: Agile, Flexible, Dynamic
:::</p>
<p>:::GlobalParagraph
The product backlog is the heart of agile projects. It’s not a static document but a dynamic list of tasks that is continuously updated. This keeps your project flexible and responsive to change.
:::</p>
<p><img src="/img/blogs/blueshoe-81.webp" alt="Blueshoe">{.object-cover .max-w-full .mb-5}</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
But how are requirements defined in the backlog?
::</p>
<p>:::GlobalParagraph
This is where User Stories come into play. They are short, concise descriptions of requirements from the end user's perspective. A typical User Story follows this simple structure:
:::</p>
<p>:::GlobalParagraph
Structure: "As a <em>user role</em>, I want <em>goal/wish</em>, so that <em>benefit</em>."
:::</p>
<p>:::GlobalParagraph
Example: "As an Online Shop Customer, I want to track my orders so that I always know when my package will arrive."
:::</p>
<p>:::GlobalParagraph
<strong>The benefits of User Stories:</strong>
:::</p>
<p>:::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li><strong>User-Centric</strong>: They ensure every requirement delivers real value.</li>
<li><strong>Flexible</strong>: Easy to adjust when requirements change.</li>
<li><strong>Clear</strong>: Everyone in the team - from developer to project manager - immediately understands the purpose.
:::</li>
</ul>
<p>:::GlobalParagraph
Through regular <strong>backlog refinement</strong>, we continuously improve and prioritize these User Stories to keep your project focused on what truly matters.
:::</p>
<p>:::GlobalParagraph
<strong>The key benefits at a glance:</strong>
:::</p>
<p>:::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li><strong>Flexibility</strong>: Changes are possible at any time.</li>
<li><strong>Customer Focus</strong>: Emphasis on delivering real value.</li>
<li><strong>Quick Results</strong>: Iterative development with visible progress.</li>
<li><strong>Transparency</strong>: Everyone involved stays informed.
:::</li>
</ul>
<p>:::GlobalParagraph{.mb-5}
Sounds exciting? Then our RAPID System will inspire you even more - it makes your processes up to 35% faster.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
How to successfully transition from a requirements catalog to a product backlog
:::</p>
<p>:::GlobalParagraph
Switching to agile ways of working doesn’t have to be complicated. At BLUESHOE, we rely on a structured yet flexible approach:
:::</p>
<p>:::GlobalBlock{.ol-decimal .mb-4}</p>
<ol>
<li><strong>Transform Requirements into User Stories</strong>: Instead of rigid specifications, we define User Stories that focus on the needs of the users. This creates clarity and sharpens the project’s focus.</li>
<li><strong>Set Priorities</strong>: Not everything is equally important. In workshops, we work together to prioritize what delivers the most value—and identify what can wait.</li>
<li><strong>Regular Backlog Refinement</strong>: The Backlog is not a "set-and-forget" document. We continuously adapt it to new insights to stay on track throughout the project.</li>
<li><strong>Create Transparency</strong>: Thanks to digital tools and open communication, you'll always have insight into the project status. No surprises, no black boxes.
:::</li>
</ol>
<p>:::GlobalTitle{:size="lg" .mb-5}
Why the Agile Backlog Boosts Project Success
:::</p>
<p>:::GlobalParagraph
An agile backlog makes all the difference:
:::</p>
<p>:::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li><strong>Adaptable</strong>: Changes? No problem—the backlog evolves with them.</li>
<li><strong>Focus on What Matters</strong>: Clear priorities keep the team on track.</li>
<li><strong>Quick Results</strong>: Iterative steps deliver visible value early on.</li>
<li><strong>Better Alignment</strong>: A shared understanding through clear requirements.
:::</li>
</ul>
<p>:::GlobalTitle{:size="lg" .mb-5}
Conclusion: Agility from the Start
:::</p>
<p>:::GlobalParagraph
Starting agile means staying flexible. With BLUESHOE, you’ll make the leap from rigid planning to dynamic execution. We help you structure requirements effectively and keep your backlog dynamic. If you’re ready to start your project the agile way, let’s get started together — <a href="/contact/">contact us</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}, and we’ll take the first step with you!
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5 .mt-8}
Frequently Asked Questions
:::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}</p>
<ol>
<li>What is the difference between a requirements catalog and a product backlog?
::</li>
</ol>
<p>:::GlobalParagraph
The requirements catalog is a static list of specifications, while the product backlog is flexible and continuously updated based on new insights and changing priorities.
:::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
2. Why is agile project management more efficient?
::</p>
<p>:::GlobalParagraph
Because it treats change as an opportunity. Instead of rigid plans, it relies on iterative improvements, continuous feedback, and a focus on delivering real value quickly.
:::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
3. Does every project need a product backlog?
::</p>
<p>:::GlobalParagraph
For agile projects: Yes! The product backlog is the central tool for managing tasks, setting priorities, and maintaining transparency throughout the project.
:::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
4. How often should the backlog be updated?
::</p>
<p>:::GlobalParagraph
Regularly! Ideally after each sprint or whenever new insights emerge. This ensures your project stays on track and adapts to changes effectively.
:::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
5. How does the product backlog help secure project success?
::</p>
<p>:::GlobalParagraph{.mb-4}
The product backlog helps by setting clear priorities, enabling flexible adjustments, and ensuring continuous transparency. This keeps teams focused, responsive, and aligned with project goals.
:::</p>]]></content:encoded>
            <category>Project Management</category>
            <category>Digitalization</category>
            <enclosure url="https://blueshoe.io/img/blogs/agile-backlog.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Analysis of Import Times for Python Apps]]></title>
            <link>https://blueshoe.io/blog/python-django-fast-startup-time</link>
            <guid>https://blueshoe.io/blog/python-django-fast-startup-time</guid>
            <pubDate>Tue, 28 Jan 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>The fast start of Python applications in cloud environments can be essential for dynamic scaling, such as horizontal pod autoscaling. In this blog post, we share our method to analyze and optimize the import times of Python apps.</p>
<p><img src="/img/blogs/python-import.webp" alt="Analysis of Python App Import Time">{.object-cover .max-w-full .mb-5}</p>
<p>::GlobalTitle{:size="lg" .mb-4}
Fast Start of Python Apps
::</p>
<p>::GlobalParagraph
Why do we need applications that start quickly? Blueshoe is primarily active in cloud development. Our applications carry different loads at different times. Peak loads must be handled quickly, services must scale. If a service takes 10, 20, or even 30 seconds to start, this can be fatal for peak loads, as scaling happens too slowly. This applies to both classic <a href="/blog/kubernetes-autoscaling-keda/">Kubernetes Workloads</a>{:target="_blank" .text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} and the use of <a href="/blog/function-as-a-service-faas-vs-kubernetes/">FaaS (Function-as-a-Service)</a>{:target="_blank" .text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.
::</p>
<p>::GlobalParagraph
Another important aspect is day-to-day development work. Typically, services start more slowly in debug setups or development environments than in production. If a service has to be restarted repeatedly, the waiting times interrupt the workflow and break the developer's focus.
::</p>
<p>::GlobalParagraph
Long startup times can also be a symptom of other underlying problems - for example, <em>Memory Leaks</em>.
::</p>
<p>::GlobalButton{:url="/technologies/python-django-agency/" :label="Learn more about our Django development services" :color="blue" .mb-6}
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Tools for Analyzing Python App Import Time
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Python's Built-in Tool: -X importtime
::</p>
<p>::GlobalParagraph
In fact, Python itself ships with an important tool for analyzing import times: the <code>-X importtime</code> flag. Starting the Python interpreter for the application with this flag generates a report on the import times of the imported modules:
::</p>
<p>::BlogCode{.mb-4}</p>
<pre><code class="language-text">...
import time:      1110 |       1569 | django.contrib.messages.storage.base
import time:       705 |        705 | django.contrib.admin.decorators
import time:      2103 |       2103 |       django.contrib.admin.utils
import time:      3080 |       5182 |     django.contrib.admin.helpers
import time:      3219 |       3219 |     django.contrib.admin.widgets
import time:      1862 |       1862 |     django.contrib.admin.checks
import time:       649 |        649 |     django.contrib.admin.exceptions
import time:       716 |        716 |       django.contrib.admin.templatetags
import time:      1328 |       2044 |     django.contrib.admin.templatetags.admin_urls
import time:       843 |        843 |     django.views.decorators.csrf
import time:      3891 |      17687 |   django.contrib.admin.options
import time:      1793 |      19479 | django.contrib.admin.filters
...

</code></pre>
<p>::</p>
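<p>::GlobalParagraph
Such a report can also be captured and ranked programmatically. The following sketch (the imported module and the top-ten cutoff are arbitrary choices) spawns a child interpreter with the flag and sorts the report lines by cumulative time:
::</p>
<p>::BlogCode{.mb-4}</p>
<pre><code class="language-python">import subprocess
import sys

# Run a child interpreter with -X importtime; the report is written to stderr.
result = subprocess.run(
    [sys.executable, "-X", "importtime", "-c", "import asyncio"],
    capture_output=True, text=True,
)

# Each data line looks like: "import time:  self_us | cumulative_us | module"
def cumulative_us(line):
    field = line.split("|")[1].strip()
    return int(field) if field.isdigit() else -1  # the header line is not numeric

lines = [l for l in result.stderr.splitlines() if l.startswith("import time:")]
lines.sort(key=cumulative_us, reverse=True)
print("\n".join(lines[:10]))  # the ten slowest imports, cumulative time first
</code></pre>
<p>::</p>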
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Visual Analysis
::</p>
<p>::GlobalParagraph
So far so good! For a better analysis, this output can be fed into <a href="https://kmichel.github.io/python-importtime-graph/">kmichel's Importtime Graph</a>{:target="_blank" .text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}:
::</p>
<p><img src="/img/blogs/python-import-chart.png" alt="Analysis of Import Time">{.object-cover .max-w-full .mb-5 .mx-auto}</p>
<p>::GlobalParagraph
This yields a visual representation of the import duration of the different modules. Each rectangle represents a module and may contain nested rectangles; the larger its area, the longer the import time.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Further Tools for Import Time Analysis
::</p>
<p>::GlobalParagraph
Besides the <code>-X importtime</code> flag, there are other useful tools that can help analyze and optimize startup time:
::</p>
<p>::GlobalBlock{.ol-decimal .my-4}</p>
<ol>
<li><a href="https://github.com/benfred/py-spy"><strong>Py-Spy:</strong></a>{:target="_blank" .text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} A high-performance profiler that works without instrumentation. Py-Spy can be used to precisely understand how much time is spent during the initialization phase.</li>
<li><a href="https://github.com/plasma-umass/scalene"><strong>Scalene:</strong></a>{:target="_blank" .text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} A highly precise profiler that analyzes CPU and memory usage. It is excellently suited to identify expensive initializations.</li>
<li><a href="https://docs.python.org/3/library/profile.html"><strong>cProfile:</strong></a>{:target="_blank" .text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} A Python-integrated profiler that provides insights not only during runtime, but also during startup time. Can be combined with <a href="https://docs.python.org/3/library/profile.html">pstats</a>{:target="_blank" .text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} or tools like <a href="https://jiffyclub.github.io/snakeviz/">snakeviz</a>{:target="_blank" .text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} to visually analyze the output.</li>
<li><a href="https://modulegraph.readthedocs.io/en/latest/"><strong>Modulegraph:</strong></a>{:target="_blank" .text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} This tool can visualize dependencies between modules to more easily identify unnecessary or duplicate imports.
::</li>
</ol>
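<p>::GlobalParagraph
As a minimal sketch of the cProfile route (the profiled import is an arbitrary example), the profiler can be wrapped around an import and the most expensive calls printed with pstats:
::</p>
<p>::BlogCode{.mb-4}</p>
<pre><code class="language-python">import cProfile
import io
import pstats

profiler = cProfile.Profile()
profiler.enable()
import asyncio  # the import whose cost we want to break down
profiler.disable()

# Render the five most expensive calls by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
</code></pre>
<p>::</p>
<p>::GlobalParagraph
Note that this only shows meaningful numbers in a fresh interpreter; an already-imported module is served from the module cache and costs next to nothing.
::</p>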
<p>::GlobalCallAnExpert{:bg="bg-bs-blue"}
#buttonBlockTitle
We can help you optimize your Python applications.
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Tips for Optimizing Import Times
::</p>
<p>::GlobalParagraph
Once it is clear which Python modules need closer examination, the question arises: What to do? How can the import times be reduced? At Blueshoe, we proceed as follows:
::</p>
<p>::GlobalBlock{.ol-decimal .my-4}</p>
<ol>
<li><strong>Dead-Code:</strong> Is the code still being used? If not - remove it. Unused imports or modules unnecessarily contribute to startup time.</li>
<li><strong>Check External Dependencies:</strong> Are there external dependencies? If so, check whether there is a newer version that is optimized. Especially with extensive libraries like Django or NumPy, performance improvements are regularly integrated.</li>
<li><strong>Review Initialization:</strong> Does a module execute code during initialization that isn't absolutely necessary? This could involve, for example, a complex configuration check or initial database queries. Such processes should ideally occur later during runtime.</li>
<li><strong>Improve Import Structure:</strong> Group imports logically and check whether some modules can be combined.
::</li>
</ol>
<p>::GlobalParagraph
Combining modules can reduce redundant dependencies. A good example is utility modules, which are often only partially used.
::</p>
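<p>::GlobalParagraph
The third point, deferring initialization work, often comes down to moving an expensive import from module level into the function that needs it. A small sketch, where sqlite3 stands in for any heavyweight dependency:
::</p>
<p>::BlogCode{.mb-4}</p>
<pre><code class="language-python"># Module-level imports are paid at startup, even if the feature is never used.
# A function-local ("lazy") import defers that cost to the first call:

def export_report(rows):
    import sqlite3  # loaded on first call only; cached in sys.modules afterwards
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE report (value TEXT)")
    conn.executemany("INSERT INTO report VALUES (?)", [(r,) for r in rows])
    return conn.execute("SELECT COUNT(*) FROM report").fetchone()[0]

print(export_report(["a", "b", "c"]))  # -> 3
</code></pre>
<p>::</p>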
<p>::GlobalParagraph
Is your Python app starting too slowly? Reduce the startup time with Blueshoe: <a href="/technologies/python-django-agency/">Learn more about our optimization services!</a>{:target="_blank" .text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Conclusion
::</p>
<p>::GlobalParagraph
A short startup time of Python applications is essential – whether for frictionless scaling in cloud environments or for a pleasant developer experience. With simple means like <code>-X importtime</code>, further tools like <em>Py-Spy</em> or <em>Scalene</em> and a structured approach, import times can be analyzed and optimized. This ensures that applications start faster, work more efficiently, and are generally more stable.
::</p>
<p>::GlobalParagraph
How do you optimize the startup time of your Python apps? Feel free to share your experiences and tips in the comments!
::</p>
<p>::GlobalTitle{:size="lg" .mb-5 .mt-8}
Frequent Questions
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
1. How can I analyze the startup time of my Python App?
::</p>
<p>::GlobalParagraph
Use Python's <code>-X importtime</code> flag to measure the import times of your modules. With tools like kmichel's Importtime Graph or Py-Spy, you can visualize this data and identify bottlenecks.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
2. Which tools help to optimize the Django App startup time?
::</p>
<p>::GlobalParagraph
Some of the best tools are:
::</p>
<p>::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>Py-Spy for quick analyses without instrumentation</li>
<li>Scalene for detailed CPU and memory profiles</li>
<li>cProfile, which is integrated in Python, for comprehensive runtime analyses</li>
<li>Modulegraph for visualizing dependencies
::</li>
</ul>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
3. What are the most common causes for long startup times in Python Apps?
::</p>
<p>::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>Unnecessary or duplicate imports</li>
<li>Complex initialization processes (e.g. database queries during startup)</li>
<li>Unused modules or dead code</li>
<li>Poor organization of import structures
::</li>
</ul>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
4. How can I reduce the startup time of my Django app in Kubernetes containers?
::</p>
<p>::GlobalParagraph
Optimize the import structure to minimize load times, use lazy initialization to start expensive processes only when needed, and update external libraries to leaner, optimized versions.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
5. Why is a short startup time for Python apps in the cloud important?
::</p>
<p>::GlobalParagraph
Short startup times enable efficient scaling during peak loads, especially in Kubernetes setups. Long startup times can delay scaling and negatively impact the performance of your applications.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
6. What role does startup time play in horizontal pod autoscaling in Kubernetes?
::</p>
<p>::GlobalParagraph
In horizontal Pod Autoscaling (HPA) in Kubernetes, additional pods are automatically started based on the load. If the Python app startup time is too long, the new pods cannot handle the rising load in time, which leads to bottlenecks and potential failures. Optimized startup times ensure smooth scaling and better availability.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
7. Can optimizing the Python startup time reduce development time?
::</p>
<p>::GlobalParagraph
Definitely! In development and debug environments, service restarts are often necessary. Short Python app startup times mean less waiting time for developers, which significantly improves the workflow and leads to a more productive development environment.
::</p>]]></content:encoded>
            <category>Python</category>
            <category>Django</category>
            <category>Development</category>
            <category>Performance</category>
            <category>Operations</category>
            <enclosure url="https://blueshoe.io/img/blogs/python-import.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Creating Python Executables for Windows, MacOS and Linux]]></title>
            <link>https://blueshoe.io/blog/python-executables-linux-macos-windows</link>
            <guid>https://blueshoe.io/blog/python-executables-linux-macos-windows</guid>
            <pubDate>Tue, 05 Jul 2022 00:00:00 GMT</pubDate>
<content:encoded><![CDATA[<p>Before going into more detail about how to develop an executable for a Python project, here is some background information on our CLI tool Gefyra, a tool for local application development directly with Kubernetes. It is an open-source Python project that we wrap into convenient executables in this blog post.</p>
<p><img src="/img/blog/python-executables-linux-macos-windows.jpg" alt="Creating Python Executables for Windows, MacOS and Linux">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
The aim was to develop an executable with (almost) the startup performance of kubectl, the executable used to control a Kubernetes cluster. That means fast startup times and, ideally, a single statically linked file are crucial for easy distribution. In addition, executables for Windows, MacOS and Linux shall be provided. For these requirements one would usually opt for Go. However, our prototype was written in Python and had evolved over time, so a solution for Python had to be found.
:::
:::globalParagraph
Approach:
:::</p>
<p>:::GlobalBlock{.ol-decimal .mb-5}</p>
<ol>
<li>PyInstaller</li>
<li>Nuitka</li>
<li>PyOxidizer
:::</li>
</ol>
<p>:::globalTitle{:size="lg" .mb-5}
PyInstaller
:::
:::globalParagraph
PyInstaller was quite easy to set up. However, the resulting executable was flagged on VirusTotal because of PyInstaller's bootloader: its code signature had also been found in viruses. Compiling the bootloader ourselves removed the virus issues.
:::
:::globalParagraph
Startup times of more than 10 seconds with an internet connection and about 3 seconds without showed that PyInstaller's concept will potentially always be a problem for fast startup times. Mac users had complained about this issue before in the context of the former docker-compose command, which was built with PyInstaller.
:::
:::globalParagraph
This makes it unsuitable for CLI applications.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Nuitka
:::
:::globalParagraph
Nuitka generated very large binaries of about 150 MB. The startup performance was already much better than PyInstaller's on Mac and Linux. However, very long compile times (about 10 minutes) left room for improvement.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
PyOxidizer
:::
:::globalParagraph
PyOxidizer turned out to be the best approach. This well-crafted toolkit uses Rust to embed the Python interpreter and all dependencies into one handy binary executable. With no special optimizations, startup times of about 700 ms were possible. Since these times were almost acceptable, this became the basis for further development.
:::
:::globalParagraph
The examination of the output of <code>python -X importtime -m gefyra 2> import.log</code> was the starting point to check the imports. There is an awesome tool to analyze the Python imports: tuna. tuna allows analyzing the import times from the log. Run it like this: <code>tuna import.log</code>. It opens a browser window and visualizes the import times.
:::
:::globalParagraph
Thus it is possible to manually move all imports to the functions in which they are needed (and bring in some other optimizations). This greatly violates PEP 8 but leads to very fast startup times.
:::
:::globalParagraph
These are the startup values finally reached with gefyra on an average modern Ubuntu machine:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-json">> python -m timeit "__import__('os').system(gefyra)"
10 loops, best of 5: 33.5 msec per loop
</code></pre>
<p>:::</p>
<p>:::globalParagraph
Pretty neat, isn’t it?
:::
:::globalParagraph
In comparison the kubectl executable:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-json">> python -m timeit "__import__('os').system('kubectl')"
10 loops, best of 5: 24.9 msec per loop
</code></pre>
<p>:::</p>
<p>:::globalParagraph
In addition, GitHub actions were created to run the PyOxidizer builds once a new version is released. Only Windows is missing at the moment.
:::
:::globalParagraph
Although PyInstaller and Nuitka did not deliver the best startup times, the intent of this article is not to speak ill of them. They probably shine in other respects.
:::</p>
<p>:::GlobalPodcastSection{:videoId="KapP6ouEL3E" :videoPosition="left" .mb-6}
::::GlobalPreTitle{:color="text-bs-green" .mb-3}
OUR KUBERNETES PODCAST
::::
::::GlobalTitle{:tag="h3" .mb-6}
Tools for the Craft: Navigating the Kubernetes ecosystem
::::
::::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert are talking about the various options developers have for running remote Kubernetes development environments.
::::
::::globalParagraph{:font-size="lg" }
More editions of our podcast can be found here:
::::
::::GlobalButton{:url="/podcast/" :label="Show more" :color="green"}
::::
:::</p>]]></content:encoded>
            <category>Python</category>
            <category>Gefyra</category>
            <category>Development</category>
            <enclosure url="https://blueshoe.io/img/blog/python-executables-linux-macos-windows.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Efficiently Extending Python: PyO3 and Rust in Action]]></title>
            <link>https://blueshoe.io/blog/python-rust-pyo3</link>
            <guid>https://blueshoe.io/blog/python-rust-pyo3</guid>
            <pubDate>Tue, 04 Feb 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>How PyO3 revolutionizes the integration of Python and Rust. Discover how you can develop performant and secure applications with these tools that combine Python's flexibility with Rust's speed.</p>
<p><img src="/img/blogs/python-rust-pyo3.svg" alt="Efficiently Extending Python: PyO3 and Rust in Action">{.object-cover .max-w-full .mb-5}</p>
<p>::GlobalBlogLevelInfo</p>
<ul>
<li><a href="https://rust-lang.org/">Rust Programming Language</a>{target="_blank"}</li>
<li>Optional <a href="https://github.com/PyO3/pyo3">PyO3</a>{:target="_blank"}</li>
<li>Optional <a href="https://github.com/astral-sh/uv">uv</a>{:target="_blank"}</li>
</ul>
<p>If you have any questions or if anything is unclear, you can use the comment function below the article.
::</p>
<p>::GlobalTitle{:size="lg" .mb-4}
Pfuzzer: A Python fuzzy searcher built with Rust
::</p>
<p>::GlobalParagraph
The combination of <strong>Rust</strong> and <strong>Python</strong> is becoming increasingly popular - especially for performance-sensitive applications. With <a href="https://github.com/PyO3/pyo3"><strong>PyO3</strong></a>{:target="_blank" .text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}, Rust modules can be seamlessly integrated into Python, allowing you to combine Rust's speed and security with Python's flexibility.
::</p>
<p>::GlobalParagraph
A remarkable example of this synergy is <strong>Pfuzzer</strong> - a Python Fuzzy Search library based on the high-performance Rust library <a href="https://github.com/helix-editor/nucleo"><strong>Nucleo</strong></a>{:target="_blank" .text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}. But Pfuzzer is more than just a practical solution for imprecise searches: It demonstrates how the <strong>PyO3 framework</strong> can be used to seamlessly integrate Rust code into Python.
::</p>
<p>::GlobalParagraph
In this article, you'll learn how Pfuzzer works, how PyO3 simplifies the development of Python modules in Rust, and how you can implement your own fuzzy search in Python with Rust.
::</p>
<p>::GlobalParagraph
Let's dive into the world of PyO3 together and discover how Rust and Python go hand in hand in projects like Pfuzzer!
::</p>
<p>::GlobalParagraph
Would you like to learn more about performant Python development? Check out our Python &#x26; Django Technologies page!
::</p>
<p>::GlobalButton{:url="/technologies/python-django-agency/" :label="Learn more about our Python development services" :color="blue" .mb-6}
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
What is PyO3?
::</p>
<p>::GlobalParagraph
PyO3 is a Rust framework that enables seamless integration of Rust code into Python. It provides a bridge between both languages, allowing Rust programs to be used as native Python modules. With PyO3, you can
::</p>
<p>::GlobalBlock{.ul-disk .my-4}</p>
<ul>
<li>Create Python APIs in Rust</li>
<li>Integrate existing Rust libraries in Python</li>
<li>Call Python functions from Rust
::</li>
</ul>
<p>::GlobalParagraph
The framework takes care of important technical details such as memory management, handling data structures, and interaction between runtimes of both languages.
::</p>
<p>::GlobalParagraph
The combination of Rust's high performance and safety with Python's flexibility makes PyO3 particularly interesting for computationally intensive tasks like data processing, machine learning or - as in Pfuzzer's case - developing a performant fuzzy search module.
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Pfuzzer: A Python Module for Fuzzy Search with Rust
::</p>
<p>::GlobalParagraph
Pfuzzer is a Python Fuzzy Searcher that, with the help of Rust's Nucleo library, enables a fast and efficient fuzzy search.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Why Rust for Fuzzy Search?
::</p>
<p>::GlobalBlock{.ul-disk .my-4}</p>
<ul>
<li><strong>Performance</strong>: Rust is significantly faster than pure Python</li>
<li><strong>Memory Safety</strong>: No Garbage Collector, no Memory Leaks</li>
<li><strong>Simple Integration</strong>: Usable as Python extension in Rust with PyO3
::</li>
</ul>
<p>::GlobalTitle{:size="lg" .mb-5}
Installing and Setting Up Pfuzzer
::</p>
<p>::GlobalParagraph
To make a Rust project usable with PyO3 in Python, <a href="https://github.com/PyO3/maturin"><strong>Maturin</strong></a>{:target="_blank" .text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} is one of the most popular tools. Maturin simplifies the entire build process by compiling the Rust code and preparing it as a Python wheel package. This package can subsequently be installed and used like any other Python library.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Setting Up a PyO3 Project
::</p>
<p>::GlobalTitle{:font="font-oswald" :size="s" :tag="h4" .mb-2}
1. Create Project
::</p>
<p>::GlobalParagraph
First, create a new Python project (in the spirit of "Rustification", we of course use the uv package manager):
::</p>
<p>::BlogCode{.mb-4}</p>
<pre><code class="language-bash">$ mkdir pfuzzer
$ cd pfuzzer
$ uv venv
$ uv add maturin
$ . .venv/bin/activate
</code></pre>
<p>::</p>
<p>::GlobalTitle{:font="font-oswald" :size="s" :tag="h4" .mb-2}
2. Initialize Maturin
::</p>
<p>::GlobalParagraph
Run maturin and have it create all necessary files:
::</p>
<p>::BlogCode{.mb-4}</p>
<pre><code class="language-bash">$ maturin init ✔  📷 What kind of bindings to use? ¸ pyo3
  ✦ Done! New project created pfuzzer
</code></pre>
<p>::</p>
<p>::GlobalParagraph
This command will create a <code>Cargo.toml</code> as well as a <code>lib.rs</code> file. These will be used respectively for Rust Dependency Management and the actual Rust Code that will be called from Python.
::</p>
<p>::GlobalParagraph
The command <code>maturin develop</code> will then install your freshly created Rust package in the previously created virtualenv. Since we use uv, the <code>--uv</code> flag must be added. With <code>maturin build --release</code>, the package can be built for roll-out.
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Pfuzzer: Implementation and Functionality
::</p>
<p>::GlobalParagraph
The Pfuzzer Python Package is a wrapper for the Rust Nucleo library. As previously shown, wrapping an existing library is easily done with PyO3. Nevertheless, I will briefly explain the implementation and functionality. The <a href="https://github.com/Blueshoe/pfuzzer">entire code can be viewed here</a>{:target="_blank" .text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.
::</p>
<p>::GlobalParagraph
Thanks to PyO3, the implementation is quite straightforward. PyO3 provides several Rust attributes that make Rust code callable from Python. Here, for example, is the implementation of the basic Python module:
::</p>
<p>::BlogCode{.mb-4}</p>
<pre><code class="language-rust">mod python_classes;
use pyo3::prelude::*;
/// A Python fuzzy searcher module implemented in Rust.
#[pymodule]
fn pfuzzer(m: &#x26;Bound&#x3C;'_, PyModule>) -> PyResult&#x3C;()> {
    m.add_class::&#x3C;python_classes::pfuzzer::Pfuzzer>()?;
    Ok(())
}
</code></pre>
<p>::</p>
<p>::GlobalParagraph
This is so little code that one might think it's already written in Python. But let me briefly explain the most important sections:
::</p>
<p>::GlobalBlock{.ol-decimal .my-4}</p>
<ol>
<li><code>#[pymodule]</code> --> tells the compiler that this is a Python module. Through this attribute, the PyModule struct is injected as a parameter and serves as the entry point for our Python module.</li>
<li><code>m.add_class::&#x3C;...></code> --> registers a class with our module, in this case the not-yet-defined Pfuzzer class.
::</li>
</ol>
<p>::GlobalCallAnExpert{:bg="bg-bs-blue"}
#buttonBlockTitle
We can also accelerate your Python application with Rust and PyO3.
::</p>
<p>::GlobalParagraph
There are other ways to assign Rust code to the module. Among these is <code>m.add_function</code>. Unlike <code>add_class</code>, the <a href="https://doc.rust-lang.org/reference/glossary.html?highlight=turbo#turbofish">Turbofish syntax</a>{:target="_blank" .text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} is not used here, instead the Rust function must be called with a macro. More about this in the <a href="https://pyo3.rs">PyO3 docs</a>{:target="_blank" .text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.
::</p>
<p>::GlobalParagraph
Now let's get to implementing our Pfuzzer class. Since there are no classes in Rust, we use structs:
::
::BlogCode{.mb-4}</p>
<pre><code class="language-rust">#[pyclass]
pub struct Pfuzzer {
    pub matcher: Matcher,
}
</code></pre>
<p>::</p>
<p>::GlobalParagraph
As an attribute, <code>pyclass</code> is given here. This marks the struct as a Python class and can then, as previously shown, be assigned to the module.
::</p>
<p>::GlobalParagraph
Every good class obviously also needs a constructor:
::</p>
<p>::BlogCode{.mb-4}</p>
<pre><code class="language-rust">use nucleus::{Config, Matcher, Utf32Str};

#[pymethods]
impl Pfuzzer {
    #[new]
    pub fn new() -> PyResult&#x3C;Self> {
        Ok(Pfuzzer {
            matcher: Matcher::new(Config::DEFAULT),
        })
    }

...

</code></pre>
<p>::</p>
<p>::GlobalParagraph
Through <code>#[pymethods]</code>, we declare the entire implementation block as Python methods of the <code>Pfuzzer</code> class. The constructor itself is marked with <code>#[new]</code>. Currently, PyO3 supports only the <code>__new__</code> magic method and not <code>__init__</code>.
::</p>
<p>::GlobalParagraph
Finally, the actual "logic" of our wrapper. I present the <code>compare_strings</code> method:
::</p>
<p>::BlogCode{.mb-4}</p>
<pre><code class="language-rust">pub fn compare_strings(&#x26;self, targets: Vec&#x3C;String>, query: String) -> Vec&#x3C;Option&#x3C;u16>> {
        let mut res = Vec::&#x3C;Option&#x3C;u16>>::new();
        for target in targets {
            res.push(self.matcher.to_owned().fuzzy_match(
                Utf32Str::Ascii(target.as_bytes()),
                Utf32Str::Ascii(query.as_bytes()),
            ))
        }
        return res;
    }
</code></pre>
<p>::</p>
<p>::GlobalParagraph
As one can quickly recognize, it uses the nucleo Matcher and performs fuzzy matching for each target string based on the given query string.
The method's result is then an optional integer per target string, which indicates how well the query string matches the target string. The higher the result, the better the match. A <em>Null</em> / <em>None</em> value indicates that the target and query do not match (or at least are not measurably similar).
::</p>
<p>::GlobalParagraph
And how does this translate to Python? Have a look:
::</p>
<p>::BlogCode{.mb-4}</p>
<pre><code class="language-python">from pfuzzer import Pfuzzer

pf = Pfuzzer()

print(pf.compare_strings(["hello world", "hello blueshoe"], "helo world"))

>>> [257, None]
</code></pre>
<p>::</p>
<p>::GlobalParagraph
According to the result matrix, the first target string is the string which best matches the query.
::</p>
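<p>::GlobalParagraph
Picking the best candidate from such a score list is then plain Python. The scores below are hard-coded to mirror the example output, since building the extension is out of scope here:
::</p>
<p>::BlogCode{.mb-4}</p>
<pre><code class="language-python">def best_match(targets, scores):
    """Return the target with the highest non-None fuzzy score, or None."""
    scored = [(score, target) for target, score in zip(targets, scores) if score is not None]
    return max(scored)[1] if scored else None

targets = ["hello world", "hello blueshoe"]
scores = [257, None]  # as returned by Pfuzzer.compare_strings(targets, "helo world")
print(best_match(targets, scores))  # -> hello world
</code></pre>
<p>::</p>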
<p>::GlobalParagraph
If you still have ideas for new features for Pfuzzer, or even optimization thoughts, then feel free to leave a comment. Or create an <a href="https://github.com/Blueshoe/pfuzzer/issues">Issue on Github</a>{:target="_blank" .text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}!
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Conclusion: Rust and Python - An Unbeatable Combination
::
::GlobalParagraph
In this article, we have illuminated the impressive synergy between Rust and Python through the development of the Pfuzzer module. Pfuzzer demonstrates how PyO3 enables integrating the high performance and efficiency of Rust into the user-friendly environment of Python. The seamless connection of both languages opens up new possibilities for developers, especially in areas like data processing and machine learning.
::
::GlobalParagraph
If you have a passion for really fast Python, then write to us in the comments and we'll discuss the deployment possibilities of Rust in your Python project! We're also looking forward to comments about your use of PyO3!
::</p>
<p>::GlobalTitle{:size="lg" .mb-5 .mt-8}
Frequently Asked Questions
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
1. How can I create a Python module with Rust and PyO3?
::</p>
<p>::GlobalParagraph
To write a Rust module for Python, you'll need PyO3 and Maturin. The fundamental steps:
::
::GlobalBlock{.ol-decimal .my-4}</p>
<ol>
<li><strong>Set up Rust environment:</strong> Run commands: <code>cargo new --lib my_project</code>, <code>cd my_project</code>.</li>
<li><strong>Add PyO3:</strong> Add the dependency <code>pyo3 = { version = "0.18", features = ["extension-module"] }</code> in <code>Cargo.toml</code>.</li>
<li><strong>Implement Module:</strong> Use <code>use pyo3::prelude::*;</code> and <code>#[pymodule]</code></li>
<li><strong>Build with Maturin:</strong> <code>maturin build</code>
::</li>
</ol>
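<p>::GlobalParagraph
A minimal <code>src/lib.rs</code> for step 3 might look like this (a sketch that requires the pyo3 crate; the <code>add</code> function is illustrative, and the module name must match the library name configured in <code>Cargo.toml</code>):
::</p>

```rust
use pyo3::prelude::*;

/// An example function that becomes callable from Python.
#[pyfunction]
fn add(a: i64, b: i64) -> i64 {
    a + b
}

/// The module definition; its name determines the Python import name.
#[pymodule]
fn my_project(_py: Python<'_>, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(add, m)?)?;
    Ok(())
}
```

<p>::GlobalParagraph
After building the wheel with <code>maturin build</code> and installing it, the module can be used in Python with <code>import my_project</code>.
::</p>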
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
2. How does Fuzzy Search work in Rust with the Nucleo library?
::</p>
<p>::GlobalParagraph
The Nucleo library offers a high-performance Fuzzy Matching Algorithm to calculate string similarity. <em>Pfuzzer</em> uses this library for fast approximate searches in Python.
::</p>
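<p>::GlobalParagraph
Nucleo itself is a Rust library, but the idea of a similarity score can be illustrated with Python's standard-library <code>difflib</code> (a different and much simpler algorithm than Nucleo's fuzzy matcher, shown here only to make the scoring concept concrete):
::</p>

```python
from difflib import SequenceMatcher

def similarity(target: str, query: str) -> float:
    """Return a similarity ratio between 0.0 (no match) and 1.0 (identical)."""
    return SequenceMatcher(None, target, query).ratio()

# As with Pfuzzer's scores, a higher value means a better match.
print(similarity("hello world", "helo world"))     # close to 1.0
print(similarity("hello blueshoe", "helo world"))  # noticeably lower
```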
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
3. Can I use Rust modules in existing Python projects?
::</p>
<p>::GlobalParagraph
Yes! Rust modules can be seamlessly integrated into existing Python projects. Thanks to <strong>PyO3</strong> you can directly import and use Rust functions as Python modules.
::</p>]]></content:encoded>
            <category>Python</category>
            <category>Rust</category>
            <category>Development</category>
            <category>Performance</category>
            <category>Security</category>
            <enclosure url="https://blueshoe.io/img/blogs/python-rust-pyo3.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Why Our Maintenance Reports Are a Key to Successful Collaboration]]></title>
            <link>https://blueshoe.io/blog/quarterly-maintenance-reports</link>
            <guid>https://blueshoe.io/blog/quarterly-maintenance-reports</guid>
            <pubDate>Mon, 17 Nov 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>For us, a maintenance report is more than a table of numbers: it is a tool for communication, planning, and trust. It shows what we have done, where potential lies, and what topics are on the agenda for the next quarter - so our customers know their systems are in good hands.</p>
<p><img src="/img/blogs/agile-backlog.svg" alt="Blueshoe Maintenance Quarterly Report">{.object-cover .max-w-full .mb-5}</p>
<h2>1. Why we rely on quarterly maintenance reports</h2>
<p>If Heraclitus were alive today, he would probably say, "The only thing that is constant in software is change." In fact, this quote comes from Jeff Atwood.
Software is never truly "finished." As soon as it is in productive use, it must be maintained. For this, our customers <a href="/our-services/cloud-application-development/">commission us with maintenance agreements</a>{target="_blank"}, under which we regularly invest an agreed amount of time in the care, adaptation, and further development of their projects.
Most of our customers do not want to or cannot be involved in the details of the content. That is why we, as a service provider, take on the reliable planning and execution of maintenance tasks.</p>
<p>To give our customers an overview of past and future maintenance tasks outside of the ticket system, we create quarterly maintenance reports.
Our customers have the opportunity to review the work performed and influence future activities.
The reports create transparency about the state of complex systems. Ultimately, they are an important tool for long-term collaboration.</p>
<h2>2. What our reports contain</h2>
<p>Our reports follow a clear structure.
In addition to the classic figures on support times, they contain technical assessments, e.g., on the stability of the software components used, backup mechanisms, and security aspects.
The overview of the software components used looks like this, for example:
<img src="/img/blog/lts-versions-overview.png" alt="Stability of the software components used"></p>
<p>This is supplemented by key figures from monitoring systems, such as an <a href="/blog/tracking-errors-django-sentry/">overview of error analysis using Sentry</a>{target="_blank"}:
<img src="/img/blog/sentry-overview.png" alt="Overview of error analysis">
Furthermore, there is an overview of test coverage:
<img src="/img/blog/test-coverage-overview.png" alt="Overview of test coverage"></p>
<p>An assessment of risks, challenges, and planned topics for the coming quarter completes the maintenance report.</p>
<h2>3. How we prepare our reports</h2>
<p>The reports are created quarterly as part of the maintenance routine. A report is prepared by at least two people.
Unless the customer has specific requests, we set thematic priorities that we examine in more detail for the quarter.
We check data from various systems, consolidate it, and evaluate trends.
Every recommendation is discussed by our team before it is included in the report.</p>
<p>Over time, a nice side effect has emerged: The meeting in which our team prepares the report for a customer is an ideal opportunity to brainstorm and discuss ideas and development opportunities for the customer's projects.
You engage intensively with a customer's project without being entangled in day-to-day detail work, which allows for a comprehensive view.</p>
<h2>4. The exchange with our customers</h2>
<p>The report is discussed in a joint call. For our customers, this is the ideal opportunity to ask questions, set priorities, or initiate topics.
For us, the report is a fixed touchpoint to receive feedback and develop new ideas. Our customers receive the reports as PDFs.
Even less technically savvy customers understand where their system stands thanks to the structured presentation. We have often heard that the reports are very helpful in creating transparency for the topic of maintenance within the company or towards superiors.</p>
<h2>5. Why transparency creates trust</h2>
<p>Regular, understandable reports create security. They provide an overview of the state of complex systems and their future viability.
Thanks to their regularity, customers can track developments over time and also intervene in a controlling manner without having to deal with the very technical details of the maintenance activities.</p>
<h2>6. Conclusion: Working together on stable and future-proof systems</h2>
<p>Our maintenance reports are not a mandatory document, but a reflection of our collaboration.
They show what has been achieved, where we stand, and what opportunities and challenges await in the next quarter.
This is how we ensure together that <a href="/blog/advantages-of-open-source-software-for-companies/">systems remain stable, secure, and developable in the long term</a>{target="_blank"}.</p>]]></content:encoded>
            <category>Project Management</category>
            <category>Documentation</category>
            <category>Security</category>
            <category>Operations</category>
            <enclosure url="https://blueshoe.io/img/blogs/agile-backlog.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Requirements analysis in project management]]></title>
            <link>https://blueshoe.io/blog/requirements-analysis-in-project-management</link>
            <guid>https://blueshoe.io/blog/requirements-analysis-in-project-management</guid>
            <pubDate>Thu, 20 Aug 2020 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Whether a project runs like clockwork or drags on forever depends on how precisely the client requests were identified and implemented. In this article, we’d like to share the experiences we gained from various projects. We’ll show where the challenges lurk when analysing the client requirements and how this process can be integrated into the project schedule.</p>
<p><img src="/img/blog/requirements-analysis-in-project-management.jpg" alt="Requirements analysis in project management">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
The identification of the client requests (the requirements analysis and elicitation) is often neglected as these conversations take up time before an order has even been placed – and time is money. However, this often comes back to haunt you later on during the implementation phase and even those ‘easy’ projects can suffer unexpected consequences. In this article, we’ll have a closer look at this part of the project implementation.
:::</p>
<p>:::globalTitle{:size="lg" :color="text-bs-blue" .mb-5}
The 'Magical Triangle'
:::
:::globalParagraph
The magical triangle is a widespread concept in theoretical project management. A project’s implementation is only well-balanced if the aspects of cost, time and quality form an equilateral triangle. We prefer to replace the term ‘quality’ with ‘scope’. That’s because the scope of the service provision can be defined much more clearly than its quality. This way, the scope can also be connected to the aspects of cost and time more effectively, while the quality aspect effectively becomes part of the scope.
:::</p>
<p><img src="/img/blog/grafik_blog_erfolgreichespm.jpg" alt="grafik_blog_erfolgreichespm">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="md" .mb-5}
Identifying client requests
:::
:::globalParagraph
This brings us to the question: how can the scope be ascertained in the most precise way?
That’s when you’ll usually hear the term ‘requirements analysis’. This means you’re analysing what the client wants and which client request is to be implemented in the respective project. The process is often called requirements analysis, but it technically consists of two separate steps: first, there’s the requirements elicitation and only during the second step will you do the actual analysis.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Requirements Elicitation
:::
:::globalParagraph
With requirements elicitation, it’s really just about finding out what the client ‘really wants’.
The client’s requirements for the future project are usually expressed with the client’s own language and vocabulary. It’s therefore worth doing an extra round of discussion in order to ensure the client and contractor are both on the same page regarding their expectations of the future project.
:::</p>
<p>:::globalTitle{:size="lg" :color="text-bs-blue" .mb-5}
Functional and non-functional requirements
:::
:::globalParagraph
Functional requirements describe WHAT a system should be able to accomplish. Non-functional requirements describe HOW the system should function.
:::
:::globalParagraph
A clear distinction between both aspects isn’t always possible during the client meeting, because for the client, the two aspects are often one and the same. The challenge for the contractor is therefore to separate the two – and this might only happen after the client meeting. The distinction is extremely important for the scope: the functional requirements (i.e. the WHAT) have to shape the core of the requirements elicitation. If client and contractor don’t agree on this, the project cannot be successfully realised.
:::</p>
<p>:::globalTitle{:size="sm" :tag="h4" .mb-5}
Real-Life Example
:::</p>
<p><img src="/img/blog/nick-fewings.jpg" alt="nick-fewings">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
Let’s use the example of a restaurant owner who needs a reservation system. The owner describes the functional requirements as follows: ‘The system should show how much seating space is still available in the restaurant’. This wording poses a high risk of causing misunderstandings: does the requirement solely refer to the free seats and how many potential guests could still be seated? Or does the allocation of seats to certain tables have to be taken into account?
:::
:::globalParagraph
Another example: a search function is to be integrated into a client’s database. The functional requirement has been discussed and the client and contractor have come to mutual understanding about the matter. A non-functional requirement is the subsequent representation of the search results. For example, should the search results be displayed in several pages or not – and how many search results should be displayed, anyway? Further points are the intended target group of the search and the question of how the search results should be displayed – e.g. should only the title be shown or further details, too?
:::
:::globalParagraph
Both examples show that while the functional requirements are the core of the scope, the non-functional aspects must still not be neglected and should be identified just as diligently. Even if all functional requirements are realised in a project, it’s ultimately still possible the result doesn’t meet the client’s expectations due to the non-functional requirements not having been discussed enough beforehand.
:::</p>
<p>:::globalTitle{:size="lg" :color="text-bs-blue" .mb-5}
Requirements elicitation = the service provider as detective
:::
:::globalParagraph
The focus of the requirements elicitation should be to uncover all possible eventualities in the client requirements. As such, the software service provider can regard themselves as a detective to avoid losing sight of the focus. They should also ask themselves two questions:
:::
:::GlobalBlock{.ol-decimal .mb-5}</p>
<ol>
<li>Who is actually going to benefit from it?
Who is supposed to find the ultimate product useful? Who exactly is the target group – the client themselves or the client’s customer? The target group influences functional and non-functional requirements significantly. This question seems like an easy one but it often hides fundamental aspects, which are vital for a comprehensive requirements elicitation.
</li>
<li>What doesn’t fit in?
Seemingly trivial functionalities often hide further functions that are indispensable for the implementation. The question is aimed at edge cases, i.e. exceptions. These edge cases have to be tracked down thoroughly. Many clients aren’t even aware of edge cases that have to be considered during the project implementation. In the later implementation, however, a previously created conception may have to be completely revised due to the sudden occurrence of edge cases.
:::
:::globalParagraph
Continuing the detective metaphor, the requirements elicitation essentially serves to identify the ‘true motives’ of the client.
:::</li>
</ol>
<p>:::globalTitle{:size="lg" :color="text-bs-blue" .mb-5}
Requirements analysis as the second step
:::</p>
<p><img src="/img/blog/abi-ismail-zod.jpg" alt="abi-ismail-zod">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
The actual analysis of the requirements only takes place after the requirements elicitation. After the client requirements have been identified, you have to determine whether these can actually be implemented – ideally, together with the client.
:::
:::globalParagraph
This involves clarifying whether the technical requirements needed for the implementation of the client requirements are provided and whether the client requirements can be realised within the predetermined timeframe and budget.
:::
:::globalParagraph
If that isn’t the case, you first have to check whether it makes sense to first implement parts of the requirements in an initial implementation period. If this is possible, a prioritisation of requirements is helpful for both client and contractor as it’s a chance to clarify which elements are the most important to the client and whether these can be realised.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Comparison with reality
:::
:::globalParagraph
One thing is often missing in books about successful project management: a comparison with reality. In an ideal world, both client and contractor have enough time to establish the requirements, to analyse them thoroughly and to prioritise the most important ones. The contractor would subsequently be able to create a well-calculated proposal.
:::
:::globalParagraph
As contractors, however, we have found that reality doesn’t work like this in most cases. Time is money, and money first has to be earned. The client depends on receiving a proposal quickly so they can accept it and have the desired project realised as soon as possible. The contractor, too, is interested in the order being placed quickly to ensure planning security. Sometimes, an extensive requirements elicitation and analysis can even take place only after the order has been placed.
:::</p>
<p>:::globalTitle{:size="lg" :color="text-bs-blue" .mb-5}
Four insights for a successful requirements phase
:::</p>
<p><img src="/img/blog/cow.jpg" alt="cow">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
In the following, we’ll outline four insights from our experience. They show how you can map the requirements elicitation and analysis in your project calculation and how the risks for both client and contractor can be minimised.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Time tracking before the project even starts
:::
:::globalParagraph
As the contractor, you should continuously monitor how much time is needed for the clarification of the project – it’s the only way to gain experience. Over time, you’ll know how complex the process will be and can estimate the costs more accurately.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Allow for conception phases
:::
:::globalParagraph
The quote should include an additional conception phase – depending on the scope, this may be anything from a few hours to entire days. This phase should not only factor in the time needed for the software architecture, but also time for essential consultations with the client.
:::
:::globalParagraph
Especially when a job spec isn’t clearly formulated, these types of conception phases should be considered.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Define the boundaries
:::
:::globalParagraph
The proposal should define clear boundaries. For clients in particular, it’s not always clear what hides behind the different elements of the quote. It may even make sense to include an additional document in the proposal, which explains these points in further detail.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Better distribution of project management costs
:::
:::globalParagraph
Project management is often regarded as ‘unproductive’ overhead. The higher the project management costs, the more difficult it is to explain to the client how important this part is.
:::
:::globalParagraph
When calculating the individual technical elements of the proposal, we therefore suggest increasing the time for project management by a certain amount. The quote for the client will then still include the final time and cost expenditure. This means the separate quote for the PM proportion won’t be too high in your proposal. The additional PM can be used for consultations with the client and should always be monitored separately from the technical implementation by the project controller.
:::
:::globalParagraph
And there’s an additional fifth insight for a successful implementation of projects: the definition of acceptance criteria. However, this insight is such a comprehensive subject that we shall dedicate a separate article to it.
:::</p>
<p>::GlobalPodcastSection{:videoId="KapP6ouEL3E" :videoPosition="left" :bg="bg-bs-gray" .mb-5}
:::GlobalPreTitle{:color="text-bs-green" .mb-3}
PODCAST: TOOLS FOR THE CRAFT
:::
:::GlobalTitle{:tag="h3" .mb-6}
Edition 2: Remote Kubernetes development environments
:::
:::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert are talking about the various options developers have for running remote Kubernetes development environments.
:::
:::globalParagraph{:font-size="lg" .mb-4}
More editions of our podcast can be found here:
:::
::::GlobalButton{:url="/podcast/" :label="More podcasts!" :color="green"}
::::
::</p>
<p>:::globalTitle{:size="lg" :color="text-bs-blue" .mb-5}
Conclusion: Blueshoe’s requirements process
:::
:::globalParagraph
At the beginning of our company history, the developers were still a big part of the requirements process and communicated personally with the clients. Sometimes, a requirement was even raised on the fly and implemented immediately. As the number of our clients and projects increased, we recognised that this approach becomes impossible at a certain project size. We have since made the requirements process a central part of our project management, and it is now carried out by our project managers.
:::
:::globalParagraph
For us, it’s always vital that we are able to react flexibly to changes in the requirements which may pop up during the project implementation. However, the thorough requirements elicitation still gives us (but primarily the client) a considerably higher planning security regarding the implementation period of a project.
:::</p>]]></content:encoded>
            <category>Project Management</category>
            <category>Documentation</category>
            <enclosure url="https://blueshoe.io/img/blog/requirements-analysis-in-project-management.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Service availability using Kubernetes probes]]></title>
            <link>https://blueshoe.io/blog/service-availability-using-kubernetes-probes</link>
            <guid>https://blueshoe.io/blog/service-availability-using-kubernetes-probes</guid>
            <pubDate>Thu, 20 Jul 2023 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>In this blog post, I discuss the benefits and necessity of using application feedback to tackle basic aspects of service availability. The framework used to elaborate the concepts is Kubernetes. However, the underlying ideas are transferable and not limited to Kubernetes. Furthermore, while the use of probes (health checks) allows us to tackle basic availability aspects, there are many more aspects of (high) availability which are not covered in this article.</p>
<p><img src="/img/blog/kubernetes.jpg" alt="my image">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
How to damage user satisfaction and destroy value?
:::
:::GlobalParagraph
What a simple question… offer a service that isn't always available. How do you feel when you sit down after a long, exhausting day to listen to a podcast or watch a movie with your friends or family and you see an error message? Or the homepage doesn't load at all? You are certainly not happy. Worse yet, you have to finish work for an important customer within the next hour and the services required for that are temporarily not responding? Well, you might lose an order, a recurring customer, or a long-standing business relationship. It doesn't matter that the services were available all week long. The moment you needed them, they were not.
:::
:::GlobalParagraph
Clearly, availability matters because it directly impacts user satisfaction. Moreover, it ensures business continuity and reliability, strengthens trust, and is the foundation for compliance, scalability and growth, as well as operational efficiency. But how do you check if your service is available?
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Without feedback you know nothing
:::
:::GlobalParagraph
Even if you have to interact with a single instance of an application, it can be quite difficult to correctly infer its availability status. Has the application started? Is it ready to process my next request? Is it still alive after not using it for weeks? Without feedback, your only option is to send your request and hope that everything goes well. This is clearly no way to use or run a serious application.
:::
:::GlobalParagraph
However, it is rarely the case that an application provides no feedback at all. For web applications, HTTP requests are answered by corresponding responses that contain information provided by the application or other infrastructure components in between. So we know something! But is this information enough to make valuable statements about availability? I would argue that this depends to a large extent on the application itself and even more on your quality requirements. At some point, HTTP status codes alone no longer cut it.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
From Application to Service Availability
:::
:::GlobalParagraph
At the latest when you start scaling your application into a full-fledged service, it makes sense to leverage the monitoring concepts provided by your orchestration framework to gain insights into the health and readiness of your application. Kubernetes uses probes to ensure the liveness and readiness of containers in pods. Depending on the status of the probes, pods are replaced or restarted, traffic is routed to endpoints capable of handling requests, and more. All you have to do is add a few lines of YAML to your resource definitions.
:::
:::GlobalParagraph
Wait a second! We just jumped from scaling an application to writing YAML files. Welcome to Kubernetes :D. No one said Kubernetes doesn't have a (steep) learning curve; however, it makes life easier, especially from a developer or infrastructure perspective. Most people with a more technical background can see that it's not a trivial task to implement (and maintain!) the availability functionalities Kubernetes provides out of the box. And even as a non-technical person, it is possible to understand the concepts - which is often sufficient. For probes, one might even argue that everyone can understand the code. It is quite verbose and self-explanatory:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">...
   livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
...
</code></pre>
<p>:::</p>
<p>:::GlobalParagraph
Yes, this is the definition of a liveness probe which executes a command - something with “cat” and “/tmp/healthy” - every 5 seconds, after an initial delay of 5 seconds. (By the way, this example comes directly from the official Kubernetes documentation.)
:::
:::GlobalParagraph
So how do I transform my application into a service with basic availability mechanisms using probes? Summarised in <a href="/blog/kubernetes-explained-for-non-developers/">Kubernetes</a>{.bs-link-blue} terms, you have to do the following: Containerize your application - which you should have done anyway. Create a deployment based on the applications container. Add the corresponding probe types to the container (see next section). And finally, expose the deployment using a service.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Different probes provide different feedback
:::
:::GlobalParagraph
There are three different types of probes in Kubernetes, which provide different kinds of feedback from the application. Thus, they serve different use cases.
:::</p>
<p>:::GlobalTitle{:size="md" :tag="h3" .mb-5}
Liveness probe
:::
:::GlobalParagraph
The liveness probe determines whether a container is running properly or not. It periodically checks the state of the container and restarts it if it is unresponsive or in a faulty state. This ensures that the application continues to run even if it may have crashed or encountered a temporary error.
:::</p>
<p>:::GlobalTitle{:size="md" :tag="h3" .mb-5}
Readiness probe
:::
:::GlobalParagraph
The readiness probe checks if a container has completed its initialization and is ready to serve requests. By doing so, it enables Kubernetes to ensure that only healthy containers receive traffic.
:::</p>
<p>:::GlobalTitle{:size="md" :tag="h3" .mb-5}
Startup probe
:::
:::GlobalParagraph
Startup probes determine whether a container has started successfully and is ready to handle requests. In contrast to the liveness probe, it only runs during the startup phase of a container, where the liveness probe is not active yet.
:::</p>
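<p>:::GlobalParagraph
Combined in a single container spec, the three probe types might look like this (a sketch; the port and endpoint paths are illustrative and depend on what your application actually exposes):
:::</p>

```yaml
containers:
- name: my-app
  image: my-app:latest
  ports:
  - containerPort: 8080
  startupProbe:            # runs only during startup and holds off the other probes
    httpGet:
      path: /healthz
      port: 8080
    failureThreshold: 30
    periodSeconds: 10
  readinessProbe:          # traffic is only routed to containers that are ready
    httpGet:
      path: /ready
      port: 8080
    periodSeconds: 5
  livenessProbe:           # restarts the container if it becomes unresponsive
    httpGet:
      path: /healthz
      port: 8080
    periodSeconds: 10
```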
<p>:::GlobalPodcastSection{:videoId="5wNTUUSk1jA" :videoPosition="left" .mb-6}
::::GlobalPreTitle{:color="text-bs-green" .mb-3}
OUR PODCAST: TOOLS FOR THE CRAFT
::::
::::GlobalTitle{:tag="h3" .mb-6}
E3: Deep dive into Getdeck
::::
::::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert are introducing and demonstrating Getdeck, as well as comparing local and remote Kubernetes and preproduction clusters.
::::
::::globalParagraph{:font-size="lg" }
More editions of our podcast can be found here:
::::
::::GlobalButton{:url="/podcast/" :label="Show more" :color="green"}
::::
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Probes use case example
:::
:::GlobalParagraph
Let us assume that you have got a legacy Java application (built as a rather huge monolith), which has been containerized into a single container for you, in order to migrate it into the cloud (which is using Kubernetes). Your task is to provide this application as a service within your regional branch of the organisation. As you are now responsible for the service, you want to ensure high user satisfaction and thus care about availability. Based on the sections above, you certainly know how to approach the problem.
:::
:::GlobalParagraph
First, the containerized application has to be transformed into a deployment which is exposed by a service. As a bigger monolithic Java application might not start instantly, a startup probe is added to the Java application container in order to get feedback about the application startup process. Furthermore, you want to add a readiness probe so that the service can distribute traffic to application instances which are ready to process it. Lastly, you definitely want a liveness probe, too, in order to continuously validate the state of your application. Yes, you most likely want all three types of probes in this case.
:::
:::GlobalParagraph
As you experience more and more usage of your service, you scale up the replicas of your application using the replica count of the deployment. However, in order to keep costs low, you think about applying autoscaling, which you have read about in another blog post. Luckily, Kubernetes autoscaling can utilise the existing probes to determine the scaling behaviour of the application. At this point, we have definitely left the scope of this blog article. But as I mentioned in the beginning, availability is more than just probes.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Unlocking application availability: harnessing Kubernetes probes for enhanced user satisfaction
:::
:::GlobalParagraph
The concept of probes in Kubernetes provides a comfortable way to gain feedback from applications, which allows us to tackle basic availability issues. While I did not go into the technical details of how probes communicate with the application (commands, HTTP, TCP, or gRPC), by now you should understand the overall concept and importance of probes and be able to start using them to achieve higher availability. Certainly, there will be additional technical challenges in providing the endpoints used by the probes within your application. These, however, depend heavily on the application and may not even need to be handled by you. And if you are confronted with them, there is no lack of documentation. Therefore, I encourage you to start using Kubernetes probes to tackle basic availability issues and increase user satisfaction.
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Security</category>
            <category>Operations</category>
            <enclosure url="https://blueshoe.io/img/blog/kubernetes.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Strategies for slim Docker images]]></title>
            <link>https://blueshoe.io/blog/strategies-for-slim-docker-images</link>
            <guid>https://blueshoe.io/blog/strategies-for-slim-docker-images</guid>
            <pubDate>Wed, 07 Jul 2021 00:00:00 GMT</pubDate>
<content:encoded><![CDATA[<p>Docker has become more and more popular in recent years and has now essentially become the industry standard for containerisation – be it via docker-compose or Kubernetes. When creating Dockerfiles, there are certain aspects that need to be considered. In this blog post, we’ll show you some strategies for creating slim Docker images.</p>
<p><img src="/img/blog/strategies-for-slim-docker-images.jpg" alt="Strategies for slim Docker images">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="lg" .mb-5}
Why are we talking about Docker images?
:::
:::globalParagraph
Docker images have become our preferred technology – to develop applications locally on the one hand and to run applications in a testing/staging or production system (‘deployment’) on the other. As Python developers, we not only face the unfamiliar situation of suddenly having to wait for builds again, but we’ve also noticed that some of the resulting Docker images are pretty big.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Reducing the Docker images' size
:::
:::globalParagraph
When creating our Dockerfiles, reducing the image size plays a particularly important role. Naturally, the Docker image should contain everything it needs to run the application – but ideally no more than that. Unnecessary extras may be software packages and libraries that are only needed for compiling or for running automated tests, for example.
:::
:::globalParagraph
There are a number of reasons why a Docker image should only contain the absolute minimum necessary:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>For one, the security of the image is increased. If I just want to run a simple Django application, I certainly don’t need a whole Debian or Ubuntu image. A simple Python image as a basis will be enough – it practically doesn’t need to be able to do anything more than running the Django application via a Python interpreter or application server. Why does this increase the security? Easy: fewer libraries mean that less can go wrong. Although this article is already a year old, it provides an informative insight into the underlying problems and outlines how to reduce the usual CVEs (Common Vulnerabilities and Exposures) with smaller versions of the base image.</li>
<li>Another issue is speed. On the surface, it doesn’t really matter whether the Docker image has a size of 2 gigabytes or only 200 megabytes when running the application. The deployment is often automated and it usually doesn’t make much difference if it takes a few minutes longer until the (new) code is deployed. But here, too, the golden rule is of course: whatever unnecessary data usage and data transfer can be avoided should be avoided.</li>
<li>As a developer, I primarily benefit from smaller Docker images if I don’t have an automated build pipeline but am instead building them myself and pushing them to a container registry (the central storage for images), for example. If I’ve created a Docker image with a size of 2 gigabytes and I’m sitting in my home office due to Corona with a less than optimal internet connection, the upload may well take a while. It’s certainly annoying if the development process is prolonged unnecessarily, even if that’s just by half a minute for each build.</li>
<li>The next point is the resource consumption. Not only the one on my laptop, where more and more Docker images get dumped over time, but also the one in the container registry. The registry might be hosted by Gitlab, for example. And sure, storage space is usually not a major cost factor, but if every Docker image has a size of 1 to 2 gigabytes each and if the registry gets bombarded with dozens of images week after week, the used storage space can add up to quite a lot. If we manage to reduce the Docker images to a half or a quarter of their original size, we’ve already made a good amount of progress.</li>
<li>Last but not least, we mustn’t forget about the environment. Every Docker image gets pushed into a registry and downloaded again from there – be it to a developer’s laptop or to a production system. If my Docker images are four times larger than they need to be, I am permanently creating four times as much traffic by using my images, or I might even become the reason why additional storage systems have to be used.
:::</li>
</ul>
<p><img src="/img/blog/keppel.jpg" alt="Strategies for slim Docker images">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="lg" .mb-5}
What are the requirements for a Docker image?
:::
:::globalParagraph
Despite all the minimising of the image size, there are still a few requirements to be considered, of course. I can optimise my Docker images right down to the last megabyte, but if that means I can’t develop properly with them anymore or if the application isn’t running reliably as a result, I haven’t gained much. Ultimately, the things to be considered can be boiled down to three areas: development, testing/staging and production.
:::
:::globalParagraph
As a developer, I want my workflow to be interrupted as little as possible – convenience is definitely the main priority. This means that I need a Docker image which ideally has a short build time and supports live code reloading, for example. I also might want to use some other tools like the Python debugger or telepresence. I may need further dependencies for this which might not be relevant in a production system.
:::
:::globalParagraph
The requirements for testing/staging and production systems look pretty similar. Security is clearly the top priority here. I want to have as few libraries and packages on the system as possible that aren’t actually needed for the running of the application. As a developer, I shouldn’t even really have any need to interact with the running container which means I don’t need to worry about various concerns regarding convenience.
:::
:::globalParagraph
However, it can still be handy to have certain packages available for debugging purposes, particularly in a testing/staging system. That doesn’t necessarily mean that you should already have these packages available in the Docker images, though. Instead, as an example, you can use telepresence to swap a deployment in a Kubernetes environment. This means that I can build a Docker image locally which contains all my necessary dependencies, and I can run it in my testing/staging cluster. Find out how you can accomplish this by checking out one of our other blog posts – <a href="/blog/local-kubernetes-development/">Cloud Native Kubernetes development</a>{.bs-link-blue}.
:::
:::globalParagraph
The use case described above tends to occur early on in the development phase. For a production system, this shouldn’t really make a difference anymore, though. I might want to look at some logs here, but this can either be done via kubectl or possibly also with a log collector solution. In the end, I want the testing/staging system to run with the identical Docker image as the production system. Otherwise, we risk malfunctions in the production system which wouldn’t have occurred in the testing/staging system due to the different environment.
:::
:::globalParagraph
There might also be some requirements from an operations point of view – a vulnerability check, for instance, to ensure that known vulnerabilities aren’t even present in the image, where possible, or that vulnerabilities that can be corrected will be corrected. Furthermore, a company policy might also have influence on the Dockerfile – either one’s own company policy or that of the client. One possible scenario would be that certain base images are excluded or that the availability of certain packages or libraries is ensured.
:::</p>
<p>:::globalTitle{:size="sm" .mb-5}
Evolution of a Docker image
:::
:::globalParagraph
In the following sections, let’s have a look at the exact steps you can take to optimise your own Dockerfile. We’ll first check out the prerequisites that can influence the resulting Docker image. We’ll then look at the strategies and patterns for Dockerfiles before checking out the optimisation impact in several iterations.
:::</p>
<p>:::globalTitle{:size="sm" .mb-5}
Prerequisites
:::
:::globalParagraph
First off, we’ve got to select the base image – it’s also the first thing defined in the Dockerfile. For a Django application, a Python base image will suffice. We could, of course, simply choose an Ubuntu image, but we want to keep the image size as small as possible and not introduce unnecessary packages to the image in the first place. Docker Hub provides many prefabricated images. For Python, too, there are different images which can be differentiated by the Python version number or by the terms ‘slim’ and ‘alpine’.
:::
:::globalParagraph
The ‘standard’ Python base image is based on Debian Buster and is the first of the three variants. The slim variant is also based on Debian Buster, though with trimmed-down packages. So naturally, the resulting image will be smaller. The third variant is alpine and is, as you can probably guess from the name, based on Alpine Linux. The corresponding Python base image has the smallest size, but you might well have to install additional required packages in the Dockerfile.
:::
:::globalParagraph
Some of the things that can have a big influence on the Dockerfile are the system runtime and build dependencies. Especially regarding the base image, it should be noted that an Alpine-based image ships with musl libc rather than glibc, which Debian-based images use. The same applies to gcc, the GNU Compiler Collection, which isn’t available by default on Alpine.
:::
:::globalParagraph
As Django developers, we naturally have a few pip requirements. Depending on the base image selected, the Dockerfile also has to ensure that all system packages and libraries needed for the pip requirements are installed. However, the pip requirements themselves can also have an impact on the Dockerfile if I have additional packages for development that aren’t needed in the production environment and shouldn’t really be present there either. An example of this is the pydevd-pycharm package, which we only need for the Python remote debugger in PyCharm.
:::</p>
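<p>:::globalParagraph
One common way to keep such packages out of the production image – sketched here with conventional file names, not taken from the original setup – is to split the pip requirements into a base file and a development-only file that includes it:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker"># requirements-dev.txt would contain:
#   -r requirements.txt
#   pydevd-pycharm

# Development image: install base and dev requirements
COPY requirements.txt requirements-dev.txt /
RUN pip install -r /requirements-dev.txt

# Production image: only install requirements.txt
</code></pre>
<p>:::</p>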
<p>:::globalTitle{:size="sm" .mb-5}
Strategies and patterns for Dockerfiles
:::
:::globalParagraph
Over time, various strategies and patterns have emerged for creating and optimising Dockerfiles. The challenge of making the image size as small as possible is closely linked to the process of the Dockerfile becoming an image. Every instruction adds another layer, with every layer being stacked on top of the previous one. Once you’re aware of this, you naturally want to try and keep the number of layers and the size of each individual layer to a minimum. For example, you could delete artefacts within a layer which are not needed anymore, or you might combine different instructions into one layer by using various shell tricks and other logic.
:::
:::globalParagraph
A pattern that is by now outdated is the so-called builder pattern. With this, a Dockerfile is created for the development – the builder Dockerfile. It contains everything needed to build the application. For the testing/staging and production environment, a second, trimmed-down Dockerfile is created. This one contains the application itself and, other than that, only whatever’s needed to run the software. While this approach is doable, it has two major drawbacks: for one, having to maintain two different Dockerfiles is anything but ideal, and secondly, it creates a rather complex workflow:
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>compile a builder image</li>
<li>create a container with the builder image</li>
<li>copy or extract the required artefacts from the container</li>
<li>remove container</li>
<li>build production image using the extracted artefacts
:::</li>
</ul>
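<p>:::globalParagraph
Expressed as shell commands – with hypothetical image, file, and path names – the workflow looks roughly like this:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-bash"># 1. compile the builder image
docker build -f Dockerfile.build -t myapp-builder .
# 2. create a container from the builder image (without starting it)
docker create --name builder myapp-builder
# 3. copy the required artefacts out of the container
docker cp builder:/app/dist ./dist
# 4. remove the container
docker rm builder
# 5. build the production image using the extracted artefacts
docker build -f Dockerfile.prod -t myapp .
</code></pre>
<p>:::</p>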
<p>:::globalParagraph
This procedure can be automated with scripts, of course, but it’s still not ideal.
:::
:::globalParagraph
So-called multi-stage Dockerfiles are now gaining more and more popularity as the preferred solution. They’re even recommended by Docker themselves. A multi-stage Dockerfile technically follows a pretty simple structure:
:::
:::globalParagraph
The different stages are separated from one another by FROM statements. You can also give the stages different names so that it’s easier to reference an individual stage. As every stage starts with a FROM statement, every stage also uses a new base image. The benefit of having several stages is that you can now select individual artefacts from one stage and copy them into the next one. It’s also possible to stop at a certain stage – to deploy debugging functions, for example, or to support different stages for development/debugging and staging/production.
:::
:::globalParagraph
If you don’t specify a stage at which you want to stop during the build process, the Dockerfile will run in its entirety, which results in the Docker image for the production system. Compared to the builder pattern, only one Dockerfile is needed for this and you also don’t need a build script to orchestrate the workflow.
:::</p>
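<p>:::globalParagraph
Stopping at a named stage works via the --target flag of docker build – a quick sketch with hypothetical stage and image names:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-bash"># build only up to the "builder" stage, e.g. for development/debugging
docker build --target builder -t myapp:dev .

# run the whole Dockerfile to get the production image
docker build -t myapp:prod .
</code></pre>
<p>:::</p>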
<p>::GlobalPodcastSection{:videoId="KapP6ouEL3E" :videoPosition="left" :bg="bg-bs-gray" .mb-5}
:::GlobalPreTitle{:color="text-bs-green" .mb-3}
OUR KUBERNETES PODCAST
:::
:::GlobalTitle{:tag="h3" .mb-6}
Tools for the Craft: Navigating the Kubernetes ecosystem
:::
:::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert are talking about the various options developers have for running remote Kubernetes development environments.
:::
:::globalParagraph{:font-size="lg" .mb-4}
More editions of our podcast can be found here:
:::
::::GlobalButton{:url="/podcast/" :label="Yes, more please!" :color="green"}
::::
::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Evaluating different Dockerfiles
:::
:::globalParagraph
Now that we’ve got all this knowledge, let’s do an evaluation of different Dockerfiles. We have written five different Dockerfiles which we have evaluated according to their size. First, let’s have a look at the individual Dockerfiles with the selected optimisations. There are some common features that all Dockerfiles share: the installation of postgresql-client or postgresql-dev, the copying and installation of pip requirements as well as the copying of the application.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Dockerfile 1 - Naive
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">FROM python:3.8

RUN apt-get update
RUN apt-get install -y --no-install-recommends postgresql-client

COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt

COPY src /app

WORKDIR /app
</code></pre>
<p>:::
:::globalParagraph
The first Dockerfile is based on the python:3.8 base image. This means that we’re equipped with all the Debian ‘nuts and bolts’ that we need to work within the container without any real restrictions.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Dockerfile 2 - Naive; Removing the apt lists
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">FROM python:3.8

RUN apt-get update \
    &#x26;&#x26; apt-get install -y --no-install-recommends postgresql-client \
    &#x26;&#x26; rm -rf /var/lib/apt/lists/*

COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt

COPY src /app

WORKDIR /app
</code></pre>
<p>:::
:::globalParagraph
This Dockerfile is identical to the previous one with the exception that the content of the directory /var/lib/apt/lists/ is removed after the required additional package has been installed. After an apt update, package lists are saved in this directory – they are no longer relevant for our Docker image.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Dockerfile 3 - Alpine naive
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">FROM python:3.8-alpine

RUN apk update &#x26;&#x26; apk --no-cache add libpq gcc python3-dev musl-dev linux-headers postgresql-dev

COPY requirements.txt /requirements.txt
RUN pip install -r /requirements.txt

COPY src /app

WORKDIR /app
</code></pre>
<p>:::
:::globalParagraph
Our third Dockerfile is simply based on Alpine Linux. As Alpine is missing some packages needed to build and run the application, we have to install these first.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Dockerfile 4 – Alpine linux; Removing build dependencies
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">FROM python:3.8-alpine

COPY requirements.txt /requirements.txt

RUN apk update &#x26;&#x26; apk add --no-cache --virtual .build-deps gcc python3-dev musl-dev linux-headers postgresql-dev &#x26;&#x26; \
 apk --no-cache add libpq &#x26;&#x26; \
 pip install -r /requirements.txt &#x26;&#x26; \
 apk del .build-deps

COPY src /app
WORKDIR /app
</code></pre>
<p>:::
:::globalParagraph
The next Dockerfile based on Alpine Linux essentially only adds the removal of build dependencies, i.e. additionally installed packages that are only needed for building the image but not for running the application. For this to work in only one command, the COPY command for the pip requirements has to be moved upwards – this allows the pip requirements to be installed between the installation and removal of the build dependencies.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Dockerfile 5 – Alpine linux, multi-stage
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-docker">FROM python:3.8-alpine as base

FROM base as builder

RUN apk update &#x26;&#x26; apk add gcc python3-dev musl-dev linux-headers postgresql-dev
RUN mkdir /install
WORKDIR /install
COPY requirements.txt /requirements.txt
RUN pip install --prefix=/install -r /requirements.txt

FROM base

COPY --from=builder /install /usr/local
COPY src /app
RUN apk --no-cache add libpq
WORKDIR /app
</code></pre>
<p>:::
:::globalParagraph
The last Dockerfile uses the multi-stage pattern. Both stages use the Alpine Linux Python base image. In the first stage, builder, the required build dependencies are installed, the directory /install is used as WORKDIR and as the pip install prefix, and subsequently the pip requirements are copied and installed. The second stage copies the contents of the /install directory from the builder stage into /usr/local, copies the code of the application and installs one more package, libpq, which is needed for running the application.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Evaluation of the resulting sizes
:::
:::globalParagraph
The following table shows the size of the Docker images for our five Dockerfiles:
:::</p>
<p><img src="/img/blog/bildschirmfoto_2.jpg" alt="bildschirmfoto_2">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
The clearest jump in size can be seen with the usage of the Alpine-based base image. While both Debian-based Docker images have a size of at least one gigabyte each, the usage of Alpine Linux cuts this by more than half. The size of the resulting image is then just short of 400 megabytes. With some skilfully combined commands and thus optimised Dockerfiles, we can reduce this size further by more than half and ultimately land at 176 megabytes.
:::
:::globalParagraph
The multi-stage Dockerfile results in around 155 megabytes. Compared to the previously optimised Dockerfile, we haven’t saved that much. The Dockerfile is a bit more elaborate due to the various stages, but it’s also considerably tidier and, as described above, a lot more flexible thanks to the different stages. With this image, we’ve reached just about 15% of the size of the first naive Debian-based image. Even compared to the naive Alpine image, we’ve saved more than 60%.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Our recommendation: Multi-stage Dockerfiles
:::
:::globalParagraph
Our recommendation is definitely the use of multi-stage Dockerfiles. As the evaluation showed quite impressively, they can reduce the resulting image size significantly. So long as the conditions and the application permit it, you should also use an Alpine-based base image if you’re keen on reducing the image size.
:::
:::globalParagraph
We’re not just recommending multi-stage builds because of the resulting image sizes, however – the flexibility thanks to the different stages is another massive advantage in our opinion. We can manage the development process up to the production deployment with only one Dockerfile and don’t have to maintain several Dockerfiles.
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Docker</category>
            <category>Python</category>
            <category>Operations</category>
            <enclosure url="https://blueshoe.io/img/blog/strategies-for-slim-docker-images.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Exploring Telepresence’s Docker Desktop Extension]]></title>
            <link>https://blueshoe.io/blog/telepresences-docker-desktop-extension</link>
            <guid>https://blueshoe.io/blog/telepresences-docker-desktop-extension</guid>
            <pubDate>Wed, 14 Jun 2023 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Robert tried out Telepresence’s Docker Desktop extension and drew comparisons to Blueshoe’s own cloud-native development tool Gefyra. This post is the result of a live video exploring session on the first time use of the extension.</p>
<p><img src="/img/blog/gefyra-telepresence.jpg" alt="my image">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalParagraph
For the past couple of years Blueshoe has been working intensely on an alternative solution to Telepresence - called <a href="/products/">Gefyra</a>{.bs-link-blue}. Since then, the good folks of Ambassador Labs have released Telepresence v2 - a massive rewrite that fixed some issues and added new features, as well as a Docker Desktop Extension.
:::</p>
<p>:::GlobalPodcastSection{:videoId="Sxf0onWzXyo" :bg="bg-white" :videoPosition="left" .mb-6}
::::GlobalParagraph
Since then, Gefyra has also come a long way: we released Gefyra v1 as well as a Docker Desktop Extension. This blog post is a summary of the exploration session video I did at the beginning of May 2023.
::::
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
<strong>Installation of Telepresence Docker Desktop Extension</strong>
:::
:::GlobalParagraph
Docker’s marketplace makes it super easy to install new extensions. Same with Telepresence. Just look it up in the marketplace and hit “Install”.
:::</p>
<p><img src="/img/blog/telepresence.jpg" alt="telepresence">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalParagraph
The screenshot shows an already installed extension state.
:::
:::GlobalParagraph
After the installation has finished the Telepresence extension shows up in Docker Desktop’s sidebar. A nice graphic as well as some explanations are displayed when opening the extension for the first time.
:::</p>
<p><img src="/img/blog/telepresence-2.jpg" alt="telepresence">{.object-cover .max-w-full .mb-6}</p>
<p><img src="/img/blog/telepresence-3.jpg" alt="telepresence">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalParagraph
Oops - you first have to create an account before you can actually get started with the Telepresence Docker Desktop extension. Fair enough - let’s set up an account and continue. The process was a bit flaky: after creating the account, the page redirects to a localhost port which simply does not respond. After trying again, the registration seems to have worked.
:::</p>
<p><img src="/img/blog/telepresence-4.jpg" alt="telepresence">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalParagraph
Back to Docker Desktop!
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
<strong>Simple Intercept with Telepresence on Docker Desktop</strong>
:::
:::GlobalParagraph
After the account setup we can now actually start working with Telepresence’s Docker Desktop extension. The home screen now looks like this:
:::</p>
<p><img src="/img/blog/telepresence-5.jpg" alt="telepresence">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalParagraph
Docker Desktop’s Kubernetes context is preselected. One thing I noticed (also in the video) is that you cannot change your kubeconfig to select different contexts. It seems Telepresence assumes all your relevant clusters are available in your default kubeconfig file.
:::
:::GlobalParagraph
Make sure to tick “Install Telepresence on this cluster” since it - same as Gefyra - needs to install a few components the first time it is used on a cluster.
:::
:::GlobalParagraph
After a short loading time everything is available. In the video I am trying out the demo case and struggle a bit, since other components had been installed in the cluster. I leave it out here for the sake of the reader 🙂
:::
:::GlobalParagraph
On the following screen Telepresence displays the “services” for a given namespace. In case you watch the video - I was a bit confused since my service did not show up here. It turns out Telepresence does not allow you to intercept bare pods - it wants to work with deployments instead.
:::</p>
<p><img src="/img/blog/telepresence-6.jpg" alt="telepresence">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalParagraph
I rewrote my frontend pod’s manifest to be a deployment. It showed up afterwards - let’s intercept! The intercept options are fairly limited, which makes it easier for the user to finish the intercept setup:
:::</p>
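<p>:::GlobalParagraph
For reference, rewriting a bare pod as a deployment mostly means wrapping the pod spec in a template - a minimal sketch with assumed names and image:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:           # the former pod spec goes here
        - name: frontend
          image: frontend:latest
          ports:
            - containerPort: 80
</code></pre>
<p>:::</p>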
<p><img src="/img/blog/telepresence-7.jpg" alt="telepresence">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalParagraph
The ports come prefilled - I only have to choose a “Target docker image”. The dropdown allows me to choose any of the images available on my machine. The image will then be used to start a container which can receive the traffic from Telepresence’s intercept.
:::
:::GlobalParagraph
After setting up the intercept the Docker Desktop Extension displays the log of the container. In the past we did this with Gefyra as well - however, there is a built-in log viewer in Docker Desktop, which basically does the job.
:::</p>
<p><img src="/img/blog/telepresence-8.jpg" alt="telepresence">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalParagraph
Admittedly, the way Telepresence displays the logs allows the user to stay within the extension. However, for longer log output this seems a bit messy. Furthermore, I am not sure (from a user’s perspective) why this is the time and place to manage my subscriptions. Let’s close that overlay.
:::</p>
<p><img src="/img/blog/telepresence-9.jpg" alt="telepresence">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalParagraph
Alright! The traffic is intercepted! And there is even a preview URL - which allows me to share my running container with my team. That’s indeed an amazing feature. Some information about the currently running intercepts is displayed as well.
:::</p>
<p><img src="/img/blog/telepresence-10.jpg" alt="telepresence">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalParagraph
Below the intercept overview are two more buttons to create another intercept. It seems they have a standardized file format to share intercept request parameters within your development team. That’s amazing.
:::
:::GlobalParagraph
However - as a developer I am missing one particular feature here - hot code reloading. In case anyone reading this knows how to do it - feel free to contact us - we will update the blog post asap!
:::
:::GlobalParagraph
With Gefyra’s Docker Desktop Extension you can just mount your code into your running container. This way the code within the container is continuously updated, which is a huge advantage over Telepresence’s implementation of running containers locally. Also, Telepresence’s CLI actually supports intercepting traffic and sending it to a specified process on your machine - its Docker Desktop Extension does not (yet).
:::
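:::GlobalParagraph
For comparison, the CLI variant looks roughly like this - the workload name, ports, and command are assumptions for illustration:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-bash"># intercept the "frontend" deployment and forward its traffic
# to a process listening on local port 8080
telepresence intercept frontend --port 8080:80 -- npm run dev
</code></pre>
<p>:::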
:::GlobalParagraph
After stopping the intercept we are redirected to the home screen:
:::</p>
<p><img src="/img/blog/telepresence-11.jpg" alt="telepresence">{.object-cover .max-w-full .mb-6}</p>
<p>:::GlobalParagraph
One thing I also noticed during the exploration of this Docker Desktop Extension was that it is not open source (8 May 2023). All of Gefyra’s components and extensions are fully open source - this way developers can actually check what happens on their machine and in their clusters.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
<strong>Summary</strong>
:::
:::GlobalParagraph
Telepresence currently implements only a few of its capabilities in its Docker Desktop Extension. I am sure they will upgrade it in the future, especially since they announced a partnership with Docker. The extension brings some value to the user, however some of the really important core features for cloud native software development are missing. We developed Gefyra’s Docker Desktop Extension based on Blueshoe’s needs as a cloud native software development company.
:::
:::GlobalParagraph
We’re super excited to see how Telepresence has evolved and even though <a href="/products/">Gefyra</a>{.bs-link-blue} and Telepresence are competing tools for similar use-cases it gives us a lot of motivation and inspiration to push <a href="/products/">Gefyra</a>{.bs-link-blue} further!
:::</p>
<p>:::GlobalPodcastSection{:videoId="5wNTUUSk1jA" :videoPosition="left" .mb-6}
::::GlobalPreTitle{:color="text-bs-green" .mb-3}
OUR PODCAST: TOOLS FOR THE CRAFT
::::
::::GlobalTitle{:tag="h3" .mb-6}
E3: Deep dive into Getdeck
::::
::::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert are introducing and demonstrating Getdeck as well as comparing local and remote Kubernetes and preproduction clusters
::::
::::globalParagraph{:font-size="lg" }
More editions of our podcast can be found here:
::::
::::GlobalButton{:url="/podcast/" :label="Show more" :color="green"}
::::
:::</p>]]></content:encoded>
            <category>Gefyra</category>
            <category>Team Blueshoe</category>
            <enclosure url="https://blueshoe.io/img/blog/gefyra-telepresence.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Test Kubernetes locally]]></title>
            <link>https://blueshoe.io/blog/test-kubernetes-locally</link>
            <guid>https://blueshoe.io/blog/test-kubernetes-locally</guid>
            <pubDate>Wed, 20 Apr 2022 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Reddit can be a wonderful community, not just for entertainment but also for professional purposes. We regularly skim through r/kubernetes and the level of discussion can be quite enlightening.</p>
<p>A couple of weeks ago we came across the following question:</p>
<p><strong>“How are your developers testing their code locally?”</strong></p>
<p><img src="/img/blog/test-kubernetes-locally.jpg" alt="Test Kubernetes locally">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
Since “local Kubernetes” is kind of “our thing” we of course had to chime in.
:::
:::globalParagraph
In general, “testing code locally” in a Kubernetes infrastructure was a true issue we had to deal with on a daily basis. In the end, we feel the best solution is for developers to use Kubernetes already during the local development process, as that alone solves a bunch of the main issues.
:::</p>
<p><img src="/img/blog/test-kubernetes-locally-1.jpg" alt="Test locally">{.object-cover .w-full .mb-5}</p>
<p>:::globalTitle{:size="lg" .mb-5}
Moments of clarity
:::
:::GlobalBlock{.ol-decimal .mb-5}</p>
<ol>
<li>
<p><strong>We must leverage Kubernetes in development</strong>
::::globalParagraph
Container orchestration platforms like Kubernetes come with compelling features to actually operate the software, so applications have to intertwine with the platform one way or another.
::::
::::globalParagraph
For example, you’d like to have meaningful probes (not just serving an HTTP 200, but mechanisms that really determine the state of the application), use sidecar patterns, or talk to Kubernetes and operators via custom resource definitions.
::::
::::globalParagraph
Only then will Kubernetes be leveraged to its full extent, making software much more reliable and scalable than ever. In addition, we hope that Kubernetes as a development platform will finally change mindsets towards less monolithic applications and support service-oriented thinking.
::::
::::globalParagraph
From our perspective, docker-compose or Docker alone did not lead to that in the past.
::::</p>
</li>
<li>
<p><strong>Let ops be ops and devs be devs</strong>
::::globalParagraph
We observed that creating well-crafted container images is still something developers struggle with.
::::
::::globalParagraph
A really good container image (i.e. secure, as slim as possible, etc.) takes ages to build. That is frustrating and doesn't exactly help the acceptance of this technology, keeping in mind that containers as well as container orchestration are driven by the IT operations side of things.
::::
::::globalParagraph
Of course you can find developers (mostly backend) with a strong affinity for infrastructure (I count containers as part of that), so-called DevOps engineers working at the intersection of code and infrastructure, but this talent is rare.
::::
:::</p>
</li>
</ol>
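<p>The probe idea from point 1 can be sketched in a few lines. The following is a minimal, framework-agnostic Python sketch (the function and check names are made up for illustration): a readiness handler that actually exercises its dependencies instead of blindly returning HTTP 200.</p>

```python
# Sketch of a readiness probe that checks real dependencies instead of
# unconditionally answering HTTP 200. The check callables are placeholders
# for whatever your application needs to verify (database, cache, ...).

def readiness(checks):
    """Run all dependency checks; return an HTTP-style (status, body) tuple."""
    failed = [name for name, check in checks.items() if not check()]
    if failed:
        # 503 makes Kubernetes take the pod out of the Service endpoints
        return 503, "not ready: " + ", ".join(sorted(failed))
    return 200, "ok"

# Example with stubbed checks:
print(readiness({"database": lambda: True, "cache": lambda: True}))   # (200, 'ok')
print(readiness({"database": lambda: False, "cache": lambda: True}))  # (503, 'not ready: database')
```

<p>Wired into a Kubernetes readinessProbe, such an endpoint keeps traffic away from pods whose dependencies are down, which a blanket HTTP 200 cannot do.</p>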
<p>:::globalTitle{:size="lg" .mb-5}
“Just mock all adjacent services” / “Contract testing is enough”
:::</p>
<p>:::globalParagraph
We don’t think that this can be the answer.
:::
:::GlobalBlock{.ul-disk .mb-5}</p>
<ul>
<li>Who is responsible for the mock service implementation?</li>
<li>The neighboring development team writing the actual service?</li>
<li>Or the team depending on that particular service?
:::
:::globalParagraph
We did that in the past and found that mocking services is not enough. Three relevant points:
:::</li>
</ul>
<p>:::GlobalBlock{.ol-decimal .mb-5}</p>
<ol>
<li>
<p><strong>Who is taking care of the mock?</strong>
::::globalParagraph
The mock service is doomed to become outdated. Interfaces develop, data structures evolve over time, and so on. Such mock services usually have their own logic to create mock answers for complex scenarios. Since a mock service is perceived as not really contributing to the usable system, the question of responsibility is difficult to settle within companies.
::::</p>
</li>
<li>
<p><strong>Bugs are rarely just in one service</strong>
::::globalParagraph
Our dev teams quite often found bugs sitting between two or more services. They often depend on rather special data constellations across all participating applications. This is something you can’t really test or hunt down with mock services.
::::</p>
</li>
<li>
<p><strong>What would your devs prefer?</strong>
::::globalParagraph
After all, it’s not much fun for developers to only write against mock services, contracts, and so on. A real-world setup with close-to-production data feels more productive, as you can test your development effort and get instant feedback. From our point of view, this is not negligible.
::::
:::</p>
</li>
</ol>
<p>:::globalParagraph
Still, there are situations where we feel a mock service is unavoidable.
:::
:::globalParagraph
Our dev teams have been using k3d (or another local Kubernetes provider) plus Telepresence 2 for quite some time. However, while Telepresence is a pretty cool tool, we have to admit that we ran into quite a few troubles with it in the past.
:::</p>
<p>:::GlobalPodcastSection{:videoId="EtOCTuwYdE4" :videoPosition="left" .mb-6}</p>
<p>::::GlobalTitle{:tag="h3" .mb-6}
Stop mocking - develop frontends with real K8s setups
::::
::::globalParagraph{:font-size="lg" .mb-4}
During this talk, a showcase will be presented on federating multiple backend GraphQL interfaces into one common interface, which is then consumed by the frontend. The services are orchestrated with Kubernetes running locally on the developer's machine. The frontend comes with a webpack development server and is built with Vue.js.
::::</p>
<p>:::</p>
<p>:::globalParagraph
However, with that approach being somewhat forced onto the dev teams, we left them slightly overwhelmed with k3d + kubectl + Helm + sops + still having to deal with Docker + other tooling.
:::
:::globalParagraph
And that is basically also part of the backstory as to why we started Unikube.
:::
:::globalParagraph
The idea is to expose as little of the complexity as possible to the developer while still being in the driver's seat.
:::
:::globalParagraph
The development of new features, bug fixes, and so on happens locally using the local Docker host, but with the application behaving as if it were running right within the cluster. We see massive advantages in that approach (especially once all of the features are in place).
:::</p>
<p>:::GlobalPodcastSection{:videoId="FazRUwBWPUI" :videoPosition="left" .mb-6}</p>
<p>::::GlobalTitle{:tag="h3" .mb-6}
An alternative to telepresence
::::
::::globalParagraph{:font-size="lg" .mb-4}
We decided to create an alternative to Telepresence 2. Check it out if you like: <a href="https://gefyra.dev">https://gefyra.dev</a>{.bs-link-blue :target="_blank"}
::::
::::globalParagraph{:font-size="lg" }
Also, have a look at my talk at Conf42 about debugging a container with sidecar using Gefyra.
::::
::::GlobalButton{:url="https://gefyra.dev/" :label="Gefyra" :color="blue" target="_blank"}
::::
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Gefyra</category>
            <category>Development</category>
            <enclosure url="https://blueshoe.io/img/blog/test-kubernetes-locally.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Effective error tracking in Django with Sentry]]></title>
            <link>https://blueshoe.io/blog/tracking-errors-django-sentry</link>
            <guid>https://blueshoe.io/blog/tracking-errors-django-sentry</guid>
            <pubDate>Mon, 19 May 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Errors happen in production environments. And the worst part is finding out about them only through your users. This is exactly where Sentry comes into play. One look at the dashboard and you can see what's happening: a clear stack trace, the affected user, and how often the error has already occurred.</p>
<p><img src="/img/blog/sentry_blog_header.webp" alt="Effective error tracking in Django with Sentry"></p>
<h2>Why You Absolutely Need Sentry</h2>
<p>Errors in live environments can become expensive. If you rely on emails from your users, it's often too late. <a href="https://sentry.io">Sentry</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} brings you real-time error messages directly to your dashboard – including user information, stacktrace, and even environment (Staging, Prod, etc.).</p>
<h2>Quick Start: How to Integrate Sentry into Django</h2>
<h3>1. Install SDK</h3>
<pre><code class="language-bash">pip install --upgrade sentry-sdk
</code></pre>
<h3>2. Create a Project on Sentry.io</h3>
<p><img src="/img/blog/sentry-1.png" alt="Sentry: Create Project">
Sign up on sentry.io or log in. Create a new project there and select "Django" as the platform. Sentry will automatically generate a DSN for you, which is a unique URL through which your application sends errors to the right project on Sentry.</p>
<p>Example DSN:</p>
<pre><code class="language-bash">https://examplePublicKey@o0.ingest.sentry.io/0
</code></pre>
<p>Copy this DSN and include it in your Django app in the next step.</p>
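<p>If you are curious what the DSN actually encodes, you can take it apart with Python's standard library. A small sketch, using the example DSN from above:</p>

```python
from urllib.parse import urlsplit

# The DSN is just a URL: scheme://publicKey@host/projectId
dsn = "https://examplePublicKey@o0.ingest.sentry.io/0"
parts = urlsplit(dsn)

public_key = parts.username          # identifies your app to Sentry
host = parts.hostname                # the ingest endpoint events are sent to
project_id = parts.path.lstrip("/")  # the Sentry project that receives them

print(public_key, host, project_id)  # examplePublicKey o0.ingest.sentry.io 0
```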
<h3>3. Integrate into <code>settings.py</code></h3>
<p>Integrate the DSN and the basic settings for Sentry into your project. Three parameters are important here:</p>
<ul>
<li><strong>dsn</strong>: Connects your app to your project on Sentry.</li>
<li><strong>traces_sample_rate</strong>: Indicates what percentage of performance data (e.g., slowness of views) should be collected. At <code>1.0</code>, all data is captured. For production systems, values like <code>0.2</code> are common to reduce data volume and costs.</li>
<li><strong>send_default_pii</strong>: Enables sending personal data (e.g., logged-in users) so you can better track who is affected by an error.</li>
</ul>
<pre><code class="language-python">import os

import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

# These values typically come from environment variables or your settings machinery
SENTRY_DSN = os.getenv("SENTRY_DSN", "")
SENTRY_ENV = os.getenv("SENTRY_ENV", "development")
VERSION = os.getenv("VERSION", "unknown")

sentry_sdk.init(
    dsn=SENTRY_DSN,
    traces_sample_rate=0.1,
    profiles_sample_rate=0.1,
    environment=SENTRY_ENV,
    default_integrations=False,
    release=VERSION,
    integrations=[
        DjangoIntegration()
    ],
    send_default_pii=True
)
</code></pre>
<h3>Separating Environments</h3>
<p>Especially in larger projects, you want to be able to distinguish whether an error occurs in your development, staging, or production environment. Otherwise, there's a risk that you might mistake error messages generated during tests for live problems.</p>
<p>The environment specification in the Sentry setup helps you separate this cleanly:</p>
<pre><code class="language-python">ENVIRONMENT = os.getenv("ENVIRONMENT", "development")
</code></pre>
<p>You can set this value through environment variables per server. In the Sentry dashboard, you can later filter by environment or create targeted alerts only for production errors.</p>
<h2>Manual Logging</h2>
<p>To test your first error, you can also manually call Sentry. This is particularly helpful when you want to capture an error in a specific context. For example, if you want to capture an error in a particular function or for a specific user.</p>
<pre><code class="language-python">try:
    ...
except Exception as e:
    sentry_sdk.capture_exception(e)
</code></pre>
<p>With that, the setup of Sentry in Django is complete. You can now monitor errors in your app and get notified immediately when something goes wrong. Great! Now a few tips on how to use Sentry optimally.</p>
<p>::GlobalCallAnExpert{:bg="bg-bs-blue"}
#buttonBlockTitle
We can also improve error tracking in your Django app
::</p>
<h2>Best Practices for Clean Error Monitoring</h2>
<p>Once set up, you can get the most out of Sentry with a few tips:</p>
<h3>1. Consistently Use Environments</h3>
<p>Organize all errors cleanly by <code>production</code>, <code>staging</code>, or <code>development</code>. This way, you can specifically search for problems in the live system – and maintain an overview during tests.</p>
<h3>2. Use Release Tags</h3>
<p>If you regularly deploy new versions, you should set <code>release="myapp@1.2.3"</code>. This helps you recognize if an error has only been occurring since a certain version. Practical for regressions or hotfixes.</p>
<h3>3. Activate User Context</h3>
<p>Sentry can tell you which logged-in user was affected. This helps tremendously with understanding and debugging.</p>
<pre><code class="language-python">sentry_sdk.set_user({"email": user.email})
</code></pre>
<h3>4. Specifically Ignore Errors</h3>
<p>Not every exception is critical. You can create rules in Sentry to filter out 404 errors or specific warnings, for example. This keeps your alert feed organized.</p>
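<p>Besides filter rules in the Sentry UI, you can also drop events client-side with the SDK's <code>before_send</code> hook. A hedged sketch (the ignored exception types are just examples):</p>

```python
# Sketch of a client-side filter for sentry_sdk.init(before_send=...).
# Returning None drops the event before it leaves your application.
IGNORED = (KeyboardInterrupt, ConnectionResetError)  # example choices

def before_send(event, hint):
    if "exc_info" in hint:
        exc_type, exc_value, traceback = hint["exc_info"]
        if issubclass(exc_type, IGNORED):
            return None  # drop noisy, non-actionable errors
    return event

# Wired up in settings.py:
# sentry_sdk.init(dsn=SENTRY_DSN, before_send=before_send, ...)
```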
<h3>5. Define Alert Rules</h3>
<p>Set alerts, e.g., for more than 10 errors in 5 minutes or if a specific environment is affected. These rules can be triggered via Slack, email, or webhook.</p>
<h2>Understanding the Sentry Dashboard</h2>
<p>The dashboard is your central entry point. It immediately shows you:</p>
<ul>
<li>New and frequent errors</li>
<li>Which releases and environments are affected</li>
<li>The timeline and trends of errors</li>
</ul>
<p>Here you get an overview of all errors that have occurred in your app. You can filter them by severity, frequency, and environment.</p>
<p><img src="/img/blog/sentry-2.png" alt="Sentry Dashboard Example"></p>
<hr>
<h2>Errors and Issues in Detail</h2>
<p>When you click on an error in the dashboard, you get a detailed overview:</p>
<ul>
<li>Stacktrace with code lines</li>
<li>Request context (headers, URL, etc.)</li>
<li>User data (if activated)</li>
<li>Link to the affected release</li>
</ul>
<p>Sentry automatically groups identical errors so you don't have to manually check each individual incident.
<img src="/img/blog/sentry-3.png" alt="Sentry Error List">
Here you can see at a glance: What is critical? What was last seen? How many users are affected?</p>
<hr>
<h2>Conclusion</h2>
<p>With Sentry, you're always immediately informed in case of an error – and can act before it escalates. Setting it up is worthwhile, not only in production but also in staging &#x26; dev environments.</p>]]></content:encoded>
            <category>Django</category>
            <category>Development</category>
            <category>Operations</category>
            <category>Security</category>
            <enclosure url="https://blueshoe.io/img/blog/sentry_blog_header.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Varnish - The Secret Hero of Website Performance with Kubernetes]]></title>
            <link>https://blueshoe.io/blog/varnish-website-performance-with-kubernetes</link>
            <guid>https://blueshoe.io/blog/varnish-website-performance-with-kubernetes</guid>
            <pubDate>Mon, 28 Jul 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>In modern web applications, performance isn't just a feature - it's essential. Users expect fast, responsive experiences, and slow load times can lead to high bounce rates and lost engagement. This is where Varnish Cache comes in. By placing it in front of your web application, you can serve cached content at lightning speed, dramatically reducing the load on your backend. This article guides you through setting up Varnish in a Kubernetes environment to make your website significantly faster.</p>
<p><img src="/img/blog/varnishk8s.svg" alt="Varnish Kubernetes"></p>
<p>::GlobalBlogLevelInfo</p>
<ul>
<li>Kubernetes Basics</li>
<li>Docker Basics</li>
<li><a href="https://en.wikipedia.org/wiki/Web_cache">HTTP Caching</a>{target="_blank"} Basics</li>
<li><a href="https://varnish-cache.org/">Varnish HTTP Cache</a>{target="_blank"}</li>
</ul>
<p>If you have any questions or if anything is unclear, you can use the comment function below the article.
::</p>
<h2>Why Varnish Transforms Your Website</h2>
<p>Performance is everything on the web. When your users have to wait, they bounce. Varnish elegantly solves this problem: As an HTTP cache, it sits in front of your application and delivers content lightning-fast from storage. The result? Up to 90% less load on your backend and significantly faster loading times.</p>
<h2>Let's Get Started: What You'll Need</h2>
<p>Before we begin, here are the four files you'll need for a simple setup:</p>
<ul>
<li><code>default.vcl</code>: Your Varnish configuration</li>
<li><code>Dockerfile</code>: To build your Varnish image including VCL</li>
<li><code>varnish.yaml</code>: Deployment and Service for Kubernetes</li>
<li><code>Ingress.yaml</code>: Optional, to make your website accessible via a domain name</li>
</ul>
<h3>Understanding and Adapting the VCL File</h3>
<p>Let's first look at the individual parts of the <code>default.vcl</code> in detail:</p>
<pre><code class="language-vcl">backend default {
    .host = "website-svc";
    .port = "3001";
    .first_byte_timeout = 300s;
}
</code></pre>
<p>The first block defines your backend. <code>website-svc</code> is the name of the Kubernetes service that Varnish will access. The port <code>3001</code> is the port of your application (e.g., Django) which is accessible within the cluster. With <code>first_byte_timeout</code> you set how long Varnish waits for the first response from the backend.</p>
<pre><code class="language-vcl">sub vcl_recv {
    if (req.method == "GET" &#x26;&#x26; req.http.Cookie !~ "sessionid") {
        unset req.http.Cookie;
    }
}
</code></pre>
<p><code>vcl_recv</code> is called for every incoming request. Here you decide whether cookies are removed. In this case, Varnish removes all cookies from GET requests that do not contain a <code>sessionid</code>. This makes caching more effective because different cookie values would otherwise create different cache objects.</p>
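<p>Expressed outside of VCL, the decision is easy to follow. A Python sketch of the same logic (header and cookie names mirror the VCL above):</p>

```python
# Python sketch of the vcl_recv logic above: strip cookies from GET
# requests that carry no session cookie, so the responses become cacheable.
def strip_cookies(method, headers):
    cookie = headers.get("Cookie", "")
    if method == "GET" and "sessionid" not in cookie:
        headers = dict(headers)       # copy so the caller's dict is untouched
        headers.pop("Cookie", None)
    return headers

print(strip_cookies("GET", {"Cookie": "theme=dark"}))        # {} -- cacheable now
print(strip_cookies("GET", {"Cookie": "sessionid=abc123"}))  # cookie kept
```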
<pre><code class="language-vcl">sub vcl_backend_response {
    if (bereq.url ~ "^/de-de/aktuelles/.*") {
        set beresp.ttl = 2h;
    } else {
        set beresp.ttl = 12h;
    }
}
</code></pre>
<p><code>vcl_backend_response</code> is called when Varnish has queried the backend and received a response. Here you define how long content should be cached. News pages get 2 hours, everything else remains in the cache for 12 hours.</p>
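<p>The TTL rules boil down to a first-match lookup. Here is the same decision as a Python sketch (URL pattern and TTLs taken from the VCL above):</p>

```python
import re

# First matching pattern wins; the fallback TTL applies otherwise.
TTL_RULES = [
    (re.compile(r"^/de-de/aktuelles/.*"), 2 * 3600),  # news pages: 2 hours
]
DEFAULT_TTL = 12 * 3600  # everything else: 12 hours

def ttl_for(url):
    for pattern, ttl in TTL_RULES:
        if pattern.match(url):
            return ttl
    return DEFAULT_TTL

print(ttl_for("/de-de/aktuelles/artikel-1"))  # 7200
print(ttl_for("/en/products"))                # 43200
```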
<h2>Advanced VCL techniques for practical use</h2>
<p><strong>Grace Mode &#x26; Saint Mode (bridging backend failures):</strong><br>
With grace, Varnish can keep delivering stale content while the backend is unavailable, which greatly increases reliability. (The Varnish 3 directive <code>beresp.saintmode</code> no longer exists; in Varnish 4+ you abandon broken backend responses and rely on grace instead.)</p>
<pre><code class="language-vcl">sub vcl_backend_response {
    if (beresp.status == 500 || beresp.status == 503) {
        // Abandon the broken response; with grace set, Varnish can keep
        // serving a stale copy of the object instead
        return (abandon);
    }
    set beresp.grace = 2h; // Allow content to be served up to 2 hours beyond its TTL
}
</code></pre>
<p><strong>Adjust cache keys:</strong><br>
Manipulate the cache key to ignore unimportant query parameters, for example, and thus increase the hit rate.</p>
<pre><code class="language-vcl">sub vcl_hash {
    hash_data(req.http.host);
    // Only consider the 'id' and 'lang' query parameters for the cache key
    if (req.url ~ "\?") {
        hash_data(regsub(req.url, "^([^?]*)\?.*?((?:id|lang)=[^&#x26;]+).*$", "\1?\2"));
    } else {
        hash_data(req.url);
    }
    // A custom vcl_hash must return (lookup); otherwise the built-in
    // vcl_hash runs as well and adds the full URL to the key again
    return (lookup);
}
</code></pre>
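<p>In other words, the cache key keeps a whitelist of query parameters. A Python sketch of the same normalization with the standard library (parameter names as in the VCL above):</p>

```python
from urllib.parse import urlsplit, parse_qsl, urlencode

KEEP = {"id", "lang"}  # whitelisted query parameters, as in the VCL above

def normalize_cache_key(url):
    """Drop every query parameter that is not whitelisted."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k in KEEP]
    query = urlencode(kept)
    return parts.path + ("?" + query if query else "")

# Tracking parameters no longer fragment the cache:
url = "/page?" + urlencode({"id": "7", "utm_source": "news", "lang": "de"})
print(normalize_cache_key(url))
```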
<p><strong>Secure purging via API:</strong>
Set up a secure HTTP endpoint for targeted cache purging.</p>
<pre><code class="language-vcl">acl purge {
    "localhost";
    "192.168.1.0"/24; // Only allow purging from this network
}
sub vcl_recv {
    if (req.method == "PURGE") {
        if (!client.ip ~ purge) {
            return (synth(405, "Not allowed."));
        }
        return (purge); // Varnish 4+: removes the object and sends a synthetic response
    }
}
</code></pre>
<h3>Dockerfile: Embedding VCL into the Image</h3>
<pre><code class="language-dockerfile">FROM varnish:stable
COPY default.vcl /etc/varnish/
</code></pre>
<p>This ensures that your VCL is directly included in the container.</p>
<p>Next, you can build and tag your image with:</p>
<pre><code class="language-bash">docker build -t varnish:latest .
</code></pre>
<h3>Deployment and Service Definition</h3>
<p>The Deployment ensures that your Varnish Pod runs permanently and that Kubernetes automatically restarts it when it crashes. In our example, only one instance runs (replicas: 1), but for production environments you can of course run multiple replicas.</p>
<pre><code class="language-yaml">apiVersion: apps/v1
kind: Deployment
metadata:
  name: varnish
spec:
  replicas: 1
  selector:
    matchLabels:
      app: varnish
  template:
    metadata:
      labels:
        app: varnish
    spec:
      containers:
      - name: varnish
        image: varnish:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: varnish
spec:
  selector:
    app: varnish
  ports:
    - port: 80
      targetPort: 80
</code></pre>
<p>The associated Service makes your Pod accessible in the cluster. It connects the Ingress or other Pods with your Varnish container. It's important that targetPort and containerPort match.</p>
<p><strong>Super!</strong> Now your Pod is reachable in the cluster.</p>
<h2>Ingress Configuration for Domain Routing</h2>
<p>The Ingress is the last link in the chain. It ensures that requests from the internet reach your cluster and are forwarded to the correct service, in this case your Varnish. This allows you to centrally manage domains and TLS certificates.</p>
<p>A simple example looks like this:</p>
<pre><code class="language-yaml">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: varnish-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  rules:
  - host: "example.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: varnish
            port:
              number: 80
</code></pre>
<p>Make sure that the service name and port exactly match your previously defined Service. This way, all web traffic is processed through Varnish and your application benefits from caching.</p>
<h2>Monitoring and Debugging</h2>
<p>Here are a few practical commands to monitor your cache and Varnish server:</p>
<pre><code class="language-bash">varnishstat
</code></pre>
<p>Shows live cache statistics, for example hits and misses.</p>
<pre><code class="language-bash">varnishlog
</code></pre>
<p>Shows exactly why requests were cached or not.</p>
<pre><code class="language-bash">varnishadm ban "req.url ~ .*"
</code></pre>
<p>This clears the cache and forces new content.</p>
<pre><code class="language-bash">varnishadm ping
</code></pre>
<p>Checks if your Varnish admin is reachable.</p>
<pre><code class="language-bash">varnishadm status
</code></pre>
<p>Shows the current status of your Varnish process.</p>
<h2>Best Practices for Optimal Caching</h2>
<h3>1. Define Cache Strategies</h3>
<ul>
<li>Cache static content longer (CSS, JS, images)</li>
<li>Cache dynamic content shorter</li>
<li>Handle session-based content individually</li>
</ul>
<h3>2. Set Up Performance Monitoring</h3>
<ul>
<li>Regularly check cache hit rates</li>
<li>Monitor storage utilization</li>
<li>Monitor backend health</li>
</ul>
<h3>3. Plan Cache Invalidation</h3>
<ul>
<li>Automated processes for content updates</li>
<li>Targeted invalidation instead of complete clearing</li>
<li>Health checks for backend servers</li>
</ul>
<h2>Varnish in comparison: Choosing the right caching strategy</h2>
<p>Varnish is extremely powerful, but it is not always the only solution. How does it compare to other caching mechanisms?</p>
<p>| Caching solution      | Strengths                                                                 | Weaknesses                                                        | Ideal for...                                                                                   |
| :------------------ | :--------------------------------------------------------------------- | :--------------------------------------------------------------- | :---------------------------------------------------------------------------------------------- |
| <strong>Varnish Cache</strong>   | <strong>Maximum flexibility</strong> thanks to VCL; caching of entire HTTP objects; features such as Grace/Saint Mode. | <strong>Greater complexity</strong> than simple caches; no native TLS support (requires a proxy in front of it). | ...complex websites (e.g., e-commerce, news portals) with dynamic content that require fine-grained cache control. |
| <strong>Nginx Caching</strong>   | <strong>Simple configuration</strong>; integrated directly into the web server; very high performance; can terminate TLS. | <strong>Less flexible</strong> cache logic than Varnish; primarily for static assets and simple responses. | ...simpler websites and applications where an uncomplicated cache for static files and API responses is sufficient. |
| <strong>CDN (e.g., Cloudflare)</strong> | <strong>Globally distributed</strong> (low latency worldwide); protection against DDoS attacks; easy setup. | <strong>More expensive</strong>; less control over cache invalidation; "black box." | ...globally operating websites that want to benefit from a distributed cache and additional security features. |</p>
<h2>Frequently asked questions (FAQ)</h2>
<h3>How can I check whether a page is being loaded from the Varnish cache?</h3>
<p>Check the HTTP headers of the response in your browser's developer tools. Varnish typically adds headers such as <code>X-Varnish</code> (internal transaction ID) and <code>Age</code> (how long the object has been in the cache). An <code>Age</code> value greater than 0 is a sure sign of a cache hit.</p>
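<p>This check is easy to script. A small Python sketch that classifies a response from its headers (header semantics as described above):</p>

```python
# Classify a response from its headers: an Age greater than 0 means the
# object was served from the cache, as described above.
def classify(headers):
    if "X-Varnish" not in headers:
        return "not served by Varnish"
    age = int(headers.get("Age", "0"))
    return "cache hit" if age > 0 else "cache miss (fetched from backend)"

print(classify({"X-Varnish": "32770 32768", "Age": "120"}))  # cache hit
print(classify({"X-Varnish": "32771", "Age": "0"}))          # cache miss (fetched from backend)
print(classify({"Server": "nginx"}))                         # not served by Varnish
```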
<h3>Why is my cache hit rate low?</h3>
<p>The most common reasons are:</p>
<ol>
<li><strong>Cookies:</strong> By default, Varnish does not cache requests with cookies. Adjust your <code>vcl_recv</code> to remove unnecessary cookies.</li>
<li><strong><code>Set-Cookie</code> header from the backend:</strong> If your backend sends a <code>Set-Cookie</code> header, the response will not be cached.</li>
<li><strong><code>Cache-Control</code> header:</strong> Watch out for <code>Cache-Control: private</code> or <code>max-age=0</code> from the backend.</li>
</ol>
<h3>How do I protect the Varnish admin port in Kubernetes?</h3>
<p>The admin port should never be accessible externally. Make sure that your Kubernetes service for Varnish only opens the HTTP port (e.g., 80). Access to <code>varnishadm</code> should only be via <code>kubectl exec</code> in the pod.</p>
<h3>How much RAM or memory do I need for Varnish?</h3>
<p>That depends heavily on your usage profile: ideally, the cache is large enough to hold your hot data set in RAM. Between 1 GB and 16 GB is a common starting point, combined with SSD storage if you use a disk-backed storage backend.</p>
<h3>Can Varnish cache without query parameters?</h3>
<p>Yes, you can remove query parameters (e.g., using RegEx in VCL) to cache only one object:</p>
<pre><code class="language-vcl">sub vcl_recv {
  set req.url = regsub(req.url, "\?.*", "");
}
</code></pre>
<h3>How can custom error pages be configured?</h3>
<p>Using <code>vcl_synth</code> (the successor of Varnish 3's <code>vcl_error</code>), you can create your own HTML pages with a synthetic response. Example:</p>
<pre><code class="language-vcl">sub vcl_synth {
  set resp.http.Content-Type = "text/html; charset=utf-8";
  synthetic({"&#x3C;html>…&#x3C;/html>"});
  return (deliver);
}
</code></pre>
<h2>Conclusion</h2>
<p>Now you're completely done. Your setup from <code>default.vcl</code> to Ingress is complete, and Varnish now ensures your site loads significantly faster. With the right configuration, Varnish becomes the secret hero of your website performance.</p>
<p>:GlobalButton{:url="https://varnish-cache.org" :label="Learn more about Varnish" :target="_blank" :color="blue" .mb-6}</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Varnish</category>
            <category>Development</category>
            <category>Operations</category>
            <category>Performance</category>
            <enclosure url="https://blueshoe.io/img/blog/varnishk8s.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Virtualized Kubernetes: comparing vcluster and Getdeck]]></title>
            <link>https://blueshoe.io/blog/virtualized-kubernetes-comparing-vcluster-getdeck</link>
            <guid>https://blueshoe.io/blog/virtualized-kubernetes-comparing-vcluster-getdeck</guid>
            <pubDate>Thu, 24 Aug 2023 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Virtualization lets multiple OS instances run on one system and has evolved considerably over time. In the Kubernetes world, running Kubernetes within Kubernetes is emerging as a new standard, changing how clusters are managed and how resources are used. It offers better isolation, simpler development, and more efficient resource usage. We'll explore two key tools for this: <code>vcluster</code> by Loft Labs and <code>Getdeck</code> by Blueshoe.</p>
<p><img src="/img/blog/kubernetes-vcluster-getdeck.jpg" alt="Virtualized Kubernetes: vcluster and Getdeck">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalTitle{:size="lg" .mb-5}
What is virtualized Kubernetes?
:::
:::globalParagraph
At its core, the idea is simple: running a logical or "virtual" Kubernetes cluster within a physical or "host" Kubernetes cluster. This concept brings a new layer of flexibility, allowing developers to spin up isolated environments quickly without the overhead of managing multiple physical clusters. With this, developers can replicate production environments, experiment safely, and optimize applications for the Kubernetes ecosystem.
:::
:::globalParagraph
There are two major benefits to this approach. First, you get the possibility to run ephemeral Kubernetes clusters, i.e. K8s clusters with a short lifespan, which you can create and delete much more quickly than regular clusters. And second, the resources of the shared host cluster are used much more efficiently when running, for example, 5 virtual clusters instead of 5 regular K8s clusters.
:::
:::globalParagraph
Both should lead to lower operating costs, as you can now easily spin up clusters when you need them and delete them when you don’t need them anymore. This comes in handy in development setups where you don’t work on the same cluster every day. It is also advantageous for automated testing against K8s clusters, or for QA and staging systems that don’t need to run 24/7 but, for example, only when new features are being tested.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
vcluster: a closer look
:::
:::globalParagraph
Vcluster is the most visible player when it comes to tools for Kubernetes-in-Kubernetes. In a nutshell, it works by setting up virtual Kubernetes clusters inside namespaces of a host Kubernetes cluster. To deploy and manage virtual clusters, a CLI <em>vcluster</em> is used. Let’s look at some key points.
:::
:::globalParagraph
<strong>Workload Execution:</strong> Crucially, the workloads in vcluster are executed on the host cluster. The pods of a virtual cluster are synced to and executed in the host cluster.
:::
:::globalParagraph
<strong>Isolation:</strong> While each vcluster has its distinct control plane ensuring Kubernetes-level isolation, the underlying nodes and resources are shared from the host cluster.
:::
:::globalParagraph
<strong>Multitenancy:</strong> Designed with multi-tenancy in mind, vcluster is perfect for scenarios where distinct teams or projects need their own isolated Kubernetes environments without the overhead of separate physical clusters.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Getdeck: a new player for K8s-in-K8s
:::
:::globalParagraph
Much like vcluster, <a href="/products">Getdeck</a>{.bs-link-blue} provides a Kubernetes-in-Kubernetes approach. However, there are nuances that set it apart. Virtual K8s clusters are created by running k3s server and agent nodes inside namespaces of a host cluster. To deploy and manage virtual clusters, a CLI, <em>beibootctl</em>, is available.
:::
:::globalParagraph
<strong>Workload Execution:</strong>  <a href="/products">Getdeck's</a>{.bs-link-blue} workloads run inside the virtual clusters. This provides deeper isolation, allowing workloads to operate independently of the host cluster.
:::
:::globalParagraph
<strong>Quick Provisioning &#x26; Flexibility:</strong> Getdeck is built for speed, provisioning Kubernetes clusters in under one minute.
:::
:::globalParagraph
<strong>Snapshot &#x26; Restore:</strong> The ability to fully snapshot and restore cluster states offers a unique blend of persistence and elasticity. This allows you to delete a cluster when it is not needed and recreate it later, exactly in the state it was in when it was deleted.
:::
:::globalParagraph
<strong>Automatic cluster management:</strong> Getdeck can automatically delete clusters at a specified time, or also dynamically by setting usage-timeouts.
:::</p>
<p>:::GlobalPodcastSection{:videoId="5wNTUUSk1jA" :videoPosition="left" .mb-6}
::::GlobalPreTitle{:color="text-bs-green" .mb-3}
Our Kubernetes podcast
::::
::::GlobalTitle{:tag="h3" .mb-6}
TftC E3: Deep dive into Getdeck
::::
::::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert introduce and demonstrate Getdeck, and compare local and remote Kubernetes preproduction clusters.
::::
::::globalParagraph{:font-size="lg" }
More editions of our podcast can be found here:
::::
::::GlobalButton{:url="/podcast/" :label="Show more" :color="green"}
::::
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Comparing vcluster and Getdeck
:::
:::globalParagraph
While both vcluster and <a href="/products">Getdeck</a>{.bs-link-blue} allow for the virtualization of Kubernetes clusters within a host Kubernetes cluster, their operation, design philosophy, and target audience exhibit noteworthy differences.
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
Workload Execution and Isolation
:::
:::globalParagraph
<strong>vcluster:</strong> The workloads of the virtual clusters are executed on the host cluster, leading to a tighter integration with the host. This might offer better overall resource utilization, as the resources are effectively shared. However, it also means that the boundaries between the virtual and host clusters are more porous.
:::
:::globalParagraph
<strong>Getdeck:</strong> Emphasizes deep isolation by running the workloads inside the virtual clusters. This results in a working environment that feels more akin to managing standalone Kubernetes instances.
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
Development &#x26; Management Tools
:::
:::globalParagraph
<strong>vcluster:</strong> Primarily uses its CLI, vcluster, to manage and deploy virtual clusters.
:::
:::globalParagraph
<strong>Getdeck:</strong> Provides a more diverse toolkit, with both a CLI, beibootctl, and a dedicated desktop app for cluster management. This offers a more versatile user experience: the CLI enables diverse configuration and management options, while the GUI provides a low-barrier entry point to manage and interact with virtual clusters.
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
Flexibility &#x26; Persistence
:::
:::globalParagraph
<strong>vcluster:</strong> Offers standard Kubernetes-in-Kubernetes functionalities, emphasizing multi-tenancy and isolation.
:::
:::globalParagraph
<strong>Getdeck:</strong> Pushes the envelope with features like fast provisioning (in under a minute), the ability to snapshot and restore cluster states, and automated cluster management. These functionalities lean heavily into the ephemeral nature of development, testing, and staging environments, maximizing the efficient use of resources.
:::</p>
<p>:::globalTitle{:size="md" :tag="h3" .mb-5}
Use Cases
:::
:::globalParagraph
<strong>vcluster:</strong> Its design philosophy revolves around providing isolated Kubernetes environments for different teams or projects. Its core strengths lie in scenarios demanding multi-tenancy without the overhead of multiple physical clusters.
:::
:::globalParagraph
<strong>Getdeck:</strong> With its emphasis on speed, flexibility, and deeper isolation, Getdeck seems tailor-made for dynamic development and staging setups where clusters' lifespan varies, and resources need to be optimized. The snapshot &#x26; restore functionality further endorses its use in scenarios where persistence across sessions is crucial.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
Conclusion
:::
:::globalParagraph
The rise of virtualized Kubernetes, signified by tools like vcluster and Getdeck, showcases the ever-evolving nature of container orchestration and its alignment with modern development needs. While both tools have their distinct advantages, the choice between them should be influenced by specific use cases, required levels of isolation, and the importance of features like snapshotting and automatic cluster management.
:::
:::globalParagraph
Vcluster, being a prominent player in the space, offers solid Kubernetes-in-Kubernetes functionalities with a focus on multi-tenancy. On the other hand, Getdeck, a newer entrant, brings in innovative features aimed at optimizing resource usage and ensuring deeper workload isolation. As with most technology choices, understanding your requirements and mapping them to the features of these tools is key to making an informed decision.
:::
:::globalParagraph
For those interested in a hands-on experience with Getdeck without initial setup complexities, there's the free Getdeck as a Service available.
:::</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Team Blueshoe</category>
            <enclosure url="https://blueshoe.io/img/blog/kubernetes-vcluster-getdeck.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Visual Regression with Lost Pixel and Gitlab]]></title>
            <link>https://blueshoe.io/blog/visual-regression-in-gitlab-with-lost-pixel</link>
            <guid>https://blueshoe.io/blog/visual-regression-in-gitlab-with-lost-pixel</guid>
            <pubDate>Fri, 07 Mar 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>When changing CSS rules, side effects are often noticed too late. Visual regression tests help to find them. Lost Pixel is a popular, simple, and very capable tool for this type of test. It comes with GitHub Actions support, but its integration into GitLab is not entirely straightforward. We'll show how to use Lost Pixel and GitLab CI/CD together.</p>
<p><img src="/img/blog/lost_pixel.svg" alt="Visual Regression with Lost Pixel and Gitlab">{.object-cover .max-w-full .mb-5}</p>
<p>::GlobalTitle{:size="lg" .mb-5}
What is Visual Regression Testing?
::
::GlobalParagraph
Visual Regression Testing enables the identification of visual changes based on a previously established baseline. These changes sometimes include regressions, which can then be recognized quickly and cost-effectively. Modern applications often contain a lot of JavaScript and CSS. Changes to the source code, or even simple updates, can lead to side effects, and Visual Regression Testing is well suited to uncovering them.
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
How does Visual Regression Testing work?
::
::GlobalParagraph
The procedure for VRT tools is always similar:
::
::GlobalBlock{.ol-decimal .mb-4}</p>
<ol>
<li>A set of components or pages is defined to be tested.</li>
<li>Based on this, a baseline is generated.</li>
<li>During code changes and updates, screenshots are taken and compared with the baseline.
::</li>
</ol>
<p>::GlobalParagraph
Often, testing tools work with a threshold. This means the deviation from the baseline has a tolerance - absolute or relative. For example, one could specify that 10 pixels or 1% deviation from the baseline is acceptable.
::</p>
<p>::GlobalParagraph
Why is that? Website rendering differs slightly between browsers and operating systems, so a certain tolerance is helpful.
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Lost Pixel in Practice
::
::GlobalParagraph
<a href="https://www.lost-pixel.com/">Lost Pixel</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} is a cloud-based tool from the company nineLemon. The simplest integration into a CI pipeline can be realized with GitHub Actions. The Lost Pixel cloud application takes over the management of the baseline, so the baseline screenshots do not need to be checked into the repository.
::</p>
<p><img src="/img/blog/lost-pixel-1.png" alt="Lost Pixel front page">{.mx-auto .max-w-full}</p>
<p>::GlobalParagraph{.italic .mb-4 .text-center}
(Source: <a href="https://lost-pixel.com">https://lost-pixel.com</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid})
::</p>
<p>::GlobalParagraph
By integrating the provided Github Action, screenshots of the defined pages/components are automatically created and compared with the baseline. The Lost Pixel Cloud Application offers excellent views for the before-and-after comparison.
::</p>
<p><img src="/img/blog/lost-pixel-2.png" alt="Lost Pixel front page 2">{.mx-auto .max-w-full}</p>
<p>::GlobalParagraph{.italic .mb-4 .text-center}
(Source: <a href="https://lost-pixel.com">https://lost-pixel.com</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} – left: after, right: before)
::</p>
<p>::GlobalParagraph
Via the simple user interface, changes to the baseline can be accepted or rejected. If rejected, the corresponding pull request on GitHub is marked as "failed".
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Lost Pixel with Gitlab
::
::GlobalParagraph
GitHub Actions is the de facto “first-class citizen” integration of Lost Pixel into CI pipelines. In this section, we show how we use Lost Pixel with GitLab.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Creating the Baseline for Visual Regression Testing
::</p>
<p>::GlobalParagraph
First, the baseline is created. For this, the <code>lostpixel.config.js</code> must be set up:
::</p>
<p>::BlogCode{.mb-4}</p>
<pre><code class="language-js">module.exports = {
  pageShots: {
    pages: [
      { path: '/pattern-library/render-pattern/cms/blocks/patterns/text/text.html', name: 'text', threshold: 0.01 },
    ],
    breakpoints: [320, 640, 1024, 1200, 1920],
    baseUrl: 'http://localhost:8000',
  },
  waitBeforeScreenshot: 2000,
  waitForLastRequest: 5000,
  failOnDifference: true,
  shotConcurrency: 1,
  generateOnly: true,
}
</code></pre>
<p>::</p>
<p>::GlobalParagraph
This configuration uses "Page Shots", which takes only full-page screenshots of the corresponding URLs. A detailed description of the settings <a href="https://docs.lost-pixel.com/user-docs/setup/project-configuration">can be found here</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}. The <code>breakpoints</code> parameter allows checking several screen resolutions at once.
::</p>
<p>::GlobalParagraph
To generate and compare the baseline, we use Docker Images to minimize differences through operating systems or browsers. The baseline will be created as follows:
::</p>
<p>::BlogCode{.mb-4}</p>
<pre><code class="language-bash">docker run --rm -v $PWD:$PWD -e WORKSPACE=$PWD -e DOCKER=1 -e LOST_PIXEL_DISABLE_TELEMETRY=0 -e LOST_PIXEL_MODE=update --network="host" lostpixel/lost-pixel:v3.22.0
</code></pre>
<p>::</p>
<p>::GlobalParagraph
The baseline is thus newly created and stored locally. Since the Lost Pixel cloud platform cannot be used with GitLab, these files must be checked into the Git repository.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Installation of Lost Pixel on the Gitlab Runner
::</p>
<p>::GlobalParagraph
First, it must be ensured that the Lost Pixel CLI is available in the pipeline:
::</p>
<p>::BlogCode{.mb-4}</p>
<pre><code class="language-bash">apt-get update
apt-get install -y nodejs npm curl
npm i -g lost-pixel
npx playwright@1.47.2 install --with-deps chromium
</code></pre>
<p>::</p>
<p>::GlobalParagraph
Playwright is installed in exactly the version needed for Lost Pixel.
::</p>
<p>::GlobalParagraph
Playwright is a modern, Microsoft-developed end-to-end test framework used for testing web applications. It enables automated UI tests in multiple browsers.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Visual Regression Testing in the CI Pipeline
::</p>
<p>::GlobalParagraph
Performing the comparison is now remarkably simple. We start our application (in our case a Django App) and run the comparison command:
::</p>
<p>::BlogCode{.mb-4}</p>
<pre><code class="language-bash">python /app/die/manage.py serve --static &#x26;
npx lost-pixel local
</code></pre>
<p>::</p>
<p>::GlobalParagraph
We keep the screenshots available as temporary artifacts so that differences can be examined and failed jobs investigated easily:
::</p>
<p>::BlogCode{.mb-4}</p>
<pre><code class="language-yaml"># gitlab-ci.yaml
artifacts:
  paths:
    - .lostpixel/difference/*
    - .lostpixel/current/*
  expire_in: 1 week
  when: always
</code></pre>
<p>::</p>
<p>::GlobalParagraph
Here's an excerpt from a possible <code>gitlab-ci.yaml</code>. In this case, we're running Django applications with a temporarily available, local Postgres database:
::</p>
<p>::BlogCode{.mb-4}</p>
<pre><code class="language-yaml"># gitlab-ci.yaml

stages:
  - lint
  - build
  - test
  - release
  - deploy

# [...]

visual_regression:
  stage: test
  image:
    name: ${TEST_IMAGE_NAME}
    docker:
      user: root
  services:
    - postgres:17-alpine
  before_script:
    - apt-get update
    - apt-get install -y nodejs npm curl
    - python die/manage.py migrate &#x26;&#x26; npm i -g lost-pixel
    - npx playwright@1.47.2 install --with-deps chromium
  script:
    - python /app/die/manage.py serve --static &#x26;
    - npx lost-pixel local
  variables:
  # [...]
  artifacts:
    paths:
      - .lostpixel/difference/*
      - .lostpixel/current/*
    expire_in: 1 week
    when: always
</code></pre>
<p>::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Analysis of the Visual Regression Test
::
::GlobalParagraph
If the job fails, the results can now be easily retrieved from the sidebar in GitLab:
::</p>
<p><img src="/img/blog/lost-pixel-3.png" alt="Lost Pixel results">{.mx-auto}</p>
<p>::GlobalParagraph
With a click on <em>Search</em>, the folder structure of the artifacts can be searched:
::</p>
<p><img src="/img/blog/lost-pixel-4.png" alt="Lost Pixel folders">{.mx-auto}</p>
<p>::GlobalCallAnExpert{:bg="bg-bs-blue"}
#buttonBlockTitle
We can also set up visual regression tests for your app.
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
What alternative tools are there for Visual Regression Testing?
::
::GlobalParagraph
Besides Lost Pixel, the tool we primarily use at Blueshoe for automated frontend testing, there are also some alternatives:
::
::GlobalParagraph
<strong>Cloud-based Tools</strong>
::
::GlobalBlocks{.ul-disk .mb-4}</p>
<ul>
<li><a href="https://applitools.com/platform/eyes/">Applitools Eyes</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} – AI-supported visual tests with integration in many test frameworks</li>
<li><a href="https://www.browserstack.com/percy">Percy (by BrowserStack)</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} – Automated UI Screenshots with GitHub Integration</li>
<li><a href="https://www.lambdatest.com/visual-regression-testing">Visual AI by LambdaTest</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} – Cloud-based visual regression tests</li>
<li><a href="https://www.chromatic.com/">Chromatic</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} – Specifically for Storybook-based UI tests
::</li>
</ul>
<p>::GlobalParagraph
<strong>Open-Source &#x26; Self-hosted Tools</strong>
::
::GlobalBlocks{.ul-disk .mb-8}</p>
<ul>
<li><a href="https://github.com/garris/BackstopJS">BackstopJS</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} – Headless Browser-based visual regression tests</li>
<li><a href="https://github.com/bbc/wraith">Wraith</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} – Open-source tool developed by the BBC for screenshot comparison</li>
<li><a href="https://rsmbl.github.io/Resemble.js/">Resemble.js</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} – JavaScript library for pixel-precise image comparisons</li>
<li><a href="https://github.com/mapbox/pixelmatch">Pixelmatch</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} – Lightweight image comparison library for visual regression
::</li>
</ul>
<p>::GlobalTitle{:size="lg" .mb-5}
Conclusion
::
::GlobalParagraph{.mb-4}
In today's agile development environment, Visual Regression Testing is a crucial component of quality assurance. It helps to detect unintended UI changes early and ensures that new features or bug fixes do not destroy existing design elements.
::
::GlobalParagraph{.mb-4}
By using modern tools like Lost Pixel, teams can efficiently integrate visual tests into their CI/CD pipelines and maintain the consistency and user-friendliness of their applications. Especially in the era of complex web applications and responsive design, reliable visual testing is a real game-changer.
::
::GlobalParagraph
Ultimately, Visual Regression Testing not only saves time and costs for manual UI reviews but also contributes to ensuring a perfect user experience – and that across all devices and browsers. 🚀
::</p>
<p>:::GlobalTitle{:size="lg" .mb-5 .mt-8}
Frequently Asked Questions
:::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
1. What is Lost Pixel and why should I use it with GitLab?
::
::GlobalParagraph
Lost Pixel is an open-source tool for visual regression testing. In GitLab pipelines, it helps detect unexpected UI changes early and prevents faulty builds by stopping the pipeline if visual differences are found.
::
::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
2. How can I prevent irrelevant changes from failing visual tests?
::
::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li>Use <code>--diff-threshold</code> to ignore minor visual differences</li>
<li>Apply <code>excludeSelectors</code> to ignore dynamic elements like timestamps</li>
<li>Ensure consistent screenshot conditions such as viewport size and theme
::</li>
</ul>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
3. How can I integrate Lost Pixel with GitLab Merge Requests?
::
::GlobalParagraph
Lost Pixel can be configured to generate visual diffs for each merge request and attach them as comments. This requires a GitLab CI/CD setup or an external bot integration.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
4. How do I ensure only manually approved changes update the visual baseline?
::
::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li>Create a dedicated <code>baseline</code> branch to store reference screenshots</li>
<li>Use a CI/CD job that only merges updated screenshots into the <code>baseline</code> branch after manual review
::</li>
</ul>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
5. Why does Lost Pixel randomly fail in GitLab pipelines?
::</p>
<p>::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li>Unstable screenshots are often caused by random UI elements or animations.</li>
<li>Use the <code>--wait</code> parameter to ensure all UI components are fully rendered before taking screenshots.
::</li>
</ul>]]></content:encoded>
            <category>Development</category>
            <category>Operations</category>
            <category>Digitalization</category>
            <enclosure url="https://blueshoe.io/img/blog/lost_pixel.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[StrongSwan VPN in Kubernetes: Securely Integrate External Services]]></title>
            <link>https://blueshoe.io/blog/vpn-kubernetes-strongswan</link>
            <guid>https://blueshoe.io/blog/vpn-kubernetes-strongswan</guid>
            <pubDate>Tue, 06 May 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>In some cases, a VPN connection to an external service is needed - which can be tricky with Kubernetes. In this article, we show how to set up an IPsec tunnel with StrongSwan from Kubernetes to an external service, using Nginx as a reverse proxy. The setup is clearly structured, easily maintainable, and dynamically distinguishes between staging and production environments.</p>
<p><img src="/img/blog/kubernetes-strongswan.svg" alt="Securely Building an IPsec-VPN Tunnel from Kubernetes"></p>
<p>::GlobalBlogLevelInfo</p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/IPsec">IPSec</a>{:target="_blank"} and VPN</li>
<li><a href="https://strongswan.org/">Strongswan</a>{:target="_blank"}</li>
<li><a href="https://en.wikipedia.org/wiki/Internet_Key_Exchange">Internet Key Exchange (IKE)</a>{:target="_blank"}</li>
</ul>
<p>If you have any questions or if anything is unclear, you can use the comment function below the article.
::</p>
<h2>1. Problem Statement</h2>
<p>Sometimes an important database is not in the cluster itself, but behind a firewall on an external server. Access is only possible via a VPN - for example via IPsec with a WatchGuard Appliance. The challenge: How can applications in the cluster reliably and securely reach this service?</p>
<h2>2. Target Vision</h2>
<p>The goal was to create a lean and maintainable solution that:</p>
<ul>
<li>Can access fixed defined external services via VPN from within the cluster,</li>
<li>Clearly distinguishes between Staging and Production access,</li>
<li>Remains flexibly configurable,</li>
<li>Works with Kubernetes standards (Probes, Services, ConfigMaps etc.).</li>
</ul>
<p><img src="/img/blog/strongswan-k8s.png" alt="The Strongswan Kubernetes Setup"></p>
<h2>3. Solution Approach</h2>
<p>We deployed a dedicated pod with Strongswan and Nginx:</p>
<ul>
<li>Strongswan handles building the IPsec tunnel to the external VPN endpoint.</li>
<li>Nginx functions as a TCP proxy and forwards requests from the cluster to the external service.</li>
<li>Depending on the environment (Staging or Production), Nginx is configured accordingly.</li>
</ul>
<p>Within the cluster, other services can easily access Nginx via <code>ClusterIP</code>, which then transfers the connection over the VPN.</p>
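<p>Wiring this up is a standard <code>ClusterIP</code> Service. The following is a sketch: the pod from section 6 would additionally need a matching label (here <code>app: vpn-client</code>, an assumption):</p>

```yaml
# Sketch: Service in front of the VPN proxy pod
apiVersion: v1
kind: Service
metadata:
  name: vpn-client
  namespace: vpn
spec:
  selector:
    app: vpn-client        # assumed label on the vpn-client pod
  ports:
    - name: ext-db-prod
      port: 8080
      targetPort: 8080
      protocol: TCP
```

<p>Other pods can then reach the external database at <code>vpn-client.vpn.svc.cluster.local:8080</code>.</p>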
<h2>4. Implementation in Kubernetes</h2>
<h3>a) Dockerfile</h3>
<p>Nginx and Strongswan run side by side in the container. A startup script decides which configuration files to use based on the <code>ENVIRONMENT</code> variable.</p>
<pre><code class="language-dockerfile">FROM nginx:1.27.3-alpine
RUN apk add --no-cache strongswan netcat-openbsd
...
CMD ["/scripts/startup.sh"]
</code></pre>
<h3>b) Nginx Configuration</h3>
<p>For production, there is a single TCP forwarding (e.g. on port 8080), while the staging environment has two (8080 and 8081, for production and staging respectively).</p>
<pre><code class="language-nginx">stream {
  upstream ext-db-production {
    server NGINX_EXTDB_IP_PRODUCTION:NGINX_EXTDB_PORT_PRODUCTION;
  }
  server {
    listen 8080;
    proxy_pass ext-db-production;
  }
  ...
}
</code></pre>
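<p>The staging variant could then look like this (a sketch following the same placeholder convention; the staging upstream name and its placeholders are assumptions):</p>

```nginx
stream {
  upstream ext-db-production {
    server NGINX_EXTDB_IP_PRODUCTION:NGINX_EXTDB_PORT_PRODUCTION;
  }
  upstream ext-db-staging {
    server NGINX_EXTDB_IP_STAGING:NGINX_EXTDB_PORT_STAGING;
  }
  server {
    listen 8080;
    proxy_pass ext-db-production;
  }
  server {
    listen 8081;
    proxy_pass ext-db-staging;
  }
}
```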
<p>::GlobalCallAnExpert{:bg="bg-bs-blue"}
#buttonBlockTitle
We can also connect your Kubernetes apps via VPN.
::</p>
<h3>c) VPN Setup with Strongswan</h3>
<p>The tunnel is configured dynamically via <code>swanctl.conf</code> and environment variables. This allows IP addresses, PSK and subnets to be imported via a Helm template.</p>
<pre><code class="language-bash">connections {
   k8s-ext-db {
      remote_addrs = SWANCTL_CONF_REMOTE_ADDRS

      local {
         auth = psk
         id = SWANCTL_CONF_LOCAL_ID
      }
      remote {
         auth = psk
      }
      children {
         net-net {
            local_ts  = SWANCTL_CONF_LOCAL_TS
            remote_ts = SWANCTL_CONF_REMOTE_TS
            esp_proposals = SWANCTL_CONF_ESP_PROPOSALS
            ...
         }
      }
      version = 2
      proposals = SWANCTL_CONF_PROPOSALS
      ...
   }
}

secrets {
   ike-k8s-nors {
      id-k8s = SWANCTL_CONF_LOCAL_ID
      secret = SWANCTL_CONF_SECRET_PSK_TOKEN
   }
}
</code></pre>
<p>The capitalised variables in all config files are replaced by the startup script with values from the environment variables. This allows the pod to be configured dynamically for different environments. The values could of course also be specified directly.</p>
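<p>The substitution step itself can be sketched in a few lines of shell. This illustrates the approach only and is not the original startup script; the function name and the demo file are made up:</p>

```shell
#!/bin/sh
# Replace UPPER_CASE placeholders in a config file with the values of the
# equally named environment variables. The '|' sed delimiter assumes the
# values contain no '|' characters.
render_config() {
  file="$1"; shift
  for var in "$@"; do
    value=$(printenv "$var")
    sed -i "s|$var|$value|g" "$file"
  done
}

# demo on a throwaway template
printf 'remote_addrs = SWANCTL_CONF_REMOTE_ADDRS\n' > /tmp/swanctl.conf.demo
export SWANCTL_CONF_REMOTE_ADDRS="203.0.113.10"
render_config /tmp/swanctl.conf.demo SWANCTL_CONF_REMOTE_ADDRS
cat /tmp/swanctl.conf.demo
```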
<p>The startup script then only performs the following commands to establish the VPN connection:</p>
<pre><code class="language-bash">swanctl --load-all
ipsec up k8s-ext-db
</code></pre>
<h3>d) Health Probes</h3>
<p>A shell script checks regularly:</p>
<ul>
<li>the VPN target IP is reachable: <code>nc -z -w 1 $PROBE_STRONGSWAN_IP $PROBE_STRONGSWAN_PORT</code></li>
<li>the internal Nginx serves a <code>/healthz</code> route: <code>curl -f -LI $PROBE_NGINX_IP:$PROBE_NGINX_PORT/healthz</code></li>
</ul>
<p>The exit code will only return 0 if both conditions are met. This makes it easy to monitor the pods in Kubernetes and restart them if something hangs.</p>
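<p>The probe's "both checks must pass" control flow can be sketched as follows. The two network calls are stubbed out as shell functions here so the logic is visible without a running tunnel; the real script would execute the nc and curl commands shown above:</p>

```shell
#!/bin/sh
# Sketch of probe.sh's control flow: success only if BOTH checks pass.
probe() {
  check_vpn   || return 1   # stands in for: nc -z -w 1 $IP $PORT
  check_nginx || return 1   # stands in for: curl -f -LI .../healthz
  return 0
}

# demo: VPN target reachable, but nginx down -> probe reports unhealthy
check_vpn()   { true; }
check_nginx() { false; }
probe && echo healthy || echo unhealthy
```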
<h2>5. Special Features</h2>
<ul>
<li>The pod requires the <code>NET_ADMIN</code> capability to set Strongswan routing tables.</li>
<li>Production and staging differ not only in target systems but also in the number of connected external services.</li>
<li>The entire tunnel runs in a dedicated service - other pods in the cluster do not need to know about it.</li>
</ul>
<h2>6. Pod Manifest</h2>
<p>Here is an example of the K8s pod description:</p>
<pre><code class="language-yaml">apiVersion: v1
kind: Pod
metadata:
  name: vpn-client
  namespace: vpn
spec:
  containers:
  - name: vpn
    image: your-vpn-image:1.0.0
    imagePullPolicy: Always
    envFrom:
    - configMapRef:
        name: vpn-client-configmap
    ports:
    - containerPort: 8080
      name: ext-db-prod
      protocol: TCP
    livenessProbe:
      exec:
        command:
        - /scripts/probe.sh
      failureThreshold: 3
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 2
    readinessProbe:
      exec:
        command:
        - /scripts/probe.sh
      failureThreshold: 1
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 2
    startupProbe:
      exec:
        command:
        - /scripts/probe.sh
      failureThreshold: 40
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 2
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
</code></pre>
<h2>7. Conclusion</h2>
<p>The combination of Strongswan and Nginx is a robust, lightweight option to connect external services over VPN in Kubernetes. The strict separation by environments, flexible Helm templates, and Kubernetes probes make the solution production-ready - without overhead through additional tools.</p>
<h2>8. Frequently Asked Questions</h2>
<h3>1. Why do I need a VPN in Kubernetes?</h3>
<p>A VPN enables secure connections between your cluster and external systems, such as databases or legacy systems, that are not publicly accessible. It is particularly sensible when sensitive data needs to be transferred or direct IP communication is required.</p>
<h3>2. What is Strongswan and why is it suitable for Kubernetes?</h3>
<p>Strongswan is an established open-source VPN solution for IPsec-based tunnels. It is lightweight, reliable, and due to its modularity can be well integrated into containerized environments like Kubernetes.</p>
<h3>3. How do I integrate Strongswan into Kubernetes?</h3>
<p>Strongswan can be operated as a dedicated deployment in a pod. Through an adapted network configuration and iptables routing, the traffic is guided through the VPN. NGINX or other proxies handle the routing to the target system over the VPN.</p>
<h3>4. Are there Helm Charts for Strongswan in Kubernetes?</h3>
<p>No, there are no official Helm Charts. Most setups use custom deployments with Docker images, configuration files, and init scripts. This can be complex but enables maximum flexibility in VPN setup.</p>
<h3>5. What are the alternatives to Strongswan?</h3>
<p>Depending on requirements, you can also use WireGuard (modern, fast), OpenVPN (proven, but complex) or VPN services from cloud providers like AWS Site-to-Site VPN or Azure VPN Gateway.</p>]]></content:encoded>
            <category>Kubernetes</category>
            <category>Development</category>
            <category>Operations</category>
            <category>Security</category>
            <enclosure url="https://blueshoe.io/img/blog/kubernetes-strongswan.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Vue Vapor: Goodbye Virtual DOM, Hello Performance!]]></title>
            <link>https://blueshoe.io/blog/vue-vapor-performance-without-virtual-dom</link>
            <guid>https://blueshoe.io/blog/vue-vapor-performance-without-virtual-dom</guid>
            <pubDate>Fri, 09 May 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Load times, complex component structures and unnecessary overhead — the Virtual DOM is reaching its limits. With Vue Vapor comes the answer: maximum performance with minimal JavaScript payload — whether in a classic SPA or with Server-Side Rendering (SSR).</p>
<p>In this article, we explain how Vue Vapor works, what advantages it offers compared to the previous Vue.js 3 Virtual DOM, and what developers should pay attention to in order to unleash the full performance potential. Additionally, we compare Vue Vapor vs. Vue 3, and take a look at modern concepts like Partial Hydration and server-driven UI.</p>
<p><img src="/img/blog/vuevapor.svg" alt="Vue Vapor: Bye Virtual DOM, Hello Performance!"></p>
<h2>Introduction: A New Era for Vue</h2>
<p>With <strong>Vue Vapor</strong>, the Vue-Core team is starting a new chapter in the history of the popular framework. Instead of relying on the Virtual DOM as before, Vue Vapor takes a radically different approach: It focuses on <strong>Compile-Time Optimizations</strong> and completely <strong>DOM-based rendering</strong> — all without diffing, patching or VDOM reconciliation.</p>
<p>The goal? <strong>Maximum Performance</strong> with minimal overhead.</p>
<h2>Why Vue Throws the Virtual DOM Overboard</h2>
<p>The Virtual DOM was long a clever compromise: It enabled a declarative programming style without directly working with the real DOM. However, this approach also brings disadvantages:</p>
<ul>
<li><strong>Performance Costs</strong> when comparing VDOM and real DOM (diffing)</li>
<li><strong>Additional Memory Consumption</strong> through Virtual Node structures</li>
<li><strong>Complexity</strong> in larger component structures</li>
</ul>
<p>With increasing requirements for load time, energy efficiency (e.g. on mobile devices) and interactivity (e.g. in E-Commerce or SaaS), it became clear: The VDOM is no longer the fastest way to the goal.</p>
<h2>What is Vue Vapor?</h2>
<p>Vue Vapor is a new <strong>Rendering Engine</strong> for Vue.js, working completely <strong>VDOM-free</strong>. It is based on an intelligent Compile-Time approach that analyzes the code during the build and generates exactly the minimally necessary DOM code. Instead of general "diffable" components, it creates precise, specialized output.</p>
<p><strong>Key Features of Vue Vapor:</strong></p>
<ul>
<li>No Virtual DOM ➜ no Diffing operations at runtime</li>
<li>Statically analyzed &#x26; compiled DOM code</li>
<li>Minimal JavaScript overhead</li>
<li>Automatic optimizations for SSR, CSR &#x26; Hydration</li>
<li>Integration with modern patterns like <strong>Partial Hydration</strong> &#x26; <strong>Server-driven UI</strong></li>
</ul>
<h2>How Does This Work in Practice?</h2>
<p>Instead of generating a generic render function, the Vue-Vapor compiler analyzes the component already <strong>during the build</strong>. In doing so, reactivity, conditional render paths, loops and even event handlers are optimized and "hard-wired" into DOM manipulations.</p>
<h3>Example: Button Component with Vue Vapor</h3>
<pre><code class="language-vue">&#x3C;!-- traditional Vue Component -->
&#x3C;template>
  &#x3C;button @click="count++">Clicked {{ count }} times&#x3C;/button>
&#x3C;/template>

&#x3C;script setup lang="ts">
import { ref } from 'vue'

const count = ref(0)
&#x3C;/script>
</code></pre>
<p>With Vue 3, this results in a render function with VDOM diffing.
With <strong>Vue Vapor</strong>, however, directly generated DOM code is created. That means only the <code>textContent</code> of this DOM node must be updated when the value changes, and nothing more.</p>
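<p>Conceptually, the compiled output binds the reactive value straight to one DOM node. The following sketch simulates that behavior (it is an illustration, not actual compiler output: the <code>signal</code> helper and the plain-object "node" stand in for Vue's reactivity system and the real DOM):</p>
<pre><code class="language-typescript">// Stand-ins for illustration: a plain object as "DOM node",
// a tiny subscription-based signal as the reactivity system.
type TextNode = { textContent: string };

function signal(initial: number) {
  let value = initial;
  const subscribers: (() => void)[] = [];
  return {
    get: () => value,
    set: (next: number) => {
      value = next;
      subscribers.forEach((fn) => fn());
    },
    subscribe: (fn: () => void) => {
      subscribers.push(fn);
      fn(); // run once for the initial render
    },
  };
}

// A Vapor-style component boils down to: create the node once, then
// update only its textContent when the value changes - no diffing.
function mountCounter(node: TextNode) {
  const count = signal(0);
  count.subscribe(() => {
    node.textContent = `Clicked ${count.get()} times`;
  });
  return { click: () => count.set(count.get() + 1) };
}
</code></pre>
<p>Each <code>click()</code> mutates a single text node directly; there is no virtual tree to build or reconcile.</p>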
<h2>Performance Comparison: Vue 3 vs. Vue Vapor</h2>
<p>While Vue 3 relies on the proven Virtual DOM, Vue Vapor generates optimized DOM code during the build instead of performing diff-based updates at runtime. This reduces JavaScript overhead and makes rendering particularly efficient for complex or dynamic components.</p>
<p>The result: benefits in initial rendering, hydration, and memory usage.</p>
<blockquote>
<p>First tests and experience reports from the community point to a significantly better performance, especially in SSR context and for mobile applications. Official, reproducible benchmarks from the Vue Core team are still pending.</p>
</blockquote>
<h2>What Developers Need to Consider Now</h2>
<p>Vue Vapor is <strong>opt-in</strong>. You decide whether to activate it – currently still experimental, but stable enough for tests and new projects. Prerequisites:</p>
<ul>
<li>Vue ≥ 3.5 (Vapor Preview)</li>
<li><code>@vue/compiler-vapor</code> as additional dependency</li>
<li>Components must be <strong>Single File Components (SFCs)</strong></li>
<li>No dynamic components or <code>$refs</code> on root level</li>
<li>SSR/CSR-Hybrid projects should focus on Partial Hydration</li>
</ul>
<p>🚩 Important: Some features like <code>v-html</code>, dynamic components, or non-deterministic behavior (e.g. <code>Math.random()</code>) are not compatible with Vapor – at least not without workarounds.</p>
<h2>Modern Concepts: Partial Hydration &#x26; Server-driven UI</h2>
<p>::GlobalCallAnExpert{:bg="bg-bs-blue"}
#buttonBlockTitle
We can improve your Vue and Nuxt apps, too.
::</p>
<h3>Partial Hydration</h3>
<p>Vue Vapor perfectly integrates with <strong>Partial Hydration</strong>. Here, only the interactive part of a page is hydrated – the rest remains static. This saves loading time and prevents unnecessary JavaScript execution.</p>
<h3>Server-driven UI</h3>
<p>With server-side control of UI elements (e.g. via JSON payloads or ViewModel structures), Vue Vapor becomes the ideal foundation for headless-optimized frontends. You render what is needed – and nothing else.</p>
<hr>
<h2>Conclusion: Why Vue Vapor is a Real Game Changer</h2>
<p>Vue Vapor shows how modern frontend can work without VDOM – faster, leaner, more intelligent. For performance-critical applications (e.g. E-Commerce, Dashboards, PWAs) it is a <strong>highly promising future technology</strong>.</p>
<p><strong>Our Tip:</strong> Test it now, learn, and prepare for the post-VDOM future. The difference is not just measurable – it's perceptible.</p>
<hr>
<h2>FAQ – Frequently Asked Questions about Vue Vapor</h2>
<h3>What is Vue Vapor?</h3>
<p><strong>Vue Vapor</strong> is a novel <strong>Rendering Engine</strong> from the Vue ecosystem that completely eliminates the <strong>Virtual DOM</strong>. Instead, it uses a <strong>Compiler-first Approach</strong> that directly generates optimized DOM code – for maximum <strong>Performance</strong> and minimal overhead.</p>
<h3>Why does Vue Vapor Abandon the Virtual DOM?</h3>
<p>The classic <strong>Virtual DOM</strong> creates additional computational effort during <strong>DOM Diffing</strong>. <strong>Vue Vapor</strong> eliminates this step by generating the most efficient DOM code at build time. This makes rendering significantly faster, especially for large or interactive applications.</p>
<h3>How Does Vue Vapor Differ from Vue 3?</h3>
<p><strong>Vue 3</strong> continues to use the Virtual DOM. <strong>Vue Vapor</strong>, however, replaces this with a <strong>statically analyzed rendering strategy</strong> that no longer requires diffing-based comparisons. The result: <strong>Lower Memory Usage</strong> and a noticeable <strong>Performance Boost</strong> for typical Vue applications.</p>
<h3>Is Vue Vapor Already Suitable for Production Projects?</h3>
<p>Currently, <strong>Vue Vapor</strong> is still in the experimental stage. It primarily targets developers who want to evaluate <strong>Cutting-Edge Technologies</strong>. For stable productive environments, <strong>Vue 3</strong> remains the recommended standard.</p>
<h3>Can I Use Vue Vapor with Nuxt?</h3>
<p><strong>Not yet.</strong> Currently, <strong>Nuxt</strong> is based entirely on <strong>Vue 3</strong>. Medium-term integration of Vue Vapor is conceivable – especially once there's a stable version and official support from the Nuxt team.</p>
<h3>What Alternatives Exist to Vue Vapor?</h3>
<p>Similar approaches include <strong>React Server Components</strong>, <strong>SvelteKit</strong>, or <strong>Qwik</strong>. All aim for better <strong>Frontend Performance</strong> through reduced overhead and more intelligent <strong>Hydration Strategies</strong> – however, with different architectures and philosophies.</p>]]></content:encoded>
            <category>Vue.js</category>
            <category>Nuxt</category>
            <category>Development</category>
            <category>Performance</category>
            <enclosure url="https://blueshoe.io/img/blog/vuevapor.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Avoiding Prop Drilling in Vue.js - How to Make It Work]]></title>
            <link>https://blueshoe.io/blog/vuejs-avoiding-prop-drilling</link>
            <guid>https://blueshoe.io/blog/vuejs-avoiding-prop-drilling</guid>
            <pubDate>Tue, 15 Apr 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Sharing data between components is part of everyday life in Vue.js – but what sounds simple quickly becomes messy. The reason: prop drilling. Here, you pass props through multiple components even if they are only used at the very end. This works – but it is anything but elegant.
If you want to avoid prop drilling in Vue, you're in the right place. We'll show you best practices and tools like <code>provide/inject</code> and State Management with Pinia to keep your app clean, maintainable, and scalable.</p>
<p><img src="/img/blog/propsdrilling_pinia.svg" alt="Prop Drilling in Vue: How to Avoid Chaos in Code">{.object-cover .max-w-full .mb-5}</p>
<p>::GlobalTitle{:size="lg" .mb-5}
What is Prop Drilling in Vue.js?
::</p>
<p>::GlobalParagraph
Prop Drilling means that data from an overarching component (Parent) is passed through multiple intermediate components to an underlying component (Child).
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Example of Prop Drilling
::</p>
<p>::GlobalParagraph
Assume an underlying component needs information from an overarching component:
::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-vue">&#x3C;!-- App.vue -->
&#x3C;script setup>
import { ref } from 'vue';
import ParentComponent from './ParentComponent.vue';

const username = ref('Max');
&#x3C;/script>
&#x3C;template>
  &#x3C;ParentComponent :username="username" />
&#x3C;/template>
</code></pre>
<p>::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-vue">&#x3C;!-- ParentComponent.vue -->
&#x3C;script setup>
import ChildComponent from './ChildComponent.vue';

defineProps(['username']);
&#x3C;/script>

&#x3C;template>
  &#x3C;ChildComponent :username="username" />
&#x3C;/template>
</code></pre>
<p>::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-vue">&#x3C;!-- ChildComponent.vue -->
&#x3C;script setup>
defineProps(['username']);
&#x3C;/script>

&#x3C;template>
  &#x3C;p>Hello, {{ username }}!&#x3C;/p>
&#x3C;/template>
</code></pre>
<p>::</p>
<p>::GlobalParagraph
Here, <code>username</code> is passed from <code>App.vue</code> via <code>ParentComponent.vue</code> to <code>ChildComponent.vue</code> - even if <code>ParentComponent</code> does not need the value at all.
::</p>
<p>:hr{.mb-5}</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Why should you avoid prop drilling in Vue?
::</p>
<p>::GlobalParagraph
It might still work for small projects - but you'll regret it when your code grows. Here are the biggest pain points when prop drilling in Vue.js:
::</p>
<p>::GlobalParagraph
❌ <strong>Unclear code</strong>: If a prop is passed through many components, it can be difficult to understand the data flow.
::</p>
<p>::GlobalParagraph
❌  <strong>High maintenance effort</strong>: If a prop changes, all components involved must be adapted.
::</p>
<p>::GlobalParagraph
❌ <strong>Low reusability</strong>: Components become unnecessarily dependent on each other as they have to pass on data that they do not use themselves.
::</p>
<p>::GlobalParagraph
If you're looking for <strong>Vue.js best practices</strong>, there's no way around it: <strong>avoid prop drilling</strong> wherever possible.
::</p>
<p>::GlobalCallAnExpert{:bg="bg-bs-blue"}
#buttonBlockTitle
We can also improve the state management of your Vue.js app.
::</p>
<p>:hr{.mb-5}</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Optimising Vue.js data flow - alternatives to prop drilling
::</p>
<p>::GlobalParagraph
Fortunately, Vue offers various solutions for passing on data efficiently without having to loop it through every intermediate component.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
1. <code>provide</code> and <code>inject</code> in Vue.js - The built-in solution
::</p>
<p>::GlobalParagraph
With <code>provide</code> and <code>inject</code>, Vue offers a native option for passing on data to deeper components.
::</p>
<p>::GlobalParagraph
<strong>Example using <code>provide</code> and <code>inject</code></strong>
::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-vue">&#x3C;!-- App.vue -->
&#x3C;script setup>
import { provide, ref } from 'vue';
import ChildComponent from './ChildComponent.vue';

const username = ref('Max');
provide('username', username);
&#x3C;/script>

&#x3C;template>
  &#x3C;ChildComponent />
&#x3C;/template>
</code></pre>
<p>::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-vue">&#x3C;!-- ChildComponent.vue -->
&#x3C;script setup>
import { inject } from 'vue';

const username = inject('username');
&#x3C;/script>

&#x3C;template>
  &#x3C;p>Hello, {{ username }}!&#x3C;/p>
&#x3C;/template>
</code></pre>
<p>::</p>
<p>::GlobalParagraph
<strong>Advantages of <code>provide</code> and <code>inject</code>:</strong>
::</p>
<p>::GlobalParagraph
✅ No unnecessary props in intermediate components
::</p>
<p>::GlobalParagraph
✅ Better structure and readability
::</p>
<p>::GlobalParagraph
✅ Ideal for themes, localisation, form handling
::</p>
<p>::GlobalParagraph
<strong>Disadvantages:</strong>
::</p>
<p>::GlobalParagraph
⚠️ Only works within a component tree
::</p>
<p>::GlobalParagraph
⚠️ Data flow is less obvious
::</p>
<p>::GlobalParagraph
Nevertheless, it's a great solution for small to medium-sized projects to avoid prop drilling in Vue.
::</p>
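<p>::GlobalParagraph
Under the hood, <code>inject</code> resolves a key by walking up the component tree until a matching <code>provide</code> is found. The following framework-free sketch simulates that lookup – the <code>Comp</code> structure is illustrative and not Vue's internal representation:
::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-typescript">// Each "component" keeps its provided values and a reference to its parent.
type Comp = { parent: Comp | null; provided: { [key: string]: unknown } };

function provide(comp: Comp, key: string, value: unknown): void {
  comp.provided[key] = value;
}

// Walk up the ancestor chain until the key is found. Intermediate
// components never have to declare or forward a prop.
function inject(comp: Comp, key: string): unknown {
  for (let current: Comp | null = comp; current !== null; current = current.parent) {
    if (key in current.provided) {
      return current.provided[key];
    }
  }
  return undefined;
}
</code></pre>
<p>::</p>
<p>::GlobalParagraph
This lookup also explains the trade-off: nothing in the intermediate components reveals that a value is passing through them.
::</p>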
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
2. State Management with Pinia
::</p>
<p>::GlobalParagraph{.mb-5}
For larger applications, a <strong>state management tool</strong> such as <a href="https://pinia.vuejs.org/">Pinia</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} is an even better alternative.
::</p>
<p>::GlobalParagraph
<strong>A Pinia example</strong>
::</p>
<p>::GlobalParagraph
Pinia must first be installed:
::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-bash">yarn add pinia
# or
npm install pinia
</code></pre>
<p>::</p>
<p>::GlobalParagraph
A store is then created:
::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-typescript">// stores/user.ts
import { defineStore } from 'pinia';
import { ref } from 'vue';

export const useUserStore = defineStore('user', () => {
  const username = ref&#x3C;string>('Max');

  return {
    username,
  };
});
</code></pre>
<p>::</p>
<p>::GlobalParagraph
The store can now be used in every component:
::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-vue">&#x3C;!-- ChildComponent.vue -->
&#x3C;script setup>
import { useUserStore } from '@/stores/user';

const userStore = useUserStore();
&#x3C;/script>

&#x3C;template>
  &#x3C;p>Hello, {{ userStore.username }}!&#x3C;/p>
&#x3C;/template>
</code></pre>
<p>::</p>
<p>::GlobalParagraph
<strong>Advantages of Pinia:</strong>
::</p>
<p>::GlobalParagraph
✅ Centralised management of all states
::</p>
<p>::GlobalParagraph
✅ No prop drilling
::</p>
<p>::GlobalParagraph{.mb-5}
✅ Perfect for large, complex Vue projects
::</p>
<p>::GlobalParagraph
With Pinia you can optimise your Vue.js data flow and finally decouple and scale the code.
::</p>
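<p>::GlobalParagraph
The decoupling rests on one simple idea: the state lives once at module level, and every caller of the store function receives the same instance. A minimal, framework-free simulation of that pattern – not Pinia itself, and without reactivity – looks like this:
::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-typescript">// The state is created exactly once and closed over by the returned
// "composable" - every component that calls it shares the same object.
function createStore() {
  const state = { username: 'Max' };
  return function useUserStore() {
    return state;
  };
}

const useUserStore = createStore();
</code></pre>
<p>::</p>
<p>::GlobalParagraph
A change made through one component is immediately visible to every other caller – exactly what prop drilling tried to achieve by hand.
::</p>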
<p>:hr{.mb-5}</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Conclusion: How to avoid prop drilling in Vue.js correctly
::</p>
<p>::GlobalParagraph
Prop drilling can lead to confusing code and high maintenance costs. Fortunately, Vue offers better alternatives for clean and maintainable data flows with <code>provide/inject</code> and Pinia.
::</p>
<p>::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li>For smaller projects, <code>provide/inject</code> is often enough.</li>
<li>For larger projects, you can't avoid clean state management with <em>Pinia</em>.
::</li>
</ul>
<p>::GlobalParagraph{.mb-5}
With the right method, prop drilling can be avoided, which ensures a better code structure and maintainability.
::</p>
<p>::GlobalParagraph
Both solutions help you to implement Vue.js best practices and untangle your component tree.
::</p>
<p>:hr{.mb-5}</p>
<p>::GlobalTitle{:size="lg" .mb-5 .mt-8}
FAQ – Frequent Questions about Prop Drilling in Vue
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
1. When is Prop Drilling okay?
::</p>
<p>::GlobalParagraph
Prop Drilling is fine when a prop is only passed through one or two components and no complex data structures are involved. In small projects, it can be a simple solution without additional code for State Management or <code>provide/inject</code>.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
2. When should <code>provide</code> and <code>inject</code> be used instead of Props?
::</p>
<p>::GlobalParagraph
This method is especially suitable for globally relevant data like theme settings, translations, or overarching form validations. It is ideal when data needs to be passed to multiple subordinate components without creating unnecessary prop chains.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
3. When is Pinia the better solution?
::</p>
<p>::GlobalParagraph
Pinia should be used when multiple components need to access the same state independently. It is particularly useful for user management, shopping carts, or larger applications where data needs to be accessed from different areas of the application.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
4. Can <code>provide</code> and <code>inject</code> be combined with Pinia?
::</p>
<p>::GlobalParagraph
Yes! For example, <code>provide</code> can be used to override a Pinia store instance within a specific component tree without changing the global state.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
5. Are there other alternatives for data transfer?
::</p>
<p>::GlobalParagraph
Yes. Depending on the use case, Composables can be used to encapsulate reusable logic and state. In rare cases, an Event Bus can also make sense, especially when events need to be exchanged between non-directly related components.
::</p>]]></content:encoded>
            <category>Vue.js</category>
            <category>Nuxt</category>
            <category>Development</category>
            <enclosure url="https://blueshoe.io/img/blog/propsdrilling_pinia.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[VueJS error tracking with Sentry]]></title>
            <link>https://blueshoe.io/blog/vuejs-error-tracking-with-sentry</link>
            <guid>https://blueshoe.io/blog/vuejs-error-tracking-with-sentry</guid>
            <pubDate>Thu, 02 Dec 2021 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>When it comes to error tracking in our Cloud Native applications, Sentry has become our go-to solution. We love the ease of use, the deep insights and the well-structured documentation. This blog post aims to provide you with our learnings from using Sentry on Unikube’s VueJS-based frontend.</p>
<p><img src="/img/blog/vuejs-error-tracking-with-sentry.jpg" alt="VueJS error tracking with Sentry">{.object-cover .max-w-full .mb-5}</p>
<p>:::globalParagraph
Sentry is simply a great piece of software we love to use in our everyday work and we want to share our learnings on using Sentry on Unikube’s VueJS based frontend with you. The code snippets are a bit simplified compared to the originals, however, they contain all necessary information.
:::</p>
<p>:::globalTitle{:size="lg" .mb-5}
General information &#x26; dependencies
:::
:::globalParagraph
<a href="/products">Unikube’s frontend</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} provides a UI to our many backend microservices. All our microservices are equipped with GraphQL interfaces which are then federated via Apollo Federation. This gives us the convenience of consuming the whole (external) graph through one endpoint. The frontend itself is a monolithic single page application, built with TypeScript on top of VueJS and Vuetify. We have included internationalisation via vue-i18n and frontend permission management through CASL.
:::
:::globalParagraph
When it comes to error tracking we have included Sentry’s SDK, which already includes a beautiful integration for VueJS.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Sentry Integration
:::
:::globalParagraph
The basic integration itself is as simple as it can be:
main.ts:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-typescript">import * as Sentry from '@sentry/vue';

Sentry.init({
    Vue,
    dsn: process.env.VUE_APP_SENTRY_DSN,
});
</code></pre>
<p>:::
:::globalParagraph
Wait – but what is this <code>dsn</code> thing? It provides the SDK with the endpoint to which it will send any relevant events tracked by Sentry. To make sure we do not track any events from our local development environments, we surround the initialisation with a simple if statement.
main.ts:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-typescript">import * as Sentry from '@sentry/vue';
if (process.env.NODE_ENV === 'production') {
    Sentry.init({
        Vue,
        dsn: process.env.VUE_APP_SENTRY_DSN,
    });
}
</code></pre>
<p>:::
:::globalParagraph
process.env contains certain environment variables and their values.
:::
:::globalParagraph
However, we do not stop here. To provide our developers with as much information as possible and to ease debugging, we add some more context at runtime. One very helpful thing is the integration of Sentry with Vue’s router.
router/index.ts:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-typescript">router.beforeEach((to: Route, from, next) => {
    Sentry.configureScope((scope) => scope.setTransactionName(to?.meta?.label));
    next();
});
</code></pre>
<p>:::
:::globalParagraph
Setting the label of the current route as Sentry’s transaction name immediately narrows down the context in which our developers are "hunting" the bug.
:::
:::globalParagraph
To provide our users with the best support possible we also enrich Sentry’s events with some user information. Our store’s auth.ts module handles the authentication tokens and states. As soon as the user information changes (login, logout) we update Sentry’s context.
store/auth.ts:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-typescript">setRpt(state, rpt: string): void {
    state.rawRpt = rpt;
    Sentry.setUser({ email: state.email });
}
</code></pre>
<p>:::
:::globalParagraph
This way we can filter the tracked errors for certain users and provide them with superb support and insights as well as super-quick bug fixes.
:::</p>
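<p>:::globalParagraph
Events can also be filtered or scrubbed before they ever leave the browser, via Sentry’s documented 'beforeSend' option. Since the hook is a plain function, it is easy to unit test in isolation. The event shape below is simplified for illustration – Sentry’s real event object carries many more fields:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-typescript">// Simplified event shape for illustration only.
type SentryEvent = { message?: string; user?: { email?: string } };

// Drop known-noisy browser errors and scrub the user's email address.
function beforeSend(event: SentryEvent): SentryEvent | null {
  if (event.message?.includes('ResizeObserver loop')) {
    return null; // returning null drops the event entirely
  }
  if (event.user?.email) {
    event.user.email = '[redacted]';
  }
  return event;
}
</code></pre>
<p>:::
:::globalParagraph
Such a function is passed as the 'beforeSend' option to Sentry.init, next to the 'dsn'.
:::</p>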
<p>:::globalTitle{:size="md" .mb-5}
Sentry Source Maps and Release Health
:::
:::globalParagraph
Sentry comes with a fancy Webpack integration, which allows streamlining the creation of releases and linking source maps to those. This is an essential step to make sure our developers are not confronted with stack traces and errors from the minified JavaScript sources, but with the actual code from the repository.
:::
:::globalParagraph
We simply add the Sentry CLI Webpack plugin to our vue.config.js – and therefore to the Webpack configuration.
vue.config.js:
:::
:::BlogCode{.mb-5}</p>
<pre><code class="language-javascript">// fs and the Sentry Webpack plugin are required at the top of vue.config.js
const fs = require('fs');
const SentryCliPlugin = require('@sentry/webpack-plugin');

if (process.env.NODE_ENV === 'production') {
    const sentryAuthToken = fs.readFileSync('/run/secrets/SENTRY_AUTH_TOKEN', 'utf8');
    plugins.push(
        new SentryCliPlugin({
            // webpack specific configuration
            authToken: sentryAuthToken,
            include: '.',
            ignore: ['node_modules', 'babel.config.js', 'apollo.config.js', '.eslintrc.js'],
            release: `unikube-frontend@${process.env.VUE_APP_VERSION}`,
        }),
    );
}
</code></pre>
<p>:::
:::globalParagraph
We’re surrounding the release process with process.env.NODE_ENV === 'production' to make sure nothing from our development or CI test workflows lands as a release in Sentry.
:::
:::globalParagraph
If you take a closer look at the code you’ll notice that we work with an environment variable 'VUE_APP_VERSION' as well as with a file, which is located under '/run/secrets/SENTRY_AUTH_TOKEN'.
:::
:::globalParagraph
Let’s dive a bit deeper into the build process of the frontend.
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Docker Arguments and Secrets
:::
:::globalParagraph
Since we make use of environment variables, which may differ from one environment to the other (compare development and production), those variables need to be set at build time. The 'dist' version of the frontend is served through a simple NGINX docker container. Therefore we leverage Docker’s arguments to pass in environment variables at build time (for the docker image as well as for the frontend).

Dockerfile:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-dockerfile">FROM quay.io/blueshoe/node14.4-slim AS app
…
### Build ENV variables
ARG VUE_APP_BASE_URL
ARG VUE_APP_PUBLIC_PATH
ARG VUE_APP_I18N_LOCALE
ARG VUE_APP_I18N_FALLBACK_LOCALE
ARG VUE_APP_GRAPHQL_URL
ARG VUE_APP_KEYCLOAK_JS
ARG VUE_APP_KEYCLOAK_AUTHZ_JS
ARG VUE_APP_KEYCLOAK_URL
ARG VUE_APP_KEYCLOAK_REALM
ARG VUE_APP_KEYCLOAK_CLIENT_ID
ARG VUE_APP_UPLOAD_URL
ARG VUE_APP_VERSION
ARG VUE_APP_SENTRY_DSN
ARG SENTRY_DSN
ARG SENTRY_ORG
ARG SENTRY_PROJECT
ARG SENTRY_URL
</code></pre>
<p>:::
:::globalParagraph
If you wonder about the difference between Docker’s ARG and ENV instructions, the official Docker documentation explains it in detail.
:::
:::globalParagraph
However, the SentryCLIPlugin sends something to our Sentry installation and therefore needs some kind of authentication. For that, the 'authToken' configuration parameter is used. Now - Unikube is an open-source project, which means our CI pipelines are open source as well. Nevertheless, we would like to keep our 'authToken' a secret, so that no one else can manipulate any release information on our Sentry installation. For that, we use Docker’s secret management. It allows us to inject secret information into the build process, which can only be used during build time. This is exactly what we need since the release information (including the source maps) only need to be generated once when the Docker image is built.
:::
:::globalParagraph
Docker injects the secret as a file, located under '/run/secrets/SENTRY_AUTH_TOKEN'. This file then is read by the 'SentryCLIPlugin'.
:::</p>
<p>:::GlobalPodcastSection{:videoId="KapP6ouEL3E" :videoPosition="left" .mb-6}
::::GlobalPreTitle{:color="text-bs-green" .mb-3}
OUR PODCAST: TOOLS FOR THE CRAFT
::::
::::GlobalTitle{:tag="h3" .mb-6}
E2: Remote Kubernetes development environments
::::
::::globalParagraph{:font-size="lg" .mb-4}
Michael and Robert are talking about the various options developers have for running remote Kubernetes development environments.
::::
::::globalParagraph{:font-size="lg" }
More editions of our podcast can be found here:
::::
::::GlobalButton{:url="/podcast/" :label="More podcasts!" :color="green"}
::::
:::</p>
<p>:::globalTitle{:size="md" .mb-5}
Github Actions
:::
:::globalParagraph
The last missing piece of the puzzle is the automated build of the Docker image in our CI pipeline. Since Unikube’s repositories are located on GitHub, we leverage GitHub Actions.
:::
:::globalParagraph
Within the <code>docker.yaml</code> we define the steps to build the Docker images for our development and production environments.
.github/workflows/docker.yaml:
:::</p>
<p>:::BlogCode{.mb-5}</p>
<pre><code class="language-yaml">name: Create Docker Image

on:
    push:
        tags:
            - '*'
jobs:
    build_local_dev_docker_image:
        name: Build docker images (dev and prod).
        runs-on: ubuntu-latest
        strategy:
            matrix:
                include:
                    # production image
                    - graphql_url: https://api.unikube.io/graphql
                      keycloak_url: https://login.unikube.io/auth
                      upload_url: https://api.unikube.io/upload
                      tag_suffix: ''
                    # dev image
                    - graphql_url: http://gateway.unikube.127.0.0.1.nip.io:8085/graphql
                      keycloak_url: http://keycloak.127.0.0.1.nip.io:8085/auth
                      upload_url: http://gateway.unikube.127.0.0.1.nip.io:8085/upload
                      tag_suffix: '-dev'
        steps:
            - name: Checkout
              uses: actions/checkout@v2
            - name: Login to quay.io
              uses: docker/login-action@v1
              with:
                  registry: quay.io
                  username: ${{ secrets.QUAY_USERNAME }}
                  password: ${{ secrets.QUAY_ROBOT_TOKEN }}
            - name: Docker meta
              id: meta
              uses: docker/metadata-action@v3
              with:
                  images: quay.io/unikube/frontend
                  flavor: |
                      latest=false
                      suffix=${{ matrix.tag_suffix }}
                  tags: |
                      type=semver,pattern={{version}}
                      latest
            - name: Build and push
              id: docker_build
              uses: docker/build-push-action@v2
              with:
                  push: true
                  tags: ${{ steps.meta.outputs.tags }}
                  secrets: |
                      "SENTRY_AUTH_TOKEN=${{ secrets.SENTRY_AUTH_TOKEN }}"
                  file: docker/Dockerfile
                  context: .
                  build-args: |
                      VUE_APP_BASE_URL
                      VUE_APP_PUBLIC_PATH
                      VUE_APP_I18N_LOCALE=en
                      VUE_APP_I18N_FALLBACK_LOCALE=en
                      VUE_APP_KEYCLOAK_JS=/js/keycloak.js
                      VUE_APP_KEYCLOAK_AUTHZ_JS=/js/keycloak-authz.js
                      VUE_APP_KEYCLOAK_REALM=unikube
                      VUE_APP_KEYCLOAK_CLIENT_ID=frontend
                      VUE_APP_GRAPHQL_URL=${{ matrix.graphql_url }}
                      VUE_APP_KEYCLOAK_URL=${{ matrix.keycloak_url }}
                      VUE_APP_UPLOAD_URL=${{ matrix.upload_url }}
                      VUE_APP_VERSION=${{ steps.meta.outputs.version }}
                      SENTRY_RELEASE=${{ steps.meta.outputs.version }}
                      VUE_APP_SENTRY_DSN=${{ secrets.SENTRY_DSN }}
                      SENTRY_DSN=${{ secrets.SENTRY_DSN }}
                      SENTRY_ORG=${{ secrets.SENTRY_ORG }}
                      SENTRY_PROJECT=${{ secrets.SENTRY_PROJECT }}
                      SENTRY_URL=${{ secrets.SENTRY_URL }}
            - name: Image digest
              run: echo ${{ steps.docker_build.outputs.digest }}
</code></pre>
<p>:::</p>
<p>:::globalParagraph
There are a few things to consider here:
:::
:::GlobalBlock{.ol-decimal .mb-5}</p>
<ol>
<li>We use Github Action’s 'matrix' keyword to use the same job template for multiple environment builds.</li>
<li>We log in to quay.io which is the service hosting our Docker images.</li>
<li>Secrets and arguments are passed via 'docker/build-push-action@v2'.
:::</li>
</ol>
<p>:::globalParagraph
The last thing to do here is set the secret information for the repository on its Github settings page:
:::</p>
<p><img src="/img/blog/vuejs-error-tracking-with-sentry-1.jpg" alt="settings page">{.object-cover .w-full .mb-6}</p>
<p>:::globalParagraph
Done! This is how Unikube’s frontend currently integrates Sentry to better understand errors, fix bugs faster and provide better support for our users.
:::
:::globalParagraph
Let us know if you have any questions, feedback, or things you think we should change. Thank you very much.
:::</p>
<p>::GlobalButton{:url="/contact/" :label="Contact us" :color="blue"}
::</p>]]></content:encoded>
            <category>Docker</category>
            <category>Vue.js</category>
            <category>Development</category>
            <category>Operations</category>
            <category>Security</category>
            <enclosure url="https://blueshoe.io/img/blog/vuejs-error-tracking-with-sentry.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Wagtail Localize: Multilingual Django Websites]]></title>
            <link>https://blueshoe.io/blog/wagtail-localize-tutorial-multilingual-django-websites</link>
            <guid>https://blueshoe.io/blog/wagtail-localize-tutorial-multilingual-django-websites</guid>
            <pubDate>Wed, 09 Apr 2025 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Want to build a multilingual website with Django? With Wagtail Localize, it's fast and developer-friendly. This powerful extension for the Wagtail CMS gives you full control over your translations – right inside the admin interface.</p>
<p><img src="/img/blog/wagtail-django.svg" alt="Wagtail Localize: multilingual websites with django">{.object-cover .max-w-full .mb-5}</p>
<p>::GlobalTitle{:size="lg" .mb-5}
What is Wagtail Localize?
::
::GlobalParagraph
<a href="/technologies/wagtail-cms/">Wagtail</a>{.text-bs-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} is a flexible, modern CMS for Django. If you’re building multilingual websites, <a href="https://wagtail-localize.org/stable/">Wagtail Localize</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} makes it easy to manage content and translations – manually or automatically.
::</p>
<p>::GlobalParagraph
With Wagtail Localize, you can:
::</p>
<p>::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li>Manage multilingual content directly in the Wagtail admin</li>
<li>Use manual or machine translation</li>
<li>Define translation workflows and optimize content per locale</li>
<li>Build SEO-friendly, multilingual sites with ease
::</li>
</ul>
<p>:::GlobalButton{:url="/technologies/wagtail-cms/" :label="Ready to launch your Wagtail project? Discover how Wagtail makes your web projects faster, more flexible, and easily scalable." :target="_blank" :color="blue" .mb-6}
:::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Installation &#x26; Getting Started
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
1. Install Wagtail Localize
::</p>
<p>::GlobalParagraph
Run the installation via pip:
::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-bash">pip install wagtail-localize
</code></pre>
<p>::</p>
<p>::GlobalParagraph
Then add the required packages to your INSTALLED_APPS in settings.py:
::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-python">INSTALLED_APPS = [
    "wagtail_localize",
    "wagtail_localize.locales",  # Enables the management of multilingual content
    *INSTALLED_APPS,  # prepend to the existing app list
]
</code></pre>
<p>::
::GlobalParagraph
Now run the migrations:
::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-bash">python manage.py migrate wagtail_localize
</code></pre>
<p>::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
2. Configure Your Languages
::</p>
<p>::GlobalParagraph
Set up the supported languages in settings.py:
::
::BlogCode{.mb-5}</p>
<pre><code class="language-python">from django.utils.translation import gettext_lazy as _

LANGUAGES = [
    ("en", _("English")),
    ("de", _("Deutsch")),
    ("fr", _("Français")),
]

WAGTAIL_CONTENT_LANGUAGES = LANGUAGES
</code></pre>
<p>::</p>
<p>::GlobalParagraph
Activate internationalization in Wagtail:
::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-python">WAGTAIL_I18N_ENABLED = True
</code></pre>
<p>::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Managing Multilingual Pages in Wagtail
::</p>
<p>::GlobalParagraph
Wagtail uses locales to organize language versions: each language gets its own version of the page tree, all accessible via the admin UI.
::</p>
<p><img src="/img/blog/mehrsprachige-Seiten-verwalten-in-Wagtail.png" alt="Managing multilingual pages in Wagtail">{.mx-auto .max-w-full}</p>
<p>::GlobalParagraph
To create custom pages in Wagtail, just use the regular Page model.
::</p>
<p>::GlobalParagraph
Example: multilingual custom page
::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-python">from wagtail.models import Page
class CustomPage(Page):
    pass
</code></pre>
<p>::</p>
<p>::GlobalParagraph
This page model is automatically handled by Wagtail Localize and can be added in different languages.
::</p>
<p><img src="/img/blog/wagtail-localize-seitenmenue.png" alt="Page menu in Wagtail Localize">{.mx-auto .max-w-full}</p>
<p>::GlobalParagraph
You’re all set to manage your first multilingual pages – but that’s just the beginning.
::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
When to Use TranslatableMixin?
::
::GlobalParagraph
Want to translate snippets or custom models? Then you’ll need to use <a href="https://wagtail-localize.org/stable/tutorial/3-content/">TranslatableMixin</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.
::
::GlobalParagraph
Example: multilingual person snippet:
::</p>
<p>::BlogCode{.mb-5}</p>
<pre><code class="language-python">
from wagtail.snippets.models import register_snippet
from wagtail.models import TranslatableMixin
from django.db import models
from wagtail.admin.panels import FieldPanel
from django.utils.translation import gettext_lazy as _

@register_snippet
class Person(TranslatableMixin, models.Model):
    first_name = models.CharField(max_length=200)
    last_name = models.CharField(max_length=200)
    email = models.EmailField(blank=True, null=True)
    job_function = models.CharField(_("Function"), max_length=400, blank=True, null=True)
    job_function_international = models.CharField(_("Function international"), max_length=400, blank=True, null=True)
    image = models.ForeignKey("wagtailimages.Image", null=True, blank=True, on_delete=models.SET_NULL, related_name="+")
    phone = models.CharField(_("Phone"), max_length=200, blank=True, null=True)
    mobile = models.CharField(_("Mobile"), max_length=200, blank=True, null=True)
    country = models.CharField(_("Country"), max_length=200, blank=True, null=True)

    panels = [
        FieldPanel("first_name"),
        FieldPanel("last_name"),
        FieldPanel("email"),
        FieldPanel("job_function"),
        FieldPanel("job_function_international"),
        FieldPanel("image"),
        FieldPanel("phone"),
        FieldPanel("mobile"),
        FieldPanel("country"),
    ]

    def __str__(self):
        return f"{self.first_name} {self.last_name}"
</code></pre>
<p>::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Common use cases:
::
::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li>Translating team member profiles</li>
<li>Multilingual product descriptions</li>
<li>Locale-specific configuration settings
::</li>
</ul>
<p>::GlobalTitle{:size="lg" .mb-5}
Best Practices with Wagtail Localize
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
1. Keep Your Language Versions Consistent
::
::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li>Maintain the same structure across languages</li>
<li>Translate SEO content like page titles and meta descriptions
::</li>
</ul>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
2. Optimize for Multilingual SEO
::
::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li>Use hreflang tags to help search engines match the right language</li>
<li>Structure your URLs with language prefixes, e.g., /en/about/ instead of /about-en/
::</li>
</ul>
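<p>::GlobalParagraph
To make the hreflang idea concrete, here is a minimal, framework-free sketch (the helper name and URL scheme are our own illustration, assuming language-prefixed URLs as recommended above – this is not a Wagtail Localize API):
::</p>

```python
# Hypothetical helper (not part of Wagtail Localize): build hreflang link
# tags for a page, assuming language-prefixed URLs like /en/about/.
def hreflang_tags(path: str, languages: list[str]) -> str:
    """Return <link rel="alternate"> tags for each language version of a page."""
    return "\n".join(
        f'<link rel="alternate" hreflang="{code}" href="/{code}{path}">'
        for code in languages
    )

print(hreflang_tags("/about/", ["en", "de", "fr"]))
```

Rendered into the page head, these tags tell search engines which URL serves which language version.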
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
3. Don’t Forget Performance
::
::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li>Enable caching to serve frequently translated content faster
::</li>
</ul>
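<p>::GlobalParagraph
As a rough sketch (backend, location, and timeout are placeholder values, not a recommendation for every setup), page caching can be enabled in settings.py like this:
::</p>

```python
# Hypothetical settings.py fragment: cache rendered pages so frequently
# requested translations are served faster. Backend and values are placeholders.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379",
    }
}
CACHE_MIDDLEWARE_SECONDS = 300  # keep rendered pages for five minutes
```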
<p>:::GlobalButton{:url="/technologies/wagtail-cms/" :label="Use Wagtail for powerful multilingual sites – Learn more" :target="_blank" :color="blue" .mb-6}
:::</p>
<p>::GlobalTitle{:size="lg" .mb-5}
Your Start with Wagtail Localize
::</p>
<p>::GlobalParagraph
With Wagtail Localize, going multilingual is simple. From setup to translation and SEO optimization, it’s a powerful tool for any Django-based website.
::
::GlobalParagraph
Use these best practices and start building robust, multilingual experiences today!
::</p>
<p>:::GlobalTitle{:size="lg" .mb-5 .mt-8}
Frequently Asked Questions
:::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
1. What is Wagtail Localize and why should I use it?
::
::GlobalParagraph
Wagtail Localize is a powerful translation extension for Wagtail CMS. It lets you manage multilingual content directly in the admin, streamline translation workflows, and optimize pages for international SEO.
::
::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
2. How do I install Wagtail Localize in my Django project?
::
::GlobalParagraph
Installation takes just a few steps:
::
::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li>Install the package: pip install wagtail-localize</li>
<li>Add it to your INSTALLED_APPS</li>
<li>Run migrations with python manage.py migrate wagtail_localize
::</li>
</ul>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
3. Can I use automatic translations with Wagtail Localize?
::
::GlobalParagraph
Yes! Wagtail Localize supports both manual and machine translations. You can integrate tools like DeepL or Google Translate for auto-translation.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
4. How does language management work in Wagtail?
::
::GlobalParagraph
Wagtail uses a Locale system to organize content. Each language version gets its own page structure, all managed within the Wagtail admin interface.
::</p>
<p>::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
5. Is Wagtail Localize SEO-friendly?
::
::GlobalParagraph
Absolutely. Wagtail Localize supports multilingual SEO best practices like:
::
::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li>hreflang tags</li>
<li>Clean, language-specific URLs</li>
<li>Custom meta titles &#x26; descriptions per locale
::</li>
</ul>]]></content:encoded>
            <category>Django</category>
            <category>Wagtail</category>
            <category>Development</category>
            <category>Operations</category>
            <category>SEO</category>
            <enclosure url="https://blueshoe.io/img/blog/wagtail-django.svg" length="0" type="image/svg+xml"/>
        </item>
        <item>
            <title><![CDATA[Efficient Web Development with Vue.js & Nuxt.]]></title>
            <link>https://blueshoe.io/blog/web-development-with-vuejs-and-nuxt</link>
            <guid>https://blueshoe.io/blog/web-development-with-vuejs-and-nuxt</guid>
            <pubDate>Mon, 07 Oct 2024 00:00:00 GMT</pubDate>
            <content:encoded><![CDATA[<p>Together, Vue.js and Nuxt form a high-performance duo for modern web development. They enable developers to create quickly scalable and maintainable applications. Find out why in this blog post.</p>
<p><img src="/img/blog/vuejsnuxt.svg" alt="Vue.js and Nuxt at Blueshoe.">{.object-cover .max-w-full .mb-5}</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
What is Vue.js?
:::</p>
<p>:::GlobalParagraph
The first version of Vue.js was released in 2014. Similar to ReactJS, Vue.js focuses on providing a development framework for reactive applications. Vue.js is now in version 3, has gained significant maturity, and is being used by companies like Google, Netflix, Facebook, and Adobe.
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Key functions of Vue.js:
:::</p>
<p>:::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li><strong>Reactive data binding:</strong> one of the main features is the use of references in data processing. Variables can be updated simultaneously at multiple locations</li>
<li><strong>Computed Properties:</strong> these calculate values based on references. If a reference is updated, the computed property will also be updated</li>
<li><strong>Components:</strong> Vue.js allows creating components as SFCs (Single-File Components), which encapsulate all relevant information - DOM structure, JS logic, and CSS styles - in one file</li>
<li><strong>Progressive:</strong> Vue.js is divided into various partial projects, so only the necessary dependencies need to be installed according to software requirements
:::</li>
</ul>
<p><img src="/img/blog/Vue.js_Logo_2.svg" alt="Vue.js logo">{.mx-auto .h-48 .max-w-full .mt-4}
:::GlobalParagraph{.text-center}
The Vue.js Logo
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Popularity and Community:
:::
:::GlobalParagraph{.mb-4}
The Vue.js project now has over 16,000 followers, compared to just under 5,000 for the longtime front-runner React. Unlike Angular (Google) and React (Meta), Vue.js is driven by an independent community of free developers.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Introduction to Nuxt
:::
:::GlobalParagraph
Nuxt is a server-side rendering (SSR) framework built on Vue.js. Vue.js itself is designed for developing browser applications; Nuxt takes it as a foundation and enables HTML and Vue.js structures to be generated on the server side.
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
What is Server-Side Rendering (SSR)?
:::
:::GlobalParagraph
With SSR, the entire DOM - that is, the HTML structure - is generated on the web server and directly delivered to the browser. This saves computing power on one hand, and on the other hand, all information relevant for search engines (SEO) is already contained in the HTML. While in a pure Vue.js application relevant data is loaded asynchronously or subsequently, Nuxt ensures that this data is available at the time of page retrieval.
:::</p>
<p><img src="/img/blog/Nuxt_logo.svg" alt="Nuxt logo">{.mx-auto .h-48 .max-w-full .mt-4}
:::GlobalParagraph{.text-center}
The Nuxt Logo
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Other important features:
:::
:::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li>
<p>Nuxt is typically operated in one of two modes:</p>
<ul>
<li><em>Static:</em> All available pages are generated and written to a hard drive. The files are then made accessible via a web server.</li>
<li><em>Server-Side Rendering:</em> Pages are generated when the visitor requests them, and sometimes cached in between.</li>
</ul>
</li>
<li>
<p>Automatic routing configuration: Nuxt automatically generates routes, sitemaps, and validates these.</p>
</li>
<li>
<p>Modules - an easy way to extend Nuxt applications is through modules like <a href="https://image.nuxt.com/">@nuxt/image</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}. This module makes it easy to resize images and deliver optimized versions via content delivery networks. Nuxt offers a wide selection of <a href="https://nuxt.com/modules">useful modules</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.
:::</p>
</li>
</ul>
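<p>:::GlobalParagraph
In Nuxt 3, the two modes can even be mixed per route. As a sketch (the paths are placeholders), route rules in nuxt.config.ts look like this:
:::</p>

```typescript
// Hypothetical nuxt.config.ts fragment: mixing static generation and
// server-side rendering per route. The paths are placeholder examples.
// defineNuxtConfig is auto-imported by Nuxt; this fragment will not run standalone.
export default defineNuxtConfig({
  routeRules: {
    "/about": { prerender: true }, // generated once at build time
    "/shop/**": { swr: 60 },       // rendered on request, cached for 60 seconds
  },
});
```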
<p>:::GlobalTitle{:size="lg" .mb-5}
Advantages of Using Vue.js &#x26; Nuxt
:::
:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Performance and Scalability
:::
:::GlobalParagraph
Vue.js makes it easy to reduce the file size of websites, and to modularize them and load them intelligently - which unlocks real performance potential. Scaling, for example via Kubernetes, is equally straightforward, since applications can easily be packaged into containers.
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Flexibility and Adaptability
:::
:::GlobalParagraph
Through the progressive build-up of both frameworks, development can start with a minimal set of basic functions. When new requirements arise, new modules can be easily added, thus expanding functionality seamlessly.
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
SEO Optimization
:::
:::GlobalParagraph
With Nuxt Server-Side Rendering, reactive, user-friendly applications can be created with a strong SEO focus. Extremely good <a href="https://web.dev/articles/vitals?hl=de">Web Vitals</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}, outstanding crawlability for search engines, short loading times, and smart logics for loading content are just the tip of the iceberg of SEO advantages for applications based on Nuxt.
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Developer Friendliness
:::
:::GlobalParagraph
Both Vue.js and Nuxt are quick to learn. The documentation is outstanding and lets even developers who are new to the frameworks get started quickly. Both projects come with their own dev tools extensions, which give developers deep insights into their applications.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Practical Examples and Successful Projects
:::
:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Project Examples
:::
:::GlobalParagraph
When it comes to building user-friendly, interactive frontends, we at Blueshoe rely on Vue.js and Nuxt. Whether in the online shop of <a href="https://luma-delikatessen.ch/de/">Luma Delikatessen</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}, the corporate website of <a href="https://www.winter-company.com/de/">Winter &#x26; Company</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} or the <a href="https://digitallearninglab.de/">digital.learning.lab</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} - both frameworks have always proven to be an excellent choice.
:::
:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Case Studies
:::
:::GlobalParagraph{.mb-4}
Nuxt has proven itself particularly in the webshop of the Swiss online meat retailer Luma Delikatessen. Outstanding crawlability, strong interactivity, and fast response times, combined with a modern design, make shopping on luma-delikatessen.ch a real pleasure.
With the help of Nuxt, Blueshoe has created interactive product views with intuitive operation, cleverly arranged product information, and simple indexability for all search engines.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5 .mt-5}
What does an implementation strategy look like?
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Best Practices
:::
:::GlobalParagraph
To set up a project with Vue.js or in combination with Nuxt, it is recommended to use the Command-Line Interfaces (CLIs) of both frameworks. These already provide easily configurable project templates. We recommend TypeScript as a language, as strong and consistent typing will prevent simple bugs long-term and save costs. Furthermore, it is recommended to configure <a href="https://eslint.org/">eslint</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} as a linter and <a href="https://prettier.io/">Prettier</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid} for formatting the project.
:::</p>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Step-by-Step Instructions
:::
:::GlobalBlock{.ul-disk .mb-4}</p>
<ul>
<li>Starting a project with Nuxt is remarkably simple: 
<code>npx nuxi@latest init &#x3C;project_name></code>
The basic project structure is already in place.</li>
<li>The development server can be easily started via <code>npm run dev -- -o</code></li>
<li>The basic structure of Nuxt projects is explained in detail in the official Nuxt documentation.
:::</li>
</ul>
<p>:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5 .mt-4}
Common Challenges
:::
:::GlobalParagraph
A frequent challenge when working with Nuxt is the distinction between client-side and server-side code execution. The developer must always be aware of the context in which the code is executed.
:::
:::GlobalParagraph
On the server side, for example, there is no <code>window</code> or <code>document</code> object - as these only exist in the browser. Similarly, the implementation of certain code sections can be easily limited to server or client side, for example through the <a href="https://nuxt.com/docs/api/components/client-only">ClientOnly Component</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.
:::</p>
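<p>:::GlobalParagraph
A simple guard illustrates the point (our own sketch; in a real Nuxt project you would typically reach for the ClientOnly component or client-side plugins instead):
:::</p>

```typescript
// Sketch: code that touches window must tolerate the server context,
// where no browser APIs exist.
function viewportWidth(): number | null {
  if (typeof window === "undefined") {
    return null; // server side: no window object available
  }
  return window.innerWidth; // browser only
}

console.log(viewportWidth()); // null when executed during SSR / under Node
```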
<p>:::GlobalTitle{:size="lg" .mb-5}
Why Blueshoe as a Partner for Vue.js &#x26; Nuxt?
:::
:::GlobalTitle{:color="text-bs-blue" :font="font-oswald" :size="md" :tag="h3" .mb-5}
Our Expertise
:::
:::GlobalParagraph
After implementing numerous projects with Vue.js 2 and Nuxt 1, as well as migrating our existing projects to Vue.js 3 and Nuxt 3, Blueshoe is the right partner for frontend development with these frameworks. We know the structures of these technologies like the back of our hand, are familiar with the modules and additional packages, and can quickly find our way around existing projects.
:::</p>
<p>:::GlobalTitle{:size="lg" .mb-5}
Conclusion
:::
:::GlobalParagraph
Because Vue.js and Nuxt are quick to learn, progressive by design, and highly versatile, both simple and complex modern web applications can be built with them. Whether SEO, performance, or UX - all requirements of modern web projects can be realized with these two technologies.
:::
:::GlobalParagraph{.mb-4}
Can we be helpful with our expertise in frontend technologies for your project? Simply contact us at <a href="mailto:hello@blueshoe.io">hello@blueshoe.io</a>{:target="_blank" .bs-link-blue .hover:underline .hover:decoration-bs-blue .hover:decoration-solid}.
:::</p>]]></content:encoded>
            <category>TypeScript</category>
            <category>Vue.js</category>
            <category>Nuxt</category>
            <category>Development</category>
            <enclosure url="https://blueshoe.io/img/blog/vuejsnuxt.svg" length="0" type="image/svg+xml"/>
        </item>
    </channel>
</rss>