Amazon MSK makes it easy to run Apache Kafka clusters on AWS. Sometimes you need to expose MSK to external clients, and the most common way to achieve this is to deploy the cluster in a public subnet. However, in some cases you need to keep MSK in a private subnet or expose it under a custom domain name.
Sometimes you need to retrieve data from AWS Secrets Manager, but extending your application to support it, or installing the AWS CLI, can be redundant or overly complicated. For such cases, I created a simple single-binary CLI tool.
There are many ways to handle repeatable jobs in Kubernetes. In many cases, a CronJob is enough to run recurring tasks. However, when you need to interact with Kubernetes objects, resources, or custom resources, implementing your own controller is a more effective way to maintain the desired state with minimal effort.
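For the simple recurring case, a minimal CronJob manifest looks like the sketch below; the job name, schedule, and image are illustrative, not a recommendation:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cleanup-job            # hypothetical name
spec:
  schedule: "0 3 * * *"        # every day at 03:00
  concurrencyPolicy: Forbid    # skip a run if the previous one is still going
  successfulJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: cleanup
              image: alpine:3.20
              command: ["sh", "-c", "echo cleaning up"]
```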
In my case, I’ve created a dedicated Kubernetes controller that works with a GitHub App to exchange the App JWT for a short-lived, installation-specific GHS token. The controller refreshes the token before it expires and updates third-party integrations such as ArgoCD OCI repository credentials and Docker config JSON secrets. Unfortunately, this controller is currently of little use due to a GitHub limitation: only personal access tokens (classic) can access private registries (see the GitHub discussion for more details).
When we are developing a large, distributed system, the frontend is separated and normally presented as a single-page application (SPA) or a set of micro frontends (a set of SPAs). An SPA is usually immutable and served from static storage; it is not served by the same app that provides the backend APIs, and it is not generated at runtime with server-side rendering (SSR). This means you don't have access to environment variables to configure your application during deployment or at runtime.
This is a quite common problem with quite common solutions.
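One of those common solutions is to keep the bundle immutable and inject a tiny config file at container start, which `index.html` loads before the app bundle. Here is a small Python sketch of such a generator; the `APP_` prefix and the `window.__RUNTIME_CONFIG__` name are my own conventions for this example, not a standard:

```python
import json
import os

# Only variables with this prefix are exposed to the browser (hypothetical convention).
PREFIX = "APP_"

def render_runtime_config(env: dict[str, str]) -> str:
    """Render a config.js that index.html can load before the SPA bundle.

    The static bundle stays immutable; only this tiny file differs per environment.
    """
    config = {k[len(PREFIX):]: v for k, v in env.items() if k.startswith(PREFIX)}
    return f"window.__RUNTIME_CONFIG__ = {json.dumps(config, sort_keys=True)};"

if __name__ == "__main__":
    # Typically executed by the container entrypoint, before the web server starts.
    print(render_runtime_config(dict(os.environ)))
```

The same idea is often implemented with a shell entrypoint and `envsubst`; the mechanism matters less than the contract that the SPA reads its config from a runtime-generated file rather than from build-time variables.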
Working on maintaining billions of workloads in thousands of Kubernetes clusters has taught me to avoid repetitive routines, simplify service onboarding, and keep all manifests consistent. Most applications have the same or very similar requirements from a rendered-manifest perspective (labels, selectors, affinity rules, and so on). A Library Chart can save you hundreds of hours of writing Helm charts and spare you from repeating the same code again and again.
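As a minimal sketch, a library chart (declared with `type: library` in its Chart.yaml) defines named template snippets that application charts include; the `common` chart and template names below are hypothetical:

```yaml
# library chart: charts/common/templates/_labels.tpl
{{- define "common.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

# application chart template, after declaring `common` as a dependency:
# metadata:
#   labels:
#     {{- include "common.labels" . | nindent 4 }}
```

Every consuming chart then renders identical labels with one `include` line, and a fix to the library propagates everywhere on the next dependency update.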
DLP (Data Loss Prevention) refers to strategies, tools, and processes designed to ensure that sensitive or critical information is not lost, misused, or accessed by unauthorized users. DLP is crucial for enterprises to protect their data, and I fully support its use. However, as a developer, I find DLP frustrating. Let me explain why.
In modern application ecosystems, ensuring secure authentication and authorization is paramount. Most web applications rely on OpenID Connect (OIDC) as a simple but robust way to secure an application or service. In this example, I'll show you how to implement OIDC-based SSO for CLI applications interacting with sensitive APIs.
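A typical approach for a CLI is the OIDC authorization code flow with PKCE and a localhost redirect. As one small, hedged building block (the rest of the flow — opening the browser and catching the redirect — is omitted), here is PKCE verifier/challenge generation per RFC 7636 using only the Python standard library:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # 32 random bytes, base64url-encoded without padding -> a 43-char verifier.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # challenge = base64url(SHA-256(verifier)), also without padding.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge
```

The verifier never leaves the CLI: only the challenge goes into the authorization request, and the verifier is presented later at the token endpoint, so an intercepted authorization code is useless on its own.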