9 changes: 5 additions & 4 deletions astro.config.mjs
@@ -440,12 +440,13 @@ export default defineConfig({
slug: 'aws/enterprise',
},
{
label: 'Single Sign-On',
autogenerate: { directory: '/aws/enterprise/sso' },
label: 'Kubernetes',
autogenerate: { directory: '/aws/enterprise/' },
collapsed: true,
},
{
label: 'Kubernetes Executor',
slug: 'aws/enterprise/kubernetes-executor',
label: 'Single Sign-On',
autogenerate: { directory: '/aws/enterprise/sso' },
},
{
label: 'Enterprise Image',
Binary file added public/images/aws/k8s-concepts.png
101 changes: 101 additions & 0 deletions src/content/docs/aws/enterprise/kubernetes/concepts.md
@@ -0,0 +1,101 @@
---
title: Concepts & Architecture
description: How LocalStack runs inside a Kubernetes cluster, how workloads are executed, and how networking and DNS behave.
template: doc
sidebar:
order: 2
tags: ["Enterprise"]
---

This conceptual guide explains how LocalStack runs inside a Kubernetes cluster, how workloads are executed, and how networking and DNS behave in a Kubernetes-based deployment.


## How the LocalStack pod works

The LocalStack pod runs the LocalStack runtime and acts as the central coordinator for all emulated AWS services within the cluster.

Its primary responsibilities include:

* Exposing the LocalStack edge endpoint and AWS service API ports
* Receiving and routing incoming AWS API requests
* Orchestrating services that require additional compute (for example Lambda, Glue, ECS, and EC2)
* Managing the lifecycle of compute workloads spawned on behalf of AWS services

From a Kubernetes perspective, the LocalStack pod is a standard pod that fully participates in cluster networking. It is typically exposed through a Kubernetes `Service`, and all AWS API interactions, whether from inside or outside the cluster, are routed through this pod.
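
As a minimal sketch (the Service name, selector label, and port layout here are illustrative, not taken from the Helm chart), such a `Service` might look like:

```yaml
# Hypothetical Service for the LocalStack pod; the selector label
# must match whatever labels your deployment applies to the pod.
apiVersion: v1
kind: Service
metadata:
  name: localstack
spec:
  selector:
    app: localstack
  ports:
    - name: edge
      port: 4566
      targetPort: 4566
```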

![How the LocalStack pod works](/images/aws/k8s-concepts.png)


## Execution modes

LocalStack supports two execution modes for running compute workloads:

* Docker executor
* Kubernetes-native executor

### Docker executor

The Docker executor runs workloads as containers started via a Docker runtime that is accessible from the LocalStack pod. This provides a simple, self-contained execution model without Kubernetes-level scheduling.

However, Kubernetes does not provide a Docker daemon inside pods by default. To use the Docker executor in Kubernetes, the LocalStack pod must be given access to a Docker-compatible runtime (commonly via a Docker-in-Docker sidecar), which adds complexity and security concerns.

### Kubernetes-native executor

The Kubernetes-native executor runs workloads as Kubernetes pods. In this mode, LocalStack communicates directly with the Kubernetes API to create, manage, and clean up pods on demand.

This execution mode provides stronger isolation, better security, and full integration with Kubernetes scheduling, resource limits, and lifecycle management.

The execution mode is configured using the `CONTAINER_RUNTIME` environment variable. It defaults to the Docker executor, so the Kubernetes-native executor must be enabled explicitly; see the guidance on choosing an execution mode below.
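
As a sketch, a Helm `values.yaml` fragment that opts in to the Kubernetes-native executor could look like this (assuming the chart's `extraEnvVars` key and `kubernetes` as the accepted value):

```yaml
extraEnvVars:
  - name: CONTAINER_RUNTIME
    value: "kubernetes"   # opt in; the default is the Docker executor
```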


## Child pods

For compute-oriented AWS services, LocalStack can execute workloads either within the LocalStack pod itself or as separate Kubernetes pods.

When the Kubernetes-native executor is enabled, LocalStack launches compute workloads as dedicated Kubernetes pods (referred to here as *child pods*). These include:

* Lambda function invocations
* Glue jobs
* ECS tasks and Batch jobs
* EC2 instances
* RDS databases
* Apache Airflow workflows
* Amazon Managed Service for Apache Flink
* Amazon DocumentDB databases
* Redis instances
* CodeBuild containers

For example, each Glue job run or ECS task invocation results in a new pod created from the workload’s configured runtime image and resource requirements.

These child pods execute independently of the LocalStack pod. Kubernetes is responsible for scheduling them, enforcing resource limits, and managing their lifecycle. Most child pods are short-lived and terminate once the workload completes, though some services (such as Lambda) may keep pods running for longer periods.


## Networking model

LocalStack runs as a standard Kubernetes pod and is accessed through a Kubernetes `Service` that exposes the edge API endpoint and any additional service ports.

Other pods within the cluster communicate with LocalStack through this Service using normal Kubernetes DNS resolution and cluster networking.

When the Kubernetes-native executor is enabled, child pods communicate with LocalStack in the same way, by sending API requests over the cluster network to the LocalStack Service.
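
For example, an in-cluster client could be pointed at LocalStack through the Service DNS name. This is a sketch: it assumes a Service named `localstack` in the `default` namespace and an AWS SDK recent enough to honor `AWS_ENDPOINT_URL`:

```yaml
# Hypothetical container env fragment for an application pod
env:
  - name: AWS_ENDPOINT_URL
    value: "http://localstack.default.svc.cluster.local:4566"
```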


## DNS behavior

LocalStack includes a DNS server capable of resolving AWS-style service endpoints.

In a Kubernetes deployment:

* The DNS server can be exposed through the same Kubernetes Service as the LocalStack API ports.
* This allows transparent resolution of AWS service hostnames and `localhost.localstack.cloud` to LocalStack endpoints from within the cluster.
* If a custom domain is used to refer to the LocalStack Kubernetes Service (via `LOCALSTACK_HOST`), this name and its subdomains are also resolved by the LocalStack DNS server.

This enables applications running in Kubernetes to interact with LocalStack using standard AWS SDK endpoint resolution without additional configuration.
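
For instance, if a custom domain points at the LocalStack Service, it can be declared through `LOCALSTACK_HOST` so that the DNS server resolves it and its subdomains (a sketch; the hostname is illustrative):

```yaml
extraEnvVars:
  - name: LOCALSTACK_HOST
    value: "localstack.internal.example.com:4566"
```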


## Choose execution mode

The Kubernetes-native executor should be used when LocalStack is deployed inside a Kubernetes cluster and workloads must run reliably and securely.

It is the recommended execution mode for nearly all Kubernetes deployments, because Kubernetes does not include a Docker daemon inside pods and does not provide native Docker access. The Kubernetes-native executor aligns with Kubernetes’ workload model, enabling pod-level isolation, scheduling, and resource governance.

The Docker executor is not supported for use inside Kubernetes clusters. While it may function in environments that have been explicitly configured to expose a Docker-compatible runtime to the LocalStack pod, such setups are uncommon and may introduce security or operational complexity. For Kubernetes-based deployments, the Kubernetes-native executor is the supported and recommended execution mode.
236 changes: 236 additions & 0 deletions src/content/docs/aws/enterprise/kubernetes/deploy-helm-chart.md
@@ -0,0 +1,236 @@
---
title: Deploy with Helm
description: Install and run LocalStack on Kubernetes using the official Helm chart.
template: doc
sidebar:
order: 4
tags: ["Enterprise"]
---

A Helm chart is a package that bundles Kubernetes manifests into a reusable, configurable deployment unit. It makes applications easier to install, upgrade, and manage.

Using the LocalStack Helm chart lets you deploy LocalStack to Kubernetes with sensible defaults while still customizing resources, persistence, networking, and environment variables through a single `values.yaml`. This approach is especially useful for teams running LocalStack in shared clusters or CI environments where repeatable, versioned deployments matter.

## Getting Started

This guide shows you how to install and run LocalStack on Kubernetes using the official Helm chart. It walks you through adding the Helm repository, installing and configuring LocalStack, and verifying that your deployment is running and accessible in your cluster.

## Prerequisites

* **Kubernetes** 1.19 or newer
* **Helm** 3.2.0 or newer
* A working Kubernetes cluster (self-hosted, managed, or local)
* `kubectl` installed and configured for your cluster
* Helm CLI installed and available in your shell `PATH`

:::note
**Namespace note:** All commands in this guide assume installation into the **`default`** namespace.
If you’re using a different namespace:
* Add `--namespace <name>` (and `--create-namespace` on first install) to Helm commands
* Add `-n <name>` to `kubectl` commands
:::

## Install

### 1) Add Helm repo

```bash
helm repo add localstack https://localstack.github.io/helm-charts
helm repo update
```

### 2) Install with default configuration

```bash
helm install localstack localstack/localstack
```

This creates the LocalStack resources in your cluster using the chart defaults.

### Install LocalStack Pro

If you want to use the `localstack-pro` image, create a `values.yaml` file:

```yaml
image:
  repository: localstack/localstack-pro

extraEnvVars:
  - name: LOCALSTACK_AUTH_TOKEN
    value: "<your auth token>"
```

Then install using your custom values:

```bash
helm install localstack localstack/localstack -f values.yaml
```


#### Auth token from a Kubernetes Secret

If your auth token is stored in a Kubernetes Secret, you can reference it using `valueFrom`:

```yaml
extraEnvVars:
  - name: LOCALSTACK_AUTH_TOKEN
    valueFrom:
      secretKeyRef:
        name: <name of the secret>
        key: <name of the key in the secret containing the auth token>
```
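
Such a Secret could be created from a manifest like the following sketch (the Secret name and key are placeholders that must match your `secretKeyRef`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: localstack-auth   # placeholder Secret name
type: Opaque
stringData:
  auth-token: "<your auth token>"   # placeholder key and value
```

Equivalently, `kubectl create secret generic localstack-auth --from-literal=auth-token=<your auth token>` produces the same Secret.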

## Configure chart

The chart ships with sensible defaults, but most production setups will want a small `values.yaml` to customize behavior.

### View all default values

```bash
helm show values localstack/localstack
```

### Override values with a custom `values.yaml`

Create a `values.yaml` and apply it during install/upgrade:

```bash
helm upgrade --install localstack localstack/localstack -f values.yaml
```


## Verify

### 1) Check the Pod status

```bash
kubectl get pods
```

After a short time, you should see the LocalStack Pod in `Running` status:

```text
NAME                          READY   STATUS    RESTARTS   AGE
localstack-7f78c7d9cd-w4ncw   1/1     Running   0          1m9s
```

### 2) Optional: Port-forward to access LocalStack from localhost

If you’re running a **local cluster** (for example, k3d) and LocalStack is not exposed externally, port-forward the service:

```bash
kubectl port-forward svc/localstack 4566:4566
```

Now verify connectivity with the AWS CLI:

```bash
aws sts get-caller-identity --endpoint-url "http://localhost:4566"
```

Example response:

```json
{
"UserId": "AKIAIOSFODNN7EXAMPLE",
"Account": "000000000000",
"Arn": "arn:aws:iam::000000000000:root"
}
```

## Common customizations

### Enable persistence

If you want state to survive Pod restarts, enable PVC-backed persistence:

* Set: `persistence.enabled = true`

Example `values.yaml`:

```yaml
persistence:
  enabled: true
```

:::note
This is especially useful for workflows where you seed resources or rely on state across restarts.
:::
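
Depending on the chart version, the `persistence` block typically accepts further settings. The extra keys below are assumptions to verify against `helm show values localstack/localstack` before use:

```yaml
persistence:
  enabled: true
  # Assumed keys -- confirm they exist in your chart version:
  size: 10Gi
  storageClass: standard
```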


### Set Pod resource requests and limits

Some environments (notably **EKS on Fargate**) may terminate the LocalStack pod if not configured with reasonable requests/limits:

```yaml
resources:
  requests:
    cpu: 1
    memory: 1Gi
  limits:
    cpu: 2
    memory: 2Gi
```

### Add environment variables and startup scripts

You can inject environment variables or run a startup script to:

* pre-configure LocalStack
* seed AWS resources
* tweak LocalStack behavior

Use:

* `extraEnvVars` for environment variables
* `startupScriptContent` for startup scripts

Example pattern:

```yaml
extraEnvVars:
  - name: DEBUG
    value: "1"

startupScriptContent: |
  echo "Starting up..."
  # add your initialization logic here
```
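
Building on the pattern above, a startup script can seed AWS resources with the `awslocal` CLI bundled in the LocalStack image (the bucket name is illustrative):

```yaml
startupScriptContent: |
  #!/bin/bash
  # Create an S3 bucket at startup (illustrative resource name)
  awslocal s3 mb s3://seed-bucket
```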

### Install into a different namespace

Use `--namespace` and create it on first install:

```bash
helm install localstack localstack/localstack --namespace localstack --create-namespace
```

Then include the namespace on kubectl commands:

```bash
kubectl get pods -n localstack
```

### Update installation

```bash
helm repo update
helm upgrade localstack localstack/localstack
```

If you use a `values.yaml`:

```bash
helm upgrade localstack localstack/localstack -f values.yaml
```


### Helm chart options

Run:

```bash
helm show values localstack/localstack
```

The command above prints the full list of chart parameters, including common settings such as persistence, resources, environment variables, and service exposure.