Install Dify

This document describes how to deploy Dify to a Kubernetes cluster using the Helm chart, along with common configuration options. For an overview of Dify and its components, see Introduction. The chart deploys the Dify components only; PostgreSQL, Redis, and the vector store must be provided externally and configured in values.

Overview

The chart runs the following components as separate workloads:

  • API – Backend API and business logic
  • Worker – Celery workers for async tasks
  • Worker Beat – Celery beat for scheduled tasks
  • Web – Frontend UI
  • Plugin Daemon – Plugin runtime
  • Sandbox – Isolated code execution
  • SSRF Proxy – Squid-based proxy for outbound requests

Prerequisites

  • Kubernetes 1.19+
  • Ingress controller (e.g. nginx-ingress-controller) when exposing via Ingress
  • External PostgreSQL 12+ (this chart only supports PostgreSQL)
  • External Redis 6.0+ (standalone only; Sentinel and Cluster are not supported)
  • For RAG: PostgreSQL with pgvector extension, or set vectorStore.type: "" to disable

Downloading

Download the Dify installation file: dify.ALL.xxxx.tgz

Use the violet command to publish to the platform repository:

violet push --platform-address=platform-access-address --platform-username=platform-admin --platform-password=platform-admin-password dify.ALL.xxxx.tgz

Deployment

Prepare Database

Use PostgreSQL 12+. The chart only supports PostgreSQL. You can create a PostgreSQL cluster via the PostgreSQL operator in Data Services and obtain the host and credentials from the instance details.
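Once the instance is running, you typically create a dedicated database and role for Dify. A minimal sketch using psql; the database name dify, the role dify, and the postgres superuser are assumptions, so substitute your own:

```shell
# Connect as a superuser and create a database and role for Dify.
# <postgres-host> and <db-password> are placeholders.
psql -h <postgres-host> -U postgres -c "CREATE DATABASE dify;"
psql -h <postgres-host> -U postgres -c "CREATE USER dify WITH PASSWORD '<db-password>';"
psql -h <postgres-host> -U postgres -c "GRANT ALL PRIVILEGES ON DATABASE dify TO dify;"
```

The host and credentials you use here are the values referenced later in the chart's database section.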

Prepare Redis

Use Redis 6.0+ in standalone mode only (Sentinel and Cluster are not supported). You can create a Redis instance via Data Services. To create a standalone instance:

  1. When creating the instance, select Redis Sentinel for Architecture.
  2. Switch to YAML mode, set spec.arch to standalone, then create.
  3. After creation, in Alauda Container Platform, find the Service named rfr-<Redis instance name>-read-write and use it as the Redis host.
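The read-write Service can also be looked up from the command line; the namespace is a placeholder:

```shell
# List Services in the Redis instance's namespace and filter for the
# read-write entry; its cluster DNS name serves as the Redis host.
kubectl get svc -n <redis-namespace> | grep 'read-write'
```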

Prepare Vector Store (for RAG)

This chart supports pgvector only. Use a PostgreSQL instance with the pgvector extension (can be the same host as the main database with a different database name, or a dedicated host). If you do not use RAG, set vectorStore.type: "" and omit pgvector.
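If you manage the PostgreSQL instance yourself, make sure the extension is enabled in the target database. A sketch, assuming a dify_vector database and superuser access (both are assumptions):

```shell
# Enable the pgvector extension in the vector-store database.
# Host, database name, and role are placeholders.
psql -h <pgvector-host> -U postgres -d dify_vector \
  -c "CREATE EXTENSION IF NOT EXISTS vector;"
```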

Prepare Storage (optional)

If you use PVCs for API and plugin storage, the cluster needs a CSI provisioner or pre-created PersistentVolumes. By default the chart uses PVCs; you can override storageClass, size, and accessMode in values, or use S3/MinIO instead (see Storage (S3 and PVC)).

Create Application

  1. In Alauda Container Platform, select the namespace where Dify will be deployed.
  2. Go to Applications / Applications, click Create.
  3. Choose Create from Catalog and go to the Catalog view.
  4. Find 3rdparty/chart-dify and click Create.
  5. Enter Name (e.g. dify) and configure Custom values as below, then create. You can change values later via Update on the application.

Required Configuration

You must configure at least:

  1. URLs – urls.consoleUrl and urls.appUrl (base URLs for browser access, no path suffix; required for the app to open correctly)
  2. Database – PostgreSQL 12+ (host and credentials via Secret)
  3. Redis – Redis 6.0+ standalone (host and credentials via Secret)
  4. Vector store – pgvector (host and credentials via Secret), or set vectorStore.type: "" to disable

Minimal Required Values

Create the secrets (replace placeholders with your values):

kubectl create secret generic dify-db-secret --from-literal=password='<db-password>'
kubectl create secret generic dify-redis-secret --from-literal=password='<redis-password>'
kubectl create secret generic dify-pgvector-secret --from-literal=password='<pgvector-password>'

Minimal Custom Values (fill in your hosts and secret names):

# Required: base URLs for console and app (no path suffix)
urls:
  consoleUrl: <https://console-domain>   # or http://
  appUrl: <https://app-domain>           # or http://

# Required: main database (PostgreSQL 12+)
database:
  host: <postgres-host-without-port>
  secret:
    name: <dify-db-secret>

# Required: Redis 6.0+ (standalone only)
redis:
  host: <redis-host-without-port>
  secret:
    name: <dify-redis-secret>

# Required: vector store (pgvector; or set vectorStore.type: "" to disable)
vectorStore:
  pgvector:
    host: <pgvector-host-without-port>
    secret:
      name: <dify-pgvector-secret>
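After the application is created, you can check that all components come up; the namespace is a placeholder and the exact Pod names depend on the release name:

```shell
# Watch the Dify workloads until they are Running and Ready.
kubectl get pods -n <dify-namespace> -w
# Confirm the Ingress (if enabled) picked up the expected host.
kubectl get ingress -n <dify-namespace>
```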

Optional Configuration

Vector store (disable)

When not using RAG/vector search:

vectorStore:
  type: ""

Ingress (host and TLS)

By default ingress.enabled is true and ingress.hosts[].host may be empty (which matches all domains). Set a hostname and TLS when needed:

ingress:
  className: <ingress-class>
  hosts:
    - host: <dify.example.com>
  tls:
    - secretName: <dify-tls>
      hosts:
        - <dify.example.com>
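The TLS Secret referenced above can be created from an existing certificate and key; the file names are placeholders:

```shell
# Create the TLS Secret consumed by ingress.tls[].secretName.
kubectl create secret tls <dify-tls> \
  --cert=<path/to/tls.crt> --key=<path/to/tls.key> \
  -n <dify-namespace>
```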

Storage (S3 and PVC)

PVC (default): API and plugin daemon each use a PVC when enabled. Override storage class and size as needed:

storage:
  api:
    type: opendal
    pvc:
      enabled: true
      size: 10Gi
      storageClass: <storage-class>
      accessMode: ReadWriteOnce
  plugin:
    type: local
    pvc:
      enabled: true
      size: 10Gi
      storageClass: <storage-class>
      accessMode: ReadWriteOnce

S3 (object storage): Use S3 or MinIO-compatible storage for API and/or plugin. Create a Secret with ACCESS_KEY and SECRET_KEY (or configure custom keys in values):

kubectl create secret generic <dify-s3-secret> --from-literal=ACCESS_KEY='<access-key>' --from-literal=SECRET_KEY='<secret-key>'
storage:
  api:
    type: s3
    s3:
      endpoint: <s3-endpoint>   # empty for AWS default; MinIO e.g. http://minio:9000
      region: <region>
      bucket: <bucket>
      addressStyle: path   # "path" for MinIO; "virtual" for AWS
      secret:
        name: <dify-s3-secret>
  plugin:
    type: s3
    s3:
      endpoint: <s3-endpoint>
      region: <region>
      bucket: <bucket>
      secret:
        name: <dify-s3-secret>
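For MinIO, the buckets referenced in values must exist before Dify writes to them. A sketch using the MinIO client mc; the alias name and endpoint are assumptions:

```shell
# Point mc at the MinIO endpoint and create the bucket used in values.
mc alias set dify-minio http://minio:9000 '<access-key>' '<secret-key>'
mc mb dify-minio/<bucket>
```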

PIP install mirror (proxy)

When the cluster cannot access PyPI (e.g. offline or restricted network), set a PIP index URL for Plugin Daemon (plugin dependencies) and/or Sandbox (code execution):

pluginDaemon:
  pipMirrorUrl: "<mirror-url>"   # e.g. https://mirrors.aliyun.com/pypi/simple/

sandbox:
  pipMirrorUrl: "<mirror-url>"

For a simple self-hosted PyPI proxy you can use devpi; then set pipMirrorUrl to that proxy URL (e.g. http://<host>:3141/root/pypi/+simple/).
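A minimal devpi setup looks roughly like this; the port shown is the devpi default, and command names may differ between devpi-server versions, so verify against the devpi documentation:

```shell
# Install and initialize devpi-server, then run it.
# root/pypi is the built-in index that mirrors PyPI.
pip install devpi-server
devpi-init
devpi-server --host 0.0.0.0 --port 3141
```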

Marketplace (internal network)

When the cluster cannot reach the public marketplace (https://marketplace.dify.ai):

Option 1 – Disable marketplace: Install plugins via "Install via Local Package File" in the console. See Dify: Installing the Plugin.

api:
  marketplace:
    enabled: false

Option 2 – Internal marketplace proxy: Deploy a reverse proxy to https://marketplace.dify.ai (the upstream requires Host: marketplace.dify.ai) and set:

api:
  marketplace:
    enabled: true
    url: <https://internal-marketplace.example.com>
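One way to build such a proxy inside the cluster is an ExternalName Service plus an ingress-nginx Ingress that rewrites the upstream Host header. A sketch; the names and internal host are assumptions, and the annotations are specific to ingress-nginx:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: marketplace-proxy
spec:
  type: ExternalName
  externalName: marketplace.dify.ai
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: marketplace-proxy
  annotations:
    # The upstream requires Host: marketplace.dify.ai
    nginx.ingress.kubernetes.io/upstream-vhost: marketplace.dify.ai
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
    - host: internal-marketplace.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: marketplace-proxy
                port:
                  number: 443
```

Then set api.marketplace.url to the internal hostname served by this Ingress.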

Access

  • Via Ingress: If Ingress is enabled and a host is set, use https://<host> (or the configured urls.consoleUrl / urls.appUrl) to open the console and app.
  • Via Service: Otherwise use the API and Web Services (e.g. NodePort or LoadBalancer) as exposed by the chart; ensure urls.consoleUrl and urls.appUrl match how users reach the app so the frontend loads correctly.
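A quick reachability check from outside the cluster; the URL is a placeholder:

```shell
# The console should answer with an HTTP response (200 or a redirect).
curl -I https://<console-domain>
```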

User Management

Dify has no default admin password. Complete initial setup and create users (e.g. email/password) from the login or sign-up page after the application is running.