Comprehensive Guide: Installing Docusaurus with Docker & Cloudflare Tunnel

Info: Document Version 1.1 · Last Updated: 2026-02-01 · Target Audience: DevOps Engineers, System Administrators · Difficulty: Intermediate

This comprehensive guide details the complete process of deploying a production-ready Docusaurus instance using Docker containers, secured behind a Cloudflare Zero Trust Tunnel.

This guide is based on our specific infrastructure standards located at /opt/docker-data/apps/docusaurus.


1. System Architecture Overview

Before diving into commands, it is crucial to understand the architecture we are building.

The Components

  1. Docker Container: Runs Docusaurus (Node.js runtime) in an isolated container.
  2. Docker Network: Bridges the internal container to the Cloudflare Tunnel container.
  3. Cloudflare Tunnel (cloudflared): Establishes a secure outbound connection to Cloudflare's edge network, exposing the internal Docusaurus service without opening inbound firewall ports.
  4. Cloudflare Zero Trust: Adds an authentication layer (SSO, OTP) in front of the application.

Data Flow Diagram

Visitor → Cloudflare Edge → cloudflared (outbound tunnel) → app-network → http://docusaurus:3000

2. Prerequisites & Application Structure

2.1 Host Requirements

  • OS: Linux (Ubuntu 22.04+ recommended)
  • Runtime: Docker Engine & Docker Compose
  • Resources: Minimum 2 vCPU, 4GB RAM (Docusaurus builds are memory intensive)

2.2 Directory Structure

We strictly adhere to the designated path /opt/docker-data for all persistent data.

/opt/docker-data/apps/docusaurus/
├── site/ # (Mapped to /app) Source code of the website
│ ├── docs/ # Documentation markdown files
│ ├── src/ # React components and CSS
│ ├── static/ # Images and public assets
│ ├── docusaurus.config.js # Main configuration
│ └── sidebars.js # Sidebar structure
├── build/ # (Optional) Static build output
└── backups/ # Local backups of the content

2.3 Network Configuration

All applications must share a dedicated bridge network to communicate by hostname.

# Create the network if it doesn't exist
docker network create app-network
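`docker network create` exits with an error if the network already exists, so repeated provisioning runs benefit from an idempotent check. A small sketch (the `ensure_network` helper is hypothetical, not part of Docker):

```shell
# Create the bridge network only if it does not already exist
ensure_network() {
  docker network inspect "$1" >/dev/null 2>&1 || docker network create "$1"
}

# Usage: ensure_network app-network
```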

3. Step-by-Step Installation

Step 1: Prepare the File System

First, we create the necessary directory structure. This ensures data persistence across container restarts.

# Define the root path
export APP_ROOT="/opt/docker-data/apps/docusaurus"

# Create directories
mkdir -p "$APP_ROOT/site"
mkdir -p "$APP_ROOT/build"
mkdir -p "$APP_ROOT/backups"

# Set permissions (ensure your user has access)
sudo chown -R $USER:$USER "$APP_ROOT"
chmod -R 750 "$APP_ROOT"
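Before moving on, you can sanity-check the layout. A minimal helper (hypothetical, not required by Docusaurus) that reports any directory missing from the structure in Section 2.2:

```shell
# Report any missing directories under the given application root
check_layout() {
  local root="$1" missing=0
  for d in site build backups; do
    if [ ! -d "$root/$d" ]; then
      echo "missing: $root/$d"
      missing=1
    fi
  done
  return "$missing"
}

# Usage: check_layout /opt/docker-data/apps/docusaurus
```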

Step 2: Initialize Docusaurus Source Code

We need to generate the scaffolding for the site. If you are migrating an existing site, skip to Step 3.

Option A: Automated Scaffolding (Recommended) Use npx to create the standard structure.

cd "$APP_ROOT"
# We use a temporary container to generate the files to avoid installing Node on the host
docker run --rm -v "$PWD:/work" -w /work node:lts npx create-docusaurus@latest site classic

Option B: Manual Scaffolding (Custom/Minimal) Create the package.json manually if you need specific versions.

site/package.json
{
  "name": "my-docusaurus-site",
  "version": "1.0.0",
  "scripts": {
    "start": "docusaurus start --host 0.0.0.0 --port 3000",
    "build": "docusaurus build",
    "serve": "docusaurus serve --host 0.0.0.0 --port 3000"
  },
  "dependencies": {
    "@docusaurus/core": "latest",
    "@docusaurus/preset-classic": "latest",
    "react": "latest",
    "react-dom": "latest"
  }
}
Port Binding

Note the --host 0.0.0.0 flag in the start scripts. This is critical. By default, Docusaurus binds to localhost (127.0.0.1). Inside a Docker container, localhost is isolated, meaning the host machine or other containers cannot reach it. Binding to 0.0.0.0 allows external connections.


Step 3: Deploy the Docusaurus Container

We will use a standard node image. We explicitly do not use a Dockerfile in this setup for development flexibility; instead, we mount the code and run npm start.

Docker Run Command

Execute the following command to start the container.

# Configuration Variables
APP_NAME="docusaurus"
NETWORK="app-network"
IMAGE="node:lts"

docker run -d \
  --name "$APP_NAME" \
  --hostname "$APP_NAME" \
  --network "$NETWORK" \
  --restart unless-stopped \
  --cpus="3" \
  --memory="8g" \
  --workdir /app \
  -v "/opt/docker-data/apps/docusaurus/site:/app" \
  "$IMAGE" \
  sh -c "npm install && npm start"

Detailed Breakdown of Flags:

  • -d: Detached mode (runs in background).
  • --name docusaurus: The container name.
  • --hostname docusaurus: Internal DNS name. Other containers (like the tunnel) will reach it at http://docusaurus:3000.
  • --network app-network: Connects to your shared Docker network.
  • --restart unless-stopped: Auto-restarts on crash or reboot.
  • --cpus="3": Performance Tuning. Docusaurus Webpack builds are CPU intensive; this cap keeps a runaway build from starving other containers on the host.
  • --memory="8g": Performance Tuning. Gives the build enough headroom to avoid OOM (Out of Memory) kills during npm run build.
  • -v ...:/app: Mounts your local folder into the container.
  • sh -c "npm install && npm start": Runs installation and starts the dev server.
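If you prefer declarative deployments, the same flags translate to a Docker Compose service. A hypothetical compose.yml sketch, assuming the app-network from Section 2.3 already exists:

```yaml
services:
  docusaurus:
    image: node:lts
    container_name: docusaurus
    hostname: docusaurus
    restart: unless-stopped
    working_dir: /app
    command: sh -c "npm install && npm start"
    cpus: "3"
    mem_limit: 8g
    volumes:
      - /opt/docker-data/apps/docusaurus/site:/app
    networks:
      - app-network

networks:
  app-network:
    external: true
```

Start it with docker compose up -d; the container name and network match the rest of this guide.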

Step 4: Verify Container Health

After running the command, check if the container is up.

docker logs -f docusaurus

Expected Output:

> docusaurus start --host 0.0.0.0 --port 3000
[INFO] Starting the development server...
[SUCCESS] Docusaurus website is running at: http://0.0.0.0:3000/
Pro Troubleshooting

If you see Error: EACCES: permission denied, check the folder ownership of /opt/docker-data/apps/docusaurus/site. The container runs as root by default, so it usually creates node_modules owned by root; re-run the chown from Step 1 after stopping the container if needed.


4. Cloudflare Tunnel Configuration

Now that Docusaurus is running locally (inside Docker), we need to expose it to the world securely.

Step 1: Locate your Tunnel Config

Your cloudflared instance should already be running. Configuration is typically found at: /etc/cloudflared/config.yml or mapped inside the tunnel container.

Step 2: Add Ingress Rule

You must tell Cloudflare to route traffic requesting your subdomain (e.g., docs.example.com) to your docker container.

Edit your config.yml:

config.yml
tunnel: <Your-Tunnel-UUID>
credentials-file: /etc/cloudflared/cert.json

ingress:
  # Route for Docusaurus
  - hostname: docs.brain.id86.net
    service: http://docusaurus:3000

  # Catch-all (must be last)
  - service: http_status:404
Hostname Resolution

Notice service: http://docusaurus:3000.

  • http: The protocol.
  • docusaurus: The Container Name. This works because both containers are on app-network.
  • 3000: The port Docusaurus listens on.

Step 3: Restart Tunnel

For changes to take effect:

docker restart cloudflared

Check the logs to confirm the ingress rule was registered:

docker logs cloudflared | grep "Registered tunnel connection"

5. Securing with Cloudflare Zero Trust

To make your documentation private (e.g., only for employees), set up a Zero Trust Application.

  1. Log in to Cloudflare Zero Trust Dashboard.
  2. Go to Access > Applications > Add an application.
  3. Select Self-hosted.
  4. Application Configuration:
    • Application Name: Internal Docs
    • Session Duration: 24 hours
    • Subdomain: docs (matches your ingress hostname)
    • Domain: brain.id86.net
  5. Identity Providers: Select configured providers (Google, GitHub, OTP).
  6. Policies:
    • Rule Name: Allow Team
    • Action: Allow
    • Include: Emails ending in @yourcompany.com or specific email list.
  7. Save Application.

Now, anyone visiting https://docs.brain.id86.net will be challenged to log in before seeing the Docusaurus site.


6. Maintenance & Operations

How to Update Docusaurus

To update to the latest version of Docusaurus:

# 1. Update package.json inside container
docker exec -it docusaurus npm install @docusaurus/core@latest @docusaurus/preset-classic@latest

# 2. Restart container to rebuild
docker restart docusaurus

Installing Plugins

If you need to install a search plugin or theme:

docker exec -it docusaurus npm install @docusaurus/theme-search-algolia
docker restart docusaurus

Viewing Build Logs

If the site crashes or shows a 502 error, it is almost always a build error.

docker logs --tail 100 -f docusaurus

Look for:

  • ReferenceError: You referred to a variable that doesn't exist.
  • Module not found: You forgot to install a dependency.
  • SyntaxError: You missed a comma in docusaurus.config.js.

7. Configuration Deep Dive: docusaurus.config.js

Configuring Docusaurus correctly is vital for performance and usability. Here are the key sections you need to understand.

Site Metadata

These fields affect SEO and the browser tab title.

module.exports = {
  title: 'My Documentation',
  tagline: 'Cool docs',
  url: 'https://docs.brain.id86.net',
  baseUrl: '/', // Always '/' unless hosted in a subfolder like /my-docs/
  // ...
};

Presets (Classic)

The 'classic' preset includes the documentation plugin, blog plugin, and custom pages.

presets: [
  [
    '@docusaurus/preset-classic',
    {
      docs: {
        sidebarPath: require.resolve('./sidebars.js'),
        // Show "last updated by/on" metadata at the bottom of each doc
        showLastUpdateAuthor: true,
        showLastUpdateTime: true,
      },
      theme: {
        customCss: require.resolve('./src/css/custom.css'),
      },
    },
  ],
],

Navbar

The navbar allows you to link to external resources or switch between versions.

themeConfig: {
  navbar: {
    title: 'My Site',
    logo: {
      alt: 'My Site Logo',
      src: 'img/logo.svg',
    },
    items: [
      {
        type: 'doc',
        docId: 'intro',
        position: 'left',
        label: 'Tutorial',
      },
      {
        href: 'https://github.com/facebook/docusaurus',
        label: 'GitHub',
        position: 'right',
      },
    ],
  },
},

Footer

Docusaurus allows for a multi-column footer.

footer: {
  style: 'dark',
  links: [
    {
      title: 'Docs',
      items: [
        {
          label: 'Getting Started',
          to: '/docs/intro',
        },
      ],
    },
    // ... more columns
  ],
  copyright: `Copyright © ${new Date().getFullYear()} My Project, Inc. Built with Docusaurus.`,
},

8. Advanced: Production Docker Strategy

While using npm start (development mode) is fine for small internal teams, for maximum performance and stability, you should use a multi-stage Dockerfile to serve static files with Nginx.

Why Production Builds?

  • Performance: Serving static HTML is 10x faster than the Node.js dev server.
  • Stability: The dev server can crash on memory spikes; Nginx is rock solid.
  • Security: Minimal attack surface.

The Multi-Stage Dockerfile

Create a file named Dockerfile.prod in your project root:

# Stage 1: Base
FROM node:lts-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: Serve
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Running Production Mode

To run this production build:

# 1. Build the image
docker build -t my-docusaurus-prod -f Dockerfile.prod .

# 2. Run the container
docker run -d \
--name docusaurus-prod \
--network app-network \
--restart unless-stopped \
my-docusaurus-prod

Note: You would need to update your Cloudflare Tunnel ingress to point to port 80 (Nginx default) instead of 3000.
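For example, the ingress rule from Section 4 would change along these lines (assuming the container name docusaurus-prod used above):

```yaml
ingress:
  - hostname: docs.brain.id86.net
    service: http://docusaurus-prod:80
  - service: http_status:404
```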


9. Automated Backup Strategy

Data loss prevention is key. Since Docker containers are ephemeral, we must backup the mounted volumes.

The Backup Script

Create a script named backup-docusaurus.sh.

#!/bin/bash
# Docusaurus Backup Script

# Variables
BACKUP_DIR="/opt/docker-data/apps/docusaurus/backups"
SOURCE_DIR="/opt/docker-data/apps/docusaurus/site"
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
FILENAME="docusaurus_backup_$TIMESTAMP.tar.gz"

# Create backup dir if not exists
mkdir -p "$BACKUP_DIR"

# Clean old backups (keep last 7 days)
find "$BACKUP_DIR" -type f -name "*.tar.gz" -mtime +7 -delete

# Create Archive
echo "Creating backup: $FILENAME"
tar -czf "$BACKUP_DIR/$FILENAME" -C "$SOURCE_DIR" .

echo "Backup complete: $BACKUP_DIR/$FILENAME"
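A backup you have never read back is not a backup. A small verification sketch (the verify_latest_backup helper is hypothetical) that lists the newest archive's contents without extracting it:

```shell
# Integrity check: confirm the newest archive in a directory is readable
verify_latest_backup() {
  local latest
  latest=$(ls -t "$1"/*.tar.gz 2>/dev/null | head -n 1)
  [ -n "$latest" ] || { echo "no backups found in $1"; return 1; }
  tar -tzf "$latest" >/dev/null && echo "OK: $latest"
}

# Usage: verify_latest_backup /opt/docker-data/apps/docusaurus/backups
```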

Scheduling with Cron

Make the script executable and add it to cron.

chmod +x backup-docusaurus.sh
crontab -e

Add the line to run daily at 3 AM:

0 3 * * * /path/to/backup-docusaurus.sh >> /var/log/docusaurus_backup.log 2>&1

10. Emergency Restoration Guide

If your site directory gets corrupted or you accidentally delete files, here is how to restore from the backups we set up in Section 9.

Step 1: Stop the Container

Stop the application container to ensure data consistency during the restore.

docker stop docusaurus

Step 2: Locate Backup

Find the file you want to restore.

ls -lh /opt/docker-data/apps/docusaurus/backups/
# Example: docusaurus_backup_20240201_030000.tar.gz

Step 3: Extract Archive

We will extract the archive back into the site directory.

# Define paths
BACKUP_FILE="/opt/docker-data/apps/docusaurus/backups/docusaurus_backup_20240201_030000.tar.gz"
TARGET_DIR="/opt/docker-data/apps/docusaurus/site"

# Warning: This overwrites existing files!
# Clear the directory first (including dotfiles, which a bare * glob would miss)
# so files deleted since the backup don't linger
find "$TARGET_DIR" -mindepth 1 -delete

# Extract
tar -xzf "$BACKUP_FILE" -C "$TARGET_DIR"

# Restore Permissions (Crucial!)
sudo chown -R 1000:1000 "$TARGET_DIR"
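Since the extraction above is destructive, it is worth snapshotting the current (possibly corrupted) site directory first. A sketch (snapshot_dir is a hypothetical helper):

```shell
# Archive a directory to a timestamped tarball before a destructive restore
snapshot_dir() {
  local src="$1" dest_dir="$2"
  local out="$dest_dir/pre_restore_$(date +%Y%m%d_%H%M%S).tar.gz"
  mkdir -p "$dest_dir"
  tar -czf "$out" -C "$src" . && echo "$out"
}

# Usage: snapshot_dir /opt/docker-data/apps/docusaurus/site \
#                     /opt/docker-data/apps/docusaurus/backups
```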

Step 4: Restart Container

Bring the site back online.

docker start docusaurus

Check logs to verify everything loaded correctly:

docker logs --tail 20 docusaurus

11. Troubleshooting: Advanced Scenarios

Scenario A: npm install hangs indefinitely

Symptoms: The container log stops at a line like "reify:fsevents: sill". Cause: Network MTU issues or resource starvation. Fix:

  1. Check your server's MTU setting (common in VPN/WireGuard setups).
  2. Switching to yarn or pnpm instead of npm sometimes resolves these hangs.
  3. Ensure you have assigned at least 4GB RAM to the container.
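To check point 1, compare MTU values across interfaces; a mismatch (e.g., 1420 on a WireGuard interface vs 1500 on docker0) is a common culprit. A small sketch around `ip -o link` (print_mtus is a hypothetical helper):

```shell
# Print "interface mtu" pairs from `ip -o link` output
# (field 2 is the interface name, field 5 is the MTU value)
print_mtus() {
  awk '{gsub(":", "", $2); print $2, $5}'
}

# Usage: ip -o link show | print_mtus
```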

Scenario B: "Invalid Host Header"

Symptoms: You see "Invalid Host Header" when accessing via Cloudflare. Cause: Webpack Dev Server validates the Host header for security. Fix: The dev server must be told to accept your public hostname; there is no dedicated Docusaurus CLI flag for this, so either configure the underlying dev server's allowed hosts, or (preferably) serve a production build as described in Section 8, since static files involve no host check. Only relax host checking if you are behind a trusted proxy like Cloudflare.

Note that start-script flags such as --no-open and --poll 1000 control browser auto-open and file watching; they do not disable the host check.

Scenario C: Cloudflare Tunnel "Unable to reach origin"

Symptoms: 502 Bad Gateway from Cloudflare. Analysis:

  1. Cloudflare cannot talk to cloudflared container (Unlikely if other apps work).
  2. cloudflared container cannot talk to docusaurus container.

Debugging Network: Exec into the cloudflared container and try to ping docusaurus.

docker exec -it cloudflared sh
# Inside container
ping docusaurus
wget -qO- http://docusaurus:3000

If ping works but wget fails, the Docusaurus server isn't listening on port 3000. If ping fails (Name resolution error), they are not on the same Docker network.


12. Security Checklist

Before considering your installation "Production Ready", verify these points:

  1. [ ] Zero Trust Active: Try accessing the site from an incognito window. You should be redirected to a login page.
  2. [ ] No Public Ports: Run nmap <your-ip>. Port 3000 should not be open.
  3. [ ] Volume Permissions: Ensure only the docker user/root can read docusaurus.config.js if it contains secrets.
  4. [ ] Generic Error Pages: Ensure your application doesn't leak stack traces on 500 errors (Docusaurus production build handles this).
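For point 2, if nmap is not installed, bash can probe a port directly via its /dev/tcp pseudo-device. A sketch (bash-specific; port_open is a hypothetical helper):

```shell
# Return 0 if a TCP connection to host:port succeeds, non-zero otherwise
port_open() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

# Usage: port_open 203.0.113.10 3000 && echo "WARNING: port 3000 is reachable"
```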

13. Frequently Asked Questions (FAQ)

Q: Can I run multiple Docusaurus sites on one VPS? A: Yes!

  1. Create a new folder: /opt/docker-data/apps/doc-site-2
  2. Run a new container named docusaurus-2 on port 3000 (inside the container).
  3. Connect it to app-network.
  4. Add a new ingress rule in cloudflared mapping docs2.domain.com -> http://docusaurus-2:3000.

Q: Why do I see "Disconnection" messages in the Docusaurus terminal? A: This is normal for the WebSocket connection used for Hot Module Replacement (HMR) if the connection is unstable or if Cloudflare times out long-lived idle connections.

Q: How do I customize the sidebar? A: Edit sidebars.js. You can manually list items or use type: 'autogenerated' to have the sidebar mirror your docs folder structure.
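For the autogenerated approach, a minimal sidebars.js might look like this (the sidebar name docsSidebar is arbitrary):

```js
// sidebars.js — let the sidebar mirror the docs/ folder structure
module.exports = {
  docsSidebar: [
    {
      type: 'autogenerated',
      dirName: '.', // generate from the root of the docs directory
    },
  ],
};
```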


Conclusion

You now have a robust, secure, and scalable documentation platform. By leveraging Docker for isolation and Cloudflare Tunnel for secure connectivity, you avoid the complexity of managing Nginx reverse proxies, SSL certificates (Let's Encrypt), and firewall rules manually.

For further reading, consult the Official Docusaurus Documentation.