Hosting Stable Diffusion as a Family Service: Multi-User Setup (2026)

stable-diffusion, comfyui, automatic1111, multi-user, tailscale, home-server, tutorial, family-setup

Every Stable Diffusion guide assumes you are the only person using the machine. Fire up ComfyUI on port 8188, generate images, done. That assumption breaks the moment your partner wants to generate wallpapers from their laptop, your teenager wants to run a fantasy portrait session from their phone, and you want all of this to happen without anyone stomping on each other’s queue or accidentally overwriting saved workflows.

The setup: a single GPU machine, multiple household users, and a configuration that does not require everyone to understand Python virtual environments. Stack is ComfyUI (or Automatic1111), Caddy for basic auth, and Tailscale for remote access. Expected time from a working SD install: 60–90 minutes.

If you are sharing an LLM server rather than an image-gen server, the Open WebUI multi-user guide covers that parallel problem with a similar approach.

The honest GPU baseline first

Before discussing software setup, you need to know whether your GPU can realistically serve a household. Stable Diffusion is GPU-bound — every generation job monopolizes the GPU until it finishes. Unlike an LLM backend that can at least partially batch requests, SD runs one job at a time unless you have multiple GPUs.

Practical minimums for shared use:

| GPU | VRAM | SDXL throughput | Flux.1 [dev] throughput | Verdict |
|---|---|---|---|---|
| RTX 3060 | 12 GB | ~1.5 img/min at 1024×1024 | ~0.4 img/min | Usable solo, slow shared |
| RTX 4060 Ti | 16 GB | ~2.2 img/min at 1024×1024 | ~0.7 img/min | Acceptable for light family use |
| RTX 3090 | 24 GB | ~2.8 img/min at 1024×1024 | ~1.1 img/min | Comfortable; Flux at full precision |
| RTX 4090 | 24 GB | ~5.5 img/min at 1024×1024 | ~2.0 img/min | Fast enough that queue waits barely register |

Throughput figures are approximate and vary with sampler, steps, ControlNet usage, and LoRA count. The RTX 3090 at around $700 used (Amazon) is the practical sweet spot for a shared household server — enough VRAM to load Flux.1 models without offloading, fast enough that a family member gets their result in under two minutes.

The RTX 5060 Ti 16 GB works but you will hit Flux VRAM limits. See the full GPU buying guide for the decision matrix if you are still choosing hardware.

If you need to test the shared-server concept before buying hardware, RunPod lets you rent an RTX 4090 by the hour to validate your setup without the upfront spend.

The concurrent usage reality

This is the thing most multi-user SD guides skip: there is no multi-user parallelism on a single GPU. Both ComfyUI and Automatic1111 serialize requests into a queue. When two people submit jobs simultaneously, the second job waits. Neither job fails — it just queues.

What this means in practice:

  • ComfyUI has a built-in queue visible in the UI (bottom panel). Users can see their job position.
  • Automatic1111 also queues, but the queue feedback is less visible to users on separate sessions.
  • Generation time at 20 steps SDXL is roughly 30–45 seconds on an RTX 3090. A queued second user waits that long.
  • At a 45-second per-job average, 4 family members each wanting 5 images would take roughly 15 minutes to work through sequentially.

For most households this is fine — nobody is running 50-image batches simultaneously. Problems arise when someone starts a 50-step Flux job or a batch of 20 images. The fix is social: tell household users to keep batches small during shared hours, or stagger use.

There is no software solution to single-GPU serialization short of adding a second GPU. If you genuinely need true parallel image generation for multiple heavy users, you are in multi-GPU territory — but that is a different article.
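
If you want to check the queue without opening the UI — say, before kicking off a big batch — ComfyUI also exposes the queue over HTTP. A minimal sketch, assuming the default port 8188 and the queue_running / queue_pending fields the /queue endpoint returns in current builds:

# Ask the server how many jobs are running and how many are waiting
curl -s http://localhost:8188/queue | python3 -c "
import json, sys
q = json.load(sys.stdin)
print('running:', len(q.get('queue_running', [])), 'pending:', len(q.get('queue_pending', [])))
"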

The stack

Three components:

| Layer | Software | Job |
|---|---|---|
| Image generation backend | ComfyUI v0.3+ or Automatic1111 v1.10+ | GPU inference, model management, queue |
| Auth + HTTPS reverse proxy | Caddy 2 | Per-user basic auth, HTTPS termination |
| Remote access | Tailscale | Encrypted mesh so phones/tablets reach the server without port forwarding |

Caddy handles the front door. Every request to your SD server goes through Caddy first, which checks credentials before passing the request through to ComfyUI or A1111. Tailscale handles the network: no ports need to be open on your router, no dynamic DNS, no port forwarding rules.

If you are on Linux and want a more thorough production setup first, the ComfyUI Linux production guide covers systemd, Caddy, and Tailscale in detail as a starting point you can extend with the multi-user layer described here.

Step 1: Install Caddy as a reverse proxy

On Linux (Ubuntu/Debian)

sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update && sudo apt install caddy
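
Quick sanity check before moving on — the official package should have installed and started a systemd service:

caddy version                         # prints the installed Caddy version
systemctl status caddy --no-pager     # confirms the service is running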

On Windows

Download the Caddy binary from https://caddyserver.com/download and place it somewhere on your PATH, or use winget install Caddy.Caddy.

Caddyfile for ComfyUI with basic auth

Create /etc/caddy/Caddyfile (Linux) or Caddyfile next to the binary (Windows):

:8080 {
    basicauth /* {
        alice JDJhJDE0JHVjSGVzZzBFcHJKWXFpY3VGT0pCRk9vVTBob1VxZkdJNVBxVGJsbm9ZSmJrcVU0cGxDNGZP
        bob   JDJhJDE0JFZENjlhUm5vQ08xNjl2SUdNNHFGdGVEV2p5NkZUMEtiT3p3eWkucEVJVmEwSFhia0NkU0Rv
        carol JDJhJDE0JEFuZjFUd1ZFT0VQeFBMaVpVRXlxSGV5STdQdk54N0JEVHlTVFJzd0djUVM4bGQvV2dSeXRH
    }
    reverse_proxy localhost:8188
}

The hash strings above are examples — generate real ones with caddy hash-password:

caddy hash-password --plaintext "yourpassword"

Run that once per user and paste the output into the Caddyfile. Each user gets their own username and hashed password. Caddy stores only the hash — the plaintext never goes anywhere.
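
Setting up several users at once? A small loop saves some copy-pasting — this sketch assumes the alice/bob/carol usernames from the example Caddyfile, and caddy hash-password prompts for each password in turn:

for user in alice bob carol; do
  echo "--- hash for $user ---"
  caddy hash-password   # prompts for the password twice, then prints the hash to paste into the Caddyfile
done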

If ComfyUI is on port 8188 (default), the reverse_proxy localhost:8188 line is correct. For Automatic1111, change the port to 7860.

Restart Caddy after editing: sudo systemctl reload caddy on Linux, or restart the process on Windows.
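
Caddy can also check the file for mistakes before you reload it, which beats finding out from a browser error page:

caddy validate --config /etc/caddy/Caddyfile   # catches syntax errors without touching the running server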

Important: Basic auth protects the web interface but does not isolate the ComfyUI API. Any user who knows the URL and their credentials can call API endpoints directly. For a trusted household this is fine. For anything more adversarial you need a more sophisticated auth layer.

Step 2: Install Tailscale for remote access

Tailscale creates a private mesh network among all your devices. Your server, phones, laptops, and tablets all get a fixed 100.x.x.x address that works regardless of where they are physically located. No port forwarding, no dynamic DNS.

Install on the server

Linux:

curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

Follow the URL it prints to authenticate the machine to your Tailscale account.

Windows: Download the installer from https://tailscale.com/download and run it. Sign in with the same account.

Add family member devices

Each family member’s phone, tablet, or laptop needs the Tailscale app. They log in with their own Tailscale account — or if you control the setup, you can add all devices under one account with separate ACL rules. For a simple household setup, one account with shared devices is the path of least resistance.

Once all devices are on the same Tailscale network, the server’s Tailscale IP (e.g., 100.64.0.5) is reachable from any device on the network, anywhere with internet. Family members access ComfyUI at http://100.64.0.5:8080 (the Caddy port) — not the raw ComfyUI port.
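
Not sure what the server's address is? Ask Tailscale directly on the server:

tailscale ip -4     # the server's Tailscale IPv4 address (100.x.x.x)
tailscale status    # every device on your tailnet and whether it is online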

Tailscale issues free HTTPS certificates for your tailnet hostnames. After enabling MagicDNS and the HTTPS certificates feature in the Tailscale admin console:

sudo tailscale cert your-machine.tailnet-name.ts.net

This creates a certificate Caddy can use. Update the Caddyfile to use your Tailscale hostname instead of :8080:

your-machine.tailnet-name.ts.net {
    basicauth /* {
        alice $2a$14$...
    }
    reverse_proxy localhost:8188
}

Now family members access https://your-machine.tailnet-name.ts.net — real HTTPS, real certificate, browser padlock included.
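
A quick end-to-end test from any device on the tailnet — substitute a real username, password, and your actual hostname:

# Expect 401 without credentials, 200 once basic auth is supplied
curl -s -o /dev/null -w "%{http_code}\n" https://your-machine.tailnet-name.ts.net/
curl -s -o /dev/null -w "%{http_code}\n" -u alice:yourpassword https://your-machine.tailnet-name.ts.net/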

Step 3: User isolation in ComfyUI — saved workflows without the mess

ComfyUI’s default state is a single shared workspace. When your teenager saves their anime LoRA workflow under a filename someone else already used, it silently overwrites that person’s workflow — everyone shares one namespace. The fix is a directory structure that gives each user their own folder.

ComfyUI stores user-specific data (saved workflows, settings, preferences) in user/ under the ComfyUI installation directory. The structure looks like this:

ComfyUI/
└── user/
    └── default/
        ├── workflows/
        └── userdata/

By default, all users share default/. ComfyUI does not have native per-account directories — that feature requires the ComfyUI Manager extension or manual organization.

Practical approach: named workflow subfolders

Create subdirectories in user/default/workflows/ for each person:

ComfyUI/user/default/workflows/
├── alice/
├── bob/
└── carol/

Instruct each user to always save workflows inside their named folder. This is low-tech but completely reliable. ComfyUI’s workflow browser shows folders, so each person navigates to their section.
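
Creating the folders is a one-liner on the server — adjust the path if your ComfyUI lives somewhere other than ~/ComfyUI:

# One workflow folder per household member
mkdir -p ~/ComfyUI/user/default/workflows/{alice,bob,carol}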

Alternative: separate ComfyUI instances on different ports

If you want true isolation — separate saved settings, separate workflow namespaces, no cross-contamination — run two ComfyUI instances. Both share the same models directory (more on that below) but have separate data directories:

# Instance for alice (port 8188)
python main.py --port 8188 --extra-model-paths-config /opt/comfyui/shared_models.yaml

# Instance for bob (port 8189)
python main.py --port 8189 --base-path /opt/comfyui_bob --extra-model-paths-config /opt/comfyui/shared_models.yaml

Both instances access the same model files but have separate user/ directories. The Caddyfile routes alice.$YOUR_HOST to 8188 and bob.$YOUR_HOST to 8189. Both still share the single GPU queue — isolation is logical, not hardware.

This is more setup but solves the workflow contamination problem completely.
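
One way to wire up the routing is to give each instance its own front-door port instead of a subdomain — subdomains like alice.$YOUR_HOST need DNS entries that resolve on your tailnet, which MagicDNS does not create for you. A sketch that appends two per-user site blocks to the Caddyfile, reusing the example hashes from Step 1 (substitute real ones):

# alice's front door on :8081 -> her instance on 8188; bob on :8082 -> 8189
sudo tee -a /etc/caddy/Caddyfile > /dev/null <<'EOF'

:8081 {
    basicauth /* {
        alice JDJhJDE0JHVjSGVzZzBFcHJKWXFpY3VGT0pCRk9vVTBob1VxZkdJNVBxVGJsbm9ZSmJrcVU0cGxDNGZP
    }
    reverse_proxy localhost:8188
}

:8082 {
    basicauth /* {
        bob JDJhJDE0JFZENjlhUm5vQ08xNjl2SUdNNHFGdGVEV2p5NkZUMEtiT3p3eWkucEVJVmEwSFhia0NkU0Rv
    }
    reverse_proxy localhost:8189
}
EOF
caddy validate --config /etc/caddy/Caddyfile && sudo systemctl reload caddy

Traffic on those ports is still encrypted in transit — Tailscale wraps everything in WireGuard — even though Caddy serves plain HTTP on them.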

Step 4: Model management — one copy everyone can access

Models are large. A typical SDXL base model is 6.5 GB. Flux.1 [dev] is 23.8 GB. Running duplicate copies for each user wastes disk space and is unnecessary — models are read-only during inference.

The right structure is a single shared model directory that all SD instances (or both users on the same instance) read from.

Shared model directory

Create a central path and move all models there:

sudo mkdir -p /opt/stable-diffusion/models/{checkpoints,vae,loras,controlnet,clip}
sudo chown -R $(whoami):$(whoami) /opt/stable-diffusion

Move your existing models:

mv ~/ComfyUI/models/checkpoints/* /opt/stable-diffusion/models/checkpoints/

Tell ComfyUI where to look

Create /opt/comfyui/shared_models.yaml:

comfyui:
  base_path: /opt/stable-diffusion/
  checkpoints: models/checkpoints/
  vae: models/vae/
  loras: models/loras/
  controlnet: models/controlnet/
  clip: models/clip/

Launch ComfyUI with:

python main.py --extra-model-paths-config /opt/comfyui/shared_models.yaml

For Automatic1111, set the --ckpt-dir, --lora-dir, and --vae-dir flags to the shared paths in your startup script.
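
For reference, a launch line with the shared paths wired in might look like this — assuming the stock webui.sh launcher on Linux, which passes flags through to the app (on Windows, put the same flags in COMMANDLINE_ARGS in webui-user.bat):

# Point A1111 at the shared model tree
./webui.sh \
  --ckpt-dir /opt/stable-diffusion/models/checkpoints \
  --vae-dir /opt/stable-diffusion/models/vae \
  --lora-dir /opt/stable-diffusion/models/loras \
  --port 7860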

On Windows

The same concept applies with Windows paths. Create C:\AI\models\ as the shared directory and update the extra_model_paths.yaml in your ComfyUI folder to point there. All users on the machine automatically see every model without duplication.

Who manages model downloads?

Designate one person (probably whoever owns the machine) as the model manager. They download new checkpoints and LoRAs into the shared directory. No one else needs access to the filesystem — they just select the model from the dropdown when it appears.

This prevents the classic failure mode where two people independently download the same 23 GB Flux model because neither could see what the other had already fetched.
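
Before downloading anything new, the model manager can check what is already on disk in a couple of seconds:

ls -lh /opt/stable-diffusion/models/checkpoints/   # which checkpoints are already there
du -sh /opt/stable-diffusion/models/*              # how much space each category takes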

Step 5: What the family actually sees

Once set up, the experience from a family member’s device:

  1. Open the Tailscale app (one-time setup per device).
  2. Navigate to https://your-machine.tailnet-name.ts.net in any browser.
  3. Enter their username and password (Caddy prompt).
  4. ComfyUI or A1111 loads. They pick a model, run a job, see it in the queue.
  5. If someone else is generating, they see “Queue size: 1” and their job is next.

No SSH, no terminal, no Python. The phone experience works well — ComfyUI’s web interface is responsive enough to use on mobile for submitting jobs and downloading results, even if workflow editing is clunky on small screens.

Honest take: where this setup reaches its limits

Basic auth through Caddy protects the web interface but does not give each user separate generation history within ComfyUI. All users see the same queue, the same history panel, the same “recent” list — there is no native per-user session isolation in ComfyUI as of v0.3.x.

If that matters — say, you do not want your kids browsing your generation history — the two-instance approach (separate ports per user, separate Caddy routes) is the real fix. It is more setup but fully solves the history isolation problem.

The other limit is pure throughput. One job at a time. If three people simultaneously want to run a 30-step SDXL batch, they are waiting. On an RTX 3090 generating at roughly 3 images per minute for SDXL, a 4-image batch takes about 80 seconds. Three queued batches: roughly 4 minutes of waiting for the last person in line. Manageable for most households; annoying for a teenager mid-session.

For more on whether your GPU is the right fit for this use case, the GPU buying guide and the RTX 5060 Ti vs RTX 3090 comparison cover the specific tradeoffs at the $400–$800 price range.

Setup checklist

Before calling this done:

  • ComfyUI or A1111 running and generating images locally
  • Caddy installed and Caddyfile written with hashed passwords for each user
  • Tailscale running on the server and all family devices
  • Shared model directory configured; no duplicate model files
  • Each user tested login from their device over Tailscale
  • Queue behavior explained to family: one job at a time, batch sizes capped
  • Workflow folders created per user (or separate instances if full isolation needed)

Last updated May 13, 2026. Hardware prices and model availability change frequently; verify current listings before purchasing.