Automates what was previously a six-step manual process that, twice
in this codebase's history, has produced version skew between git and
the released tarball (v1.2.0 was published with package.json 1.2.0 in
the tarball but the bump was never committed back to gitea — making
"what code is in v1.2.0?" answerable only by extracting the tarball).
The script:
- Refuses to run with a dirty tree, off main, or already at the
target version.
- Bumps dashcaddy-api/package.json, rebuilds status/dist/, commits
+ pushes to gitea — so the released artifact and gitea HEAD are
always in lockstep.
- Clones gitea HEAD on the release host, verifies the cloned commit
matches what we just pushed (catches a stale clone or a missed
push), tars it, computes sha256, writes version.json.
- Refreshes install.sh on the release host alongside the tarball
(fresh installs use the install.sh from the latest release).
- Mirrors the release dir to the get2 backup via rsync.
- Verifies live by curling version.json and re-hashing the served
tarball.
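
A sketch of what that live check asserts, in Node terms (the real script may be shell-based; version.json field names and the tarball layout here are assumptions):

```js
// Illustrative sketch of the final verification step (Node 18+ global fetch).
// version.json fields and the tarball filename are assumptions.
const { createHash } = require('node:crypto');

async function verifyRelease(baseUrl, expected) {
  const meta = await (await fetch(`${baseUrl}/version.json`)).json();
  if (meta.version !== expected) {
    throw new Error(`version.json says ${meta.version}, expected ${expected}`);
  }

  // Re-download the served tarball exactly as a fresh install would,
  // and compare its sha256 against what version.json advertises.
  const res = await fetch(`${baseUrl}/dashcaddy-${meta.version}.tar.gz`);
  const served = createHash('sha256')
    .update(Buffer.from(await res.arrayBuffer()))
    .digest('hex');
  if (served !== meta.sha256) {
    throw new Error(`sha256 mismatch: served ${served}, expected ${meta.sha256}`);
  }
}
```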
Hosts overridable via DASHCADDY_RELEASE_HOST / DASHCADDY_MIRROR_HOST
/ DASHCADDY_GITEA_URL env vars.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Cloud backups (Dropbox / WebDAV / SFTP):
- backup-manager.js: save + load handlers per provider, credential
resolution via credentialManager, destination probe.
- routes/backups.js: /credentials/{provider} (masked GET, POST, DELETE),
/test-destination, scheduling endpoints.
- status/js/backup-restore.js: destination picker, provider-specific
credential forms, test button wired to backend probe.
- npm deps already present (dropbox 10.34.0, webdav 5.7.1,
ssh2-sftp-client 11.0.0).
Resource history:
- resource-monitor.js: three-tier rollup storage — raw 10s samples
(7-day retention), hourly rollups (30-day), daily rollups
(365-day). getHistoryByRange() auto-selects the appropriate tier.
- routes/monitoring.js: /monitoring/history/:containerId now supports
startTime/endTime range mode (legacy ?hours=N still works).
- status/js/resource-monitor.js + dashboard.css: "History" tab with
range buttons (1h/24h/7d/30d/1y), SVG sparklines for
CPU / memory / network. Renderer handles raw and rolled-up shapes.
status/dist/features.js rebuilt from source via build.js.
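
A rough sketch of the tier auto-selection (getHistoryByRange() is the real entry point; the storage layout and field names below are illustrative and may differ from resource-monitor.js):

```js
// Illustrative only; each tier holds samples at a different resolution.
const HOUR = 3600 * 1000;
const DAY = 24 * HOUR;

const tiers = {
  raw:    { maxAge: 7 * DAY,   samples: [] }, // 10s samples, 7-day retention
  hourly: { maxAge: 30 * DAY,  samples: [] }, // hourly rollups, 30-day retention
  daily:  { maxAge: 365 * DAY, samples: [] }, // daily rollups, 365-day retention
};

function getHistoryByRange(startTime, endTime) {
  // Pick the finest tier whose retention still covers the start of the range.
  const age = Date.now() - startTime;
  const tier = age <= tiers.raw.maxAge ? 'raw'
             : age <= tiers.hourly.maxAge ? 'hourly'
             : 'daily';
  return {
    tier, // the renderer needs to know which shape it got (raw vs rolled-up)
    samples: tiers[tier].samples.filter((s) => s.t >= startTime && s.t <= endTime),
  };
}
```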
Lifted out of wip/cloud-backups-and-history; the half-finished
app-deps feature from that branch (frontend calls
/api/v1/apps/check-dependencies but the endpoint doesn't exist) is
preserved separately on wip/app-deps for later.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- install.sh now deploys the src/ directory alongside routes/.
Without this, fresh installs of v1.3.0+ produce containers whose
Dockerfile references src/ but the directory is missing on the host
filesystem, so docker build fails with "/src: not found".
- The fallback heredoc that writes
/etc/systemd/system/dashcaddy-updater.path drops MakeDirectory=yes
for the same reason it was removed from the on-disk unit (e994ad1):
systemd creates the watched trigger.json path as an empty directory
on unit start, blocking every subsequent update with EISDIR.
Bumped to 1.3.1 so the existing v1.3.0 instance auto-updates and
picks up these fixes along with the host-script fix from 0cf6323.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two bugs in the host-side updater script:
1. The Dockerfile (since f5fe32b) does `COPY src/ ./src/`, but the
host script never copies src/ from staging into the api source
directory. Result: every update fails with
"failed to compute cache key: ... '/src': not found".
2. `cp -rf staging/routes api_source/routes/` does NOT replace the
destination directory; it copies the source dir INTO the destination,
producing api_source/routes/routes/. This means new route files end
up nested one level deep and never get loaded by server.js, so
updates silently regress route handlers even when the build succeeds.
Switch to "rm -rf dest && cp -rf src dest" semantics for both routes
and src, in all four touch points (deploy + 3 rollback paths).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
`MakeDirectory=yes` on a `PathChanged=` directive whose target is a
file (not a directory) causes systemd to create the watched path as
an empty directory on unit start. The container's self-updater then
crashes with EISDIR every time it tries to writeFile() the trigger,
and the host script never runs.
The parent `/opt/dashcaddy/updates/` is already created by the
installer/Docker volume, so the flag is redundant and serves only as
a footgun. Drop it.
Reproducer: enable the unit on a fresh system, watch
`/opt/dashcaddy/updates/trigger.json` get materialized as a directory
within milliseconds of `systemctl start dashcaddy-updater.path`.
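
The container-side symptom is easy to reproduce without systemd: writeFile() against a path that already exists as a directory rejects with EISDIR (path below is illustrative, not the real trigger):

```js
// Illustrative reproduction of the container-side failure.
const fs = require('node:fs');

const trigger = '/tmp/trigger.json';
fs.mkdirSync(trigger, { recursive: true });  // what MakeDirectory=yes effectively did
fs.writeFileSync(trigger, JSON.stringify({ update: true })); // throws EISDIR
```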
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
v1.2.0 was published as a tarball but its package.json bump was never
committed back to git. This release closes that gap and includes two
fixes that v1.2.0 (commit a216dd8) was missing:
- 6abba43: clear ALL pending self-update history entries (not just
the first), so stuck installs unwind cleanly.
- 0460129: allow apiSourceDir to be overridden via the
DASHCADDY_API_SOURCE_DIR env var, so installs that don't follow
the default /etc/dashcaddy/sites/dashcaddy-api/ layout (e.g. older
deployments under /opt/dashcaddy/) can point the auto-updater at
the right path without patching the constructor.
Without these, instances on the older /opt/dashcaddy/ layout get
stuck in a 30-min retry loop where every update attempt fails with
'cp: cannot create directory /etc/dashcaddy/sites/dashcaddy-api/'.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Mirrors the env-var pattern f5fe32b introduced for channel and
instanceIdFile. Lets installs that don't follow the default
/etc/dashcaddy/sites/dashcaddy-api/ layout (e.g. older deployments
under /opt/dashcaddy/) point the auto-updater at the right path
without having to patch the constructor call site.
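
A sketch of the pattern (the class name, constructor internals, and precedence shown here are illustrative, not the actual code):

```js
// Illustrative sketch; the real constructor and precedence may differ.
class AutoUpdater {
  constructor(options = {}) {
    // The env var overrides the default layout without touching the
    // call site, mirroring how channel / instanceIdFile are resolved.
    this.apiSourceDir =
      process.env.DASHCADDY_API_SOURCE_DIR ||
      options.apiSourceDir ||
      '/etc/dashcaddy/sites/dashcaddy-api/';
  }
}
```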
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
checkPostUpdateResult() used history.find() which only ever updated a
single pending entry. When multiple update attempts stacked up, the
extra pending entries stayed stuck in 'pending' forever even though
the actual update completed. Switch to filter() + loop to clear all
matching entries.
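
Roughly the shape of the change (the history entry fields are illustrative):

```js
// Illustrative sketch of the fix; the real history entry shape may differ.
function resolvePendingEntries(history, outcome) {
  // Before: history.find(e => e.status === 'pending') resolved only the
  // first entry, leaving any stacked-up attempts stuck in 'pending'.
  // After: filter() + loop clears every matching entry.
  for (const entry of history.filter((e) => e.status === 'pending')) {
    entry.status = outcome; // e.g. 'success' or 'failed'
    entry.completedAt = Date.now();
  }
}
```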
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- health.js: replace magic number 5000 with TIMEOUTS.HTTP_DEFAULT (twice)
- services.js: replace magic number 5000 with TIMEOUTS.HTTP_DEFAULT
Both files already import TIMEOUTS from constants but weren't using it.
- monitoring.js: Added log dependency, replaced console.log with log.warn
- themes.js: Added log dependency, replaced console.error with log.error
- src/app.js: Pass log to monitoringRoutes and themesRoutes
This fixes error messages being lost to stdout instead of reaching the proper log files.
- Container exec/shell via WebSocket + xterm.js (subtle >_ button on cards)
- Live dashboard updates via SSE (resource alerts, health changes, update notices)
- Docker Compose import with YAML parsing, preview, and dependency-ordered deploy
- Volume & network management modal with disk usage overview
- CPU/memory resource limits on deploy and live update
- Email SMTP notifications (nodemailer) alongside Discord/Telegram/ntfy
- Scheduled auto-updates with maintenance windows (daily/weekly/monthly)
New deps: ws, js-yaml, nodemailer
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The Phase 2.1 refactor wrapped success() responses as { success, data: {...} }
but the frontend expects flat responses like { success, license: {...} }.
This caused the license status to show FREE TIER and broke other API consumers.
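
The mismatch, concretely (the success() internals and payload fields beyond success/license are illustrative):

```js
// Illustrative only; the real success() helper and payloads may differ.
// Phase 2.1 wrapped every payload under data:
function successWrapped(payload) {
  return { success: true, data: payload }; // -> { success, data: { license } }
}

// The frontend (and other consumers) expect the payload spread flat:
function successFlat(payload) {
  return { success: true, ...payload };    // -> { success, license }
}

// With the wrapped shape, dashboard code reading res.license gets
// undefined and falls back to FREE TIER.
```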
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
These endpoints must be accessible without TOTP auth for the dashboard
to load site config (TLD, DNS servers, custom logo) and service status
(bulk probe results). Without them, the dashboard shows all services
as OFF and loses custom branding after any session expiry.
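
A sketch of the kind of exemption this implies (the middleware shape and exact route paths are illustrative; the commit only states which data must stay reachable pre-auth):

```js
// Illustrative sketch; the actual middleware and route paths may differ.
const TOTP_EXEMPT = new Set([
  '/api/config',          // site config: TLD, DNS servers, custom logo
  '/api/services/status', // bulk probe results for the status grid
]);

function requireTotp(req, res, next) {
  if (TOTP_EXEMPT.has(req.path)) return next(); // dashboard must render pre-auth
  if (!req.session || !req.session.totpVerified) {
    return res.status(401).json({ success: false, error: 'TOTP required' });
  }
  return next();
}
```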
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
100 requests/15min was far too low for a dashboard with auto-refresh
polling every 10-30 seconds, causing 429s on TOTP config, site config,
license status and other endpoints after ~3 minutes of normal use.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
License status, services list, config, and license feature checks
were being rate-limited (429) after ~14 minutes of dashboard polling,
causing the license to show FREE TIER and services to fail loading.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Remove redundant ctx shim that conflicted with function parameter
- Use destructured notification/safeErrorMessage directly
- Add pylon, customLogoDark, customLogoLight to known config keys
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The lightweight probe endpoint used by the dashboard for live status
checks had no Pylon integration. When DNS2 (Singapore) tried to probe
home network services directly, all probes timed out with 502. The
probe endpoint now falls back to the configured Pylon relay before
the domain fallback.
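
The resulting probe order, roughly (helper names and shapes are illustrative):

```js
// Illustrative sketch of the fallback order; real helpers may differ.
async function probeWithFallback({ direct, pylon, domain }) {
  try {
    return await direct();                    // try the service directly first
  } catch {
    if (pylon) {
      try { return await pylon(); } catch {}  // then the configured Pylon relay
    }
    return domain();                          // finally the public-domain fallback
  }
}
```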
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The modular refactor changed function signatures to destructured deps but
left internal ctx.* references intact, causing "ctx is not defined" errors
on /api/config, /api/logo, and many other endpoints. Also implements
loadTotpConfig and saveTotpConfig which were left as stubs.
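
The shape of the fix (handler and dep names are illustrative):

```js
// Illustrative before/after; the actual handlers and dep names may differ.

// Before the refactor, handlers closed over a ctx object:
//   module.exports = (ctx) => async (req, res) => {
//     res.json({ success: true, config: await ctx.loadConfig() });
//   };

// After, the signature takes destructured deps, so lingering ctx.* calls
// throw "ctx is not defined"; the fix is to use the deps directly:
module.exports = ({ loadConfig, log }) => async (req, res) => {
  const config = await loadConfig(); // was: ctx.loadConfig()
  log.info('config loaded');
  res.json({ success: true, config });
};
```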
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add missing fetchT dep to session-handlers.js (fixes the "ctx is
not defined" error that broke jellyfin/emby/plex/syncthing SSO)
- Replace all ctx.fetchT calls with direct fetchT usage
- Remove server-old.js (69K monolith backup) from tracking
- Remove AI-generated doc artifacts from repo root
- Update .gitignore to prevent re-adding these files
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Resolve conflict in server.js by accepting the remote's modular
refactor (1960 lines → 230 lines). Local Phase 1/2 changes
(logger swap, unused import) are superseded by the new structure.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>