ontime.sh
cron as a service. ssh is the signup.
Pipe a script. Give it a schedule. Walk away.
Each run boots a fresh Firecracker microVM in ~125ms, streams logs to disk, and vanishes. No dashboard, no SDK, no YAML.
$ cat report.py | ssh ontime.sh '0 9 * * 1'
deployed  id=1  schedule=0 9 * * 1  tier=free
how
- your ssh key is your account — no signup form
- pipe a shell script or a static linux-amd64 binary
- microVM per run: isolated, ephemeral, ~125ms boot
- secrets sealed with the server's master key
what you can pipe
Kind is auto-detected. Shell scripts run as-is; binaries must be statically linked for linux-amd64 (CGO_ENABLED=0 if you're coming from Go, bun build --compile from TypeScript, pyinstaller --onefile from Python).
$ cat backup.sh | ssh ontime.sh '@daily'        # shell
$ cat ./my-tool | ssh ontime.sh '@every 10m'    # compiled binary
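The server's detection logic isn't documented; a minimal sketch of how the two kinds could be told apart (ELF binaries start with the magic bytes 0x7f 'E' 'L' 'F'; anything else is treated as a script) — `detect_kind` is an illustrative name, not part of ontime.sh:

```shell
# sketch only: classify a piped payload the way the server plausibly does
detect_kind() {
  # read the first 4 bytes and compare against the ELF magic number
  if head -c 4 "$1" | od -An -tx1 | grep -q '7f 45 4c 46'; then
    echo binary
  else
    echo shell
  fi
}
```

Anything that isn't an ELF executable falls through to the shell path, which matches the "shell scripts run as-is" behavior above.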
Default memory per run is 128 MiB; raise with --mem.
try it
$ ssh ontime.sh whoami
$ cat job.sh | ssh ontime.sh '@hourly'
$ ssh ontime.sh logs 42
secrets
Pass env vars at deploy time, or attach them to an existing job later. They're sealed with AES-256-GCM on disk and decrypted in-VM at run time.
$ cat report.py | ssh ontime.sh deploy '@daily' \
    --name report --env SLACK_URL=https://hooks... --env TZ=UTC
$ ssh ontime.sh env set report DB_URL=postgres://...
$ ssh ontime.sh env list report
SLACK_URL
TZ
DB_URL
$ ssh ontime.sh env rm report SLACK_URL
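Inside the VM the secrets arrive as plain env vars, so a job script can guard against one that was never attached. A sketch of that pattern (`require_env` is illustrative, not an ontime.sh feature; the names match the deploy example above):

```shell
# fail fast if a required secret is missing, before doing any real work
require_env() {
  for name in "$@"; do
    eval "val=\${$name:-}"
    [ -n "$val" ] || { echo "missing secret: $name" >&2; return 1; }
  done
}

# usage at the top of a job script:
#   require_env SLACK_URL TZ || exit 1
```

Failing early this way puts a clear message in the run's logs instead of a half-finished job.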
pricing
- free — 3 jobs, 128 MiB, 7-day logs
- $5/mo — 25 jobs, 512 MiB, 30-day logs
- $20/mo — unlimited jobs, 2 GiB, 90-day logs