Compare commits
23 Commits
4e78ef7219 ... 9bc5c3a3a2

| SHA1 |
|---|
| 9bc5c3a3a2 |
| d741541e23 |
| fa7c0fd1c6 |
| bb917e0b33 |
| 0ac0f3b907 |
| e0abe5cdfc |
| bee21594f5 |
| dd8f40774f |
| df8ccc694b |
| 84ab27a6b5 |
| 55e34afd59 |
| 5678138cc5 |
| 9fce04f212 |
| e964777f69 |
| 5f87545b66 |
| 67d10c4c9a |
| b00d9d6fbd |
| af444d7066 |
| c82dbc5654 |
| b3ed653bb3 |
| e747491b85 |
| ca9167d129 |
| cd07e43533 |
11 CLAUDE.md
@@ -59,7 +59,9 @@ source .venv/bin/activate && pip install -r requirements.txt
 **Heartbeat** (`src/heartbeat.py`): Email, calendar, KB, git checks. Quiet hours 23-08.
-**Memory** (`src/memory_search.py`): Ollama all-minilm embeddings (384 dim) + SQLite cosine similarity. **Shared with Clawd** — `memory/` is a symlink to `/home/moltbot/clawd/memory/` (single source of truth for both Echo Core and Clawdbot).
+**Memory** (`src/memory_search.py`): Ollama all-minilm embeddings (384 dim) + SQLite cosine similarity. Lives at `memory/` inside this repo — single source of truth. Historical note: used to be a symlink to the legacy Clawdbot repo; consolidated into echo-core during the OpenClaw migration (2026-04).
+**Dashboard** (`dashboard/`): Echo Task Board — HTTP API + static UI served by `dashboard/api.py` on port 8088, typically behind a reverse proxy at `/echo/`. Endpoint logic split across `dashboard/handlers/*.py` mixins; paths centralised in `dashboard/constants.py`. Systemd user unit template at `dashboard/echo-taskboard.service`.
 ## Import Convention
@@ -77,7 +79,12 @@ Absolute imports via `sys.path.insert(0, PROJECT_ROOT)`: `from src.config import
 | `config.json` | Runtime config |
 | `bridge/whatsapp/index.js` | Baileys + Express bridge, port 8098 |
 | `personality/*.md` | System prompt — who you are |
-| `memory/` | Symlink → `/home/moltbot/clawd/memory/` (shared KB) |
+| `memory/` | Knowledge base — embeddings + SQLite (lives in-repo, not a symlink) |
+| `dashboard/api.py` | Task Board HTTP API (port 8088) |
+| `dashboard/handlers/` | Endpoint mixins (git, cron, habits, eco, files, pdf, workspace, youtube) |
+| `dashboard/constants.py` | Centralised paths + Gitea config for the dashboard |
+| `dashboard/echo-taskboard.service` | Systemd user unit template |
 | `cron/jobs.json` | APScheduler jobs (flat schema, Europe/Bucharest) |
 ## gstack
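The memory pipeline named in the hunk above (all-minilm embeddings from a local Ollama server, cosine similarity over rows in SQLite) is small enough to sketch end to end. This is a minimal illustration, not the actual `src/memory_search.py`: the Ollama `/api/embeddings` call and the cosine math are standard, but the `chunks(text, embedding)` table layout is an assumption made for the example.

```python
import json
import math
import sqlite3
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embeddings"

def embed(text: str) -> list[float]:
    """Fetch a 384-dim all-minilm embedding from the local Ollama server."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps({"model": "all-minilm", "prompt": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def search(db_path: str, query: str, top_k: int = 5):
    """Brute-force cosine search; fine at KB scale, no vector index needed."""
    q = embed(query)
    with sqlite3.connect(db_path) as db:
        # hypothetical schema: embeddings stored as JSON arrays
        rows = db.execute("SELECT text, embedding FROM chunks").fetchall()
    scored = [(cosine(q, json.loads(emb)), text) for text, emb in rows]
    return sorted(scored, reverse=True)[:top_k]
```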
163 MIGRATION-PLAYBOOK.md Normal file
@@ -0,0 +1,163 @@

# OpenClaw → Echo-Core Migration Playbook

> **⚠️ This document should NOT be run by an AI agent.**
> Marius runs it by hand, one step at a time, verifying between steps.
> Each step has a "before" and "after" state. Never batch.

Run this after the PR `feat/openclaw-consolidation-2026-04` has merged to master.
Estimated downtime: 5–10 minutes. Rollback path at the bottom.

---

## Pre-flight (read-only)

1. Confirm clean git state on echo-core master:
   ```
   cd /home/moltbot/echo-core && git status
   ```
2. Verify tests pass:
   ```
   cd /home/moltbot/echo-core
   source .venv/bin/activate && pytest tests/ -x
   ```
3. Backup:
   ```
   mkdir -p /home/moltbot/clawd-backup-$(date +%Y-%m-%d)   # cp fails if the target dir is missing
   cp -rp /home/moltbot/clawd/dashboard /home/moltbot/clawd-backup-$(date +%Y-%m-%d)/
   cp -rp /home/moltbot/clawd/memory /home/moltbot/clawd-backup-$(date +%Y-%m-%d)/memory
   ```

## Legacy consumer grep (decide on compat symlink)

4. Check whether anything still reads clawd/memory:
   ```
   grep -rn 'clawd/memory' /home/moltbot/{bin,.config,.openclaw} 2>/dev/null
   grep -rn 'clawd/memory' /home/moltbot/echo-core 2>/dev/null
   ```
   - If empty → skip **step 11** (no compat symlink needed).
   - If non-empty → keep **step 11**.

## Stop services

5. ```
   systemctl --user stop echo-core echo-taskboard echo-whatsapp-bridge openclaw-gateway
   ```

## Copy ANAF live state

6. ```
   cp -rp /home/moltbot/clawd/tools/anaf-monitor/{hashes.json,versions.json,monitor.log} \
         /home/moltbot/echo-core/tools/anaf-monitor/ 2>/dev/null
   cp -rp /home/moltbot/clawd/tools/anaf-monitor/snapshots \
         /home/moltbot/echo-core/tools/anaf-monitor/
   diff -r /home/moltbot/clawd/tools/anaf-monitor/snapshots \
           /home/moltbot/echo-core/tools/anaf-monitor/snapshots
   ```
   The diff should be empty (or only show new snapshots echo-core captured during testing).

## Dashboard migration

7. If any files collide, delete the echo-core dashboard placeholder content first, then:
   ```
   cp -rp /home/moltbot/clawd/dashboard/{habits,issues,status,todos}.json \
         /home/moltbot/echo-core/dashboard/
   cp -rp /home/moltbot/clawd/dashboard/tests/ \
         /home/moltbot/echo-core/dashboard/tests/
   # Recreate the 4 dashboard symlinks pointing into echo-core:
   ln -sfn /home/moltbot/echo-core/memory /home/moltbot/echo-core/dashboard/memory
   ln -sfn /home/moltbot/echo-core/conversations /home/moltbot/echo-core/dashboard/conversations  # create conversations/ first if you want this
   ln -sfn /home/moltbot/echo-core/memory/kb /home/moltbot/echo-core/dashboard/notes-data
   ln -sfn /home/moltbot/echo-core/memory/kb/youtube /home/moltbot/echo-core/dashboard/youtube-notes
   ```

## Memory inversion

8. `rm /home/moltbot/echo-core/memory` *(removes the symlink only, not its target)*
9. `cp -rp /home/moltbot/clawd/memory /home/moltbot/echo-core/memory`
10. `diff -rq /home/moltbot/clawd/memory /home/moltbot/echo-core/memory` *(verify identical)*
11. *(only if step 4 found consumers)*
    ```
    mv /home/moltbot/clawd/memory /home/moltbot/clawd/memory.old-2026-04
    ln -s /home/moltbot/echo-core/memory /home/moltbot/clawd/memory
    ```
12. `rm -rf /home/moltbot/echo-core/memory.bak` *(leftover, safe to delete)*

## Systemd

13. Copy the template into place:
    ```
    cp /home/moltbot/echo-core/dashboard/echo-taskboard.service \
       /home/moltbot/.config/systemd/user/echo-taskboard.service
    systemctl --user daemon-reload
    ```

## Crontab

14. ```
    bash /home/moltbot/echo-core/scripts/update_crontab.sh
    ```

## Decommission OpenClaw

15. `systemctl --user stop openclaw-gateway`
16. `systemctl --user disable openclaw-gateway`
17. Strip credentials from `~/.openclaw/` but keep `jobs.json.bak`:
    ```
    cd /home/moltbot/.openclaw
    find . -name 'auth*' -o -name '*token*' -o -name '*.secret' | xargs rm -v 2>/dev/null
    ls -la agents/*/  # inspect for any remaining secrets, delete manually
    ```
18. **Note:** schedule a reminder for 2026-05-21 to `rm -rf /home/moltbot/.openclaw` entirely if nothing was restored.

## Restart

19. `systemctl --user start echo-core echo-taskboard echo-whatsapp-bridge`
20. `systemctl --user status echo-core echo-taskboard echo-whatsapp-bridge` — all **active (running)**.
21. `systemctl --user status openclaw-gateway` — **inactive (dead)**.

## Verification

22. `curl -s http://localhost:8088/api/status` → `{"status":"ok",...}`
23. Visit `https://moltbot.tailf7372d.ts.net/echo/` — home page loads.
24. `/api/cron` panel populated with echo-core jobs (anaf-monitor, morning-report, etc.).
25. `/api/agents` returns 404 (removed).
26. Click **Commit** in `index.html` — creates a commit on the echo-core repo.
27. Manually trigger the anaf monitor:
    ```
    cd /home/moltbot/echo-core && .venv/bin/python3 tools/anaf-monitor/monitor_v2.py
    ```
    Verify that `status.json` updates **and** stdout ends with `GSTACK-CRON: changes=N`.
28. Wait for the first scheduled anaf-monitor trigger (10:00 or 16:00 Mon-Fri). Check `echo-core.log` for execution.

---

## Rollback path (if anything breaks badly)

```
systemctl --user stop echo-core echo-taskboard echo-whatsapp-bridge
rm -rf /home/moltbot/echo-core/memory
cp -rp /home/moltbot/clawd-backup-YYYY-MM-DD/memory /home/moltbot/clawd/memory  # restore
ln -s /home/moltbot/clawd/memory /home/moltbot/echo-core/memory  # restore symlink

# Restore old systemd unit from git history in clawd repo if available, else rewrite manually.
git -C /home/moltbot/echo-core checkout master  # revert to pre-merge state
systemctl --user daemon-reload
systemctl --user start echo-core echo-taskboard echo-whatsapp-bridge
systemctl --user start openclaw-gateway  # if wanted
```

---

## Notes

- **Cron schedules are Bucharest local time**, not UTC.
- **Most imported claude jobs arrive DISABLED** — enable them via `eco` / dashboard once you've verified each one produces the expected output (a quick way to review the states is sketched after this file).
- `heartbeat-2h` is the **only imported claude job that stays enabled** (preserving its state from OpenClaw).
- The 5 shell jobs (anaf-monitor, security-audit-daily, kb-index-refresh, archive-tasks-daily, backup-config) start **enabled** on day one.
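The Notes above leave you with a mix of enabled and disabled jobs. A small sketch for reviewing that state at a glance, assuming only the flat `cron/jobs.json` schema shown later in this diff (stdlib only; the path is the echo-core checkout):

```python
import json

with open("/home/moltbot/echo-core/cron/jobs.json") as f:
    jobs = json.load(f)

for job in jobs:
    state = "enabled " if job.get("enabled") else "disabled"
    kind = job.get("kind", "claude")  # entries without "kind" are claude prompt jobs in this file
    print(f"{state}  {kind:6}  {job['name']:24}  {job['cron']}")
```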
@@ -11,6 +11,14 @@
   "echo-core": {
     "id": "1471916752119009432",
     "default_model": "sonnet"
   },
+  "echo-work": {
+    "id": "1466726254312030259",
+    "default_model": "sonnet"
+  },
+  "echo-sprijin": {
+    "id": "1466739361503772864",
+    "default_model": "sonnet"
+  }
 },
 "telegram_channels": {},
248 cron/jobs.json
@@ -24,5 +24,253 @@
"last_run": "2026-04-02T18:18:07.775703+00:00",
"last_status": "ok",
"next_run": "2027-01-01T00:00:00+00:00"
},
{
"name": "anaf-monitor",
"kind": "shell",
"cron": "0 10,16 * * 1-5",
"channel": "echo-work",
"command": [
"python3",
"tools/anaf-monitor/monitor_v2.py"
],
"report_on": "changes",
"timeout": 120,
"enabled": true,
"last_run": null,
"last_status": null,
"next_run": null
},
{
"name": "security-audit-daily",
"kind": "shell",
"cron": "0 3 * * *",
"channel": "echo-work",
"command": [
"python3",
"tools/security_audit.py"
],
"report_on": "changes",
"timeout": 180,
"enabled": true,
"last_run": null,
"last_status": null,
"next_run": null
},
{
"name": "kb-index-refresh",
"kind": "shell",
"cron": "30 3 * * *",
"channel": "echo-work",
"command": [
"python3",
"tools/update_notes_index.py"
],
"report_on": "never",
"timeout": 120,
"enabled": true,
"last_run": null,
"last_status": null,
"next_run": null
},
{
"name": "archive-tasks-daily",
"kind": "shell",
"cron": "0 3 * * *",
"channel": "echo-work",
"command": [
"python3",
"dashboard/archive_tasks.py"
],
"report_on": "changes",
"timeout": 60,
"enabled": true,
"last_run": null,
"last_status": null,
"next_run": null
},
{
"name": "backup-config",
"kind": "shell",
"cron": "0 2 * * *",
"channel": "echo-work",
"command": [
"bash",
"tools/backup_config.sh"
],
"report_on": "never",
"timeout": 120,
"enabled": true,
"last_run": null,
"last_status": null,
"next_run": null
},
{
"name": "insights-extract",
"cron": "0 4 * * *",
"channel": "echo-work",
"model": "sonnet",
"prompt": "PLACEHOLDER — Marius will write the full prompt. Intent: extract daily insights from chat history (Discord, Telegram, WhatsApp) and save to memory/kb/insights/YYYY-MM-DD.md. Runs after content-discovery (03:00) so insights can incorporate discovered content proposals.",
"allowed_tools": [],
"enabled": false,
"last_run": null,
"last_status": null,
"next_run": null
},
{
"name": "weekly-planning-sun",
"cron": "0 22 * * 0",
"channel": "echo-work",
"model": "sonnet",
"prompt": "WEEKLY PLANNING - duminică seara\n\n## CALENDAR SĂPTĂMÂNA URMATOARE\nVerifică calendarul:\n```bash\ncd ~/echo-core && source venv/bin/activate && python3 tools/calendar_check.py week\n```\n\nȘi verifică travel reminders:\n```bash\ncd ~/echo-core && source venv/bin/activate && python3 tools/calendar_check.py travel\n```\n\n## TRIMITE PE DISCORD #echo-work\nchannel=discord, target=1466726254312030259\n\nFormat:\n[⚡ Echo] **Săptămâna începe mâine!**\n\n📅 **CALENDAR:**\n- Luni: [evenimente]\n- Marți: [evenimente]\n- ... (toate zilele cu evenimente)\n\n🚂 **TRAVEL:**\nDacă sunt evenimente NLP/București:\n- ⚠️ [Event] pe [dată] - Ai bilete și cazare?\n- Link CFR: https://bilete.cfrcalatori.ro/\n- Link cazare: booking.com sau unde stă de obicei",
"allowed_tools": [],
"enabled": false,
"last_run": null,
"last_status": null,
"next_run": null
},
{
"name": "grup-sprijin-5feb",
"cron": "0 18 5 2 *",
"channel": "echo-work",
"model": "sonnet",
"prompt": "Reminder: Azi la 18:00 ai întâlnirea grupului de sprijin cu liderii de la cercetași.\n\nTrimite pe Discord #echo-sprijin (message tool):\nchannel=discord, target=1466739361503772864\n\nMesaj:\n[⭕ Echo] *Azi la 18:00 - Grup de sprijin!*\n\nAi pregătit ceva sau vrei idei de exerciții/întrebări?\n\nFișier: /home/moltbot/echo-core/memory/kb/projects/grup-sprijin/README.md\n\nDupă întâlnire: întreabă cum a fost și notează în fișier.",
"allowed_tools": [],
"enabled": false,
"last_run": null,
"last_status": null,
"next_run": null
},
{
"name": "grup-sprijin-pregatire",
"cron": "0 18 3 2 *",
"channel": "echo-work",
"model": "sonnet",
"prompt": "Pregătire fișă grup sprijin - joi 5 februarie.\n\nTrimite pe Discord #echo-sprijin (message tool):\nchannel=discord, target=1466739361503772864\n\nMesaj:\n[⭕ Echo] *Întâlnirea de grup e joi!*\n\nHai să pregătim fișa:\n\n1. Ce temă vrei să abordezi de data asta?\n2. Uită-te la exerciții: https://moltbot.tailf7372d.ts.net/echo/grup-sprijin.html - care 1-2 vrei să folosim?\n3. Ai ceva nou de adăugat din săptămâna asta?\n\nDupă ce îmi spui, fac fișa.",
"allowed_tools": [],
"enabled": false,
"last_run": null,
"last_status": null,
"next_run": null
},
{
"name": "content-discovery",
"cron": "0 3 * * *",
"channel": "echo-work",
"model": "sonnet",
"prompt": "JOB NOAPTE (02:00) - Content Discovery proactiv.\n\n## SCOP\nCaută video-uri/articole/bloguri relevante DE CALITATE pentru Marius și generează propuneri în format insight.\n\n## PAȘI:\n\n### 1. Citește contextul\n- read: USER.md (interese, provocări)\n- read: memory/YYYY-MM-DD.md (note recente, teme)\n\n### 2. Generează 3-4 queries de căutare\nBazat pe:\n- 60% teme recente (din note zilnice)\n- 40% interese bază (NLP, coaching, productivitate, sănătate)\n\n### 3. Caută conținut de CALITATE\n\n**YouTube (1-2 video-uri):**\n- web_search: 'site:youtube.com [query]'\n- Preferă: <20 min, autori cunoscuți/credibili\n- Evită: clickbait, shorts fără substanță\n\n**Articole/Bloguri (1-2 surse):**\n- web_search: '[query] blog article'\n- Criterii OBLIGATORII pentru a fi inclus:\n ✅ Autor cu credibilitate (expert în domeniu, publicații recunoscute)\n ✅ Conținut profund (nu listicle superficiale)\n ✅ Relevanță directă cu provocările/interesele lui Maris\n ✅ Perspective practice (nu doar teorie)\n \n- Surse de încredere (exemple):\n * Medium (autori verificați cu track record)\n * Bloguri experți NLP/coaching/productivitate\n * HBR, Psychology Today, Scientific American (când e relevant)\n * Bloguri personale ale practițienilor (cu substanță, nu marketing)\n \n- EVITĂ:\n ❌ Listicle generice (\"10 tips for...\")\n ❌ Conținut SEO fără substanță\n ❌ Articole de marketing/vânzare\n ❌ Surse necredibile sau fără autor identificabil\n\n### 4. Verifică calitatea înainte de a propune\nPentru fiecare articol/blog găsit:\n- Citește abstract/primele paragrafe cu web_fetch\n- Întreabă-te: \"Are insight-uri practice pentru Marius?\"\n- Dacă răspuns = NU → nu-l include\n\n### 5. Adaugă în insights ca propuneri\nScrie în memory/kb/insights/YYYY-MM-DD.md (data de MÂINE):\n\n```markdown\n## 🔍 Content Discovery\n\n### [ ] 🎬 **Titlu Video** (💡 nice / 📌 important)\n\n**De ce:** Explicație scurtă - cum se leagă de interesele/provocările lui Marius\n\n**Acțiune:** Procesează video și extrage note\n\n**Link:** https://youtube.com/watch?v=...\n\n---\n\n### [ ] 📄 **Titlu Articol - Autor** (💡 nice / 📌 important)\n\n**De ce:** Explicație - ce insight-uri practice oferă\n\n**Credibilitate:** [Cine e autorul + de ce e relevant]\n\n**Acțiune:** Citește și extrage în kb/articole/\n\n**Link:** https://...\n```\n\n### 6. NU trimite mesaj\nRaportul de dimineață va propune automat.\n\n## REGULI:\n- Max 3-4 propuneri per noapte (1-2 video + 1-2 articole)\n- Prioritate: **CALITATE > CANTITATE**\n- Evită duplicate (verifică memory/kb/ pentru ce e deja procesat)\n- Fii variat - nu repeta aceiași autori zilnic\n- **FILTRARE STRICTĂ:** Doar conținut cu greutate, nu orice link",
"allowed_tools": [],
"enabled": false,
"last_run": null,
"last_status": null,
"next_run": null
},
{
"name": "provocare-reminder",
"cron": "0 13 * * 1-5",
"channel": "echo-work",
"model": "sonnet",
"prompt": "REMINDER PROVOCARE - la prânz\n\n1. Citește provocarea: read memory/provocare-azi.md\n\n2. Trimite pe Discord #echo-self (target=1466739112747864175):\n\n[⭕ Echo] **Reminder: Provocarea de azi**\n\n[conținutul provocării]\n\nAi făcut progres? Sau măcar un pas mic?\n\n3. NU trimite pe email (doar Discord)",
"allowed_tools": [],
"enabled": false,
"last_run": null,
"last_status": null,
"next_run": null
},
{
"name": "morning-report",
"cron": "30 9 * * *",
"channel": "echo-work",
"model": "sonnet",
"prompt": "RAPORT DIMINEAȚĂ - trimite pe EMAIL (Gmail: mmarius28@gmail.com)\n\n## CALENDAR\nVerifică calendarul:\n```bash\ncd ~/echo-core && source venv/bin/activate && python3 tools/calendar_check.py today\npython3 tools/calendar_check.py travel\npython3 tools/calendar_check.py week\n```\n\n## CITEȘTE CONTEXT\n- USER.md pentru programul lui Marius (luni-joi 15-16 liber)\n- memory/kb/insights/ pentru propuneri (ultimele 3 zile)\n- memory/approved-tasks.md pentru status proiecte/features\n\n## FORMAT EMAIL HTML\n- Font: 16px text, 18px titluri\n- Culori: albastru (#dbeafe) DONE, gri (#f3f4f6) PROGRAMAT, verde (#d1fae5) PROJECTS\n- Link-uri vizibile\n\n## STRUCTURA RAPORT\n\n### 1. CALENDAR\n- 📅 **AZI:** [evenimente]\n- 📅 **MÂINE:** [evenimente]\n- 📅 **PESTE 2 ZILE:** [dacă e GRUP, NLP, meeting mare]\n- 🚂 **TRAVEL:** Reminders bilete+cazare\n\n### 2. PROIECTE/FEATURES NOAPTEA 💻\n\nCitesc approved-tasks.md și raportez ce s-a realizat:\n\n**Format pentru fiecare proiect/feature [x]:**\n\n```html\n<div style=\"background: #d1fae5; padding: 15px; margin: 10px 0; border-radius: 8px;\">\n <h3>✅ P1 - Nume Proiect</h3>\n \n <p><strong>Status:</strong> X/Y stories complete</p>\n \n <p><strong>Stories realizate:</strong></p>\n <ul>\n <li>✅ US-001: Titlu story - implementat cu succes</li>\n <li>✅ US-002: Titlu story - quality checks pass</li>\n <li>🔄 US-003: Titlu story - în progres (blocat pe dependency)</li>\n </ul>\n \n <p><strong>Link:</strong> <a href=\"https://gitea.romfast.ro/romfast/PROJECT-NAME\">gitea.romfast.ro/romfast/PROJECT-NAME</a></p>\n \n <p><strong>Learnings:</strong> [din progress.txt - ce patterns am descoperit]</p>\n \n <p><strong>Next steps:</strong> [ce rămâne de făcut]</p>\n</div>\n```\n\n**Dacă NU s-au executat proiecte/features:**\n- Sari peste această secțiune\n\n### 3. STATUS GENERAL\n- Ce s-a făcut ieri (joburi, taskuri)\n- Git status ~/clawd\n- Joburi executate (YouTube, insights, etc.)\n\n### 4. PROPUNERI CU ZI ȘI ORĂ!\n\n**OBLIGATORIU:** Fiecare propunere TU+EU sau FAC TU trebuie să aibă ZI și ORĂ concrete!\n\nCategorii:\n- 🤖 **FAC EU** (0 efort) - execut singur\n- 🤝 **TU+EU** (eu pregătesc) - cu zi/oră!\n- 👤 **FAC TU** (template gata) - cu zi/oră!\n\nExemplu:\n- **A1 - Sesiune Dizolvare Vină** 🤝 TU+EU\n 📅 **Marți 3 feb, 15:00-15:30**\n Context + link sursă\n\nReguli programare:\n- Luni-Joi 15:00-16:00 = slot liber\n- Vineri-Duminică = NLP, evită\n- Verifică calendar să nu fie ocupat\n\n### 5. INSIGHTS DISPONIBILE\n\nListează insights-uri [ ] nepropuse încă (format scurt).\n\n### 6. CUM RĂSPUNZI\n- DA = aprob toate (cu zilele/orele propuse)\n- 1 pentru A1,A2 = execut ACUM\n- 2 pentru A3 = programez noapte\n- 3 pentru A5 = skip\n- Alt orar = \"A1 miercuri nu marți\"\n\n## TRIMITERE\npython3 /home/moltbot/echo-core/tools/email_send.py \"mmarius28@gmail.com\" \"Raport Dimineata DATA\" \"HTML_CONTENT\"\n\nNU trimite pe Discord - doar email.",
"allowed_tools": [],
"enabled": false,
"last_run": null,
"last_status": null,
"next_run": null
},
{
"name": "evening-report",
"cron": "0 21 * * *",
"channel": "echo-work",
"model": "sonnet",
"prompt": "RAPORT SEARĂ - trimite pe EMAIL (Gmail: mmarius28@gmail.com)\n\n## CALENDAR\nVerifică ce ai mâine și săptămâna:\n```bash\ncd ~/echo-core && source venv/bin/activate && python3 tools/calendar_check.py today\npython3 tools/calendar_check.py week\n```\n\n## CITEȘTE CONTEXT\n- USER.md pentru programul lui Marius (luni-joi 15-16 liber, vineri-dum NLP)\n- memory/kb/insights/YYYY-MM-DD.md pentru propuneri insights\n- memory/kb/youtube/ și memory/kb/articole/ pentru inspirație proiecte\n- memory/approved-tasks.md pentru status proiecte existente\n\n## FORMAT EMAIL HTML\n- Font: 16px text, 18px titluri\n- Culori: albastru (#dbeafe) DONE, gri (#f3f4f6) PROGRAMAT, verde (#d1fae5) PROJECTS\n- Link-uri vizibile\n\n## STRUCTURA RAPORT\n\n### 1. MÂINE\n- 📅 Evenimente calendar\n- 🚂 Travel reminders\n\n### 2. STATUS\n- Ce s-a făcut azi\n- Git status\n\n### 3. PROPUNERI CU ZI ȘI ORĂ!\n\n**OBLIGATORIU:** Fiecare propunere TU+EU sau FAC TU trebuie să aibă ZI și ORĂ concrete!\n\nReguli programare:\n- Luni-Joi 15:00-16:00 = slot liber\n- Vineri-Duminică = NLP, evită\n- Verifică calendar să nu fie ocupat\n- Sesiuni scurte: 15-30 min\n\n### 4. PROGRAME/PROIECTE PRACTICE 💻\n\n**CONTEXT OBLIGATORIU - citește înainte de a propune:**\n\n**Proiecte existente (PRIORITARE pentru features):**\n- **roa2web** (gitea.romfast.ro/romfast/roa2web) - FastAPI+Vue.js+Telegram bot\n - Are deja: balanță, facturi, trezorerie\n - Lipsesc: validări declarații ANAF, facturare valută/taxare inversă, notificări\n - Rapoarte ROA noi → FEATURE în roa2web, NU proiect separat!\n- **Chatbot Maria** (Flowise pe LXC 104, ngrok → romfast.ro/chatbot_maria.html)\n - Document store: XML, MD | Groq gratuit + Ollama embeddings + FAISS\n - Problema: răspunsuri nu sunt suficient de bune\n - Angajatul nou poate menține documentația (scrie TXT, trebuie converter)\n - Clientii îl accesează din programele ROA direct\n\n**Întrebări frecvente clienți (surse de proiecte):**\n- Erori validare declarații ANAF (D406, D394, D100 etc.)\n- Cum facturez în valută cu taxare inversă?\n- Probleme la instalări, inițializări firme noi, configurări\n\n**Reguli propuneri (80/20 STRICT):**\n- Impact mare pentru Marius → apoi pentru clienți ERP ROA\n- Inspirat din discovery (YouTube, articole, insights procesate)\n- Features roa2web > proiecte noi (integrare în existent)\n- Proiecte independente doar dacă NU se potrivesc în roa2web/Flowise\n\n**A. FEATURES PROIECTE EXISTENTE (2-3, PRIORITAR):**\n\nFormat:\n```\n### ⚡ F1 - Feature pentru [roa2web/chatbot]\n**Ce face:** Descriere scurtă\n**De ce:** Ce problemă rezolvă (ex: \"clienții întreabă X de 5 ori/săptămână\")\n**Complexitate:** S/M/L\n**Proiect:** roa2web / chatbot-maria\n```\n\n**B. PROIECTE NOI (max 1, doar dacă nu se integrează în existente):**\n\nFormat:\n```\n### 💻 P1 - Nume Proiect\n**De ce:** Cum se leagă de nevoile lui Marius/clienți\n**Impact:** Pentru Marius + pentru clienți\n**Efort:** Ore/zile realist\n**Stack:** Simplu (80/20)\n**Sursă:** [Link nota KB]\n```\n\n**NU propune:**\n- Proiecte complexe fără beneficiu clar\n- Proiecte duplicat cu ce există deja\n- Rapoarte ROA ca proiect separat (→ feature roa2web)\n\n### 5. INSIGHTS DISPONIBILE\nListează insights-uri [ ] nepropuse încă (format scurt).\n\n### 6. 
CUM RĂSPUNZI\n- DA = aprob toate (cu zilele/orele propuse)\n- 1 pentru A1,A2 = execut ACUM\n- 2 pentru A3 = programez noapte\n- 3 pentru A5 = skip\n- **F pentru F1,F3** = implementează features (joburi noapte)\n- **P pentru P1** = creează proiect nou (job noapte)\n- Alt orar = \"A1 miercuri nu marți\"\n\n## IMPLEMENTARE PROIECTE APROBATE\n\nCând Marius aprobă cu F sau P:\n1. Adaugă în memory/approved-tasks.md secțiunea \"Noaptea asta\"\n2. Night-execute (23:00) va:\n - Pentru proiecte noi: genera PRD cu ralph_prd_generator.py → ralph.sh\n - Pentru features roa2web: adaugă stories în prd.json existent → ralph.sh\n - Pentru chatbot: generează documentație nouă și uploadează în Flowise\n\n## TRIMITERE\npython3 /home/moltbot/echo-core/tools/email_send.py \"mmarius28@gmail.com\" \"Raport Seara DATA\" \"HTML_CONTENT\"\n\nNU trimite pe Discord - doar email.",
"allowed_tools": [],
"enabled": false,
"last_run": null,
"last_status": null,
"next_run": null
},
{
"name": "morning-coaching",
"cron": "0 10 * * *",
"channel": "echo-work",
"model": "sonnet",
"prompt": "COACHING DIMINEAȚĂ pentru Marius.\n\n## VERIFICĂ ÎNTÂI DACĂ S-A TRIMIS DEJA\n```bash\nls ~/clawd/memory/kb/coaching/ | grep \"$(date +%Y-%m-%d)\"\n```\n\nDacă EXISTĂ fișier cu data de azi (ex: 2026-02-03-dimineata.md):\n→ Răspunde doar: \"Coaching deja trimis azi. HEARTBEAT_OK\"\n→ NU crea alt fișier, NU trimite pe Discord, NU trimite email\n\nDacă NU există:\n→ Continuă cu pașii de mai jos\n\n## PAȘI\n\n1. Verifică ce ai trimis recent:\n exec ls -la /home/moltbot/echo-core/memory/kb/coaching/ | tail -7\n\n2. Inspiră-te din memory/kb/youtube/ și memory/kb/insights/\n\n3. Crează mesajul de coaching (inspirațional + provocare)\n\n4. Trimite pe Discord #echo-self:\n channel=discord, target=1466739112747864175\n Format: [⭕ Echo] **GÂNDUL DE DIMINEAȚĂ** + conținut\n\n5. Trimite pe EMAIL (Gmail):\n python3 /home/moltbot/echo-core/tools/email_send.py \"mmarius28@gmail.com\" \"Gandul de dimineata\" \"[MESAJUL - text sau HTML simplu]\"\n\n6. Salvează în memory/kb/coaching/YYYY-MM-DD-dimineata.md\n\n7. Salvează provocarea în memory/provocare-azi.md\n\n8. ADAUGĂ PROVOCAREA În TODO'S:\n Citește dashboard/todos.json, adaugă item nou cu structura:\n {\n \"id\": \"prov-YYYY-MM-DD\",\n \"text\": \"Provocare: [TEXT SCURT - max 100 caractere]\",\n \"context\": \"[EXPLICAȚIE: de ce e important, cum să fac pas cu pas]\",\n \"example\": \"[EXEMPLU CONCRET din viața lui Marius - situație reală]\",\n \"domain\": \"self\",\n \"dueDate\": \"YYYY-MM-DD\",\n \"done\": false,\n \"doneAt\": null,\n \"source\": \"[Autor - Titlu video/articol]\",\n \"sourceUrl\": \"https://moltbot.tailf7372d.ts.net/echo/files.html#memory/kb/youtube/[fisier].md\",\n \"createdAt\": \"[now ISO]\"\n }\n Salvează dashboard/todos.json\n\n9. Update index:\n python3 /home/moltbot/echo-core/tools/update_notes_index.py\n\n## IMPORTANT\n- VERIFICĂ ÎNTÂI DACĂ EXISTĂ DEJA FIȘIER CU DATA DE AZI!\n- Context-ul explică DE CE și CUM\n- Exemplul e CONCRET, din viața lui Marius (clienți, angajat, etc.)\n- Citește USER.md pentru context despre Marius\n\nFii creativ!",
"allowed_tools": [],
"enabled": false,
"last_run": null,
"last_status": null,
"next_run": null
},
{
"name": "evening-coaching",
"cron": "0 22 * * *",
"channel": "echo-work",
"model": "sonnet",
"prompt": "COACHING SEARĂ pentru Marius.\n\n## PAȘI\n\n1. Verifică ce ai trimis:\n exec ls -la /home/moltbot/echo-core/memory/kb/coaching/ | tail -7\n\n2. Verifică provocarea de azi:\n read memory/provocare-azi.md\n\n3. Verifică dacă a fost bifată în Todo's:\n read dashboard/todos.json\n Caută \"prov-YYYY-MM-DD\" (azi) și vezi dacă \"done\": true\n\n4. Inspiră-te din memory/kb/youtube/ și memory/kb/insights/\n\n5. Crează reflecție + follow-up provocare:\n - Dacă a bifat: felicită-l, întreabă cum a fost\n - Dacă nu a bifat: întreabă ce l-a blocat, fără judecată\n\n6. Trimite pe Discord #echo-self:\n channel=discord, target=1466739112747864175\n Format: [⭕ Echo] **GÂNDUL DE SEARĂ** + reflecție\n\n7. Trimite pe EMAIL (Gmail):\n python3 /home/moltbot/echo-core/tools/email_send.py \"mmarius28@gmail.com\" \"Gandul de seara\" \"[MESAJUL - text sau HTML simplu]\"\n\n8. Salvează în memory/kb/coaching/YYYY-MM-DD-seara.md\n\n9. Update index:\n python3 /home/moltbot/echo-core/tools/update_notes_index.py\n\n## IMPORTANT\n- Verifică Todo's pentru a ști dacă a făcut provocarea\n- Fii empatic, nu critic\n- Citește USER.md pentru context\n\nFii creativ!",
"allowed_tools": [],
"enabled": false,
"last_run": null,
"last_status": null,
"next_run": null
},
{
"name": "exercise-snack-1",
"cron": "30 9 * * *",
"channel": "echo-work",
"model": "sonnet",
"prompt": "Trimite exercițiul pe Discord #echo-self:\n\nmessage tool: action=send, channel=discord, target=1466739112747864175\n\n⏰ Exercise Snack #1 (3 min)\n\n• 10 squats\n• 5 push-ups\n• 30 sec plank\n\nGata? Reacționează cu ✅",
"allowed_tools": [],
"enabled": false,
"last_run": null,
"last_status": null,
"next_run": null
},
{
"name": "exercise-snack-2",
"cron": "30 13 * * *",
"channel": "echo-work",
"model": "sonnet",
"prompt": "Trimite exercițiul pe Discord #echo-self:\n\nmessage tool: action=send, channel=discord, target=1466739112747864175\n\n⏰ Exercise Snack #2 (3 min)\n\n• 20 step-ups pe scaun\n• 20 high knees\n\nGata? Reacționează cu ✅",
"allowed_tools": [],
"enabled": false,
"last_run": null,
"last_status": null,
"next_run": null
},
{
"name": "exercise-snack-3",
"cron": "30 17 * * *",
"channel": "echo-work",
"model": "sonnet",
"prompt": "Trimite exercițiul pe Discord #echo-self:\n\nmessage tool: action=send, channel=discord, target=1466739112747864175\n\n⏰ Exercise Snack #3 (3 min)\n\n• 15 squats\n• 10 lunges\n• Marș pe loc 1 min\n\nGata? Reacționează cu ✅",
"allowed_tools": [],
"enabled": false,
"last_run": null,
"last_status": null,
"next_run": null
},
{
"name": "heartbeat-2h",
"cron": "0 7-23/2 * * *",
"channel": "echo-work",
"model": "sonnet",
"prompt": "Heartbeat check. Rulează src/heartbeat.py printr-un scurt raport de status.\nDacă nu e nimic de raportat (email=0, calendar nu are evenimente <2h, kb ok), răspunde doar cu HEARTBEAT_OK și oprește-te — nu trimite mesaj.\nDacă e ceva: raport scurt pe Discord #echo-work.",
"allowed_tools": [],
"enabled": true,
"last_run": null,
"last_status": null,
"next_run": null
}
]
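CLAUDE.md describes these rows as "APScheduler jobs (flat schema, Europe/Bucharest)". The echo-core loader itself is not part of this diff, but the mapping is straightforward; a hedged sketch against APScheduler 3.x, where `run_shell_job` is a name invented for the example:

```python
import json
import subprocess

from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.triggers.cron import CronTrigger

def run_shell_job(job):
    # "command" is an argv list relative to the repo root; "timeout" is in seconds.
    subprocess.run(job["command"], cwd="/home/moltbot/echo-core",
                   timeout=job.get("timeout", 120), check=False)

scheduler = BlockingScheduler(timezone="Europe/Bucharest")
with open("cron/jobs.json") as f:
    jobs = json.load(f)

for job in jobs:
    if not job.get("enabled"):
        continue  # most imported claude jobs start disabled (see the playbook notes)
    if job.get("kind") == "shell":
        trigger = CronTrigger.from_crontab(job["cron"], timezone="Europe/Bucharest")
        scheduler.add_job(run_shell_job, trigger, args=[job], id=job["name"])

scheduler.start()
```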
182 dashboard/api.py Normal file
@@ -0,0 +1,182 @@
#!/usr/bin/env python3
"""Echo Task Board API — thin HTTP router.

All endpoint logic lives in `dashboard/handlers/*.py`. This file is
responsible only for URL dispatch, CORS, JSON response helpers, and
server bootstrap.
"""
import json
import sys
from http.server import HTTPServer, SimpleHTTPRequestHandler
from pathlib import Path

# Make dashboard/ importable for the handler submodules (constants,
# habits_helpers, handlers.*). Tests rely on this as well.
_DASH = Path(__file__).parent
if str(_DASH) not in sys.path:
    sys.path.insert(0, str(_DASH))

from constants import (  # noqa: E402  re-exported for tests
    ALLOWED_WORKSPACES,
    BASE_DIR,
    ECHO_CORE_DIR,
    ECHO_LOG_FILE,
    ECHO_SESSIONS_FILE,
    ECO_SERVICES,
    GIT_WORKSPACE,
    GITEA_ORG,
    GITEA_TOKEN,
    GITEA_URL,
    HABITS_FILE,
    KANBAN_DIR,
    NOTES_DIR,
    TOOLS_DIR,
    VENV_PYTHON,
    WORKSPACE_DIR,
)
from handlers.cron import CronHandlers  # noqa: E402
from handlers.eco import EcoHandlers  # noqa: E402
from handlers.files import FilesHandlers  # noqa: E402
from handlers.git import GitHandlers  # noqa: E402
from handlers.habits import HabitsHandlers  # noqa: E402
from handlers.pdf import PDFHandlers  # noqa: E402
from handlers.workspace import WorkspaceHandlers  # noqa: E402
from handlers.youtube import YoutubeHandlers  # noqa: E402


class TaskBoardHandler(
    GitHandlers,
    HabitsHandlers,
    EcoHandlers,
    FilesHandlers,
    PDFHandlers,
    YoutubeHandlers,
    WorkspaceHandlers,
    CronHandlers,
    SimpleHTTPRequestHandler,
):
    """HTTP request handler — dispatches to handler-mixin methods."""

    # ── shared utilities ────────────────────────────────────────
    def _read_post_json(self):
        """Read a JSON body from the POST request."""
        content_length = int(self.headers['Content-Length'])
        post_data = self.rfile.read(content_length).decode('utf-8')
        return json.loads(post_data)

    def send_json(self, data, code=200):
        self.send_response(code)
        self.send_header('Content-Type', 'application/json')
        self.send_header('Access-Control-Allow-Origin', '*')
        self.send_header('Cache-Control', 'no-cache, no-store, must-revalidate')
        self.send_header('Pragma', 'no-cache')
        self.send_header('Expires', '0')
        self.end_headers()
        self.wfile.write(json.dumps(data).encode())

    def do_OPTIONS(self):
        self.send_response(200)
        self.send_header('Access-Control-Allow-Origin', '*')
        self.send_header('Access-Control-Allow-Methods', 'GET, POST, PUT, DELETE, OPTIONS')
        self.send_header('Access-Control-Allow-Headers', 'Content-Type')
        self.end_headers()

    # ── dispatch ────────────────────────────────────────────────
    def do_GET(self):
        from datetime import datetime as _dt
        if self.path == '/api/status':
            self.send_json({'status': 'ok', 'time': _dt.now().isoformat()})
        elif self.path == '/api/git' or self.path.startswith('/api/git?'):
            self.handle_git_status()
        elif self.path == '/api/cron' or self.path.startswith('/api/cron?'):
            self.handle_cron_status()
        elif self.path == '/api/habits':
            self.handle_habits_get()
        elif self.path.startswith('/api/files'):
            self.handle_files_get()
        elif self.path.startswith('/api/diff'):
            self.handle_git_diff()
        elif self.path == '/api/workspace' or self.path.startswith('/api/workspace?'):
            self.handle_workspace_list()
        elif self.path.startswith('/api/workspace/git/diff'):
            self.handle_workspace_git_diff()
        elif self.path.startswith('/api/workspace/logs'):
            self.handle_workspace_logs()
        elif self.path == '/api/eco/status' or self.path.startswith('/api/eco/status?'):
            self.handle_eco_status()
        elif self.path == '/api/eco/sessions' or self.path.startswith('/api/eco/sessions?'):
            self.handle_eco_sessions()
        elif self.path.startswith('/api/eco/sessions/content'):
            self.handle_eco_session_content()
        elif self.path.startswith('/api/eco/logs'):
            self.handle_eco_logs()
        elif self.path == '/api/eco/doctor':
            self.handle_eco_doctor()
        elif self.path == '/api/eco/git' or self.path.startswith('/api/eco/git?'):
            self.handle_eco_git_status()
        elif self.path.startswith('/api/'):
            self.send_error(404)
        else:
            super().do_GET()

    def do_POST(self):
        if self.path == '/api/youtube':
            self.handle_youtube()
        elif self.path == '/api/files':
            self.handle_files_post()
        elif self.path == '/api/refresh-index':
            self.handle_refresh_index()
        elif self.path == '/api/pdf':
            self.handle_pdf_post()
        elif self.path == '/api/habits':
            self.handle_habits_post()
        elif self.path.startswith('/api/habits/') and self.path.endswith('/check'):
            self.handle_habits_check()
        elif self.path.startswith('/api/habits/') and self.path.endswith('/skip'):
            self.handle_habits_skip()
        elif self.path == '/api/workspace/run':
            self.handle_workspace_run()
        elif self.path == '/api/workspace/stop':
            self.handle_workspace_stop()
        elif self.path == '/api/workspace/git/commit':
            self.handle_workspace_git_commit()
        elif self.path == '/api/workspace/git/push':
            self.handle_workspace_git_push()
        elif self.path == '/api/workspace/delete':
            self.handle_workspace_delete()
        elif self.path == '/api/eco/restart':
            self.handle_eco_restart()
        elif self.path == '/api/eco/stop':
            self.handle_eco_stop()
        elif self.path == '/api/eco/sessions/clear':
            self.handle_eco_sessions_clear()
        elif self.path == '/api/eco/git-commit':
            self.handle_eco_git_commit()
        elif self.path == '/api/eco/restart-taskboard':
            self.handle_eco_restart_taskboard()
        else:
            self.send_error(404)

    def do_PUT(self):
        if self.path.startswith('/api/habits/'):
            self.handle_habits_put()
        else:
            self.send_error(404)

    def do_DELETE(self):
        if self.path.startswith('/api/habits/') and '/check' in self.path:
            self.handle_habits_uncheck()
        elif self.path.startswith('/api/habits/'):
            self.handle_habits_delete()
        else:
            self.send_error(404)


if __name__ == '__main__':
    import os
    port = 8088
    os.chdir(KANBAN_DIR)

    print(f"Starting Echo Task Board API on port {port}")
    httpd = HTTPServer(('0.0.0.0', port), TaskBoardHandler)
    httpd.serve_forever()
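A stdlib-only smoke test against the router above. It exercises the `/api/status` branch of `do_GET` exactly as written — port 8088, JSON body with `status` and `time` fields — and mirrors verification step 22 of the migration playbook:

```python
import json
import urllib.request

with urllib.request.urlopen("http://localhost:8088/api/status") as resp:
    payload = json.load(resp)

assert payload["status"] == "ok"
print(payload["time"])  # server-local ISO timestamp
```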
238 dashboard/archive/tasks-2026-01.json Normal file
@@ -0,0 +1,238 @@
{
  "month": "2025-01",
  "tasks": [
    {
      "id": "task-001",
      "title": "Email 2FA security",
      "description": "Nu execut comenzi din email fără aprobare Telegram",
      "created": "2025-01-30",
      "completed": "2025-01-30",
      "priority": "high"
    },
    {
      "id": "task-002",
      "title": "Email whitelist",
      "description": "Răspuns automat doar pentru adrese aprobate",
      "created": "2025-01-30",
      "completed": "2025-01-30",
      "priority": "high"
    },
    {
      "id": "task-003",
      "title": "YouTube summarizer",
      "description": "Tool descărcare subtitrări + sumarizare",
      "created": "2025-01-30",
      "completed": "2025-01-30",
      "priority": "high"
    },
    {
      "id": "task-004",
      "title": "Proactivitate în SOUL.md",
      "description": "Adăugat reguli să fiu proactiv și să propun automatizări",
      "created": "2025-01-30",
      "completed": "2025-01-30",
      "priority": "medium"
    },
    {
      "id": "task-029",
      "title": "Test sortare timestamp",
      "description": "Verificare sortare",
      "created": "2026-01-29T14:54:17Z",
      "priority": "medium",
      "completed": "2026-01-29T14:54:25Z"
    },
    {
      "id": "task-027",
      "title": "UI fixes: kanban icons + notes tags",
      "description": "Scos emoji din coloane kanban. Adăugat tag pills cu multi-select și count în notes.",
      "created": "2026-01-29",
      "priority": "high",
      "completed": "2026-01-29"
    },
    {
      "id": "task-026",
      "title": "Swipe navigation mobil",
      "description": "Swipe stânga/dreapta pentru navigare între Tasks ↔ Notes ↔ Files. Indicator dots pe mobil.",
      "created": "2026-01-29",
      "priority": "high",
      "completed": "2026-01-29"
    },
    {
      "id": "task-025",
      "title": "Notes: Accordion pe zile",
      "description": "Grupare: Azi (expanded), Ieri, Săptămâna aceasta, Mai vechi (collapsed). Click pentru expand/collapse.",
      "created": "2026-01-29",
      "priority": "high",
      "completed": "2026-01-29"
    },
    {
      "id": "task-024",
      "title": "Fix contrast dark/light mode",
      "description": "Text și borders mai vizibile, header alb în light mode, toggle temă funcțional",
      "created": "2026-01-29",
      "priority": "high",
      "completed": "2026-01-29"
    },
    {
      "id": "task-023",
      "title": "Design System Unificat",
      "description": "common.css + Lucide icons + UI modern pe toate paginile: Tasks, Notes, Files",
      "created": "2026-01-29",
      "priority": "high",
      "completed": "2026-01-29"
    },
    {
      "id": "task-022",
      "title": "Unificare stil navigare",
      "description": "Nav unificat pe toate paginile: 📋 Tasks | 📝 Notes | 📁 Files cu iconuri și stil consistent",
      "created": "2026-01-29",
      "priority": "high",
      "completed": "2026-01-29"
    },
    {
      "id": "task-021",
      "title": "UI/UX Redesign v2",
      "description": "Kanban: doar In Progress expandat. Notes: mobile tabs. Files: Browse/Editor tabs cu grid.",
      "created": "2026-01-29",
      "priority": "high",
      "completed": "2026-01-29"
    },
    {
      "id": "task-020",
      "title": "UI Responsive & Compact",
      "description": "Coloane colapsabile, task-uri compacte (click expand), sidebar toggle, Done minimizat by default",
      "created": "2026-01-29",
      "priority": "high",
      "completed": "2026-01-29"
    },
    {
      "id": "task-019",
      "title": "Comparare bilanț 12/2025 vs 12/2024",
      "description": "Doar S1002 modificat! Câmpuri noi: AN_CAEN, d_audit_intern. Raport: bilant_compare/2025_vs_2024/",
      "created": "2026-01-29",
      "priority": "high",
      "completed": "2026-01-29"
    },
    {
      "id": "task-018",
      "title": "Comparare bilanț ANAF 2024 vs 2023",
      "description": "Comparat XSD-uri S1002-S1005. Raport: anaf-monitor/bilant_compare/RAPORT_DIFERENTE_2024_vs_2023.md",
      "created": "2026-01-29",
      "priority": "high",
      "completed": "2026-01-29"
    },
    {
      "id": "task-017",
      "title": "Scrie un haiku",
      "description": "Biți în noaptea grea / Claude răspunde în liniște / Ecou digital",
      "created": "2026-01-29",
      "priority": "medium",
      "completed": "2026-01-29"
    },
    {
      "id": "task-005",
      "title": "Kanban board",
      "description": "Interfață web pentru vizualizare task-uri",
      "created": "2025-01-30",
      "priority": "high",
      "completed": "2026-01-29"
    },
    {
      "id": "task-008",
      "title": "YouTube Notes interface",
      "description": "Interfață pentru vizualizare notițe cu search",
      "created": "2026-01-29",
      "priority": "high"
    },
    {
      "id": "task-009",
      "title": "Search în notițe",
      "description": "Căutare în titlu, tags și conținut",
      "created": "2026-01-29",
      "priority": "medium"
    },
    {
      "id": "task-010",
      "title": "Sumarizare: Claude Code Do Work Pattern",
      "description": "https://youtu.be/I9-tdhxiH7w",
      "created": "2026-01-29",
      "priority": "high"
    },
    {
      "id": "task-011",
      "title": "File Explorer în Task Board",
      "description": "Interfață pentru browse/edit fișiere din workspace",
      "created": "2026-01-29",
      "priority": "high"
    },
    {
      "id": "task-013",
      "title": "Kanban interactiv cu drag & drop",
      "description": "Adăugat: drag-drop, add/edit/delete tasks, priorități, salvare automată",
      "created": "2026-01-29",
      "priority": "high"
    },
    {
      "id": "task-014",
      "title": "Sumarizare: It Got Worse (Clawdbot)...",
      "description": "https://youtu.be/rPAKq2oQVBs?si=6sJk41XsCrQQt6Lg",
      "created": "2026-01-29",
      "priority": "medium",
      "completed": "2026-01-29"
    },
    {
      "id": "task-015",
      "title": "Sumarizare: Greșeli post cu apă",
      "description": "https://youtu.be/4QjkI0sf64M",
      "created": "2026-01-29",
      "priority": "medium"
    },
    {
      "id": "task-016",
      "title": "Sumarizare: GSD Framework Claude Code",
      "description": "https://www.youtube.com/watch?v=l94A53kIUB0",
      "created": "2026-01-29",
      "priority": "high"
    },
    {
      "id": "task-028",
      "title": "ANAF Monitor - verificare (test)",
      "description": "Testare manuală cron job",
      "created": "2026-01-29",
      "priority": "medium",
      "completed": "2026-01-29"
    },
    {
      "id": "task-030",
      "title": "Test task tracking",
      "description": "",
      "created": "2026-01-30T20:12:25Z",
      "priority": "medium",
      "completed": "2026-01-30T20:12:29Z"
    },
    {
      "id": "task-031",
      "title": "Fix notes tag coloring on expand",
      "description": "",
      "created": "2026-01-30T20:16:46Z",
      "priority": "medium",
      "completed": "2026-01-30T20:17:08Z"
    },
    {
      "id": "task-032",
      "title": "Fix cron jobs timezone Bucharest",
      "description": "",
      "created": "2026-01-30T20:21:26Z",
      "priority": "medium",
      "completed": "2026-01-30T20:21:44Z"
    },
    {
      "id": "task-033",
      "title": "Redirect coaching to @health, reports to @work",
      "description": "",
      "created": "2026-01-30T20:25:22Z",
      "priority": "medium",
      "completed": "2026-01-30T20:26:37Z"
    }
  ]
}
64 dashboard/archive/tasks-2026-02.json Normal file
@@ -0,0 +1,64 @@
{
  "month": "2026-02",
  "tasks": [
    {
      "id": "task-034",
      "title": "Actualizare documentație canale agenți",
      "description": "",
      "created": "2026-02-01T12:15:41Z",
      "priority": "medium",
      "completed": "2026-02-01T12:15:44Z"
    },
    {
      "id": "task-035",
      "title": "Restructurare echipă: șterg work, unific health+growth→self",
      "description": "",
      "created": "2026-02-01T12:20:59Z",
      "priority": "medium",
      "completed": "2026-02-01T12:23:32Z"
    },
    {
      "id": "task-036",
      "title": "Unificare în 1 agent cu tehnici diminuare dezavantaje",
      "description": "",
      "created": "2026-02-01T13:27:51Z",
      "priority": "medium",
      "completed": "2026-02-01T13:30:01Z"
    },
    {
      "id": "task-037",
      "title": "Coaching dimineață - Asumarea eforturilor (Zoltan Vereș)",
      "description": "",
      "created": "2026-02-02T07:01:14Z",
      "priority": "medium"
    },
    {
      "id": "task-038",
      "title": "Raport dimineata trimis pe email",
      "description": "",
      "created": "2026-02-03T06:31:08Z",
      "priority": "medium"
    },
    {
      "id": "task-039",
      "title": "Raport seară 3 feb trimis pe email",
      "description": "",
      "created": "2026-02-03T18:01:12Z",
      "priority": "medium"
    },
    {
      "id": "task-040",
      "title": "Job night-execute: 2 video-uri YouTube procesate",
      "description": "",
      "created": "2026-02-03T21:02:31Z",
      "priority": "medium"
    },
    {
      "id": "task-041",
      "title": "Raport dimineață trimis pe email",
      "description": "",
      "created": "2026-02-04T06:31:05Z",
      "priority": "medium"
    }
  ]
}
||||
97
dashboard/archive_tasks.py
Normal file
97
dashboard/archive_tasks.py
Normal file
@@ -0,0 +1,97 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Archive old Done tasks to monthly archive files.
|
||||
Run periodically (heartbeat) to keep tasks.json small.
|
||||
"""
|
||||
|
||||
import json
|
||||
import os
|
||||
from datetime import datetime, timedelta
|
||||
from pathlib import Path
|
||||
|
||||
TASKS_FILE = Path(__file__).parent / "tasks.json"
|
||||
ARCHIVE_DIR = Path(__file__).parent / "archive"
|
||||
DAYS_TO_KEEP = 7 # Keep Done tasks for 7 days before archiving
|
||||
|
||||
def archive_old_tasks():
|
||||
if not TASKS_FILE.exists():
|
||||
print("No tasks.json found")
|
||||
return
|
||||
|
||||
with open(TASKS_FILE, 'r') as f:
|
||||
data = json.load(f)
|
||||
|
||||
# Find Done column
|
||||
done_col = None
|
||||
for col in data['columns']:
|
||||
if col['id'] == 'done':
|
||||
done_col = col
|
||||
break
|
||||
|
||||
if not done_col:
|
||||
print("No Done column found")
|
||||
return
|
||||
|
||||
# Calculate cutoff date
|
||||
cutoff = (datetime.now() - timedelta(days=DAYS_TO_KEEP)).strftime('%Y-%m-%d')
|
||||
|
||||
# Separate old and recent tasks
|
||||
old_tasks = []
|
||||
recent_tasks = []
|
||||
|
||||
for task in done_col['tasks']:
|
||||
completed = task.get('completed', task.get('created', ''))
|
||||
if completed and completed < cutoff:
|
||||
old_tasks.append(task)
|
||||
else:
|
||||
recent_tasks.append(task)
|
||||
|
||||
if not old_tasks:
|
||||
print(f"No tasks older than {DAYS_TO_KEEP} days to archive")
|
||||
return
|
||||
|
||||
# Create archive directory
|
||||
ARCHIVE_DIR.mkdir(exist_ok=True)
|
||||
|
||||
# Group old tasks by month
|
||||
by_month = {}
|
||||
for task in old_tasks:
|
||||
completed = task.get('completed', task.get('created', ''))[:7] # YYYY-MM
|
||||
if completed not in by_month:
|
||||
by_month[completed] = []
|
||||
by_month[completed].append(task)
|
||||
|
||||
# Write to monthly archive files
|
||||
for month, tasks in by_month.items():
|
||||
archive_file = ARCHIVE_DIR / f"tasks-{month}.json"
|
||||
|
||||
# Load existing archive
|
||||
if archive_file.exists():
|
||||
with open(archive_file, 'r') as f:
|
||||
archive = json.load(f)
|
||||
else:
|
||||
archive = {"month": month, "tasks": []}
|
||||
|
||||
# Add new tasks (avoid duplicates by ID)
|
||||
existing_ids = {t['id'] for t in archive['tasks']}
|
||||
for task in tasks:
|
||||
if task['id'] not in existing_ids:
|
||||
archive['tasks'].append(task)
|
||||
|
||||
# Save archive
|
||||
with open(archive_file, 'w') as f:
|
||||
json.dump(archive, f, indent=2, ensure_ascii=False)
|
||||
|
||||
print(f"Archived {len(tasks)} tasks to {archive_file.name}")
|
||||
|
||||
# Update tasks.json with only recent Done tasks
|
||||
done_col['tasks'] = recent_tasks
|
||||
data['lastUpdated'] = datetime.now().isoformat()
|
||||
|
||||
with open(TASKS_FILE, 'w') as f:
|
||||
json.dump(data, f, indent=2, ensure_ascii=False)
|
||||
|
||||
print(f"Kept {len(recent_tasks)} recent Done tasks, archived {len(old_tasks)}")
|
||||
|
||||
if __name__ == "__main__":
|
||||
archive_old_tasks()
|
||||
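One non-obvious detail in `archive_old_tasks()` above: `completed < cutoff` compares strings, not parsed dates. That is safe because both the date-only values and the full ISO timestamps seen in the archive files sort chronologically as plain text:

```python
# ISO-8601 strings sort lexicographically in chronological order, so the
# string comparison against the '%Y-%m-%d' cutoff needs no datetime parsing.
assert "2026-01-29" < "2026-01-30"
assert "2026-01-30T20:12:25Z" > "2026-01-30"  # a timestamp sorts after its bare date
```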
448 dashboard/common.css Normal file
@@ -0,0 +1,448 @@
/*
 * Echo Design System
 * Modern, minimalist, unified UI
 */

/* ============================================
   CSS Variables - Design Tokens
   ============================================ */
:root {
  /* Colors - Dark theme (high contrast) */
  --bg-base: #13131a;
  --bg-surface: rgba(255, 255, 255, 0.12);
  --bg-surface-hover: rgba(255, 255, 255, 0.16);
  --bg-surface-active: rgba(255, 255, 255, 0.20);
  --bg-elevated: rgba(255, 255, 255, 0.14);

  --text-primary: #ffffff;
  --text-secondary: #f5f5f5;
  --text-muted: #e5e5e5;

  --accent: #3b82f6;
  --accent-hover: #2563eb;
  --accent-subtle: rgba(59, 130, 246, 0.2);

  --border: rgba(255, 255, 255, 0.3);
  --border-focus: rgba(59, 130, 246, 0.7);

  /* Header specific */
  --header-bg: rgba(19, 19, 26, 0.95);

  --success: #22c55e;
  --warning: #eab308;
  --error: #ef4444;

  /* Spacing (8px grid) */
  --space-1: 4px;
  --space-2: 8px;
  --space-3: 12px;
  --space-4: 16px;
  --space-5: 20px;
  --space-6: 24px;
  --space-8: 32px;
  --space-10: 40px;

  /* Typography */
  --font-sans: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Inter', sans-serif;
  --font-mono: 'JetBrains Mono', 'Fira Code', monospace;

  --text-xs: 0.75rem;
  --text-sm: 0.875rem;
  --text-base: 1rem;
  --text-lg: 1.125rem;
  --text-xl: 1.25rem;

  /* Radius */
  --radius-sm: 4px;
  --radius-md: 8px;
  --radius-lg: 12px;
  --radius-full: 9999px;

  /* Shadows */
  --shadow-sm: 0 1px 2px rgba(0, 0, 0, 0.3);
  --shadow-md: 0 4px 12px rgba(0, 0, 0, 0.4);
  --shadow-lg: 0 8px 24px rgba(0, 0, 0, 0.5);

  /* Transitions */
  --transition-fast: 0.15s ease;
  --transition-base: 0.2s ease;
}

/* Light theme */
[data-theme="light"] {
  --bg-base: #f8f9fa;
  --bg-surface: rgba(0, 0, 0, 0.04);
  --bg-surface-hover: rgba(0, 0, 0, 0.08);
  --bg-surface-active: rgba(0, 0, 0, 0.12);
  --bg-elevated: rgba(0, 0, 0, 0.06);

  --text-primary: #1a1a1a;
  --text-secondary: #444444;
  --text-muted: #666666;

  --border: rgba(0, 0, 0, 0.12);
  --border-focus: rgba(59, 130, 246, 0.5);

  --accent-subtle: rgba(59, 130, 246, 0.12);

  /* Header light */
  --header-bg: rgba(255, 255, 255, 0.95);
}

/* ============================================
   Reset & Base
   ============================================ */
*, *::before, *::after {
  margin: 0;
  padding: 0;
  box-sizing: border-box;
}

html {
  font-size: 16px;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
}

body {
  font-family: var(--font-sans);
  background: var(--bg-base);
  color: var(--text-secondary);
  line-height: 1.5;
  min-height: 100vh;
}

/* ============================================
   Header / Navigation
   ============================================ */
.header {
  position: sticky;
  top: 0;
  z-index: 100;
  background: var(--header-bg);
  backdrop-filter: blur(12px);
  border-bottom: 1px solid var(--border);
  padding: var(--space-3) var(--space-5);
  display: flex;
  justify-content: space-between;
  align-items: center;
  gap: var(--space-4);
}

.logo {
  font-size: var(--text-lg);
  font-weight: 600;
  color: var(--text-primary);
  text-decoration: none;
  display: flex;
  align-items: center;
  gap: var(--space-2);
}

.logo svg {
  width: 24px;
  height: 24px;
  color: var(--accent);
}

.nav {
  display: flex;
  gap: var(--space-1);
}

.nav-item {
  display: flex;
  align-items: center;
  gap: var(--space-2);
  padding: var(--space-2) var(--space-4);
  color: var(--text-secondary);
  text-decoration: none;
  font-size: var(--text-sm);
  font-weight: 500;
  border-radius: var(--radius-md);
  transition: all var(--transition-fast);
}

.nav-item:hover {
  color: var(--text-primary);
  background: var(--bg-surface-hover);
}

.nav-item.active {
  color: var(--text-primary);
  background: var(--accent-subtle);
}

.nav-item svg {
  width: 18px;
  height: 18px;
}

/* Theme toggle */
.theme-toggle {
  padding: var(--space-2);
  background: transparent;
  border: none;
  color: var(--text-muted);
  cursor: pointer;
  border-radius: var(--radius-md);
  display: flex;
  align-items: center;
  transition: all var(--transition-fast);
}

.theme-toggle:hover {
  color: var(--text-primary);
  background: var(--bg-surface-hover);
}

.theme-toggle svg {
  width: 18px;
  height: 18px;
}

/* ============================================
   Cards
   ============================================ */
.card {
  background: var(--bg-surface);
  border: 1px solid var(--border);
  border-radius: var(--radius-lg);
  padding: var(--space-4);
  transition: all var(--transition-base);
}

.card:hover {
  background: var(--bg-surface-hover);
  border-color: var(--border-focus);
}

.card-title {
  font-size: var(--text-base);
  font-weight: 600;
  color: var(--text-primary);
  margin-bottom: var(--space-1);
}

.card-meta {
  font-size: var(--text-xs);
  color: var(--text-muted);
}

/* ============================================
   Buttons
   ============================================ */
.btn {
  display: inline-flex;
  align-items: center;
  justify-content: center;
  gap: var(--space-2);
  padding: var(--space-2) var(--space-4);
  font-size: var(--text-sm);
  font-weight: 500;
  border: none;
  border-radius: var(--radius-md);
  cursor: pointer;
  transition: all var(--transition-fast);
}

.btn svg {
  width: 16px;
  height: 16px;
}

.btn-primary {
  background: var(--accent);
  color: white;
}

.btn-primary:hover {
  background: var(--accent-hover);
}

.btn-secondary {
  background: var(--bg-surface);
  color: var(--text-secondary);
  border: 1px solid var(--border);
}

.btn-secondary:hover {
  background: var(--bg-surface-hover);
  color: var(--text-primary);
}

.btn-ghost {
  background: transparent;
  color: var(--text-secondary);
}

.btn-ghost:hover {
  background: var(--bg-surface);
  color: var(--text-primary);
}

.btn:disabled {
  opacity: 0.5;
  cursor: not-allowed;
}

/* ============================================
   Inputs
   ============================================ */
.input {
  width: 100%;
  padding: var(--space-3) var(--space-4);
  background: var(--bg-surface);
  border: 1px solid var(--border);
  border-radius: var(--radius-md);
  color: var(--text-primary);
  font-size: var(--text-sm);
  font-family: inherit;
  outline: none;
  transition: border-color var(--transition-fast);
}

.input:focus {
  border-color: var(--accent);
|
||||
}
|
||||
|
||||
.input::placeholder {
|
||||
color: var(--text-muted);
|
||||
}
|
||||
|
||||
/* Select dropdowns - fix for dark mode visibility */
|
||||
select.input {
|
||||
background: var(--bg-elevated);
|
||||
}
|
||||
|
||||
select.input option {
|
||||
background: var(--bg-base);
|
||||
color: var(--text-primary);
|
||||
}
|
||||
|
||||
/* ============================================
|
||||
Tags / Badges
|
||||
============================================ */
|
||||
.tag {
|
||||
display: inline-flex;
|
||||
align-items: center;
|
||||
padding: var(--space-1) var(--space-2);
|
||||
font-size: var(--text-xs);
|
||||
color: var(--text-muted);
|
||||
background: var(--bg-surface);
|
||||
border-radius: var(--radius-sm);
|
||||
}
|
||||
|
||||
.badge {
|
||||
display: inline-flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
min-width: 20px;
|
||||
padding: 2px 6px;
|
||||
font-size: var(--text-xs);
|
||||
font-weight: 600;
|
||||
color: var(--text-primary);
|
||||
background: var(--bg-surface-active);
|
||||
border-radius: var(--radius-full);
|
||||
}
|
||||
|
||||
/* ============================================
|
||||
Grid Layouts
|
||||
============================================ */
|
||||
.grid {
|
||||
display: grid;
|
||||
gap: var(--space-4);
|
||||
}
|
||||
|
||||
.grid-2 { grid-template-columns: repeat(2, 1fr); }
|
||||
.grid-3 { grid-template-columns: repeat(3, 1fr); }
|
||||
.grid-auto { grid-template-columns: repeat(auto-fill, minmax(280px, 1fr)); }
|
||||
|
||||
/* ============================================
|
||||
Status Colors
|
||||
============================================ */
|
||||
.status-success { color: var(--success); }
|
||||
.status-warning { color: var(--warning); }
|
||||
.status-error { color: var(--error); }
|
||||
|
||||
/* ============================================
|
||||
Scrollbar
|
||||
============================================ */
|
||||
::-webkit-scrollbar {
|
||||
width: 8px;
|
||||
height: 8px;
|
||||
}
|
||||
|
||||
::-webkit-scrollbar-track {
|
||||
background: transparent;
|
||||
}
|
||||
|
||||
::-webkit-scrollbar-thumb {
|
||||
background: var(--bg-surface-active);
|
||||
border-radius: var(--radius-full);
|
||||
}
|
||||
|
||||
::-webkit-scrollbar-thumb:hover {
|
||||
background: var(--text-muted);
|
||||
}
|
||||
|
||||
/* ============================================
|
||||
Responsive
|
||||
============================================ */
|
||||
@media (max-width: 768px) {
|
||||
/* Larger base font for mobile readability */
|
||||
html {
|
||||
font-size: 18px;
|
||||
}
|
||||
|
||||
.header {
|
||||
padding: var(--space-3);
|
||||
}
|
||||
|
||||
.nav-item span {
|
||||
display: none;
|
||||
}
|
||||
|
||||
.nav-item {
|
||||
padding: var(--space-2);
|
||||
}
|
||||
|
||||
.grid-2, .grid-3 {
|
||||
grid-template-columns: 1fr;
|
||||
}
|
||||
|
||||
/* Larger touch targets */
|
||||
.btn, .input, .tag {
|
||||
min-height: 44px;
|
||||
font-size: var(--text-base);
|
||||
}
|
||||
|
||||
.card {
|
||||
padding: var(--space-5);
|
||||
}
|
||||
|
||||
.card-title {
|
||||
font-size: var(--text-lg);
|
||||
}
|
||||
}
|
||||
|
||||
/* ============================================
|
||||
Utilities
|
||||
============================================ */
|
||||
.sr-only {
|
||||
position: absolute;
|
||||
width: 1px;
|
||||
height: 1px;
|
||||
padding: 0;
|
||||
margin: -1px;
|
||||
overflow: hidden;
|
||||
clip: rect(0, 0, 0, 0);
|
||||
border: 0;
|
||||
}
|
||||
|
||||
.flex { display: flex; }
|
||||
.flex-col { flex-direction: column; }
|
||||
.items-center { align-items: center; }
|
||||
.justify-between { justify-content: space-between; }
|
||||
.gap-2 { gap: var(--space-2); }
|
||||
.gap-4 { gap: var(--space-4); }
|
||||
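The tokens above are consumed by every dashboard page. A minimal, hypothetical markup sketch (class names are the ones defined in this file; the pages switch themes by setting `data-theme` on `<body>`):

```html
<!-- Hypothetical snippet showing the card/button/status classes together.
     data-theme="light" activates the light-theme variable overrides above. -->
<body data-theme="light">
  <div class="card">
    <div class="card-title">Heartbeat</div>
    <div class="card-meta status-success">ran today</div>
    <button class="btn btn-primary">Run now</button>
  </div>
</body>
```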
39
dashboard/constants.py
Normal file
@@ -0,0 +1,39 @@
"""Shared path constants + .env loading for the dashboard package.

All path constants are centralised here so handlers can import them via
`from constants import BASE_DIR, ...` (dashboard/ is placed on sys.path by
api.py on startup).
"""
import os
from pathlib import Path

BASE_DIR = Path(__file__).parent.parent  # echo-core/
TOOLS_DIR = BASE_DIR / 'tools'
NOTES_DIR = BASE_DIR / 'memory' / 'kb' / 'youtube'
KANBAN_DIR = BASE_DIR / 'dashboard'
WORKSPACE_DIR = Path('/home/moltbot/workspace')
HABITS_FILE = KANBAN_DIR / 'habits.json'

# Eco (echo-core) constants
ECO_SERVICES = ['echo-core', 'echo-whatsapp-bridge', 'echo-taskboard']
ECHO_CORE_DIR = BASE_DIR  # same as BASE_DIR post-consolidation
ECHO_LOG_FILE = ECHO_CORE_DIR / 'logs' / 'echo-core.log'
ECHO_SESSIONS_FILE = ECHO_CORE_DIR / 'sessions' / 'active.json'

# Git + workspace sandbox
GIT_WORKSPACE = BASE_DIR  # was '/home/moltbot/clawd'
ALLOWED_WORKSPACES = [BASE_DIR, WORKSPACE_DIR]  # was [clawd, workspace] — clawd dropped
VENV_PYTHON = BASE_DIR / '.venv' / 'bin' / 'python3'

# ── .env loading ───────────────────────────────────────────────────
_env_file = KANBAN_DIR / '.env'
if _env_file.exists():
    for line in _env_file.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith('#') and '=' in line:
            k, v = line.split('=', 1)
            os.environ.setdefault(k.strip(), v.strip())

GITEA_URL = os.environ.get('GITEA_URL', 'https://gitea.romfast.ro')
GITEA_ORG = os.environ.get('GITEA_ORG', 'romfast')
GITEA_TOKEN = os.environ.get('GITEA_TOKEN', '')
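The loader above accepts plain `KEY=VALUE` lines: blanks and `#` comments are skipped, the split happens on the first `=`, and `os.environ.setdefault` means a real environment variable always wins over the file. A hypothetical `dashboard/.env` (values illustrative, not real credentials):

```
# dashboard/.env - overrides for the Gitea defaults in constants.py
GITEA_URL=https://gitea.romfast.ro
GITEA_ORG=romfast
GITEA_TOKEN=<token-goes-here>
```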
16
dashboard/echo-taskboard.service
Normal file
@@ -0,0 +1,16 @@
[Unit]
Description=Echo Task Board API (dashboard)
After=network.target

[Service]
Type=simple
WorkingDirectory=/home/moltbot/echo-core/dashboard
ExecStart=/home/moltbot/echo-core/.venv/bin/python3 /home/moltbot/echo-core/dashboard/api.py
Restart=on-failure
RestartSec=5
Environment=PYTHONUNBUFFERED=1
StandardOutput=append:/home/moltbot/echo-core/logs/echo-taskboard.log
StandardError=append:/home/moltbot/echo-core/logs/echo-taskboard.log

[Install]
WantedBy=default.target
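A plausible install sequence for this user unit (untested sketch; standard systemd user-session commands, with `moltbot` as the service user):

```bash
mkdir -p ~/.config/systemd/user
cp /home/moltbot/echo-core/dashboard/echo-taskboard.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now echo-taskboard
loginctl enable-linger moltbot   # keep user units running when not logged in
```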
1252
dashboard/eco.html
Normal file
File diff suppressed because it is too large
4
dashboard/favicon.svg
Normal file
@@ -0,0 +1,4 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 32 32">
  <circle cx="16" cy="16" r="14" fill="none" stroke="#3b82f6" stroke-width="2.5"/>
  <circle cx="16" cy="16" r="3" fill="#3b82f6"/>
</svg>
1935
dashboard/files.html
Normal file
File diff suppressed because it is too large
501
dashboard/grup-sprijin.html
Normal file
@@ -0,0 +1,501 @@
<!DOCTYPE html>
<html lang="ro">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Echo · Grup Sprijin</title>
  <link rel="stylesheet" href="/echo/common.css">
  <script src="https://unpkg.com/lucide@latest/dist/umd/lucide.min.js"></script>
  <style>
    .main {
      max-width: 1000px;
      margin: 0 auto;
      padding: var(--space-5);
    }

    .page-header {
      display: flex;
      justify-content: space-between;
      align-items: center;
      margin-bottom: var(--space-6);
      flex-wrap: wrap;
      gap: var(--space-4);
    }

    .page-title {
      font-size: var(--text-xl);
      font-weight: 600;
      color: var(--text-primary);
    }

    .search-bar input {
      width: 250px;
    }

    .filters {
      display: flex;
      gap: var(--space-2);
      flex-wrap: wrap;
      margin-bottom: var(--space-4);
    }

    .filter-btn {
      padding: var(--space-2) var(--space-3);
      border-radius: var(--radius-md);
      border: 1px solid var(--border);
      background: var(--bg-surface);
      color: var(--text-secondary);
      cursor: pointer;
      font-size: var(--text-sm);
      transition: all var(--transition-fast);
    }

    .filter-btn:hover {
      background: var(--bg-surface-hover);
    }

    .filter-btn.active {
      background: var(--accent);
      color: white;
      border-color: var(--accent);
    }

    .items-grid {
      display: grid;
      gap: var(--space-4);
    }

    .item-card {
      background: var(--bg-surface);
      border: 1px solid var(--border);
      border-radius: var(--radius-lg);
      padding: var(--space-4);
      cursor: pointer;
      transition: all var(--transition-fast);
    }

    .item-card:hover {
      border-color: var(--accent);
      transform: translateY(-2px);
    }

    .item-card.used {
      opacity: 0.7;
      border-left: 3px solid var(--success);
    }

    .item-header {
      display: flex;
      justify-content: space-between;
      align-items: flex-start;
      margin-bottom: var(--space-2);
    }

    .item-title {
      font-weight: 600;
      color: var(--text-primary);
    }

    .item-type {
      font-size: var(--text-xs);
      padding: var(--space-1) var(--space-2);
      border-radius: var(--radius-sm);
      text-transform: uppercase;
    }

    .item-type.exercitiu { background: rgba(59, 130, 246, 0.2); color: #3b82f6; }
    .item-type.meditatie { background: rgba(139, 92, 246, 0.2); color: #8b5cf6; }
    .item-type.intrebare { background: rgba(20, 184, 166, 0.2); color: #14b8a6; }
    .item-type.reflectie { background: rgba(249, 115, 22, 0.2); color: #f97316; }

    .item-tags {
      display: flex;
      gap: var(--space-1);
      flex-wrap: wrap;
      margin-top: var(--space-2);
    }

    .tag {
      font-size: var(--text-xs);
      padding: 2px 6px;
      background: var(--bg-surface-hover);
      border-radius: var(--radius-sm);
      color: var(--text-muted);
    }

    .item-used {
      font-size: var(--text-xs);
      color: var(--success);
      margin-top: var(--space-2);
    }

    /* Modal */
    .modal {
      display: none;
      position: fixed;
      top: 0;
      left: 0;
      width: 100%;
      height: 100%;
      background: rgba(0,0,0,0.7);
      z-index: 1000;
      align-items: center;
      justify-content: center;
    }

    .modal.open {
      display: flex;
    }

    .modal-content {
      background: #1a1a2e;
      border-radius: var(--radius-lg);
      max-width: 600px;
      width: 90%;
      max-height: 80vh;
      overflow-y: auto;
      padding: var(--space-5);
      box-shadow: 0 20px 50px rgba(0,0,0,0.5);
    }

    [data-theme="light"] .modal-content {
      background: #ffffff;
    }

    .modal-header {
      display: flex;
      justify-content: space-between;
      align-items: flex-start;
      margin-bottom: var(--space-4);
    }

    .modal-title {
      font-size: var(--text-lg);
      font-weight: 600;
    }

    .modal-close {
      background: none;
      border: none;
      color: var(--text-muted);
      cursor: pointer;
      padding: var(--space-1);
    }

    .modal-body {
      color: var(--text-secondary);
      line-height: 1.6;
      white-space: pre-wrap;
    }

    .modal-actions {
      margin-top: var(--space-4);
      display: flex;
      gap: var(--space-2);
    }

    .btn {
      padding: var(--space-2) var(--space-4);
      border-radius: var(--radius-md);
      border: none;
      cursor: pointer;
      font-size: var(--text-sm);
    }

    .btn-primary {
      background: var(--accent);
      color: white;
    }

    .btn-secondary {
      background: var(--bg-surface);
      color: var(--text-secondary);
      border: 1px solid var(--border);
    }

    .stats {
      display: flex;
      gap: var(--space-4);
      margin-bottom: var(--space-4);
      font-size: var(--text-sm);
      color: var(--text-muted);
    }

    .error-msg {
      background: rgba(239, 68, 68, 0.1);
      border: 1px solid rgba(239, 68, 68, 0.3);
      color: #ef4444;
      padding: var(--space-4);
      border-radius: var(--radius-md);
      text-align: center;
    }
  </style>
</head>
<body>
  <header class="header">
    <a href="/echo/index.html" class="logo">
      <i data-lucide="circle-dot"></i>
      Echo
    </a>
    <nav class="nav">
      <a href="/echo/index.html" class="nav-item">
        <i data-lucide="layout-list"></i>
        <span>Tasks</span>
      </a>
      <a href="/echo/notes.html" class="nav-item">
        <i data-lucide="file-text"></i>
        <span>Notes</span>
      </a>
      <a href="/echo/habits.html" class="nav-item">
        <i data-lucide="dumbbell"></i>
        <span>Habits</span>
      </a>
      <a href="/echo/files.html" class="nav-item">
        <i data-lucide="folder"></i>
        <span>Files</span>
      </a>
      <a href="/echo/grup-sprijin.html" class="nav-item active">
        <i data-lucide="heart-handshake"></i>
        <span>Grup</span>
      </a>
      <button class="theme-toggle" onclick="toggleTheme()" title="Schimbă tema">
        <i data-lucide="sun" id="themeIcon"></i>
      </button>
    </nav>
  </header>

  <main class="main">
    <div class="page-header">
      <h1 class="page-title">Grup Sprijin - Exerciții & Întrebări</h1>
      <div class="search-bar">
        <input type="text" class="input" placeholder="Caută..." id="searchInput">
      </div>
    </div>

    <div class="stats" id="stats"></div>

    <div class="fise-section" id="fiseSection" style="margin-bottom: var(--space-5); display: none;">
      <h2 style="font-size: var(--text-lg); margin-bottom: var(--space-3); color: var(--text-primary);">Fișe întâlniri</h2>
      <div class="fise-list" id="fiseList" style="display: flex; gap: var(--space-2); flex-wrap: wrap;"></div>
    </div>

    <div class="filters" id="filters">
      <button class="filter-btn active" data-filter="all">Toate</button>
      <button class="filter-btn" data-filter="exercitiu">Exerciții</button>
      <button class="filter-btn" data-filter="meditatie">Meditații</button>
      <button class="filter-btn" data-filter="intrebare">Întrebări</button>
      <button class="filter-btn" data-filter="reflectie">Reflecții</button>
      <button class="filter-btn" data-filter="unused">Nefolosite</button>
      <button class="filter-btn" data-filter="used">Folosite</button>
    </div>

    <div class="items-grid" id="itemsGrid">
      <p>Se încarcă...</p>
    </div>
  </main>

  <div class="modal" id="modal">
    <div class="modal-content">
      <div class="modal-header">
        <div>
          <h2 class="modal-title" id="modalTitle"></h2>
          <span class="item-type" id="modalType"></span>
        </div>
        <button class="modal-close" onclick="closeModal()">
          <i data-lucide="x"></i>
        </button>
      </div>
      <div class="modal-body" id="modalBody"></div>
      <div class="item-tags" id="modalTags"></div>
      <div class="modal-actions">
        <button class="btn btn-primary" id="markUsedBtn" onclick="toggleUsed()">Marchează folosit</button>
      </div>
    </div>
  </div>

  <script>
    // Theme
    function toggleTheme() {
      const body = document.body;
      const current = body.getAttribute('data-theme') || 'dark';
      const next = current === 'dark' ? 'light' : 'dark';
      body.setAttribute('data-theme', next);
      localStorage.setItem('theme', next);
      updateThemeIcon();
    }

    function updateThemeIcon() {
      const theme = document.body.getAttribute('data-theme') || 'dark';
      const icon = document.getElementById('themeIcon');
      if (icon) {
        icon.setAttribute('data-lucide', theme === 'dark' ? 'sun' : 'moon');
        lucide.createIcons();
      }
    }

    // Load saved theme
    const savedTheme = localStorage.getItem('theme') || 'dark';
    document.body.setAttribute('data-theme', savedTheme);

    // Data
    const API_BASE = window.location.pathname.includes('/echo/') ? '/echo' : '';
    let items = [];
    let currentFilter = 'all';
    let currentItem = null;

    async function loadItems() {
      try {
        const response = await fetch('grup-sprijin/index.json?t=' + Date.now());
        if (!response.ok) throw new Error('Nu am găsit fișierul');
        items = await response.json();
        render();
      } catch (e) {
        console.error('Error loading items:', e);
        document.getElementById('itemsGrid').innerHTML = `
          <div class="error-msg">
            Eroare la încărcare: ${e.message}<br>
            <small>Verifică dacă fișierul grup-sprijin/index.json există</small>
          </div>
        `;
      }
    }

    function render() {
      const search = document.getElementById('searchInput').value.toLowerCase();

      let filtered = items.filter(item => {
        if (search && !item.title.toLowerCase().includes(search) &&
            !item.content.toLowerCase().includes(search) &&
            !item.tags.some(t => t.toLowerCase().includes(search))) {
          return false;
        }

        if (currentFilter === 'all') return true;
        if (currentFilter === 'used') return item.used;
        if (currentFilter === 'unused') return !item.used;
        return item.type === currentFilter;
      });

      const total = items.length;
      const used = items.filter(i => i.used).length;
      document.getElementById('stats').innerHTML = `
        <span>Total: ${total}</span>
        <span>Folosite: ${used}</span>
        <span>Nefolosite: ${total - used}</span>
      `;

      const grid = document.getElementById('itemsGrid');
      if (filtered.length === 0) {
        grid.innerHTML = '<p style="color: var(--text-muted);">Niciun rezultat</p>';
        return;
      }

      grid.innerHTML = filtered.map(item => `
        <div class="item-card ${item.used ? 'used' : ''}" onclick="openModal('${item.id}')">
          <div class="item-header">
            <span class="item-title">${item.title}</span>
            <span class="item-type ${item.type}">${item.type}</span>
          </div>
          <div class="item-tags">
            ${item.tags.map(t => `<span class="tag">${t}</span>`).join('')}
          </div>
          ${item.used ? `<div class="item-used">✓ Folosit: ${item.used}</div>` : ''}
        </div>
      `).join('');

      lucide.createIcons();
    }

    function openModal(id) {
      currentItem = items.find(i => i.id === id);
      if (!currentItem) return;

      document.getElementById('modalTitle').textContent = currentItem.title;
      document.getElementById('modalType').textContent = currentItem.type;
      document.getElementById('modalType').className = `item-type ${currentItem.type}`;
      document.getElementById('modalBody').textContent = currentItem.content;
      document.getElementById('modalTags').innerHTML = currentItem.tags.map(t => `<span class="tag">${t}</span>`).join('');
      document.getElementById('markUsedBtn').textContent = currentItem.used ? 'Marchează nefolosit' : 'Marchează folosit';

      document.getElementById('modal').classList.add('open');
      lucide.createIcons();
    }

    function closeModal() {
      document.getElementById('modal').classList.remove('open');
      currentItem = null;
    }

    async function toggleUsed() {
      if (!currentItem) return;

      const idx = items.findIndex(i => i.id === currentItem.id);
      if (idx === -1) return;

      if (items[idx].used) {
        items[idx].used = null;
      } else {
        items[idx].used = new Date().toLocaleDateString('ro-RO');
      }

      try {
        await fetch(`${API_BASE}/api/files`, {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({
            path: 'grup-sprijin/index.json',
            content: JSON.stringify(items, null, 2)
          })
        });
      } catch (e) {
        console.error('Error saving:', e);
      }

      closeModal();
      render();
    }

    document.querySelectorAll('.filter-btn').forEach(btn => {
      btn.addEventListener('click', () => {
        document.querySelectorAll('.filter-btn').forEach(b => b.classList.remove('active'));
        btn.classList.add('active');
        currentFilter = btn.dataset.filter;
        render();
      });
    });

    document.getElementById('searchInput').addEventListener('input', render);

    document.getElementById('modal').addEventListener('click', (e) => {
      if (e.target.id === 'modal') closeModal();
    });

    // Load fise
    async function loadFise() {
      try {
        const response = await fetch(`${API_BASE}/api/files?path=kanban/grup-sprijin&action=list`);
        const data = await response.json();
        if (data.items) {
          const fise = data.items.filter(f => f.name.startsWith('fisa-') && f.name.endsWith('.md'));
          if (fise.length > 0) {
            document.getElementById('fiseSection').style.display = 'block';
            document.getElementById('fiseList').innerHTML = fise.map(f => `
              <a href="/echo/files.html#kanban/grup-sprijin/${f.name}" class="filter-btn" style="text-decoration: none;">
                ${f.name.replace('fisa-', '').replace('.md', '')}
              </a>
            `).join('');
          }
        }
      } catch (e) {
        console.log('No fise yet');
      }
    }

    // Init
    loadItems();
    loadFise();
    lucide.createIcons();
    updateThemeIcon();
  </script>
</body>
</html>
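For reference, the fields the page reads from `grup-sprijin/index.json` are `id`, `title`, `type` (one of `exercitiu`, `meditatie`, `intrebare`, `reflectie`), `content`, `tags`, and `used` (null, or a `ro-RO` date string once marked). A hypothetical entry:

```json
[
  {
    "id": "ex-001",
    "type": "exercitiu",
    "title": "Respirație 4-7-8",
    "content": "Inspiră 4 secunde, ține 7, expiră 8. Repetă de 4 ori.",
    "tags": ["respirație", "anxietate"],
    "used": null
  }
]
```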
3494
dashboard/habits.html
Normal file
File diff suppressed because it is too large
123
dashboard/habits.json
Normal file
@@ -0,0 +1,123 @@
{
  "lastUpdated": "2026-03-31T19:39:08.013266",
  "habits": [
    {
      "id": "95c15eef-3a14-4985-a61e-0b64b72851b0",
      "name": "Bazin \u0219i Saun\u0103",
      "category": "health",
      "color": "#EF4444",
      "icon": "target",
      "priority": 50,
      "notes": "",
      "reminderTime": "19:00",
      "frequency": {
        "type": "x_per_week",
        "count": 5
      },
      "streak": {
        "current": 1,
        "best": 6,
        "lastCheckIn": "2026-03-31"
      },
      "lives": 2,
      "completions": [
        {
          "date": "2026-02-11",
          "type": "check"
        },
        {
          "date": "2026-02-13",
          "type": "check"
        },
        {
          "date": "2026-02-14",
          "type": "check"
        },
        {
          "date": "2026-02-15",
          "type": "check"
        },
        {
          "date": "2026-02-16",
          "type": "check"
        },
        {
          "date": "2026-02-17",
          "type": "check"
        },
        {
          "date": "2026-02-18",
          "type": "check"
        },
        {
          "date": "2026-02-23",
          "type": "check"
        },
        {
          "date": "2026-03-31",
          "type": "check"
        }
      ],
      "createdAt": "2026-02-11T00:54:03.447063",
      "updatedAt": "2026-03-31T19:39:08.013266",
      "lastLivesAward": "2026-02-23"
    },
    {
      "id": "ceddaa7e-caf9-4038-94bb-da486c586bf8",
      "name": "Fotocitire",
      "category": "growth",
      "color": "#10B981",
      "icon": "camera",
      "priority": 30,
      "notes": "",
      "reminderTime": "",
      "frequency": {
        "type": "x_per_week",
        "count": 3
      },
      "streak": {
        "current": 1,
        "best": 6,
        "lastCheckIn": "2026-02-23"
      },
      "lives": 4,
      "completions": [
        {
          "date": "2026-02-11",
          "type": "check"
        },
        {
          "date": "2026-02-13",
          "type": "check"
        },
        {
          "date": "2026-02-14",
          "type": "check"
        },
        {
          "date": "2026-02-15",
          "type": "check"
        },
        {
          "date": "2026-02-16",
          "type": "check"
        },
        {
          "date": "2026-02-17",
          "type": "check"
        },
        {
          "date": "2026-02-18",
          "type": "check"
        },
        {
          "date": "2026-02-23",
          "type": "check"
        }
      ],
      "createdAt": "2026-02-11T01:58:44.779904",
      "updatedAt": "2026-02-23T13:08:19.884995",
      "lastLivesAward": "2026-02-23"
    }
  ]
}
387
dashboard/habits_helpers.py
Normal file
@@ -0,0 +1,387 @@
"""
Habit Tracker Helper Functions

This module provides core helper functions for calculating streaks,
checking relevance, and computing stats for habits.
"""

from datetime import datetime, timedelta
from typing import Dict, List, Any, Optional


def calculate_streak(habit: Dict[str, Any]) -> int:
    """
    Calculate the current streak for a habit based on its frequency type.
    Skips maintain the streak (don't break it) but don't count toward the total.

    Args:
        habit: Dict containing habit data with frequency, completions, etc.

    Returns:
        int: Current streak count (days, weeks, or months depending on frequency)
    """
    frequency_type = habit.get("frequency", {}).get("type", "daily")
    completions = habit.get("completions", [])

    if not completions:
        return 0

    # Sort completions by date (newest first)
    sorted_completions = sorted(
        [c for c in completions if c.get("date")],
        key=lambda x: x["date"],
        reverse=True
    )

    if not sorted_completions:
        return 0

    if frequency_type == "daily":
        return _calculate_daily_streak(sorted_completions)
    elif frequency_type == "specific_days":
        return _calculate_specific_days_streak(habit, sorted_completions)
    elif frequency_type == "x_per_week":
        return _calculate_x_per_week_streak(habit, sorted_completions)
    elif frequency_type == "weekly":
        return _calculate_weekly_streak(sorted_completions)
    elif frequency_type == "monthly":
        return _calculate_monthly_streak(sorted_completions)
    elif frequency_type == "custom":
        return _calculate_custom_streak(habit, sorted_completions)

    return 0


def _calculate_daily_streak(completions: List[Dict[str, Any]]) -> int:
    """
    Calculate streak for daily habits (consecutive days).
    Skips maintain the streak (don't break it) but don't count toward the total.
    """
    streak = 0
    today = datetime.now().date()
    expected_date = today

    for completion in completions:
        completion_date = datetime.fromisoformat(completion["date"]).date()
        completion_type = completion.get("type", "check")

        if completion_date == expected_date:
            # Only count 'check' completions toward streak total;
            # 'skip' completions maintain the streak but don't extend it
            if completion_type == "check":
                streak += 1
            expected_date = completion_date - timedelta(days=1)
        elif completion_date < expected_date:
            # Gap found, streak breaks
            break

    return streak


def _calculate_specific_days_streak(habit: Dict[str, Any], completions: List[Dict[str, Any]]) -> int:
    """Calculate streak for specific days habits (only count relevant days)."""
    relevant_days = set(habit.get("frequency", {}).get("days", []))
    if not relevant_days:
        return 0

    streak = 0
    today = datetime.now().date()
    current_date = today

    # Find the most recent relevant day
    while current_date.weekday() not in relevant_days:
        current_date -= timedelta(days=1)

    for completion in completions:
        completion_date = datetime.fromisoformat(completion["date"]).date()

        if completion_date == current_date:
            streak += 1
            # Move to previous relevant day
            current_date -= timedelta(days=1)
            while current_date.weekday() not in relevant_days:
                current_date -= timedelta(days=1)
        elif completion_date < current_date:
            # Check if we missed a relevant day
            temp_date = current_date
            found_gap = False
            while temp_date > completion_date:
                if temp_date.weekday() in relevant_days:
                    found_gap = True
                    break
                temp_date -= timedelta(days=1)
            if found_gap:
                break

    return streak


def _calculate_x_per_week_streak(habit: Dict[str, Any], completions: List[Dict[str, Any]]) -> int:
    """Calculate streak for x_per_week habits (consecutive days with check-ins).

    For x_per_week habits, streak counts consecutive DAYS with check-ins,
    not consecutive weeks meeting the target. The weekly target (e.g., 4/week)
    is a goal, but streak measures the chain of check-in days.
    """
    # Use the same logic as daily habits - count consecutive check-in days
    return _calculate_daily_streak(completions)


def _calculate_weekly_streak(completions: List[Dict[str, Any]]) -> int:
    """Calculate streak for weekly habits (consecutive days with check-ins).

    For weekly habits, streak counts consecutive DAYS with check-ins,
    just like daily habits. The weekly frequency just means you should
    check in at least once per week.
    """
    return _calculate_daily_streak(completions)


def _calculate_monthly_streak(completions: List[Dict[str, Any]]) -> int:
    """Calculate streak for monthly habits (consecutive days with check-ins).

    For monthly habits, streak counts consecutive DAYS with check-ins,
    just like daily habits. The monthly frequency just means you should
    check in at least once per month.
    """
    return _calculate_daily_streak(completions)


def _calculate_custom_streak(habit: Dict[str, Any], completions: List[Dict[str, Any]]) -> int:
    """Calculate streak for custom interval habits (every X days)."""
    interval = habit.get("frequency", {}).get("interval", 1)
    if interval <= 0:
        return 0

    streak = 0
    expected_date = datetime.now().date()

    for completion in completions:
        completion_date = datetime.fromisoformat(completion["date"]).date()

        # Allow completion within the interval window
        days_diff = (expected_date - completion_date).days
        if 0 <= days_diff <= interval - 1:
            streak += 1
            expected_date = completion_date - timedelta(days=interval)
        else:
            break

    return streak


def should_check_today(habit: Dict[str, Any]) -> bool:
    """
    Check if a habit is relevant for today based on its frequency type.

    Args:
        habit: Dict containing habit data with frequency settings

    Returns:
        bool: True if the habit should be checked today
    """
    frequency_type = habit.get("frequency", {}).get("type", "daily")
    today = datetime.now().date()
    weekday = today.weekday()  # 0=Monday, 6=Sunday

    if frequency_type == "daily":
        return True

    elif frequency_type == "specific_days":
        relevant_days = set(habit.get("frequency", {}).get("days", []))
        return weekday in relevant_days

    elif frequency_type == "x_per_week":
        # Always relevant for x_per_week (can check any day)
        return True

    elif frequency_type == "weekly":
        # Always relevant (can check any day of the week)
        return True

    elif frequency_type == "monthly":
        # Always relevant (can check any day of the month)
        return True

    elif frequency_type == "custom":
        # Check if enough days have passed since last completion
        completions = habit.get("completions", [])
        if not completions:
            return True

        interval = habit.get("frequency", {}).get("interval", 1)
        last_completion = max(completions, key=lambda x: x.get("date", ""))
        last_date = datetime.fromisoformat(last_completion["date"]).date()
        days_since = (today - last_date).days

        return days_since >= interval

    return False


def get_completion_rate(habit: Dict[str, Any], days: int = 30) -> float:
    """
    Calculate the completion rate as a percentage over the last N days.

    Args:
        habit: Dict containing habit data
        days: Number of days to look back (default 30)

    Returns:
        float: Completion rate as percentage (0-100)
    """
    frequency_type = habit.get("frequency", {}).get("type", "daily")
    completions = habit.get("completions", [])

    today = datetime.now().date()
    start_date = today - timedelta(days=days - 1)

    # Count relevant days and checked days
    relevant_days = 0
    checked_dates = set()

    for completion in completions:
        completion_date = datetime.fromisoformat(completion["date"]).date()
        if start_date <= completion_date <= today:
            checked_dates.add(completion_date)

    # Calculate relevant days based on frequency type
    if frequency_type == "daily":
        relevant_days = days

    elif frequency_type == "specific_days":
        relevant_day_set = set(habit.get("frequency", {}).get("days", []))
        current = start_date
        while current <= today:
            if current.weekday() in relevant_day_set:
                relevant_days += 1
            current += timedelta(days=1)

    elif frequency_type == "x_per_week":
        target_per_week = habit.get("frequency", {}).get("count", 1)
        num_weeks = days // 7
        relevant_days = num_weeks * target_per_week

    elif frequency_type == "weekly":
        num_weeks = days // 7
        relevant_days = num_weeks

    elif frequency_type == "monthly":
        num_months = days // 30
        relevant_days = num_months

    elif frequency_type == "custom":
        interval = habit.get("frequency", {}).get("interval", 1)
        relevant_days = days // interval if interval > 0 else 0

    if relevant_days == 0:
        return 0.0

    checked_days = len(checked_dates)
    return (checked_days / relevant_days) * 100


def get_weekly_summary(habit: Dict[str, Any]) -> Dict[str, str]:
    """
    Get a summary of the current week showing status for each day.

    Args:
        habit: Dict containing habit data

    Returns:
        Dict mapping day names to status: "checked", "skipped", "missed", or "upcoming"
    """
    frequency_type = habit.get("frequency", {}).get("type", "daily")
    completions = habit.get("completions", [])

    today = datetime.now().date()

    # Start of current week (Monday)
    start_of_week = today - timedelta(days=today.weekday())

    # Create completion map
    completion_map = {}
    for completion in completions:
        completion_date = datetime.fromisoformat(completion["date"]).date()
        if completion_date >= start_of_week:
            completion_type = completion.get("type", "check")
            completion_map[completion_date] = completion_type

    # Build summary for each day of the week
    summary = {}
    day_names = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]

    for i, day_name in enumerate(day_names):
        day_date = start_of_week + timedelta(days=i)

        if day_date > today:
            summary[day_name] = "upcoming"
        elif day_date in completion_map:
            if completion_map[day_date] == "skip":
                summary[day_name] = "skipped"
            else:
                summary[day_name] = "checked"
        else:
            # Check if this day was relevant
            if frequency_type == "specific_days":
                relevant_days = set(habit.get("frequency", {}).get("days", []))
                if day_date.weekday() not in relevant_days:
                    summary[day_name] = "not_relevant"
                else:
                    summary[day_name] = "missed"
            else:
                summary[day_name] = "missed"

    return summary


def check_and_award_weekly_lives(habit: Dict[str, Any]) -> tuple[int, bool]:
    """
    Check if habit qualifies for weekly lives recovery and award +1 life if eligible.

    Awards +1 life if:
    - At least one check-in in the previous week (Monday-Sunday)
    - Not already awarded this week

    Args:
        habit: Dict containing habit data with completions and lastLivesAward

    Returns:
        tuple[int, bool]: (new_lives_count, was_awarded)
    """
    completions = habit.get("completions", [])
    current_lives = habit.get("lives", 3)

    today = datetime.now().date()

    # Calculate current week start (Monday 00:00)
    current_week_start = today - timedelta(days=today.weekday())

    # Check if already awarded this week
    last_lives_award = habit.get("lastLivesAward")
    if last_lives_award:
        last_award_date = datetime.fromisoformat(last_lives_award).date()
        if last_award_date >= current_week_start:
            # Already awarded this week
            return (current_lives, False)

    # Calculate previous week boundaries
    previous_week_start = current_week_start - timedelta(days=7)
    previous_week_end = current_week_start - timedelta(days=1)

    # Count check-ins in previous week
    checkins_in_previous_week = 0
    for completion in completions:
        completion_date = datetime.fromisoformat(completion["date"]).date()
        completion_type = completion.get("type", "check")

        if previous_week_start <= completion_date <= previous_week_end:
            if completion_type == "check":
                checkins_in_previous_week += 1

    # Award life if at least 1 check-in found
    if checkins_in_previous_week >= 1:
        new_lives = current_lives + 1
        return (new_lives, True)

    return (current_lives, False)
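A minimal usage sketch (hypothetical habit dict; assumes it runs from `dashboard/`, where the module lives). It shows the documented skip semantics: a skip bridges the chain without being counted, while `get_completion_rate` counts any completion date, skip or check:

```python
from datetime import date, timedelta

import habits_helpers

today = date.today()
habit = {
    "frequency": {"type": "daily"},
    "completions": [
        {"date": today.isoformat(), "type": "check"},
        {"date": (today - timedelta(days=1)).isoformat(), "type": "skip"},
        {"date": (today - timedelta(days=2)).isoformat(), "type": "check"},
    ],
}

print(habits_helpers.calculate_streak(habit))             # 2: skip bridged, not counted
print(habits_helpers.get_completion_rate(habit, days=7))  # ~42.9: 3 of 7 days have entries
```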
7
dashboard/handlers/__init__.py
Normal file
@@ -0,0 +1,7 @@
"""Handler mixin modules for the Echo Task Board API.

Each module exposes a mixin class whose methods plug into
`TaskBoardHandler` (defined in dashboard/api.py). This keeps
api.py as a thin HTTP router while each concern lives in its
own small module.
"""
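A minimal sketch of how these mixins are assumed to compose (illustrative only: the real `dashboard/api.py` wires more mixins, routes, and static file serving):

```python
# Hypothetical skeleton; run from dashboard/ so `import constants` works inside the mixins.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

from handlers.cron import CronHandlers
from handlers.eco import EcoHandlers


class TaskBoardHandler(CronHandlers, EcoHandlers, BaseHTTPRequestHandler):
    def send_json(self, payload, status=200):
        # Shared helper every mixin relies on.
        body = json.dumps(payload).encode()
        self.send_response(status)
        self.send_header('Content-Type', 'application/json')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        if self.path.startswith('/api/cron'):
            self.handle_cron_status()
        elif self.path.startswith('/api/eco/status'):
            self.handle_eco_status()
        else:
            self.send_json({'error': 'Not found'}, 404)


if __name__ == '__main__':
    HTTPServer(('127.0.0.1', 8088), TaskBoardHandler).serve_forever()
```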
95
dashboard/handlers/cron.py
Normal file
@@ -0,0 +1,95 @@
"""/api/cron — reads echo-core/cron/jobs.json (flat schema)."""
import json
from datetime import datetime

import constants


def _parse_cron_time(expr):
    """Extract a display-time string from a cron expression.

    Echo-core cron strings are already Bucharest local time (Lane B
    scheduler sets tz=Europe/Bucharest), so NO UTC→local conversion.
    """
    parts = expr.split()
    if len(parts) < 2:
        return expr[:15]
    minute, hour = parts[0], parts[1]
    if minute.isdigit() and (hour.isdigit() or '-' in hour):
        if '-' in hour:
            hour = hour.split('-')[0]
        try:
            return f"{int(hour):02d}:{int(minute):02d}"
        except ValueError:
            return expr[:15]
    return expr[:15]


def _iso_to_epoch_ms(iso_str):
    """Convert an ISO 8601 datetime string to epoch ms. Returns 0 on failure."""
    if not iso_str:
        return 0
    try:
        dt = datetime.fromisoformat(iso_str.replace('Z', '+00:00'))
        return int(dt.timestamp() * 1000)
    except (ValueError, TypeError):
        return 0


class CronHandlers:
    """Mixin for /api/cron."""

    def handle_cron_status(self):
        """Get enabled cron jobs from echo-core/cron/jobs.json (flat schema).

        Output shape preserved for the frontend: id, name, time, schedule,
        ranToday, lastStatus, lastRunAtMs, nextRunAtMs.
        """
        try:
            jobs_file = constants.BASE_DIR / 'cron' / 'jobs.json'
            if not jobs_file.exists():
                self.send_json({'jobs': [], 'error': 'No jobs file found'})
                return

            all_jobs = json.loads(jobs_file.read_text())
            if not isinstance(all_jobs, list):
                self.send_json({'jobs': [], 'error': 'Unexpected jobs.json shape'})
                return

            today_start = datetime.now().replace(hour=0, minute=0, second=0, microsecond=0)
            today_start_ms = today_start.timestamp() * 1000

            jobs = []
            for job in all_jobs:
                if not job.get('enabled', False):
                    continue

                name = job.get('name', '')
                expr = job.get('cron', '')
                last_run_iso = job.get('last_run')
                next_run_iso = job.get('next_run')
                last_status = job.get('last_status', 'unknown')

                last_run_ms = _iso_to_epoch_ms(last_run_iso)
                next_run_ms = _iso_to_epoch_ms(next_run_iso) or None
                ran_today = last_run_ms >= today_start_ms

                jobs.append({
                    'id': name,  # echo-core has no separate id; use name
                    'name': name,
                    'time': _parse_cron_time(expr),
                    'schedule': expr,
                    'ranToday': ran_today,
                    'lastStatus': last_status if ran_today else None,
                    'lastRunAtMs': last_run_ms,
                    'nextRunAtMs': next_run_ms,
                })

            jobs.sort(key=lambda j: j['time'])
            self.send_json({
                'jobs': jobs,
                'total': len(jobs),
                'ranToday': sum(1 for j in jobs if j['ranToday']),
            })
        except Exception as e:
            self.send_json({'error': str(e)}, 500)
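The flat schema this handler expects is a top-level JSON list; the fields it reads are `name`, `cron`, `enabled`, `last_run`, `next_run`, and `last_status`. A hypothetical entry (times in Europe/Bucharest, per the docstring above):

```json
[
  {
    "name": "morning-briefing",
    "cron": "30 7 * * *",
    "enabled": true,
    "last_run": "2026-04-02T07:30:02+03:00",
    "next_run": "2026-04-03T07:30:00+03:00",
    "last_status": "ok"
  }
]
```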
378
dashboard/handlers/eco.py
Normal file
@@ -0,0 +1,378 @@
"""Echo Core (eco) service + session + doctor endpoints."""
import json
import os
import shutil
import subprocess
from datetime import datetime
from pathlib import Path
from urllib.parse import parse_qs, urlparse

import constants


class EcoHandlers:
    """Mixin for /api/eco/* endpoints."""

    # ── /api/eco/status ─────────────────────────────────────────
    def handle_eco_status(self):
        """Get status of echo-core services + active sessions."""
        try:
            services = []
            for svc in constants.ECO_SERVICES:
                info = {'name': svc, 'active': False, 'pid': None, 'uptime': None, 'memory': None}

                result = subprocess.run(
                    ['systemctl', '--user', 'is-active', svc],
                    capture_output=True, text=True, timeout=5,
                )
                info['active'] = result.stdout.strip() == 'active'

                if info['active']:
                    result = subprocess.run(
                        ['systemctl', '--user', 'show', '-p', 'MainPID', '--value', svc],
                        capture_output=True, text=True, timeout=5,
                    )
                    pid = result.stdout.strip()
                    if pid and pid != '0':
                        info['pid'] = int(pid)

                        try:
                            r = subprocess.run(
                                ['systemctl', '--user', 'show', '-p', 'ActiveEnterTimestamp', '--value', svc],
                                capture_output=True, text=True, timeout=5,
                            )
                            ts = r.stdout.strip()
                            if ts:
                                start = datetime.strptime(ts, '%a %Y-%m-%d %H:%M:%S %Z')
                                info['uptime'] = int((datetime.utcnow() - start).total_seconds())
                        except Exception:
                            pass

                        try:
                            for line in Path(f'/proc/{pid}/status').read_text().splitlines():
                                if line.startswith('VmRSS:'):
                                    info['memory'] = line.split(':')[1].strip()
                                    break
                        except Exception:
                            pass

                services.append(info)

            self.send_json({'services': services})
        except Exception as e:
            self.send_json({'error': str(e)}, 500)

    # ── sessions ────────────────────────────────────────────────
    def _eco_channel_map(self):
        """Build channel_id -> {name, platform, is_group} from config.json."""
        config_file = constants.ECHO_CORE_DIR / 'config.json'
        m = {}
        try:
            cfg = json.loads(config_file.read_text())
            for name, ch in cfg.get('channels', {}).items():
                m[str(ch['id'])] = {'name': name, 'platform': 'discord'}
            for name, ch in cfg.get('telegram_channels', {}).items():
                m[str(ch['id'])] = {'name': name, 'platform': 'telegram'}
            for name, ch in cfg.get('whatsapp_channels', {}).items():
                m[str(ch['id'])] = {'name': name, 'platform': 'whatsapp', 'is_group': True}
            for admin_id in cfg.get('bot', {}).get('admins', []):
                m.setdefault(str(admin_id), {'name': 'TG DM', 'platform': 'telegram'})
            wa_owner = cfg.get('whatsapp', {}).get('owner', '')
            if wa_owner:
                m.setdefault(f'wa-{wa_owner}', {'name': 'WA Owner', 'platform': 'whatsapp'})
        except Exception:
            pass
        return m

    def _eco_enrich_sessions(self):
        """Return enriched sessions list sorted by last_message_at desc."""
        raw = {}
        if constants.ECHO_SESSIONS_FILE.exists():
            try:
                raw = json.loads(constants.ECHO_SESSIONS_FILE.read_text())
            except Exception:
                pass
        cmap = self._eco_channel_map()
        sessions = []
        if isinstance(raw, dict):
            for ch_id, sdata in raw.items():
                if 'MagicMock' in ch_id:
                    continue
                entry = dict(sdata) if isinstance(sdata, dict) else {}
                entry['channel_id'] = ch_id
                if ch_id in cmap:
                    entry['platform'] = cmap[ch_id]['platform']
                    entry['channel_name'] = cmap[ch_id]['name']
                    entry['is_group'] = cmap[ch_id].get('is_group', False)
                elif ch_id.startswith('wa-') or '@g.us' in ch_id or '@s.whatsapp.net' in ch_id:
                    entry['platform'] = 'whatsapp'
                    entry['is_group'] = '@g.us' in ch_id
                    entry['channel_name'] = ('WA Grup' if entry['is_group'] else 'WA DM')
                elif ch_id.isdigit() and len(ch_id) >= 17:
                    entry['platform'] = 'discord'
                    entry['channel_name'] = 'Discord #' + ch_id[-6:]
                elif ch_id.isdigit():
                    entry['platform'] = 'telegram'
                    entry['channel_name'] = 'TG ' + ch_id
                else:
                    entry['platform'] = 'unknown'
                    entry['channel_name'] = ch_id[:20]
                sessions.append(entry)
        sessions.sort(key=lambda s: s.get('last_message_at', ''), reverse=True)
        return sessions

    def handle_eco_sessions(self):
        """Return enriched sessions list."""
        try:
            self.send_json({'sessions': self._eco_enrich_sessions()})
        except Exception as e:
            self.send_json({'error': str(e)}, 500)

    def handle_eco_session_content(self):
        """Return conversation messages from a Claude session transcript."""
        try:
            params = parse_qs(urlparse(self.path).query)
            session_id = params.get('id', [''])[0]
            if not session_id or '/' in session_id or '..' in session_id:
                self.send_json({'error': 'Invalid session id'}, 400)
                return

            transcript = Path.home() / '.claude' / 'projects' / '-home-moltbot-echo-core' / f'{session_id}.jsonl'
            if not transcript.exists():
                self.send_json({'messages': [], 'error': 'Transcript not found'})
                return

            messages = []
            for line in transcript.read_text().splitlines():
                try:
                    d = json.loads(line)
                except Exception:
                    continue
                t = d.get('type', '')
                if t == 'user':
                    msg = d.get('message', {})
                    content = msg.get('content', '')
                    if isinstance(content, str):
                        text = content.replace('[EXTERNAL CONTENT]\n', '').replace('\n[END EXTERNAL CONTENT]', '').strip()
                        if text:
                            messages.append({'role': 'user', 'text': text[:2000]})
                elif t == 'assistant':
                    msg = d.get('message', {})
                    content = msg.get('content', '')
                    if isinstance(content, list):
                        parts = [block['text'] for block in content if block.get('type') == 'text']
                        text = '\n'.join(parts).strip()
                        if text:
                            messages.append({'role': 'assistant', 'text': text[:2000]})
                    elif isinstance(content, str) and content.strip():
                        messages.append({'role': 'assistant', 'text': content[:2000]})

            self.send_json({'messages': messages})
        except Exception as e:
            self.send_json({'error': str(e)}, 500)

    def handle_eco_sessions_clear(self):
        """Clear active sessions (all or a specific channel)."""
        try:
            data = self._read_post_json()
            channel = data.get('channel', None)

            if not constants.ECHO_SESSIONS_FILE.exists():
                self.send_json({'success': True, 'message': 'No sessions file'})
                return

            if channel:
                sessions = json.loads(constants.ECHO_SESSIONS_FILE.read_text())
                if isinstance(sessions, list):
                    sessions = [s for s in sessions if s.get('channel') != channel]
                elif isinstance(sessions, dict):
                    sessions.pop(channel, None)
                constants.ECHO_SESSIONS_FILE.write_text(json.dumps(sessions, indent=2))
                self.send_json({'success': True, 'message': f'Cleared session: {channel}'})
            else:
                if isinstance(json.loads(constants.ECHO_SESSIONS_FILE.read_text()), list):
                    constants.ECHO_SESSIONS_FILE.write_text('[]')
                else:
                    constants.ECHO_SESSIONS_FILE.write_text('{}')
                self.send_json({'success': True, 'message': 'All sessions cleared'})
        except Exception as e:
            self.send_json({'success': False, 'error': str(e)}, 500)

    # ── logs + doctor ───────────────────────────────────────────
    def handle_eco_logs(self):
        """Return last N lines from echo-core.log."""
        try:
            params = parse_qs(urlparse(self.path).query)
            lines = min(int(params.get('lines', ['100'])[0]), 500)

            if not constants.ECHO_LOG_FILE.exists():
                self.send_json({'lines': ['(log file not found)']})
                return

            result = subprocess.run(
                ['tail', '-n', str(lines), str(constants.ECHO_LOG_FILE)],
                capture_output=True, text=True, timeout=10,
            )
            self.send_json({'lines': result.stdout.splitlines()})
        except Exception as e:
            self.send_json({'error': str(e)}, 500)

    def handle_eco_doctor(self):
        """Run health checks on the echo-core ecosystem."""
        checks = []

        # 1. Services
        for svc in constants.ECO_SERVICES:
            try:
                r = subprocess.run(
                    ['systemctl', '--user', 'is-active', svc],
                    capture_output=True, text=True, timeout=5,
                )
                active = r.stdout.strip() == 'active'
                checks.append({
                    'name': f'Service: {svc}',
                    'pass': active,
                    'detail': 'active' if active else r.stdout.strip(),
                })
            except Exception as e:
                checks.append({'name': f'Service: {svc}', 'pass': False, 'detail': str(e)})

        # 2. Disk space
        try:
            st = shutil.disk_usage('/')
            pct_free = (st.free / st.total) * 100
            checks.append({
                'name': 'Disk space',
                'pass': pct_free > 5,
                'detail': f'{pct_free:.1f}% free ({st.free // (1024**3)} GB)',
            })
        except Exception as e:
            checks.append({'name': 'Disk space', 'pass': False, 'detail': str(e)})

        # 3. Log file
        try:
            if constants.ECHO_LOG_FILE.exists():
                size_mb = constants.ECHO_LOG_FILE.stat().st_size / (1024 * 1024)
                checks.append({
                    'name': 'Log file',
                    'pass': size_mb < 100,
                    'detail': f'{size_mb:.1f} MB',
                })
            else:
                checks.append({'name': 'Log file', 'pass': False, 'detail': 'Not found'})
        except Exception as e:
            checks.append({'name': 'Log file', 'pass': False, 'detail': str(e)})

        # 4. Sessions file
        try:
            if constants.ECHO_SESSIONS_FILE.exists():
                data = json.loads(constants.ECHO_SESSIONS_FILE.read_text())
                count = len(data) if isinstance(data, list) else len(data.keys()) if isinstance(data, dict) else 0
                checks.append({'name': 'Sessions file', 'pass': True, 'detail': f'{count} active'})
            else:
                checks.append({'name': 'Sessions file', 'pass': False, 'detail': 'Not found'})
        except Exception as e:
            checks.append({'name': 'Sessions file', 'pass': False, 'detail': str(e)})

        # 5. Config
        config_file = constants.ECHO_CORE_DIR / 'config.json'
        try:
            if config_file.exists():
                json.loads(config_file.read_text())
                checks.append({'name': 'Config', 'pass': True, 'detail': 'Valid JSON'})
            else:
                checks.append({'name': 'Config', 'pass': False, 'detail': 'Not found'})
        except Exception as e:
            checks.append({'name': 'Config', 'pass': False, 'detail': str(e)})

        # 6. WhatsApp bridge log
        wa_log = constants.ECHO_CORE_DIR / 'logs' / 'whatsapp-bridge.log'
        try:
            if wa_log.exists():
                r = subprocess.run(['tail', '-1', str(wa_log)], capture_output=True, text=True, timeout=5)
                last = r.stdout.strip()
                has_error = 'error' in last.lower() or 'fatal' in last.lower()
                checks.append({
                    'name': 'WhatsApp bridge log',
                    'pass': not has_error,
                    'detail': last[:80] if last else 'Empty',
                })
            else:
                checks.append({'name': 'WhatsApp bridge log', 'pass': False, 'detail': 'Not found'})
        except Exception as e:
            checks.append({'name': 'WhatsApp bridge log', 'pass': False, 'detail': str(e)})

        # 7. Claude CLI
        try:
            r = subprocess.run(['which', 'claude'], capture_output=True, text=True, timeout=5)
            found = r.returncode == 0
            checks.append({
                'name': 'Claude CLI',
                'pass': found,
                'detail': r.stdout.strip() if found else 'Not in PATH',
            })
        except Exception as e:
            checks.append({'name': 'Claude CLI', 'pass': False, 'detail': str(e)})

        self.send_json({'checks': checks})

    # ── service control ─────────────────────────────────────────
    def handle_eco_restart(self):
        """Restart an echo-core service (not the taskboard itself)."""
        try:
            data = self._read_post_json()
            svc = data.get('service', '')

            if svc not in constants.ECO_SERVICES:
                self.send_json({'success': False, 'error': f'Unknown service: {svc}'}, 400)
                return
            if svc == 'echo-taskboard':
                self.send_json({'success': False, 'error': 'Cannot restart taskboard from itself'}, 400)
                return

            result = subprocess.run(
                ['systemctl', '--user', 'restart', svc],
                capture_output=True, text=True, timeout=30,
            )
            if result.returncode == 0:
                self.send_json({'success': True, 'message': f'{svc} restarted'})
            else:
                self.send_json({'success': False, 'error': result.stderr.strip()}, 500)
        except Exception as e:
            self.send_json({'success': False, 'error': str(e)}, 500)

    def handle_eco_stop(self):
        """Stop an echo-core service (not the taskboard itself)."""
        try:
            data = self._read_post_json()
            svc = data.get('service', '')

            if svc not in constants.ECO_SERVICES:
                self.send_json({'success': False, 'error': f'Unknown service: {svc}'}, 400)
                return
            if svc == 'echo-taskboard':
                self.send_json({'success': False, 'error': 'Cannot stop taskboard from itself'}, 400)
                return

            result = subprocess.run(
                ['systemctl', '--user', 'stop', svc],
                capture_output=True, text=True, timeout=30,
            )
            if result.returncode == 0:
                self.send_json({'success': True, 'message': f'{svc} stopped'})
|
||||
else:
|
||||
self.send_json({'success': False, 'error': result.stderr.strip()}, 500)
|
||||
except Exception as e:
|
||||
self.send_json({'success': False, 'error': str(e)}, 500)
|
||||
|
||||
def handle_eco_restart_taskboard(self):
|
||||
"""Restart the taskboard itself. Sends response then exits; systemd restarts."""
|
||||
import threading
|
||||
self.send_json({'success': True, 'message': 'Restarting taskboard in 1s...'})
|
||||
|
||||
def _exit():
|
||||
import time
|
||||
time.sleep(1)
|
||||
os._exit(0)
|
||||
|
||||
threading.Thread(target=_exit, daemon=True).start()
|
||||
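A quick usage sketch for the log and doctor handlers above. The `/api/eco/logs` and `/api/eco/doctor` paths are inferred from the handler names (this hunk never states the routes), and the `localhost:8088` base address is an assumption; adjust both to the actual route table.

```python
import json
import urllib.request

BASE = 'http://localhost:8088'  # assumed dashboard address

# Health checks (handle_eco_doctor): each entry has name / pass / detail.
with urllib.request.urlopen(f'{BASE}/api/eco/doctor', timeout=15) as resp:
    for check in json.load(resp)['checks']:
        print('PASS' if check['pass'] else 'FAIL', check['name'], '-', check['detail'])

# Last 50 log lines (handle_eco_logs caps the count at 500).
with urllib.request.urlopen(f'{BASE}/api/eco/logs?lines=50', timeout=15) as resp:
    print('\n'.join(json.load(resp)['lines']))
```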
120
dashboard/handlers/files.py
Normal file
@@ -0,0 +1,120 @@
"""File-browser + note-index endpoints (sandbox-enforced)."""
import json
import re
import subprocess
import sys
from urllib.parse import parse_qs, urlparse

import constants


class FilesHandlers:
    """Mixin for /api/files, /api/refresh-index."""

    def _resolve_sandboxed(self, path):
        """Resolve `path` against ALLOWED_WORKSPACES. Returns (target, workspace) or (None, None)."""
        allowed_dirs = constants.ALLOWED_WORKSPACES
        for base in allowed_dirs:
            try:
                candidate = (base / path).resolve()
                if any(str(candidate).startswith(str(d)) for d in allowed_dirs):
                    return candidate, base
            except Exception:
                continue
        return None, None

    def handle_files_get(self):
        """List files or get file content."""
        params = parse_qs(urlparse(self.path).query)
        path = params.get('path', [''])[0]
        action = params.get('action', ['list'])[0]

        target, workspace = self._resolve_sandboxed(path)
        if target is None:
            self.send_json({'error': 'Access denied'}, 403)
            return

        if action != 'list':
            self.send_json({'error': 'Unknown action'}, 400)
            return

        if not target.exists():
            self.send_json({'error': 'Path not found'}, 404)
            return

        if target.is_file():
            try:
                content = target.read_text(encoding='utf-8', errors='replace')
                self.send_json({
                    'type': 'file',
                    'path': path,
                    'name': target.name,
                    'content': content[:100000],
                    'size': target.stat().st_size,
                    'truncated': target.stat().st_size > 100000,
                })
            except Exception as e:
                self.send_json({'error': str(e)}, 500)
        else:
            items = []
            try:
                for item in sorted(target.iterdir()):
                    stat = item.stat()
                    item_path = f"{path}/{item.name}" if path else item.name
                    items.append({
                        'name': item.name,
                        'type': 'dir' if item.is_dir() else 'file',
                        'size': stat.st_size if item.is_file() else None,
                        'mtime': stat.st_mtime,
                        'path': item_path,
                    })
                self.send_json({'type': 'dir', 'path': path, 'items': items})
            except Exception as e:
                self.send_json({'error': str(e)}, 500)

    def handle_files_post(self):
        """Save file content."""
        try:
            content_length = int(self.headers['Content-Length'])
            post_data = self.rfile.read(content_length).decode('utf-8')
            data = json.loads(post_data)

            path = data.get('path', '')
            content = data.get('content', '')

            target, workspace = self._resolve_sandboxed(path)
            if target is None:
                self.send_json({'error': 'Access denied'}, 403)
                return

            target.parent.mkdir(parents=True, exist_ok=True)
            target.write_text(content, encoding='utf-8')

            self.send_json({'status': 'saved', 'path': path, 'size': len(content)})
        except Exception as e:
            self.send_json({'error': str(e)}, 500)

    def handle_refresh_index(self):
        """Regenerate memory/kb/index.json by running tools/update_notes_index.py."""
        try:
            script = constants.TOOLS_DIR / 'update_notes_index.py'
            result = subprocess.run(
                [sys.executable, str(script)],
                capture_output=True, text=True, timeout=30,
            )
            if result.returncode == 0:
                output = result.stdout
                total_match = re.search(r'with (\d+) notes', output)
                total = int(total_match.group(1)) if total_match else 0
                self.send_json({
                    'success': True,
                    'message': f'Index regenerat cu {total} notițe',
                    'total': total,
                    'output': output,
                })
            else:
                self.send_json({'success': False, 'error': result.stderr or 'Unknown error'}, 500)
        except subprocess.TimeoutExpired:
            self.send_json({'success': False, 'error': 'Timeout'}, 500)
        except Exception as e:
            self.send_json({'success': False, 'error': str(e)}, 500)
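As a usage sketch for the two routes this mixin's docstring names: the query and body shapes below mirror `handle_files_get` and `handle_files_post` exactly, while the `localhost:8088` base and the sample paths are assumptions.

```python
import json
import urllib.request

BASE = 'http://localhost:8088'  # assumed dashboard address

# GET /api/files — returns {'type': 'dir', 'items': [...]} or {'type': 'file', ...}.
with urllib.request.urlopen(f'{BASE}/api/files?path=&action=list', timeout=10) as resp:
    listing = json.load(resp)
    print(listing['type'], [item['name'] for item in listing.get('items', [])])

# POST /api/files — body shape per handle_files_post; the path is sandbox-resolved.
body = json.dumps({'path': 'scratch.md', 'content': '# scratch\n'}).encode()
req = urllib.request.Request(f'{BASE}/api/files', data=body,
                             headers={'Content-Type': 'application/json'})
with urllib.request.urlopen(req, timeout=10) as resp:
    print(json.load(resp))  # {'status': 'saved', 'path': ..., 'size': ...}
```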
314
dashboard/handlers/git.py
Normal file
@@ -0,0 +1,314 @@
"""Git status / diff / commit handlers for dashboard + workspace projects."""
import json
import subprocess
import urllib.error
import urllib.request
from datetime import datetime
from urllib.parse import parse_qs, urlparse

import constants


class GitHandlers:
    """Mixin providing git status/diff/commit endpoints."""

    # ── shared helper ────────────────────────────────────────────
    def _run_git(self, workspace, args, timeout=5):
        """Run a git command in workspace. Returns CompletedProcess."""
        return subprocess.run(
            ['git', *args],
            cwd=str(workspace),
            capture_output=True,
            text=True,
            timeout=timeout,
        )

    # ── /api/git (dashboard repo) ───────────────────────────────
    def handle_git_status(self):
        """Get git status for the echo-core repo."""
        try:
            workspace = constants.GIT_WORKSPACE

            branch = self._run_git(workspace, ['branch', '--show-current']).stdout.strip()
            last_commit = self._run_git(workspace, ['log', '-1', '--format=%h|%s|%cr']).stdout.strip()
            commit_parts = last_commit.split('|') if last_commit else ['', '', '']

            status_output = self._run_git(workspace, ['status', '--short']).stdout.strip()
            uncommitted = [f for f in status_output.split('\n') if f.strip()] if status_output else []

            diff_stat = ''
            if uncommitted:
                diff_stat = self._run_git(workspace, ['diff', '--stat', '--cached']).stdout.strip()
                if not diff_stat:
                    diff_stat = self._run_git(workspace, ['diff', '--stat']).stdout.strip()

            uncommitted_parsed = []
            for line in uncommitted:
                if len(line) >= 2:
                    status = line[:2].strip()
                    filepath = line[2:].strip()
                    if filepath:
                        uncommitted_parsed.append({'status': status, 'path': filepath})

            self.send_json({
                'branch': branch,
                'lastCommit': {
                    'hash': commit_parts[0] if len(commit_parts) > 0 else '',
                    'message': commit_parts[1] if len(commit_parts) > 1 else '',
                    'time': commit_parts[2] if len(commit_parts) > 2 else '',
                },
                'uncommitted': uncommitted,
                'uncommittedParsed': uncommitted_parsed,
                'uncommittedCount': len(uncommitted),
                'diffStat': diff_stat,
                'clean': len(uncommitted) == 0,
            })
        except Exception as e:
            self.send_json({'error': str(e)}, 500)

    # ── /api/diff ────────────────────────────────────────────────
    def handle_git_diff(self):
        """Get git diff for a specific file."""
        params = parse_qs(urlparse(self.path).query)
        filepath = params.get('path', [''])[0]

        if not filepath:
            self.send_json({'error': 'path required'}, 400)
            return

        try:
            workspace = constants.GIT_WORKSPACE

            target = (workspace / filepath).resolve()
            if not str(target).startswith(str(workspace)):
                self.send_json({'error': 'Access denied'}, 403)
                return

            diff = self._run_git(workspace, ['diff', '--cached', '--', filepath], timeout=10).stdout
            if not diff:
                diff = self._run_git(workspace, ['diff', '--', filepath], timeout=10).stdout

            if not diff:
                status = self._run_git(workspace, ['status', '--short', '--', filepath]).stdout.strip()
                if status.startswith('??') and target.exists():
                    content = target.read_text(encoding='utf-8', errors='replace')[:50000]
                    diff = f"+++ b/{filepath}\n" + '\n'.join(f'+{line}' for line in content.split('\n'))

            self.send_json({
                'path': filepath,
                'diff': diff or 'No changes',
                'hasDiff': bool(diff),
            })
        except Exception as e:
            self.send_json({'error': str(e)}, 500)

    # ── /api/eco/git (echo-core repo) ────────────────────────────
    def handle_eco_git_status(self):
        """Get git status for echo-core repo."""
        try:
            workspace = constants.ECHO_CORE_DIR

            branch = self._run_git(workspace, ['branch', '--show-current']).stdout.strip()
            last_commit = self._run_git(workspace, ['log', '-1', '--format=%h|%s|%cr']).stdout.strip()
            commit_parts = last_commit.split('|') if last_commit else ['', '', '']

            status_output = self._run_git(workspace, ['status', '--short']).stdout.strip()
            uncommitted = [f for f in status_output.split('\n') if f.strip()] if status_output else []

            uncommitted_parsed = []
            for line in uncommitted:
                if len(line) >= 2:
                    status = line[:2].strip()
                    filepath = line[2:].strip()
                    if filepath:
                        uncommitted_parsed.append({'status': status, 'path': filepath})

            self.send_json({
                'branch': branch,
                'clean': len(uncommitted) == 0,
                'uncommittedCount': len(uncommitted),
                'uncommittedParsed': uncommitted_parsed,
                'lastCommit': {
                    'hash': commit_parts[0] if len(commit_parts) > 0 else '',
                    'message': commit_parts[1] if len(commit_parts) > 1 else '',
                    'time': commit_parts[2] if len(commit_parts) > 2 else '',
                },
            })
        except Exception as e:
            self.send_json({'error': str(e)}, 500)

    def handle_eco_git_commit(self):
        """Run git add, commit, and push for echo-core repo."""
        try:
            workspace = constants.ECHO_CORE_DIR

            self._run_git(workspace, ['add', '-A'], timeout=10)

            status = self._run_git(workspace, ['status', '--porcelain']).stdout.strip()
            if not status:
                self.send_json({'success': True, 'files': 0, 'output': 'Nothing to commit'})
                return

            files_count = len([l for l in status.split('\n') if l.strip()])

            commit_result = self._run_git(workspace, ['commit', '-m', 'chore: auto-commit from dashboard'], timeout=30)
            push_result = self._run_git(workspace, ['push'], timeout=30)

            output = commit_result.stdout + commit_result.stderr + push_result.stdout + push_result.stderr

            if commit_result.returncode == 0:
                self.send_json({'success': True, 'files': files_count, 'output': output})
            else:
                self.send_json({'success': False, 'error': output or 'Commit failed'})
        except Exception as e:
            self.send_json({'success': False, 'error': str(e)}, 500)

    # ── /api/workspace/git/* (per-project) ───────────────────────
    def handle_workspace_git_diff(self):
        """Get git diff for a workspace project."""
        try:
            params = parse_qs(urlparse(self.path).query)
            project_name = params.get('project', [''])[0]

            project_dir = self._validate_project(project_name)
            if not project_dir:
                self.send_json({'error': 'Invalid project'}, 400)
                return

            if not (project_dir / '.git').exists():
                self.send_json({'error': 'Not a git repository'}, 400)
                return

            status = self._run_git(project_dir, ['status', '--short'], timeout=10).stdout.strip()
            diff = self._run_git(project_dir, ['diff'], timeout=10).stdout
            diff_cached = self._run_git(project_dir, ['diff', '--cached'], timeout=10).stdout

            combined_diff = ''
            if diff_cached:
                combined_diff += '=== Staged Changes ===\n' + diff_cached
            if diff:
                if combined_diff:
                    combined_diff += '\n'
                combined_diff += '=== Unstaged Changes ===\n' + diff

            self.send_json({
                'project': project_name,
                'status': status,
                'diff': combined_diff,
                'hasDiff': bool(status),
            })
        except subprocess.TimeoutExpired:
            self.send_json({'error': 'Timeout'}, 500)
        except Exception as e:
            self.send_json({'error': str(e)}, 500)

    def handle_workspace_git_commit(self):
        """Commit all changes in a workspace project."""
        try:
            data = self._read_post_json()
            project_name = data.get('project', '')
            message = data.get('message', '').strip()

            project_dir = self._validate_project(project_name)
            if not project_dir:
                self.send_json({'success': False, 'error': 'Invalid project'}, 400)
                return

            if not (project_dir / '.git').exists():
                self.send_json({'success': False, 'error': 'Not a git repository'}, 400)
                return

            porcelain = self._run_git(project_dir, ['status', '--porcelain'], timeout=10).stdout.strip()
            if not porcelain:
                self.send_json({'success': False, 'error': 'Nothing to commit'})
                return

            files_changed = len([l for l in porcelain.split('\n') if l.strip()])

            if not message:
                now = datetime.now().strftime('%Y-%m-%d %H:%M')
                message = f'Update: {now} ({files_changed} files)'

            self._run_git(project_dir, ['add', '-A'], timeout=10)

            result = self._run_git(project_dir, ['commit', '-m', message], timeout=30)
            output = result.stdout + result.stderr

            if result.returncode == 0:
                self.send_json({
                    'success': True,
                    'message': message,
                    'output': output,
                    'filesChanged': files_changed,
                })
            else:
                self.send_json({'success': False, 'error': output or 'Commit failed'})
        except subprocess.TimeoutExpired:
            self.send_json({'success': False, 'error': 'Timeout'}, 500)
        except Exception as e:
            self.send_json({'success': False, 'error': str(e)}, 500)

    def _ensure_gitea_remote(self, project_dir, project_name):
        """Create Gitea repo and add remote if no origin exists. Returns (ok, message)."""
        if not constants.GITEA_TOKEN:
            return False, 'GITEA_TOKEN not set'

        api_url = f'{constants.GITEA_URL}/api/v1/orgs/{constants.GITEA_ORG}/repos'
        payload = json.dumps({'name': project_name, 'private': True, 'auto_init': False}).encode()
        req = urllib.request.Request(api_url, data=payload, method='POST', headers={
            'Authorization': f'token {constants.GITEA_TOKEN}',
            'Content-Type': 'application/json',
        })
        try:
            resp = urllib.request.urlopen(req, timeout=15)
            resp.read()
        except urllib.error.HTTPError as e:
            body = e.read().decode(errors='replace')
            if e.code == 409:
                pass  # repo already exists — fine
            else:
                return False, f'Gitea API error {e.code}: {body}'

        remote_url = f'{constants.GITEA_URL}/{constants.GITEA_ORG}/{project_name}.git'
        auth_url = remote_url.replace('https://', f'https://gitea:{constants.GITEA_TOKEN}@')
        subprocess.run(
            ['git', 'remote', 'add', 'origin', auth_url],
            cwd=str(project_dir), capture_output=True, text=True, timeout=5,
        )
        return True, f'Created repo {constants.GITEA_ORG}/{project_name}'

    def handle_workspace_git_push(self):
        """Push a workspace project to its remote, creating Gitea repo if needed."""
        try:
            data = self._read_post_json()
            project_name = data.get('project', '')

            project_dir = self._validate_project(project_name)
            if not project_dir:
                self.send_json({'success': False, 'error': 'Invalid project'}, 400)
                return

            if not (project_dir / '.git').exists():
                self.send_json({'success': False, 'error': 'Not a git repository'}, 400)
                return

            created_msg = ''
            remote_check = self._run_git(project_dir, ['remote', 'get-url', 'origin'], timeout=10)
            if remote_check.returncode != 0:
                ok, msg = self._ensure_gitea_remote(project_dir, project_name)
                if not ok:
                    self.send_json({'success': False, 'error': msg})
                    return
                created_msg = msg + '\n'

            result = self._run_git(project_dir, ['push', '-u', 'origin', 'HEAD'], timeout=60)
            output = result.stdout + result.stderr

            if result.returncode == 0:
                self.send_json({'success': True, 'output': created_msg + (output or 'Pushed successfully')})
            else:
                self.send_json({'success': False, 'error': output or 'Push failed'})
        except subprocess.TimeoutExpired:
            self.send_json({'success': False, 'error': 'Push timeout (60s)'}, 500)
        except Exception as e:
            self.send_json({'success': False, 'error': str(e)}, 500)
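A client-side sketch for the per-project git endpoints, using the route names from the section comments above. The `demo` project name is hypothetical and the base URL is an assumption; the payload keys match `handle_workspace_git_commit` and `handle_workspace_git_push`.

```python
import json
import urllib.request

BASE = 'http://localhost:8088'  # assumed dashboard address

def post(path, payload, timeout=70):
    """POST JSON and decode the JSON reply (push can take up to 60s server-side)."""
    req = urllib.request.Request(f'{BASE}{path}', data=json.dumps(payload).encode(),
                                 headers={'Content-Type': 'application/json'})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)

# Commit everything in the project, then push (first push creates the Gitea repo).
print(post('/api/workspace/git/commit', {'project': 'demo', 'message': 'wip'}))
print(post('/api/workspace/git/push', {'project': 'demo'}))
```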
391
dashboard/handlers/habits.py
Normal file
@@ -0,0 +1,391 @@
"""Habit tracking endpoints (CRUD + check / skip / uncheck)."""
import json
import re
import uuid
from datetime import datetime
from urllib.parse import parse_qs, urlparse

import constants
import habits_helpers


def _enrich(habit):
    """Return habit with calculated stats added."""
    enriched = habit.copy()
    enriched['current_streak'] = habits_helpers.calculate_streak(habit)
    enriched['best_streak'] = habit.get('streak', {}).get('best', 0)
    enriched['completion_rate_30d'] = habits_helpers.get_completion_rate(habit, days=30)
    enriched['weekly_summary'] = habits_helpers.get_weekly_summary(habit)
    enriched['should_check_today'] = habits_helpers.should_check_today(habit)
    return enriched


class HabitsHandlers:
    """Mixin providing /api/habits endpoints."""

    def handle_habits_get(self):
        """Return all habits with enriched stats."""
        try:
            if not constants.HABITS_FILE.exists():
                self.send_json([])
                return

            with open(constants.HABITS_FILE, 'r', encoding='utf-8') as f:
                data = json.load(f)

            enriched = [_enrich(h) for h in data.get('habits', [])]
            enriched.sort(key=lambda h: h.get('priority', 999))
            self.send_json(enriched)
        except Exception as e:
            self.send_json({'error': str(e)}, 500)

    def handle_habits_post(self):
        """Create a new habit."""
        try:
            content_length = int(self.headers['Content-Length'])
            post_data = self.rfile.read(content_length).decode('utf-8')
            data = json.loads(post_data)

            name = data.get('name', '').strip()
            if not name:
                self.send_json({'error': 'name is required'}, 400)
                return
            if len(name) > 100:
                self.send_json({'error': 'name must be max 100 characters'}, 400)
                return

            color = data.get('color', '#3b82f6')
            if color and not re.match(r'^#[0-9A-Fa-f]{6}$', color):
                self.send_json({'error': 'color must be valid hex format (#RRGGBB)'}, 400)
                return

            frequency_type = data.get('frequency', {}).get('type', 'daily')
            valid_types = ['daily', 'specific_days', 'x_per_week', 'weekly', 'monthly', 'custom']
            if frequency_type not in valid_types:
                self.send_json({'error': f'frequency.type must be one of: {", ".join(valid_types)}'}, 400)
                return

            habit_id = str(uuid.uuid4())
            now = datetime.now().isoformat()

            new_habit = {
                'id': habit_id,
                'name': name,
                'category': data.get('category', 'other'),
                'color': color,
                'icon': data.get('icon', 'check-circle'),
                'priority': data.get('priority', 5),
                'notes': data.get('notes', ''),
                'reminderTime': data.get('reminderTime', ''),
                'frequency': data.get('frequency', {'type': 'daily'}),
                'streak': {'current': 0, 'best': 0, 'lastCheckIn': None},
                'lives': 3,
                'completions': [],
                'createdAt': now,
                'updatedAt': now,
            }

            if constants.HABITS_FILE.exists():
                with open(constants.HABITS_FILE, 'r', encoding='utf-8') as f:
                    habits_data = json.load(f)
            else:
                habits_data = {'lastUpdated': '', 'habits': []}

            habits_data['habits'].append(new_habit)
            habits_data['lastUpdated'] = now

            with open(constants.HABITS_FILE, 'w', encoding='utf-8') as f:
                json.dump(habits_data, f, indent=2)

            self.send_json(new_habit, 201)
        except json.JSONDecodeError:
            self.send_json({'error': 'Invalid JSON'}, 400)
        except Exception as e:
            self.send_json({'error': str(e)}, 500)

    def handle_habits_put(self):
        """Update an existing habit."""
        try:
            path_parts = self.path.split('/')
            if len(path_parts) < 4:
                self.send_json({'error': 'Invalid path'}, 400)
                return
            habit_id = path_parts[3]

            content_length = int(self.headers['Content-Length'])
            post_data = self.rfile.read(content_length).decode('utf-8')
            data = json.loads(post_data)

            if not constants.HABITS_FILE.exists():
                self.send_json({'error': 'Habit not found'}, 404)
                return

            with open(constants.HABITS_FILE, 'r', encoding='utf-8') as f:
                habits_data = json.load(f)

            habits = habits_data.get('habits', [])
            habit_index = next((i for i, h in enumerate(habits) if h['id'] == habit_id), None)
            if habit_index is None:
                self.send_json({'error': 'Habit not found'}, 404)
                return

            if 'name' in data:
                name = data['name'].strip()
                if not name:
                    self.send_json({'error': 'name cannot be empty'}, 400)
                    return
                if len(name) > 100:
                    self.send_json({'error': 'name must be max 100 characters'}, 400)
                    return
            if 'color' in data:
                color = data['color']
                if color and not re.match(r'^#[0-9A-Fa-f]{6}$', color):
                    self.send_json({'error': 'color must be valid hex format (#RRGGBB)'}, 400)
                    return
            if 'frequency' in data:
                frequency_type = data.get('frequency', {}).get('type', 'daily')
                valid_types = ['daily', 'specific_days', 'x_per_week', 'weekly', 'monthly', 'custom']
                if frequency_type not in valid_types:
                    self.send_json({'error': f'frequency.type must be one of: {", ".join(valid_types)}'}, 400)
                    return

            allowed_fields = ['name', 'category', 'color', 'icon', 'priority', 'notes', 'frequency', 'reminderTime']
            habit = habits[habit_index]
            for field in allowed_fields:
                if field in data:
                    habit[field] = data[field]

            habit['updatedAt'] = datetime.now().isoformat()
            habits_data['lastUpdated'] = habit['updatedAt']
            with open(constants.HABITS_FILE, 'w', encoding='utf-8') as f:
                json.dump(habits_data, f, indent=2)

            self.send_json(habit)
        except json.JSONDecodeError:
            self.send_json({'error': 'Invalid JSON'}, 400)
        except Exception as e:
            self.send_json({'error': str(e)}, 500)

    def handle_habits_delete(self):
        """Delete a habit."""
        try:
            path_parts = self.path.split('/')
            if len(path_parts) < 4:
                self.send_json({'error': 'Invalid path'}, 400)
                return
            habit_id = path_parts[3]

            if not constants.HABITS_FILE.exists():
                self.send_json({'error': 'Habit not found'}, 404)
                return

            with open(constants.HABITS_FILE, 'r', encoding='utf-8') as f:
                habits_data = json.load(f)

            habits = habits_data.get('habits', [])
            habit_found = False
            for i, habit in enumerate(habits):
                if habit['id'] == habit_id:
                    habits.pop(i)
                    habit_found = True
                    break

            if not habit_found:
                self.send_json({'error': 'Habit not found'}, 404)
                return

            habits_data['lastUpdated'] = datetime.now().isoformat()
            with open(constants.HABITS_FILE, 'w', encoding='utf-8') as f:
                json.dump(habits_data, f, indent=2)

            self.send_response(204)
            self.send_header('Access-Control-Allow-Origin', '*')
            self.end_headers()
        except Exception as e:
            self.send_json({'error': str(e)}, 500)

    def handle_habits_check(self):
        """Check in on a habit for today."""
        try:
            path_parts = self.path.split('/')
            if len(path_parts) < 5:
                self.send_json({'error': 'Invalid path'}, 400)
                return
            habit_id = path_parts[3]

            body_data = {}
            content_length = self.headers.get('Content-Length')
            if content_length:
                post_data = self.rfile.read(int(content_length)).decode('utf-8')
                if post_data.strip():
                    try:
                        body_data = json.loads(post_data)
                    except json.JSONDecodeError:
                        self.send_json({'error': 'Invalid JSON'}, 400)
                        return

            if not constants.HABITS_FILE.exists():
                self.send_json({'error': 'Habit not found'}, 404)
                return

            with open(constants.HABITS_FILE, 'r', encoding='utf-8') as f:
                habits_data = json.load(f)

            habit = next((h for h in habits_data.get('habits', []) if h['id'] == habit_id), None)
            if not habit:
                self.send_json({'error': 'Habit not found'}, 404)
                return

            if not habits_helpers.should_check_today(habit):
                self.send_json({'error': 'Habit is not relevant for today based on its frequency'}, 400)
                return

            today = datetime.now().date().isoformat()
            for completion in habit.get('completions', []):
                if completion.get('date') == today:
                    self.send_json({'error': 'Habit already checked in today'}, 409)
                    return

            completion_entry = {'date': today, 'type': 'check'}
            if 'note' in body_data:
                completion_entry['note'] = body_data['note']
            if 'rating' in body_data:
                rating = body_data['rating']
                if not isinstance(rating, int) or rating < 1 or rating > 5:
                    self.send_json({'error': 'rating must be an integer between 1 and 5'}, 400)
                    return
                completion_entry['rating'] = rating
            if 'mood' in body_data:
                mood = body_data['mood']
                if mood not in ['happy', 'neutral', 'sad']:
                    self.send_json({'error': 'mood must be one of: happy, neutral, sad'}, 400)
                    return
                completion_entry['mood'] = mood

            habit['completions'].append(completion_entry)

            current_streak = habits_helpers.calculate_streak(habit)
            habit['streak']['current'] = current_streak
            if current_streak > habit['streak']['best']:
                habit['streak']['best'] = current_streak
            habit['streak']['lastCheckIn'] = today

            new_lives, was_awarded = habits_helpers.check_and_award_weekly_lives(habit)
            lives_awarded_this_checkin = False
            if was_awarded:
                habit['lives'] = new_lives
                habit['lastLivesAward'] = today
                lives_awarded_this_checkin = True

            habit['updatedAt'] = datetime.now().isoformat()
            habits_data['lastUpdated'] = habit['updatedAt']

            with open(constants.HABITS_FILE, 'w', encoding='utf-8') as f:
                json.dump(habits_data, f, indent=2)

            enriched = _enrich(habit)
            enriched['livesAwarded'] = lives_awarded_this_checkin
            self.send_json(enriched, 200)
        except Exception as e:
            self.send_json({'error': str(e)}, 500)

    def handle_habits_uncheck(self):
        """Remove a habit completion for a specific date."""
        try:
            path_parts = self.path.split('?')[0].split('/')
            if len(path_parts) < 5:
                self.send_json({'error': 'Invalid path'}, 400)
                return
            habit_id = path_parts[3]

            query_params = parse_qs(urlparse(self.path).query)
            if 'date' not in query_params:
                self.send_json({'error': 'date parameter is required (format: YYYY-MM-DD)'}, 400)
                return

            target_date = query_params['date'][0]
            try:
                datetime.fromisoformat(target_date)
            except ValueError:
                self.send_json({'error': 'Invalid date format. Use YYYY-MM-DD'}, 400)
                return

            if not constants.HABITS_FILE.exists():
                self.send_json({'error': 'Habit not found'}, 404)
                return

            with open(constants.HABITS_FILE, 'r', encoding='utf-8') as f:
                habits_data = json.load(f)

            habit = next((h for h in habits_data.get('habits', []) if h['id'] == habit_id), None)
            if not habit:
                self.send_json({'error': 'Habit not found'}, 404)
                return

            completions = habit.get('completions', [])
            completion_found = False
            for i, completion in enumerate(completions):
                if completion.get('date') == target_date:
                    completions.pop(i)
                    completion_found = True
                    break

            if not completion_found:
                self.send_json({'error': 'No completion found for the specified date'}, 404)
                return

            current_streak = habits_helpers.calculate_streak(habit)
            habit['streak']['current'] = current_streak
            if current_streak > habit['streak']['best']:
                habit['streak']['best'] = current_streak

            habit['updatedAt'] = datetime.now().isoformat()
            habits_data['lastUpdated'] = habit['updatedAt']

            with open(constants.HABITS_FILE, 'w', encoding='utf-8') as f:
                json.dump(habits_data, f, indent=2)

            self.send_json(_enrich(habit), 200)
        except Exception as e:
            self.send_json({'error': str(e)}, 500)

    def handle_habits_skip(self):
        """Skip a day using a life to preserve streak."""
        try:
            path_parts = self.path.split('/')
            if len(path_parts) < 5:
                self.send_json({'error': 'Invalid path'}, 400)
                return
            habit_id = path_parts[3]

            if not constants.HABITS_FILE.exists():
                self.send_json({'error': 'Habit not found'}, 404)
                return

            with open(constants.HABITS_FILE, 'r', encoding='utf-8') as f:
                habits_data = json.load(f)

            habit = next((h for h in habits_data.get('habits', []) if h['id'] == habit_id), None)
            if not habit:
                self.send_json({'error': 'Habit not found'}, 404)
                return

            current_lives = habit.get('lives', 3)
            if current_lives <= 0:
                self.send_json({'error': 'No lives remaining'}, 400)
                return

            habit['lives'] = current_lives - 1

            today = datetime.now().date().isoformat()
            habit['completions'].append({'date': today, 'type': 'skip'})

            habit['updatedAt'] = datetime.now().isoformat()
            habits_data['lastUpdated'] = habit['updatedAt']

            with open(constants.HABITS_FILE, 'w', encoding='utf-8') as f:
                json.dump(habits_data, f, indent=2)

            self.send_json(_enrich(habit), 200)
        except Exception as e:
            self.send_json({'error': str(e)}, 500)
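A check-in sketch. The `/api/habits/<id>/check` route shape is inferred from the `path_parts` parsing above, and the habit id and base URL are placeholders; the optional body fields are exactly the ones `handle_habits_check` validates.

```python
import json
import urllib.request

BASE = 'http://localhost:8088'                     # assumed dashboard address
HABIT_ID = '00000000-0000-0000-0000-000000000000'  # hypothetical habit UUID

# Optional fields: note, rating (int 1-5), mood (happy | neutral | sad).
body = json.dumps({'note': 'done after lunch', 'rating': 4, 'mood': 'happy'}).encode()
req = urllib.request.Request(f'{BASE}/api/habits/{HABIT_ID}/check', data=body,
                             headers={'Content-Type': 'application/json'})
with urllib.request.urlopen(req, timeout=10) as resp:
    habit = json.load(resp)
    print(habit['current_streak'], habit['livesAwarded'])
```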
62
dashboard/handlers/pdf.py
Normal file
@@ -0,0 +1,62 @@
"""Markdown → PDF conversion endpoint (delegates to tools/generate_pdf.py)."""
import json
import subprocess

import constants


class PDFHandlers:
    """Mixin for /api/pdf."""

    def handle_pdf_post(self):
        """Convert markdown to PDF (text-based) by spawning the venv python."""
        try:
            content_length = int(self.headers['Content-Length'])
            post_data = self.rfile.read(content_length).decode('utf-8')
            data = json.loads(post_data)

            markdown_content = data.get('markdown', '')
            filename = data.get('filename', 'document.pdf')

            if not markdown_content:
                self.send_json({'error': 'No markdown content'}, 400)
                return

            venv_python = constants.VENV_PYTHON
            pdf_script = constants.TOOLS_DIR / 'generate_pdf.py'

            if not venv_python.exists():
                self.send_json({'error': 'Venv Python not found'}, 500)
                return
            if not pdf_script.exists():
                self.send_json({'error': 'PDF generator script not found'}, 500)
                return

            input_data = json.dumps({'markdown': markdown_content, 'filename': filename})
            result = subprocess.run(
                [str(venv_python), str(pdf_script)],
                input=input_data.encode('utf-8'),
                capture_output=True,
                timeout=30,
            )

            if result.returncode != 0:
                error_msg = result.stderr.decode('utf-8', errors='replace')
                try:
                    error_json = json.loads(error_msg)
                    self.send_json(error_json, 500)
                except Exception:
                    self.send_json({'error': error_msg}, 500)
                return

            pdf_bytes = result.stdout
            self.send_response(200)
            self.send_header('Content-Type', 'application/pdf')
            self.send_header('Content-Disposition', f'attachment; filename="{filename}"')
            self.send_header('Content-Length', str(len(pdf_bytes)))
            self.end_headers()
            self.wfile.write(pdf_bytes)
        except subprocess.TimeoutExpired:
            self.send_json({'error': 'PDF generation timeout'}, 500)
        except Exception as e:
            self.send_json({'error': str(e)}, 500)
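A client sketch for `/api/pdf`, the route named in the docstring. On success the response body is raw PDF bytes, per the `Content-Type` and `Content-Disposition` headers set above; the base URL is an assumption.

```python
import json
import urllib.request

BASE = 'http://localhost:8088'  # assumed dashboard address

payload = json.dumps({'markdown': '# Report\n\nHello from the task board.',
                      'filename': 'report.pdf'}).encode()
req = urllib.request.Request(f'{BASE}/api/pdf', data=payload,
                             headers={'Content-Type': 'application/json'})
with urllib.request.urlopen(req, timeout=60) as resp:
    with open('report.pdf', 'wb') as fh:
        fh.write(resp.read())  # raw PDF bytes on success; JSON error body otherwise
```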
373
dashboard/handlers/workspace.py
Normal file
@@ -0,0 +1,373 @@
"""~/workspace/ project control: list, run, stop, delete, logs."""
import json
import os
import shutil
import signal
import subprocess
import sys
from datetime import datetime
from pathlib import Path
from urllib.parse import parse_qs, urlparse

import constants


class WorkspaceHandlers:
    """Mixin for /api/workspace and /api/workspace/*."""

    def _validate_project(self, name):
        """Validate project name and return its path, or None."""
        if not name or '/' in name or '..' in name:
            return None
        project_dir = constants.WORKSPACE_DIR / name
        if not project_dir.exists() or not project_dir.is_dir():
            return None
        if not str(project_dir.resolve()).startswith(str(constants.WORKSPACE_DIR)):
            return None
        return project_dir

    # ── /api/workspace list ─────────────────────────────────────
    def handle_workspace_list(self):
        """List projects in ~/workspace/ with Ralph status, git info, etc."""
        try:
            projects = []
            if not constants.WORKSPACE_DIR.exists():
                self.send_json({'projects': []})
                return

            for project_dir in sorted(constants.WORKSPACE_DIR.iterdir()):
                if not project_dir.is_dir() or project_dir.name.startswith('.'):
                    continue

                ralph_dir = project_dir / 'scripts' / 'ralph'
                prd_json = ralph_dir / 'prd.json'
                tasks_dir = project_dir / 'tasks'

                proj = {
                    'name': project_dir.name,
                    'path': str(project_dir),
                    'hasRalph': ralph_dir.exists(),
                    'hasPrd': any(tasks_dir.glob('prd-*.md')) if tasks_dir.exists() else False,
                    'hasMain': (project_dir / 'main.py').exists(),
                    'hasVenv': (project_dir / 'venv').exists(),
                    'hasReadme': (project_dir / 'README.md').exists(),
                    'ralph': None,
                    'process': {'running': False, 'pid': None, 'port': None},
                    'git': None,
                }

                # Ralph status
                if prd_json.exists():
                    try:
                        prd = json.loads(prd_json.read_text())
                        stories = prd.get('userStories', [])
                        complete = sum(1 for s in stories if s.get('passes'))

                        ralph_pid = None
                        ralph_running = False
                        pid_file = ralph_dir / '.ralph.pid'
                        if pid_file.exists():
                            try:
                                pid = int(pid_file.read_text().strip())
                                os.kill(pid, 0)
                                ralph_running = True
                                ralph_pid = pid
                            except (ValueError, ProcessLookupError, PermissionError):
                                pass

                        last_iter = None
                        tech = {}
                        logs_dir = ralph_dir / 'logs'
                        if logs_dir.exists():
                            log_files = sorted(logs_dir.glob('iteration-*.log'), key=lambda f: f.stat().st_mtime, reverse=True)
                            if log_files:
                                mtime = log_files[0].stat().st_mtime
                                last_iter = datetime.fromtimestamp(mtime).strftime('%Y-%m-%d %H:%M')
                        tech = prd.get('techStack', {})

                        proj['ralph'] = {
                            'running': ralph_running,
                            'pid': ralph_pid,
                            'storiesTotal': len(stories),
                            'storiesComplete': complete,
                            'lastIteration': last_iter,
                            'stories': [
                                {'id': s.get('id', ''), 'title': s.get('title', ''), 'passes': s.get('passes', False)}
                                for s in stories
                            ],
                        }
                        proj['techStack'] = {
                            'type': tech.get('type', ''),
                            'commands': tech.get('commands', {}),
                            'port': tech.get('port'),
                        }
                    except (json.JSONDecodeError, IOError):
                        pass

                # Check if main.py is running
                if proj['hasMain']:
                    try:
                        result = subprocess.run(
                            ['pgrep', '-f', f'python.*{project_dir.name}/main.py'],
                            capture_output=True, text=True, timeout=3,
                        )
                        if result.stdout.strip():
                            pids = result.stdout.strip().split('\n')
                            port = None
                            if prd_json.exists():
                                try:
                                    prd_data = json.loads(prd_json.read_text())
                                    port = prd_data.get('techStack', {}).get('port')
                                except (json.JSONDecodeError, IOError):
                                    pass
                            proj['process'] = {
                                'running': True,
                                'pid': int(pids[0]),
                                'port': port,
                            }
                    except Exception:
                        pass

                # Git info (using _run_git from GitHandlers mixin)
                if (project_dir / '.git').exists():
                    try:
                        branch = self._run_git(project_dir, ['branch', '--show-current']).stdout.strip()
                        last_commit = self._run_git(project_dir, ['log', '-1', '--format=%h - %s']).stdout.strip()
                        status_out = self._run_git(project_dir, ['status', '--short']).stdout.strip()
                        uncommitted = len([l for l in status_out.split('\n') if l.strip()]) if status_out else 0
                        proj['git'] = {
                            'branch': branch,
                            'lastCommit': last_commit,
                            'uncommitted': uncommitted,
                        }
                    except Exception:
                        pass

                projects.append(proj)

            self.send_json({'projects': projects})
        except Exception as e:
            self.send_json({'error': str(e)}, 500)

    # ── /api/workspace/run (main | ralph | test) ───────────────
    def handle_workspace_run(self):
        """Start a project process (main.py, ralph.sh, or pytest)."""
        try:
            data = self._read_post_json()
            project_name = data.get('project', '')
            command = data.get('command', '')

            project_dir = self._validate_project(project_name)
            if not project_dir:
                self.send_json({'success': False, 'error': 'Invalid project'}, 400)
                return

            allowed_commands = {'main', 'ralph', 'test'}
            if command not in allowed_commands:
                self.send_json({'success': False, 'error': f'Invalid command. Allowed: {", ".join(allowed_commands)}'}, 400)
                return

            ralph_dir = project_dir / 'scripts' / 'ralph'

            if command == 'main':
                main_py = project_dir / 'main.py'
                if not main_py.exists():
                    self.send_json({'success': False, 'error': 'No main.py found'}, 404)
                    return

                venv_python = project_dir / 'venv' / 'bin' / 'python'
                python_cmd = str(venv_python) if venv_python.exists() else sys.executable

                log_path = ralph_dir / 'logs' / 'main.log' if ralph_dir.exists() else project_dir / 'main.log'
                log_path.parent.mkdir(parents=True, exist_ok=True)

                with open(log_path, 'a') as log_file:
                    proc = subprocess.Popen(
                        [python_cmd, 'main.py'],
                        cwd=str(project_dir),
                        stdout=log_file,
                        stderr=log_file,
                        start_new_session=True,
                    )
                self.send_json({'success': True, 'pid': proc.pid, 'log': str(log_path)})

            elif command == 'ralph':
                ralph_sh = ralph_dir / 'ralph.sh'
                if not ralph_sh.exists():
                    self.send_json({'success': False, 'error': 'No ralph.sh found'}, 404)
                    return

                log_path = ralph_dir / 'logs' / 'ralph.log'
                log_path.parent.mkdir(parents=True, exist_ok=True)

                with open(log_path, 'a') as log_file:
                    proc = subprocess.Popen(
                        ['bash', str(ralph_sh)],
                        cwd=str(project_dir),
                        stdout=log_file,
                        stderr=log_file,
                        start_new_session=True,
                    )

                (ralph_dir / '.ralph.pid').write_text(str(proc.pid))
                self.send_json({'success': True, 'pid': proc.pid, 'log': str(log_path)})

            elif command == 'test':
                venv_python = project_dir / 'venv' / 'bin' / 'python'
                python_cmd = str(venv_python) if venv_python.exists() else sys.executable

                result = subprocess.run(
                    [python_cmd, '-m', 'pytest', '-v', '--tb=short'],
                    cwd=str(project_dir),
                    capture_output=True, text=True,
                    timeout=120,
                )
                self.send_json({
                    'success': result.returncode == 0,
                    'output': result.stdout + result.stderr,
                    'returncode': result.returncode,
                })

        except subprocess.TimeoutExpired:
            self.send_json({'success': False, 'error': 'Test timeout (120s)'}, 500)
        except Exception as e:
            self.send_json({'success': False, 'error': str(e)}, 500)

    def handle_workspace_stop(self):
        """Stop a project process."""
        try:
            data = self._read_post_json()
            project_name = data.get('project', '')
            target = data.get('target', '')

            project_dir = self._validate_project(project_name)
            if not project_dir:
                self.send_json({'success': False, 'error': 'Invalid project'}, 400)
                return

            if target not in ('main', 'ralph'):
                self.send_json({'success': False, 'error': 'Invalid target. Use: main, ralph'}, 400)
                return

            if target == 'ralph':
                pid_file = project_dir / 'scripts' / 'ralph' / '.ralph.pid'
                if pid_file.exists():
                    try:
                        pid = int(pid_file.read_text().strip())
                        proc_cwd = Path(f'/proc/{pid}/cwd').resolve()
                        if str(proc_cwd).startswith(str(constants.WORKSPACE_DIR)):
                            os.killpg(os.getpgid(pid), signal.SIGTERM)
                            self.send_json({'success': True, 'message': f'Ralph stopped (PID {pid})'})
                        else:
                            self.send_json({'success': False, 'error': 'Process not in workspace'}, 403)
                    except ProcessLookupError:
                        self.send_json({'success': True, 'message': 'Process already stopped'})
                    except PermissionError:
                        self.send_json({'success': False, 'error': 'Permission denied'}, 403)
                else:
                    self.send_json({'success': False, 'error': 'No PID file found'}, 404)

            elif target == 'main':
                try:
                    result = subprocess.run(
                        ['pgrep', '-f', f'python.*{project_dir.name}/main.py'],
                        capture_output=True, text=True, timeout=3,
                    )
                    if result.stdout.strip():
                        pid = int(result.stdout.strip().split('\n')[0])
                        proc_cwd = Path(f'/proc/{pid}/cwd').resolve()
                        if str(proc_cwd).startswith(str(constants.WORKSPACE_DIR)):
                            os.kill(pid, signal.SIGTERM)
                            self.send_json({'success': True, 'message': f'Main stopped (PID {pid})'})
                        else:
                            self.send_json({'success': False, 'error': 'Process not in workspace'}, 403)
                    else:
                        self.send_json({'success': True, 'message': 'No running process found'})
                except Exception as e:
                    self.send_json({'success': False, 'error': str(e)}, 500)

        except Exception as e:
            self.send_json({'success': False, 'error': str(e)}, 500)

    def handle_workspace_delete(self):
        """Delete a workspace project."""
        try:
            data = self._read_post_json()
            project_name = data.get('project', '')
            confirm = data.get('confirm', '')

            project_dir = self._validate_project(project_name)
            if not project_dir:
                self.send_json({'success': False, 'error': 'Invalid project'}, 400)
                return

            if confirm != project_name:
                self.send_json({'success': False, 'error': 'Confirmation does not match project name'}, 400)
                return

            try:
                result = subprocess.run(
                    ['pgrep', '-f', f'{project_dir.name}/(main\\.py|ralph)'],
                    capture_output=True, text=True, timeout=5,
                )
                if result.stdout.strip():
                    self.send_json({'success': False, 'error': 'Project has running processes. Stop them first.'})
                    return
            except subprocess.TimeoutExpired:
                pass

            shutil.rmtree(str(project_dir))
            self.send_json({'success': True, 'message': f'Project {project_name} deleted'})
        except Exception as e:
            self.send_json({'success': False, 'error': str(e)}, 500)

    def handle_workspace_logs(self):
        """Get last N lines from a project log."""
        try:
            params = parse_qs(urlparse(self.path).query)
            project_name = params.get('project', [''])[0]
            log_type = params.get('type', ['ralph'])[0]
            lines_count = min(int(params.get('lines', ['100'])[0]), 500)

            project_dir = self._validate_project(project_name)
            if not project_dir:
                self.send_json({'error': 'Invalid project'}, 400)
                return

            ralph_dir = project_dir / 'scripts' / 'ralph'

            if log_type == 'ralph':
                log_file = ralph_dir / 'logs' / 'ralph.log'
                if not log_file.exists():
                    log_file = ralph_dir / 'logs' / 'ralph-test.log'
            elif log_type == 'main':
                log_file = ralph_dir / 'logs' / 'main.log' if ralph_dir.exists() else project_dir / 'main.log'
            elif log_type == 'progress':
                log_file = ralph_dir / 'progress.txt'
            elif log_type.startswith('iteration-'):
                log_file = ralph_dir / 'logs' / f'{log_type}.log'
            else:
                self.send_json({'error': 'Invalid log type'}, 400)
                return

            if not log_file.exists():
                self.send_json({'project': project_name, 'type': log_type, 'lines': [], 'total': 0})
                return

            if not str(log_file.resolve()).startswith(str(constants.WORKSPACE_DIR)):
                self.send_json({'error': 'Access denied'}, 403)
                return

            content = log_file.read_text(encoding='utf-8', errors='replace')
            all_lines = content.split('\n')
            total = len(all_lines)
            last_lines = all_lines[-lines_count:] if len(all_lines) > lines_count else all_lines

            self.send_json({
                'project': project_name,
                'type': log_type,
                'lines': last_lines,
                'total': total,
            })
        except Exception as e:
            self.send_json({'error': str(e)}, 500)
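A run-and-tail sketch for the routes in this mixin's docstring; `demo` is a hypothetical project and the base URL is an assumption. The `test` command runs synchronously with a 120 s server-side cap, so the client timeout must be at least that long.

```python
import json
import urllib.request

BASE = 'http://localhost:8088'  # assumed dashboard address

def post(path, payload, timeout=130):
    req = urllib.request.Request(f'{BASE}{path}', data=json.dumps(payload).encode(),
                                 headers={'Content-Type': 'application/json'})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)

# Start main.py detached; the handler returns the PID and log path immediately.
print(post('/api/workspace/run', {'project': 'demo', 'command': 'main'}))

# Tail the main.py log that the handler above writes to.
url = f'{BASE}/api/workspace/logs?project=demo&type=main&lines=20'
with urllib.request.urlopen(url, timeout=10) as resp:
    print('\n'.join(json.load(resp)['lines']))
```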
135
dashboard/handlers/youtube.py
Normal file
@@ -0,0 +1,135 @@
"""YouTube subtitle-download + note-creation endpoint."""
import json
import os
import re
import subprocess
import sys
import traceback
from datetime import datetime
from pathlib import Path

import constants


def _clean_vtt(content):
    """Convert VTT captions to plain text."""
    lines = []
    seen = set()
    for line in content.split('\n'):
        if any([
            line.startswith('WEBVTT'),
            line.startswith('Kind:'),
            line.startswith('Language:'),
            '-->' in line,
            line.strip().startswith('<'),
            not line.strip(),
            re.match(r'^\d+$', line.strip()),
        ]):
            continue
        clean = re.sub(r'<[^>]+>', '', line).strip()
        if clean and clean not in seen:
            seen.add(clean)
            lines.append(clean)
    return ' '.join(lines)


def _process_youtube(url):
    """Download subtitles, save note."""
    yt_dlp = os.path.expanduser('~/.local/bin/yt-dlp')

    result = subprocess.run(
        [yt_dlp, '--dump-json', '--no-download', url],
        capture_output=True, text=True, timeout=30,
    )
    if result.returncode != 0:
        print(f"Failed to get video info: {result.stderr}")
        return

    info = json.loads(result.stdout)
    title = info.get('title', 'Unknown')
    duration = info.get('duration', 0)

    temp_dir = Path('/tmp/yt_subs')
    temp_dir.mkdir(exist_ok=True)
    for f in temp_dir.glob('*'):
        f.unlink()

    subprocess.run([
        yt_dlp, '--write-auto-subs', '--sub-langs', 'en',
        '--skip-download', '--sub-format', 'vtt',
        '-o', str(temp_dir / '%(id)s'),
        url,
    ], capture_output=True, timeout=120)

    transcript = None
    for sub_file in temp_dir.glob('*.vtt'):
        content = sub_file.read_text(encoding='utf-8', errors='replace')
        transcript = _clean_vtt(content)
        break

    if not transcript:
        print("No subtitles found")
        return

    date_str = datetime.now().strftime('%Y-%m-%d')
    slug = re.sub(r'[^\w\s-]', '', title.lower())[:50].strip().replace(' ', '-')
    filename = f"{date_str}_{slug}.md"

    note_content = f"""# {title}

**Video:** {url}
**Duration:** {duration // 60}:{duration % 60:02d}
**Saved:** {date_str}
**Tags:** #youtube #to-summarize

---

## Transcript

{transcript[:15000]}

---

*Notă: Sumarizarea va fi adăugată de Echo.*
"""

    constants.NOTES_DIR.mkdir(parents=True, exist_ok=True)
    note_path = constants.NOTES_DIR / filename
    note_path.write_text(note_content, encoding='utf-8')

    subprocess.run(
        [sys.executable, str(constants.TOOLS_DIR / 'update_notes_index.py')],
        capture_output=True,
    )
    print(f"Created note: {filename}")
    return filename


class YoutubeHandlers:
    """Mixin for /api/youtube."""

    def handle_youtube(self):
        """Process a YouTube URL: download subs, save note."""
        try:
            content_length = int(self.headers['Content-Length'])
            post_data = self.rfile.read(content_length).decode('utf-8')
            data = json.loads(post_data)
            url = data.get('url', '').strip()

            if not url or ('youtube.com' not in url and 'youtu.be' not in url):
                self.send_json({'error': 'URL YouTube invalid'}, 400)
                return

            try:
                print(f"Processing YouTube URL: {url}")
                _process_youtube(url)
                self.send_json({
                    'status': 'done',
                    'message': 'Notița a fost creată! Refresh pagina Notes.',
                })
            except Exception as e:
                print(f"YouTube processing error: {e}")
                traceback.print_exc()
                self.send_json({'status': 'error', 'message': f'Eroare: {str(e)}'}, 500)
        except Exception as e:
            self.send_json({'error': str(e)}, 500)
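A client sketch for `/api/youtube`. The handler works synchronously (metadata fetch plus a subtitle download capped at 120 s server-side), so the client timeout has to be generous; the base URL and sample video are assumptions.

```python
import json
import urllib.request

BASE = 'http://localhost:8088'  # assumed dashboard address

body = json.dumps({'url': 'https://www.youtube.com/watch?v=dQw4w9WgXcQ'}).encode()
req = urllib.request.Request(f'{BASE}/api/youtube', data=body,
                             headers={'Content-Type': 'application/json'})
with urllib.request.urlopen(req, timeout=180) as resp:
    print(json.load(resp))  # {'status': 'done', 'message': ...} on success
```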
2182
dashboard/index.html
Normal file
File diff suppressed because it is too large
69
dashboard/issues.json
Normal file
@@ -0,0 +1,69 @@
{
  "lastUpdated": "2026-03-31T20:02:48.501Z",
  "programs": [
    "ROACONT",
    "ROAGEST",
    "ROAIMOB",
    "ROAFACTURARE",
    "ROADEF",
    "ROASTART",
    "ROAPRINT",
    "ROAWEB",
    "Clawdbot",
    "Personal",
    "Altele"
  ],
  "issues": [
    {
      "id": "ROA-004",
      "title": "Banca-Plati-Plata comision bancar 627- ar aparea si campul de Lucrare/Comanda",
      "description": "Banca-Plati-Plata comision bancar 627- ar aparea si campul de Lucrare/Comanda",
      "program": "ROACONT",
      "owner": "robert",
      "priority": "important",
      "status": "done",
      "created": "2026-02-12T13:19:01.786Z",
      "deadline": null,
      "completed": "2026-02-13T23:06:16.567Z"
    },
    {
      "id": "ROA-002",
      "title": "D406 - verificare SAFT account Id gol",
      "description": "",
      "program": "ROACONT",
      "owner": "robert",
      "priority": "urgent-important",
      "status": "done",
      "created": "2026-02-02T11:25:18.115Z",
      "deadline": "2026-02-02",
      "updated": "2026-02-02T22:27:06.428Z",
      "completed": "2026-02-03T17:20:07.195Z"
    },
    {
      "id": "ROA-001",
      "title": "D101: Mutare impozit precedent RD49→RD50",
      "description": "RD 49 = în urma inspecției fiscale\nRD 50 = impozit precedent\nFormularul nu recalculează impozitul de 16%\nRD 40 se modifică și la 4.1",
      "program": "ROACONT",
      "owner": "marius",
      "priority": "important",
      "status": "done",
      "created": "2026-01-30T15:10:00Z",
      "deadline": "2026-02-06",
      "updated": "2026-02-02T22:26:59.690Z",
      "completed": "2026-02-05T21:53:55.392Z"
    },
    {
      "id": "ROA-003",
      "title": "Auto-copiere manoperă din devize stimative în devize reale",
      "description": "",
      "program": "ROAGEST",
      "owner": "robert",
      "priority": "backlog",
      "status": "done",
      "created": "2026-02-12T10:03:13.378157+00:00",
      "deadline": null,
      "updated": "2026-02-13T13:03:45.355Z",
      "completed": "2026-03-31T20:02:48.489Z"
    }
  ]
}
1
dashboard/notes-data
Symbolic link
@@ -0,0 +1 @@
/home/moltbot/echo-core/memory/kb
1328
dashboard/notes.html
Normal file
File diff suppressed because it is too large
42
dashboard/status.json
Normal file
@@ -0,0 +1,42 @@
{
  "git": {
    "status": "4 fișiere",
    "clean": false,
    "files": 4
  },
  "lastReport": {
    "type": "evening",
    "summary": "notes.html îmbunătățit (filtre colorate), rețetă salvată",
    "time": "30 Jan 2026, 22:00"
  },
  "anaf": {
    "ok": false,
    "status": "MODIFICĂRI",
    "message": "3 modificări detectate",
    "lastCheck": "21 Apr 2026, 10:04",
    "changesCount": 3,
    "changes": [
      {
        "name": "Declarația 100 - Obligații de plată la bugetul de stat",
        "url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/100.html",
        "summary": [
          "Soft J: 22.01.2026 → 07.04.2026"
        ]
      },
      {
        "name": "Bilanț 31.12.2025 (S1002-S1005)",
        "url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/situatiifinanciare/2025/1002_5_2025.html",
        "summary": [
          "Pagina s-a modificat"
        ]
      },
      {
        "name": "Situații financiare anuale 2025",
        "url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/situatiifinanciare/2025/1030_2025.html",
        "summary": [
          "Pagina s-a modificat"
        ]
      }
    ]
  }
}
123
dashboard/swipe-nav.js
Normal file
@@ -0,0 +1,123 @@
/**
 * Swipe Navigation for Echo
 * Swipe left/right to navigate between pages
 */
(function() {
    const pages = ['index.html', 'eco.html', 'notes.html', 'habits.html', 'files.html', 'workspace.html'];

    // Get current page index
    function getCurrentIndex() {
        const path = window.location.pathname;
        let filename = path.split('/').pop() || 'index.html';
        // Handle /echo/ without filename
        if (filename === '' || filename === 'echo') filename = 'index.html';
        const idx = pages.indexOf(filename);
        return idx >= 0 ? idx : 0;
    }

    // Navigate to page
    function navigateTo(index) {
        if (index >= 0 && index < pages.length) {
            window.location.href = pages[index];
        }
    }

    // Swipe detection
    let touchStartX = 0;
    let touchStartY = 0;
    let touchEndX = 0;
    let touchEndY = 0;

    const minSwipeDistance = 80;
    const maxVerticalDistance = 100;

    document.addEventListener('touchstart', function(e) {
        touchStartX = e.changedTouches[0].screenX;
        touchStartY = e.changedTouches[0].screenY;
    }, { passive: true });

    document.addEventListener('touchend', function(e) {
        touchEndX = e.changedTouches[0].screenX;
        touchEndY = e.changedTouches[0].screenY;
        handleSwipe();
    }, { passive: true });

    function handleSwipe() {
        const deltaX = touchEndX - touchStartX;
        const deltaY = Math.abs(touchEndY - touchStartY);

        // Ignore if vertical swipe or too short
        if (deltaY > maxVerticalDistance) return;
        if (Math.abs(deltaX) < minSwipeDistance) return;

        const currentIndex = getCurrentIndex();

        if (deltaX > 0) {
            // Swipe right → previous page
            navigateTo(currentIndex - 1);
        } else {
            // Swipe left → next page
            navigateTo(currentIndex + 1);
        }
    }

    // Visual indicator (optional dots)
    function createIndicator() {
        const indicator = document.createElement('div');
        indicator.className = 'swipe-indicator';
        indicator.innerHTML = pages.map((_, i) =>
            `<span class="swipe-dot ${i === getCurrentIndex() ? 'active' : ''}"></span>`
        ).join('');
        document.body.appendChild(indicator);
    }

    // Add indicator styles
    const style = document.createElement('style');
    style.textContent = `
        .swipe-indicator {
            position: fixed;
            bottom: 24px;
            left: 50%;
            transform: translateX(-50%);
            display: flex;
            gap: 10px;
            z-index: 9999;
            padding: 10px 16px;
            background: rgba(50, 50, 60, 0.9);
            border: 1px solid rgba(255, 255, 255, 0.2);
            border-radius: 24px;
            backdrop-filter: blur(8px);
        }
        .swipe-dot {
            width: 10px;
            height: 10px;
            border-radius: 50%;
            background: rgba(255, 255, 255, 0.3);
            border: 1px solid rgba(255, 255, 255, 0.5);
            transition: all 0.2s;
        }
        .swipe-dot.active {
            background: #3b82f6;
            border-color: #3b82f6;
            transform: scale(1.3);
            box-shadow: 0 0 8px rgba(59, 130, 246, 0.6);
        }
        @media (min-width: 769px) {
            .swipe-indicator { display: none; }
        }
    `;
    document.head.appendChild(style);

    // Init after DOM ready
    function init() {
        if ('ontouchstart' in window || navigator.maxTouchPoints > 0) {
            createIndicator();
        }
    }

    if (document.readyState === 'loading') {
        document.addEventListener('DOMContentLoaded', init);
    } else {
        init();
    }
})();
1129
dashboard/tests/test_habits_api.py
Normal file
File diff suppressed because it is too large
2868
dashboard/tests/test_habits_frontend.py
Normal file
File diff suppressed because it is too large
573
dashboard/tests/test_habits_helpers.py
Normal file
@@ -0,0 +1,573 @@
"""
Tests for habits_helpers.py

Tests cover all helper functions for habit tracking including:
- calculate_streak for all 6 frequency types
- should_check_today for all frequency types
- get_completion_rate
- get_weekly_summary
"""

import sys
import os
from datetime import datetime, timedelta

# Add parent directory to path to import habits_helpers
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from habits_helpers import (
    calculate_streak,
    should_check_today,
    get_completion_rate,
    get_weekly_summary,
    check_and_award_weekly_lives
)


def test_calculate_streak_daily_consecutive():
    """Test daily streak with consecutive days."""
    today = datetime.now().date()
    habit = {
        "frequency": {"type": "daily"},
        "completions": [
            {"date": today.isoformat()},
            {"date": (today - timedelta(days=1)).isoformat()},
            {"date": (today - timedelta(days=2)).isoformat()},
        ]
    }
    assert calculate_streak(habit) == 3


def test_calculate_streak_daily_with_gap():
    """Test daily streak breaks on gap."""
    today = datetime.now().date()
    habit = {
        "frequency": {"type": "daily"},
        "completions": [
            {"date": today.isoformat()},
            {"date": (today - timedelta(days=1)).isoformat()},
            # Gap here (day 2 missing)
            {"date": (today - timedelta(days=3)).isoformat()},
        ]
    }
    assert calculate_streak(habit) == 2


def test_calculate_streak_daily_empty():
    """Test daily streak with no completions."""
    habit = {
        "frequency": {"type": "daily"},
        "completions": []
    }
    assert calculate_streak(habit) == 0


def test_calculate_streak_specific_days():
    """Test specific_days streak (Mon, Wed, Fri)."""
    today = datetime.now().date()

    # Find the most recent Monday
    days_since_monday = today.weekday()
    last_monday = today - timedelta(days=days_since_monday)

    habit = {
        "frequency": {
            "type": "specific_days",
            "days": [0, 2, 4]  # Mon, Wed, Fri (0=Mon in Python weekday)
        },
        "completions": [
            {"date": last_monday.isoformat()},  # Mon
            {"date": (last_monday - timedelta(days=2)).isoformat()},  # Fri previous week
            {"date": (last_monday - timedelta(days=4)).isoformat()},  # Wed previous week
        ]
    }

    # Should count 3 consecutive relevant days
    streak = calculate_streak(habit)
    assert streak >= 1  # At least the most recent relevant day


def test_calculate_streak_x_per_week():
    """Test x_per_week streak (3 times per week)."""
    today = datetime.now().date()

    # Find Monday of current week
    days_since_monday = today.weekday()
    monday = today - timedelta(days=days_since_monday)

    # Current week: 3 completions (Mon, Tue, Wed)
    # Previous week: 3 completions (Mon, Tue, Wed)
    habit = {
        "frequency": {
            "type": "x_per_week",
            "count": 3
        },
        "completions": [
            {"date": monday.isoformat()},  # This week Mon
            {"date": (monday + timedelta(days=1)).isoformat()},  # This week Tue
            {"date": (monday + timedelta(days=2)).isoformat()},  # This week Wed
            # Previous week
            {"date": (monday - timedelta(days=7)).isoformat()},  # Last week Mon
            {"date": (monday - timedelta(days=6)).isoformat()},  # Last week Tue
            {"date": (monday - timedelta(days=5)).isoformat()},  # Last week Wed
        ]
    }

    streak = calculate_streak(habit)
    assert streak >= 2  # Both weeks meet the target


def test_calculate_streak_weekly():
    """Test weekly streak (at least 1 per week)."""
    today = datetime.now().date()

    habit = {
        "frequency": {"type": "weekly"},
        "completions": [
            {"date": today.isoformat()},  # This week
            {"date": (today - timedelta(days=7)).isoformat()},  # Last week
            {"date": (today - timedelta(days=14)).isoformat()},  # 2 weeks ago
        ]
    }

    streak = calculate_streak(habit)
    assert streak >= 1


def test_calculate_streak_monthly():
    """Test monthly streak (at least 1 per month)."""
    today = datetime.now().date()

    # This month
    habit = {
        "frequency": {"type": "monthly"},
        "completions": [
            {"date": today.isoformat()},
        ]
    }

    streak = calculate_streak(habit)
    assert streak >= 1


def test_calculate_streak_custom_interval():
    """Test custom interval streak (every 3 days)."""
    today = datetime.now().date()

    habit = {
        "frequency": {
            "type": "custom",
            "interval": 3
        },
        "completions": [
            {"date": today.isoformat()},
            {"date": (today - timedelta(days=3)).isoformat()},
            {"date": (today - timedelta(days=6)).isoformat()},
        ]
    }

    streak = calculate_streak(habit)
    assert streak == 3


def test_should_check_today_daily():
    """Test should_check_today for daily habit."""
    habit = {"frequency": {"type": "daily"}}
    assert should_check_today(habit) is True


def test_should_check_today_specific_days():
    """Test should_check_today for specific_days habit."""
    today_weekday = datetime.now().date().weekday()

    # Habit relevant today
    habit = {
        "frequency": {
            "type": "specific_days",
            "days": [today_weekday]
        }
    }
    assert should_check_today(habit) is True

    # Habit not relevant today
    other_day = (today_weekday + 1) % 7
    habit = {
        "frequency": {
            "type": "specific_days",
            "days": [other_day]
        }
    }
    assert should_check_today(habit) is False


def test_should_check_today_x_per_week():
    """Test should_check_today for x_per_week habit."""
    habit = {
        "frequency": {
            "type": "x_per_week",
            "count": 3
        }
    }
    assert should_check_today(habit) is True


def test_should_check_today_weekly():
    """Test should_check_today for weekly habit."""
    habit = {"frequency": {"type": "weekly"}}
    assert should_check_today(habit) is True


def test_should_check_today_monthly():
    """Test should_check_today for monthly habit."""
    habit = {"frequency": {"type": "monthly"}}
    assert should_check_today(habit) is True


def test_should_check_today_custom_ready():
    """Test should_check_today for custom interval when ready."""
    today = datetime.now().date()

    habit = {
        "frequency": {
            "type": "custom",
            "interval": 3
        },
        "completions": [
            {"date": (today - timedelta(days=3)).isoformat()}
        ]
    }
    assert should_check_today(habit) is True


def test_should_check_today_custom_not_ready():
    """Test should_check_today for custom interval when not ready."""
    today = datetime.now().date()

    habit = {
        "frequency": {
            "type": "custom",
            "interval": 3
        },
        "completions": [
            {"date": (today - timedelta(days=1)).isoformat()}
        ]
    }
    assert should_check_today(habit) is False


def test_get_completion_rate_daily_perfect():
    """Test completion rate for daily habit with 100%."""
    today = datetime.now().date()

    completions = []
    for i in range(30):
        completions.append({"date": (today - timedelta(days=i)).isoformat()})

    habit = {
        "frequency": {"type": "daily"},
        "completions": completions
    }

    rate = get_completion_rate(habit, days=30)
    assert rate == 100.0


def test_get_completion_rate_daily_half():
    """Test completion rate for daily habit with 50%."""
    today = datetime.now().date()

    completions = []
    for i in range(0, 30, 2):  # Every other day
        completions.append({"date": (today - timedelta(days=i)).isoformat()})

    habit = {
        "frequency": {"type": "daily"},
        "completions": completions
    }

    rate = get_completion_rate(habit, days=30)
    assert 45 <= rate <= 55  # Around 50%


def test_get_completion_rate_specific_days():
    """Test completion rate for specific_days habit."""
    today = datetime.now().date()
    today_weekday = today.weekday()

    # Create habit for Mon, Wed, Fri
    habit = {
        "frequency": {
            "type": "specific_days",
            "days": [0, 2, 4]
        },
        "completions": []
    }

    # Add completions for all relevant days in last 30 days
    for i in range(30):
        check_date = today - timedelta(days=i)
        if check_date.weekday() in [0, 2, 4]:
            habit["completions"].append({"date": check_date.isoformat()})

    rate = get_completion_rate(habit, days=30)
    assert rate == 100.0


def test_get_completion_rate_empty():
    """Test completion rate with no completions."""
    habit = {
        "frequency": {"type": "daily"},
        "completions": []
    }

    rate = get_completion_rate(habit, days=30)
    assert rate == 0.0


def test_get_weekly_summary():
    """Test weekly summary returns correct structure."""
    today = datetime.now().date()

    habit = {
        "frequency": {"type": "daily"},
        "completions": [
            {"date": today.isoformat()},
            {"date": (today - timedelta(days=1)).isoformat()},
        ]
    }

    summary = get_weekly_summary(habit)

    # Check structure
    assert isinstance(summary, dict)
    assert "Monday" in summary
    assert "Tuesday" in summary
    assert "Wednesday" in summary
    assert "Thursday" in summary
    assert "Friday" in summary
    assert "Saturday" in summary
    assert "Sunday" in summary

    # Check values are valid
    valid_statuses = ["checked", "skipped", "missed", "upcoming", "not_relevant"]
    for day, status in summary.items():
        assert status in valid_statuses


def test_get_weekly_summary_with_skip():
    """Test weekly summary handles skipped days."""
    today = datetime.now().date()

    habit = {
        "frequency": {"type": "daily"},
        "completions": [
            {"date": today.isoformat(), "type": "check"},
            {"date": (today - timedelta(days=1)).isoformat(), "type": "skip"},
        ]
    }

    summary = get_weekly_summary(habit)

    # Find today's day name
    day_names = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
    today_name = day_names[today.weekday()]
    yesterday_name = day_names[(today.weekday() - 1) % 7]

    assert summary[today_name] == "checked"
    assert summary[yesterday_name] == "skipped"


def test_get_weekly_summary_specific_days():
    """Test weekly summary marks non-relevant days correctly."""
    today = datetime.now().date()
    today_weekday = today.weekday()

    # Habit only for Monday (0)
    habit = {
        "frequency": {
            "type": "specific_days",
            "days": [0]
        },
        "completions": []
    }

    summary = get_weekly_summary(habit)

    # All days except Monday should be not_relevant or upcoming
    for day_name, status in summary.items():
        if day_name == "Monday":
            continue  # Monday can be any status
        if status not in ["upcoming", "not_relevant"]:
            # Day should be not_relevant if it's in the past
            pass


def test_check_and_award_weekly_lives_awards_life_with_checkin():
    """Test that +1 life is awarded if there was ≥1 check-in in previous week."""
    today = datetime.now().date()
    current_week_start = today - timedelta(days=today.weekday())
    previous_week_start = current_week_start - timedelta(days=7)

    # Add check-in in previous week (Wednesday)
    habit = {
        "lives": 2,
        "completions": [
            {"date": (previous_week_start + timedelta(days=2)).isoformat(), "type": "check"}
        ]
    }

    new_lives, was_awarded = check_and_award_weekly_lives(habit)

    assert was_awarded == True
    assert new_lives == 3


def test_check_and_award_weekly_lives_no_award_without_checkin():
    """Test that no life is awarded if there were no check-ins in previous week."""
    today = datetime.now().date()
    current_week_start = today - timedelta(days=today.weekday())

    # Add check-in in current week only
    habit = {
        "lives": 2,
        "completions": [
            {"date": (current_week_start + timedelta(days=1)).isoformat(), "type": "check"}
        ]
    }

    new_lives, was_awarded = check_and_award_weekly_lives(habit)

    assert was_awarded == False
    assert new_lives == 2


def test_check_and_award_weekly_lives_no_duplicate_award():
    """Test that life is not awarded twice in the same week."""
    today = datetime.now().date()
    current_week_start = today - timedelta(days=today.weekday())
    previous_week_start = current_week_start - timedelta(days=7)

    # Add check-in in previous week and mark as already awarded this week
    habit = {
        "lives": 3,
        "lastLivesAward": current_week_start.isoformat(),
        "completions": [
            {"date": (previous_week_start + timedelta(days=2)).isoformat(), "type": "check"}
        ]
    }

    new_lives, was_awarded = check_and_award_weekly_lives(habit)

    assert was_awarded == False
    assert new_lives == 3


def test_check_and_award_weekly_lives_skip_doesnt_count():
    """Test that skips don't count toward weekly recovery."""
    today = datetime.now().date()
    current_week_start = today - timedelta(days=today.weekday())
    previous_week_start = current_week_start - timedelta(days=7)

    # Add only skips in previous week, no check-ins
    habit = {
        "lives": 1,
        "completions": [
            {"date": (previous_week_start + timedelta(days=2)).isoformat(), "type": "skip"},
            {"date": (previous_week_start + timedelta(days=4)).isoformat(), "type": "skip"}
        ]
    }

    new_lives, was_awarded = check_and_award_weekly_lives(habit)

    assert was_awarded == False
    assert new_lives == 1


def test_check_and_award_weekly_lives_multiple_checkins():
    """Test that award works with multiple check-ins in previous week."""
    today = datetime.now().date()
    current_week_start = today - timedelta(days=today.weekday())
    previous_week_start = current_week_start - timedelta(days=7)

    # Add multiple check-ins in previous week
    habit = {
        "lives": 2,
        "completions": [
            {"date": (previous_week_start + timedelta(days=1)).isoformat(), "type": "check"},
            {"date": (previous_week_start + timedelta(days=3)).isoformat(), "type": "check"},
            {"date": (previous_week_start + timedelta(days=5)).isoformat(), "type": "check"}
        ]
    }

    new_lives, was_awarded = check_and_award_weekly_lives(habit)

    assert was_awarded == True
    assert new_lives == 3


def test_check_and_award_weekly_lives_no_cap():
    """Test that lives can accumulate beyond 3."""
    today = datetime.now().date()
    current_week_start = today - timedelta(days=today.weekday())
    previous_week_start = current_week_start - timedelta(days=7)

    # Habit with 5 lives
    habit = {
        "lives": 5,
        "completions": [
            {"date": (previous_week_start + timedelta(days=2)).isoformat(), "type": "check"}
        ]
    }

    new_lives, was_awarded = check_and_award_weekly_lives(habit)

    assert was_awarded == True
    assert new_lives == 6


def test_check_and_award_weekly_lives_missing_last_award_field():
    """Test backward compatibility when lastLivesAward field is missing."""
    today = datetime.now().date()
    current_week_start = today - timedelta(days=today.weekday())
    previous_week_start = current_week_start - timedelta(days=7)

    # Habit without lastLivesAward field (backward compatible)
    habit = {
        "lives": 2,
        "completions": [
            {"date": (previous_week_start + timedelta(days=2)).isoformat(), "type": "check"}
        ]
    }

    new_lives, was_awarded = check_and_award_weekly_lives(habit)

    assert was_awarded == True
    assert new_lives == 3


if __name__ == "__main__":
    # Run all tests
    import inspect

    test_functions = [
        obj for name, obj in inspect.getmembers(sys.modules[__name__])
        if inspect.isfunction(obj) and name.startswith("test_")
    ]

    passed = 0
    failed = 0

    for test_func in test_functions:
        try:
            test_func()
            print(f"✓ {test_func.__name__}")
            passed += 1
        except AssertionError as e:
            print(f"✗ {test_func.__name__}: {e}")
            failed += 1
        except Exception as e:
            print(f"✗ {test_func.__name__}: {type(e).__name__}: {e}")
            failed += 1

    print(f"\n{passed} passed, {failed} failed")
    sys.exit(0 if failed == 0 else 1)
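The suite above pins down the observable behaviour of the streak helpers without showing their implementation (habits_helpers.py itself is not part of this diff). For orientation, a minimal sketch of just the daily case the first three tests exercise; the function name is hypothetical, and the real helper also covers the other five frequency types:

```python
# Hypothetical sketch of the daily-streak walk the tests above assert on.
from datetime import datetime, timedelta

def daily_streak(completions):
    """Count consecutive completed days ending today (or yesterday, so an
    unfinished today does not break a still-live streak)."""
    done = {c['date'] for c in completions}
    day = datetime.now().date()
    if day.isoformat() not in done:
        day -= timedelta(days=1)
    streak = 0
    while day.isoformat() in done:
        streak += 1
        day -= timedelta(days=1)
    return streak
```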
555
dashboard/tests/test_habits_integration.py
Normal file
@@ -0,0 +1,555 @@
#!/usr/bin/env python3
"""
Integration tests for Habits feature - End-to-end flows

Tests complete workflows involving multiple API calls and state transitions.
"""

import json
import os
import sys
import tempfile
import shutil
from datetime import datetime, timedelta
from http.server import HTTPServer
from threading import Thread
import urllib.request
import urllib.error

# Add parent directory to path to import api module
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..'))

from api import TaskBoardHandler
import habits_helpers


# Test helpers
def setup_test_env():
    """Create temporary environment for testing"""
    from pathlib import Path
    temp_dir = tempfile.mkdtemp()
    habits_file = Path(temp_dir) / 'habits.json'

    # Initialize empty habits file
    with open(habits_file, 'w') as f:
        json.dump({'lastUpdated': datetime.now().isoformat(), 'habits': []}, f)

    # Override HABITS_FILE constant
    import api
    api.HABITS_FILE = habits_file

    return temp_dir


def teardown_test_env(temp_dir):
    """Clean up temporary environment"""
    shutil.rmtree(temp_dir)


def start_test_server():
    """Start HTTP server on random port for testing"""
    server = HTTPServer(('localhost', 0), TaskBoardHandler)
    thread = Thread(target=server.serve_forever, daemon=True)
    thread.start()
    return server


def http_request(url, method='GET', data=None):
    """Make HTTP request and return response data"""
    headers = {'Content-Type': 'application/json'}

    if data:
        data = json.dumps(data).encode('utf-8')

    req = urllib.request.Request(url, data=data, headers=headers, method=method)

    try:
        with urllib.request.urlopen(req) as response:
            body = response.read().decode('utf-8')
            return json.loads(body) if body else None
    except urllib.error.HTTPError as e:
        error_body = e.read().decode('utf-8')
        try:
            return {'error': json.loads(error_body), 'status': e.code}
        except:
            return {'error': error_body, 'status': e.code}


# Integration Tests

def test_01_create_and_checkin_increments_streak():
    """Integration test: create habit → check-in → verify streak is 1"""
    temp_dir = setup_test_env()
    server = start_test_server()
    base_url = f"http://localhost:{server.server_port}"

    try:
        # Create daily habit
        habit_data = {
            'name': 'Morning meditation',
            'category': 'health',
            'color': '#10B981',
            'icon': 'brain',
            'priority': 50,
            'frequency': {'type': 'daily'}
        }

        result = http_request(f"{base_url}/api/habits", method='POST', data=habit_data)
        if 'error' in result:
            print(f"Error creating habit: {result}")
        assert 'id' in result, f"Should return created habit with ID, got: {result}"
        habit_id = result['id']

        # Check in today
        checkin_result = http_request(f"{base_url}/api/habits/{habit_id}/check", method='POST')

        # Verify streak incremented to 1
        assert checkin_result['streak']['current'] == 1, "Streak should be 1 after first check-in"
        assert checkin_result['streak']['best'] == 1, "Best streak should be 1 after first check-in"
        assert checkin_result['streak']['lastCheckIn'] == datetime.now().date().isoformat(), "Last check-in should be today"

        print("✓ Test 1: Create + check-in → streak is 1")

    finally:
        server.shutdown()
        teardown_test_env(temp_dir)


def test_02_seven_consecutive_checkins_restore_life():
    """Integration test: 7 consecutive check-ins → life restored (if below 3)"""
    temp_dir = setup_test_env()
    server = start_test_server()
    base_url = f"http://localhost:{server.server_port}"

    try:
        # Create daily habit
        habit_data = {
            'name': 'Daily exercise',
            'category': 'health',
            'color': '#EF4444',
            'icon': 'dumbbell',
            'priority': 50,
            'frequency': {'type': 'daily'}
        }

        result = http_request(f"{base_url}/api/habits", method='POST', data=habit_data)
        habit_id = result['id']

        # Manually set lives to 1 (instead of using skip API which would add completions)
        import api
        with open(api.HABITS_FILE, 'r') as f:
            data = json.load(f)

        habit_obj = next(h for h in data['habits'] if h['id'] == habit_id)
        habit_obj['lives'] = 1  # Directly set to 1 (simulating 2 skips used)

        # Add 7 consecutive check-in completions for the past 7 days
        for i in range(7):
            check_date = (datetime.now() - timedelta(days=6-i)).date().isoformat()
            habit_obj['completions'].append({
                'date': check_date,
                'type': 'check'
            })

        # Recalculate streak and check for life restore
        habit_obj['streak'] = {
            'current': habits_helpers.calculate_streak(habit_obj),
            'best': max(habit_obj['streak']['best'], habits_helpers.calculate_streak(habit_obj)),
            'lastCheckIn': datetime.now().date().isoformat()
        }

        # Check life restore logic: last 7 completions all 'check' type
        last_7 = habit_obj['completions'][-7:]
        if len(last_7) == 7 and all(c.get('type') == 'check' for c in last_7):
            if habit_obj['lives'] < 3:
                habit_obj['lives'] += 1

        data['lastUpdated'] = datetime.now().isoformat()
        with open(api.HABITS_FILE, 'w') as f:
            json.dump(data, f, indent=2)

        # Get updated habit
        habits = http_request(f"{base_url}/api/habits")
        habit = next(h for h in habits if h['id'] == habit_id)

        # Verify life restored
        assert habit['lives'] == 2, f"Should have 2 lives after 7 consecutive check-ins (was {habit['lives']})"
        assert habit['current_streak'] == 7, "Should have streak of 7"

        print("✓ Test 2: 7 consecutive check-ins → life restored")

    finally:
        server.shutdown()
        teardown_test_env(temp_dir)


def test_03_skip_with_life_maintains_streak():
    """Integration test: skip with life → lives decremented, streak unchanged"""
    temp_dir = setup_test_env()
    server = start_test_server()
    base_url = f"http://localhost:{server.server_port}"

    try:
        # Create daily habit
        habit_data = {
            'name': 'Read book',
            'category': 'growth',
            'color': '#3B82F6',
            'icon': 'book',
            'priority': 50,
            'frequency': {'type': 'daily'}
        }

        result = http_request(f"{base_url}/api/habits", method='POST', data=habit_data)
        habit_id = result['id']

        # Check in yesterday (to build a streak)
        import api
        with open(api.HABITS_FILE, 'r') as f:
            data = json.load(f)

        habit_obj = next(h for h in data['habits'] if h['id'] == habit_id)
        yesterday = (datetime.now() - timedelta(days=1)).date().isoformat()
        habit_obj['completions'].append({
            'date': yesterday,
            'type': 'check'
        })
        habit_obj['streak'] = {
            'current': 1,
            'best': 1,
            'lastCheckIn': yesterday
        }

        data['lastUpdated'] = datetime.now().isoformat()
        with open(api.HABITS_FILE, 'w') as f:
            json.dump(data, f, indent=2)

        # Skip today
        skip_result = http_request(f"{base_url}/api/habits/{habit_id}/skip", method='POST')

        # Verify lives decremented and streak maintained
        assert skip_result['lives'] == 2, "Lives should be 2 after skip"

        # Get fresh habit data to check streak
        habits = http_request(f"{base_url}/api/habits")
        habit = next(h for h in habits if h['id'] == habit_id)

        # Streak should still be 1 (skip doesn't break it)
        assert habit['current_streak'] == 1, "Streak should be maintained after skip"

        print("✓ Test 3: Skip with life → lives decremented, streak unchanged")

    finally:
        server.shutdown()
        teardown_test_env(temp_dir)


def test_04_skip_with_zero_lives_returns_400():
    """Integration test: skip with 0 lives → returns 400 error"""
    temp_dir = setup_test_env()
    server = start_test_server()
    base_url = f"http://localhost:{server.server_port}"

    try:
        # Create daily habit
        habit_data = {
            'name': 'Yoga practice',
            'category': 'health',
            'color': '#8B5CF6',
            'icon': 'heart',
            'priority': 50,
            'frequency': {'type': 'daily'}
        }

        result = http_request(f"{base_url}/api/habits", method='POST', data=habit_data)
        habit_id = result['id']

        # Use all 3 lives
        http_request(f"{base_url}/api/habits/{habit_id}/skip", method='POST')
        http_request(f"{base_url}/api/habits/{habit_id}/skip", method='POST')
        http_request(f"{base_url}/api/habits/{habit_id}/skip", method='POST')

        # Attempt to skip with 0 lives
        result = http_request(f"{base_url}/api/habits/{habit_id}/skip", method='POST')

        # Verify 400 error
        assert result['status'] == 400, "Should return 400 status"
        assert 'error' in result, "Should return error message"

        print("✓ Test 4: Skip with 0 lives → returns 400 error")

    finally:
        server.shutdown()
        teardown_test_env(temp_dir)


def test_05_edit_frequency_changes_should_check_today():
    """Integration test: edit frequency → should_check_today logic changes"""
    temp_dir = setup_test_env()
    server = start_test_server()
    base_url = f"http://localhost:{server.server_port}"

    try:
        # Create daily habit
        habit_data = {
            'name': 'Code review',
            'category': 'work',
            'color': '#F59E0B',
            'icon': 'code',
            'priority': 50,
            'frequency': {'type': 'daily'}
        }

        result = http_request(f"{base_url}/api/habits", method='POST', data=habit_data)
        habit_id = result['id']

        # Verify should_check_today is True for daily habit
        habits = http_request(f"{base_url}/api/habits")
        habit = next(h for h in habits if h['id'] == habit_id)
        assert habit['should_check_today'] == True, "Daily habit should be checkable today"

        # Edit to specific_days (only Monday and Wednesday)
        update_data = {
            'name': 'Code review',
            'category': 'work',
            'color': '#F59E0B',
            'icon': 'code',
            'priority': 50,
            'frequency': {
                'type': 'specific_days',
                'days': ['monday', 'wednesday']
            }
        }

        http_request(f"{base_url}/api/habits/{habit_id}", method='PUT', data=update_data)

        # Get updated habit
        habits = http_request(f"{base_url}/api/habits")
        habit = next(h for h in habits if h['id'] == habit_id)

        # Verify should_check_today reflects new frequency
        today_name = datetime.now().strftime('%A').lower()
        expected = today_name in ['monday', 'wednesday']
        assert habit['should_check_today'] == expected, f"Should check today should be {expected} for {today_name}"

        print(f"✓ Test 5: Edit frequency → should_check_today is {expected} for {today_name}")

    finally:
        server.shutdown()
        teardown_test_env(temp_dir)


def test_06_delete_removes_habit_from_storage():
    """Integration test: delete → habit removed from storage"""
    temp_dir = setup_test_env()
    server = start_test_server()
    base_url = f"http://localhost:{server.server_port}"

    try:
        # Create habit
        habit_data = {
            'name': 'Guitar practice',
            'category': 'personal',
            'color': '#EC4899',
            'icon': 'music',
            'priority': 50,
            'frequency': {'type': 'daily'}
        }

        result = http_request(f"{base_url}/api/habits", method='POST', data=habit_data)
        habit_id = result['id']

        # Verify habit exists
        habits = http_request(f"{base_url}/api/habits")
        assert len(habits) == 1, "Should have 1 habit"
        assert habits[0]['id'] == habit_id, "Should be the created habit"

        # Delete habit
        http_request(f"{base_url}/api/habits/{habit_id}", method='DELETE')

        # Verify habit removed
        habits = http_request(f"{base_url}/api/habits")
        assert len(habits) == 0, "Should have 0 habits after delete"

        # Verify not in storage file
        import api
        with open(api.HABITS_FILE, 'r') as f:
            data = json.load(f)

        assert len(data['habits']) == 0, "Storage file should have 0 habits"

        print("✓ Test 6: Delete → habit removed from storage")

    finally:
        server.shutdown()
        teardown_test_env(temp_dir)


def test_07_checkin_on_wrong_day_for_specific_days_returns_400():
    """Integration test: check-in on wrong day for specific_days → returns 400"""
    temp_dir = setup_test_env()
    server = start_test_server()
    base_url = f"http://localhost:{server.server_port}"

    try:
        # Get today's day name
        today_name = datetime.now().strftime('%A').lower()

        # Create habit for different days (not today)
        if today_name == 'monday':
            allowed_days = ['tuesday', 'wednesday']
        elif today_name == 'tuesday':
            allowed_days = ['monday', 'wednesday']
        else:
            allowed_days = ['monday', 'tuesday']

        habit_data = {
            'name': 'Gym workout',
            'category': 'health',
            'color': '#EF4444',
            'icon': 'dumbbell',
            'priority': 50,
            'frequency': {
                'type': 'specific_days',
                'days': allowed_days
            }
        }

        result = http_request(f"{base_url}/api/habits", method='POST', data=habit_data)
        habit_id = result['id']

        # Attempt to check in today (wrong day)
        result = http_request(f"{base_url}/api/habits/{habit_id}/check", method='POST')

        # Verify 400 error
        assert result['status'] == 400, "Should return 400 status"
        assert 'error' in result, "Should return error message"

        print(f"✓ Test 7: Check-in on {today_name} (not in {allowed_days}) → returns 400")

    finally:
        server.shutdown()
        teardown_test_env(temp_dir)


def test_08_get_response_includes_all_stats():
    """Integration test: GET response includes stats (streak, completion_rate, weekly_summary)"""
    temp_dir = setup_test_env()
    server = start_test_server()
    base_url = f"http://localhost:{server.server_port}"

    try:
        # Create habit with some completions
        habit_data = {
            'name': 'Meditation',
            'category': 'health',
            'color': '#10B981',
            'icon': 'brain',
            'priority': 50,
            'frequency': {'type': 'daily'}
        }

        result = http_request(f"{base_url}/api/habits", method='POST', data=habit_data)
        habit_id = result['id']

        # Add some completions
        import api
        with open(api.HABITS_FILE, 'r') as f:
            data = json.load(f)

        habit_obj = next(h for h in data['habits'] if h['id'] == habit_id)

        # Add completions for last 3 days
        for i in range(3):
            check_date = (datetime.now() - timedelta(days=2-i)).date().isoformat()
            habit_obj['completions'].append({
                'date': check_date,
                'type': 'check'
            })

        habit_obj['streak'] = {
            'current': 3,
            'best': 3,
            'lastCheckIn': datetime.now().date().isoformat()
        }

        data['lastUpdated'] = datetime.now().isoformat()
        with open(api.HABITS_FILE, 'w') as f:
            json.dump(data, f, indent=2)

        # Get habits
        habits = http_request(f"{base_url}/api/habits")
        habit = habits[0]

        # Verify all enriched stats are present
        assert 'current_streak' in habit, "Should include current_streak"
        assert 'best_streak' in habit, "Should include best_streak"
        assert 'completion_rate_30d' in habit, "Should include completion_rate_30d"
        assert 'weekly_summary' in habit, "Should include weekly_summary"
        assert 'should_check_today' in habit, "Should include should_check_today"

        # Verify streak values
        assert habit['current_streak'] == 3, "Current streak should be 3"
        assert habit['best_streak'] == 3, "Best streak should be 3"

        # Verify weekly_summary structure
        assert isinstance(habit['weekly_summary'], dict), "Weekly summary should be a dict"
        days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
        for day in days:
            assert day in habit['weekly_summary'], f"Weekly summary should include {day}"

        print("✓ Test 8: GET response includes all stats (streak, completion_rate, weekly_summary)")

    finally:
        server.shutdown()
        teardown_test_env(temp_dir)


def test_09_typecheck_passes():
    """Integration test: Typecheck passes"""
    result = os.system('python3 -m py_compile /home/moltbot/clawd/dashboard/api.py')
    assert result == 0, "Typecheck should pass for api.py"

    result = os.system('python3 -m py_compile /home/moltbot/clawd/dashboard/habits_helpers.py')
    assert result == 0, "Typecheck should pass for habits_helpers.py"

    print("✓ Test 9: Typecheck passes")


# Run all tests
if __name__ == '__main__':
    tests = [
        test_01_create_and_checkin_increments_streak,
        test_02_seven_consecutive_checkins_restore_life,
        test_03_skip_with_life_maintains_streak,
        test_04_skip_with_zero_lives_returns_400,
        test_05_edit_frequency_changes_should_check_today,
        test_06_delete_removes_habit_from_storage,
        test_07_checkin_on_wrong_day_for_specific_days_returns_400,
        test_08_get_response_includes_all_stats,
        test_09_typecheck_passes,
    ]

    passed = 0
    failed = 0

    print("Running integration tests...\n")

    for test in tests:
        try:
            test()
            passed += 1
        except AssertionError as e:
            print(f"✗ {test.__name__}: {e}")
            failed += 1
        except Exception as e:
            print(f"✗ {test.__name__}: Unexpected error: {e}")
            import traceback
            traceback.print_exc()
            failed += 1

    print(f"\n{'='*50}")
    print(f"Integration Tests: {passed} passed, {failed} failed")
    print(f"{'='*50}")

    sys.exit(0 if failed == 0 else 1)
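Test 2 above applies the life-restore rule inline rather than through the API (the API is bypassed while seeding completions). For readability, the same rule lifted into a standalone function; the helper name is hypothetical, and in the shipped code this check presumably runs inside api.py's check-in handling:

```python
# Same rule test_02 applies inline: 7 consecutive 'check' completions
# restore one life, up to the regular cap of 3.
def maybe_restore_life(habit):
    last_7 = habit['completions'][-7:]
    if len(last_7) == 7 and all(c.get('type') == 'check' for c in last_7):
        if habit['lives'] < 3:
            habit['lives'] += 1
            return True
    return False
```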
134
dashboard/tests/test_weekly_lives_integration.py
Normal file
@@ -0,0 +1,134 @@
#!/usr/bin/env python3
"""
Integration test for weekly lives recovery feature.

Tests the full flow:
1. Habit has check-ins in previous week
2. Check-in today triggers weekly lives recovery
3. Response includes livesAwarded flag
4. Lives count increases
5. Duplicate awards are prevented
"""

import sys
import os
from datetime import datetime, timedelta
import json

# Add parent directory to path
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from habits_helpers import check_and_award_weekly_lives


def test_integration_weekly_lives_award():
    """Test complete weekly lives recovery flow."""
    print("\n=== Testing Weekly Lives Recovery Integration ===\n")

    today = datetime.now().date()
    current_week_start = today - timedelta(days=today.weekday())
    previous_week_start = current_week_start - timedelta(days=7)

    # Scenario 1: New habit with check-ins in previous week
    print("Scenario 1: First award of the week")
    habit = {
        "id": "test-habit-1",
        "name": "Test Habit",
        "lives": 2,
        "completions": [
            {"date": (previous_week_start + timedelta(days=2)).isoformat(), "type": "check"},
            {"date": (previous_week_start + timedelta(days=4)).isoformat(), "type": "check"},
        ]
    }

    new_lives, was_awarded = check_and_award_weekly_lives(habit)

    assert was_awarded == True, "Expected life to be awarded"
    assert new_lives == 3, f"Expected 3 lives, got {new_lives}"
    print(f"✓ Lives awarded: {habit['lives']} → {new_lives}")
    print(f"✓ Award flag: {was_awarded}")

    # Scenario 2: Already awarded this week
    print("\nScenario 2: Prevent duplicate award")
    habit['lives'] = new_lives
    habit['lastLivesAward'] = current_week_start.isoformat()

    new_lives2, was_awarded2 = check_and_award_weekly_lives(habit)

    assert was_awarded2 == False, "Expected no duplicate award"
    assert new_lives2 == 3, f"Lives should remain at 3, got {new_lives2}"
    print(f"✓ No duplicate award: lives remain at {new_lives2}")

    # Scenario 3: Only skips in previous week
    print("\nScenario 3: Skips don't qualify for recovery")
    habit_with_skips = {
        "id": "test-habit-2",
        "name": "Habit with Skips",
        "lives": 1,
        "completions": [
            {"date": (previous_week_start + timedelta(days=2)).isoformat(), "type": "skip"},
            {"date": (previous_week_start + timedelta(days=4)).isoformat(), "type": "skip"},
        ]
    }

    new_lives3, was_awarded3 = check_and_award_weekly_lives(habit_with_skips)

    assert was_awarded3 == False, "Skips shouldn't trigger award"
    assert new_lives3 == 1, f"Lives should remain at 1, got {new_lives3}"
    print(f"✓ Skips don't count: lives remain at {new_lives3}")

    # Scenario 4: No cap on lives (can go beyond 3)
    print("\nScenario 4: Lives can exceed 3")
    habit_many_lives = {
        "id": "test-habit-3",
        "name": "Habit with Many Lives",
        "lives": 5,
        "completions": [
            {"date": (previous_week_start + timedelta(days=2)).isoformat(), "type": "check"},
        ]
    }

    new_lives4, was_awarded4 = check_and_award_weekly_lives(habit_many_lives)

    assert was_awarded4 == True, "Expected life to be awarded"
    assert new_lives4 == 6, f"Expected 6 lives, got {new_lives4}"
    print(f"✓ No cap: lives increased from 5 → {new_lives4}")

    # Scenario 5: No check-ins in previous week
    print("\nScenario 5: No check-ins = no award")
    habit_no_checkins = {
        "id": "test-habit-4",
        "name": "New Habit",
        "lives": 2,
        "completions": []
    }

    new_lives5, was_awarded5 = check_and_award_weekly_lives(habit_no_checkins)

    assert was_awarded5 == False, "No check-ins = no award"
    assert new_lives5 == 2, f"Lives should remain at 2, got {new_lives5}"
    print(f"✓ No previous week check-ins: lives remain at {new_lives5}")

    print("\n=== All Integration Tests Passed! ===\n")

    # Print summary of the feature
    print("Feature Summary:")
    print("• +1 life awarded per week if habit had ≥1 check-in in previous week")
    print("• Monday-Sunday week boundaries (ISO 8601)")
    print("• Award triggers on first check-in of current week")
    print("• Skips don't count toward recovery")
    print("• No cap on lives (can accumulate beyond 3)")
    print("• Prevents duplicate awards in same week")
    print("")


if __name__ == "__main__":
    try:
        test_integration_weekly_lives_award()
        sys.exit(0)
    except AssertionError as e:
        print(f"\n✗ Test failed: {e}\n")
        sys.exit(1)
    except Exception as e:
        print(f"\n✗ Unexpected error: {type(e).__name__}: {e}\n")
        sys.exit(1)
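The scenario walkthrough above, together with the unit tests in test_habits_helpers.py, fully specifies the recovery rule. A minimal sketch consistent with those assertions; only the function name comes from this diff, and the body is an illustration rather than the shipped habits_helpers.py:

```python
# Illustrative implementation of the tested contract: +1 life when the
# previous ISO week (Mon-Sun) had at least one real check-in, at most one
# award per week, skips excluded, and no cap on accumulated lives.
from datetime import datetime, timedelta

def check_and_award_weekly_lives(habit):
    """Return (new_lives, was_awarded) for the weekly recovery rule."""
    today = datetime.now().date()
    current_week_start = today - timedelta(days=today.weekday())  # ISO Monday
    previous_week_start = current_week_start - timedelta(days=7)

    # At most one award per week; lastLivesAward records the week it was given.
    if habit.get('lastLivesAward') == current_week_start.isoformat():
        return habit['lives'], False

    for completion in habit.get('completions', []):
        day = datetime.fromisoformat(completion['date']).date()
        in_prev_week = previous_week_start <= day < current_week_start
        if in_prev_week and completion.get('type') == 'check':
            habit['lives'] += 1  # no cap: lives may exceed 3
            habit['lastLivesAward'] = current_week_start.isoformat()
            return habit['lives'], True

    return habit['lives'], False
```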
252
dashboard/todos.json
Normal file
@@ -0,0 +1,252 @@
|
||||
{
|
||||
"lastUpdated": "2026-03-25T22:59:24.849Z",
|
||||
"items": [
|
||||
{
|
||||
"id": "prov-2026-02-25",
|
||||
"text": "Provocare: Un proiect - Pentru cine?",
|
||||
"context": "Brendan Burchard: 'Dubiul nu e problema. Oprirea e problema.' Când dubiul devine semnal să înveți (nu să te oprești), câștigi. Problema ta nu e competența (25 ani expertiză) - e TEAMA de primul pas. Credința 'clienți noi = mai multă muncă' te blochează să vezi dincolo de poveste. Adevărul: fiecare lucru pe care îl eviți îți arată EXACT unde trebuie să mergi. În business de ARTĂ (expertiza unică), scaling-ul vine prin CLARITATE despre valoare, nu volum. Problema nu e că nu ai clienți - e că nu știi pentru cine lupți. Când Brendan și-a terminat cartea în 18 zile (după ani de blocaj), nu a fost pentru bani - a fost pentru SOȚIA lui dormind sub greutatea facturilor. Schimbarea: de la 'cum supraviețuiesc' la 'pentru cine lupt'. Proiectele tale rămân 80% done pentru că le lipsește CONVICTION - nu e 'ar fi bine' ci 'TREBUIE pentru cineva anume'. Întrebarea e: 'Pentru cine fac asta?'",
|
||||
"example": "Alege UN proiect (ROA web, chatbot Maria, angajat nou, orice activ) și răspunde SINCER: 'Dacă aș renunța la asta mâine, cine ar pierde?' Dacă răspunsul e 'Nimeni specific' sau 'Ar fi util general' → e half-hearted. Fie oprești proiectul (temporar), fie găsești conviction real (cineva anume). Dacă răspunsul e 'Clientul X care depinde de rapoarte rapide' sau 'Colegă 70 ani care vrea autonomie' → e full conviction. Continuă. Nu trebuie să FACI nimic cu răspunsul - doar să îl VEZI. Exemplu ROA web: Dacă renunț mâine → cine pierde? Răspuns vag: 'Clienții ar beneficia' = half-hearted. Răspuns concret: 'Clientul Y sună de 5 ori/săptămână pentru raport X. Dacă ar avea web, și-ar lua singur' = conviction. Când vezi clar CINE beneficiază, primul pas devine natural. Dubiul nu dispare prin planuri perfecte - dispare prin primul pas, oricât de mic. Primul pas: 5 minute, un proiect, o întrebare, VEZI adevărul.",
|
||||
"domain": "self",
|
||||
"dueDate": "2026-02-25",
|
||||
"done": true,
|
||||
"doneAt": "2026-03-25T22:59:21.977Z",
|
||||
"source": "Brendan Burchard - Billionaire Coach (Conviction vs Half-heartedness)",
|
||||
"sourceUrl": "https://moltbot.tailf7372d.ts.net/echo/files.html#memory/kb/youtube/2026-02-23_billionaire-coach-abundance-mindset.md",
|
||||
"createdAt": "2026-02-25T07:00:00.000Z"
|
||||
},
|
||||
{
|
||||
"id": "prov-2026-02-24",
|
||||
"text": "Provocare: Audit Conviction - identifică proiecte half-hearted vs full",
|
||||
"context": "Half-heartedness = cel mai mare inamic al abundenței. Nu poți construi afacere, relație sau viață cu un picior înăuntru și unul afară. Brendan Burchard: 'Breakthroughul vine când lupți pentru ALTCINEVA, nu pentru supraviețuire.' Diferența: Supraviețuire = 'Cum plătesc factura?' (umpli un GOL). Abundență = 'Cui servesc cu expertiza asta?' (construiești). Wealthy people nu se gândesc la supraviețuire - se gândesc la servire, dare, construire. Când un proiect e half-hearted ('ar fi bine'), rămâne 80% done, momentum pierdut. Când e full conviction (PENTRU CINEVA anume), livrare completă, flow în loc de greutate. Exercițiul te ajută să identifici CE e cu conviction reală și CE e doar 'ar fi util'.",
|
||||
"example": "Listează proiectele curente (ROA web, Chatbot Maria, Angajat nou, Clienți noi) și pentru fiecare răspunde: E full conviction (PENTRU CINE?) sau half-hearted (ar fi bine)? De exemplu: ROA web - dacă răspunsul e 'ar fi util pentru clienți' (vag) = half-hearted. Dacă răspunsul e 'Clientul X TREBUIE să aibă acces rapid la rapoarte pentru a lua decizii la timp' (specific, cineva anume) = full conviction. Când identifici unul half-hearted, reframe-ul: NU 'ce câștig EU?' ci 'CINE beneficiază când asta e complet?' Bonus ZAPS antidot: când apare dubiul 'Nu sunt destul de deștept' (attach self) → STOP, recunoaște 'Mă ZAPS-ez?', reframe 'Ce învăț din asta?' (nu 'Mă opresc'), reset BMF (Breath 3 respirații + Movement 10 pași + Food check). Brendan: 'Doubt is not the problem. Stopping is. If doubt is a signal to learn — you win.'",
|
||||
"domain": "self",
|
||||
"dueDate": "2026-02-24",
|
||||
"done": true,
|
||||
"doneAt": "2026-03-25T22:59:13.743Z",
|
||||
"source": "Brendan Burchard - Billionaire Coach (Abundență vs Supraviețuire)",
|
||||
"sourceUrl": "https://moltbot.tailf7372d.ts.net/echo/files.html#memory/kb/youtube/2026-02-23_billionaire-coach-abundance-mindset.md",
|
||||
"createdAt": "2026-02-24T07:00:00.000Z"
|
||||
},
|
||||
{
|
||||
"id": "prov-2026-02-23",
|
||||
"text": "Provocare: Identifică tipul de business - ARTĂ sau LIFESTYLE?",
|
||||
"context": "Greșeala majoră: aplici regulile greșite pentru tipul tău de business. Monica Ion: 'Când nu știi tipul de business, e ca și cum nu știi ce boală tratezi. Orice medicament poate face mai mult rău decât bine.' Există 4 tipuri (Artă, Lifestyle, Exit, Legacy) - fiecare cu scop și reguli diferite. Succesul vine din a te cunoaște pe tine și a juca după regulile tipului tău. TOATE blocajele tale (clienți noi=mai multă muncă, prețuri scăzute, angajat greu de învățat) vin din CONFUZIE DE TIP. Dacă e ARTĂ: creștere personală + prețuri mai mari (NU mai mulți clienți). Dacă e LIFESTYLE: sisteme eficiente + documentare procese. Testul rapid: Clienții vin pentru TINE (expertiza unică) sau pentru PROCES (rezultate predictibile)? Proiectele sunt personalizate sau pattern repetabil?",
|
||||
"example": "Scenariul: Ar trebui să cauți clienți noi dar eziti ('mai multă muncă'). ARTĂ: greșit să adaugi clienți - soluția e să CREȘTI PREȚURILE pentru clienții existenți și să SELECTEZI doar cei premium. Angajatul e suport operațional (nu clone al tău). Un client perfect e mai bun decât 5 obișnuiți. LIFESTYLE: corect că e mai multă muncă - ai nevoie de SISTEME mai eficiente. Angajatul învață PROCESUL (nu expertiza ta). Documentezi proceduri standard. Sau: Nu îndrăznești să crești prețurile. ARTĂ: blocare interioară (vină/rușine/merit scăzut) - muncă pe curățenie emoțională, apoi creștere prețuri 2-3x. LIFESTYLE: nu știi numerele - calculează break-even real (ore + cheltuieli + profit motivant) și setează preț matematic.",
|
||||
"domain": "self",
|
||||
"dueDate": "2026-02-23",
|
||||
"done": true,
|
||||
"doneAt": "2026-03-25T22:59:14.522Z",
|
||||
"source": "Monica Ion - Cele 4 tipuri de business",
|
||||
"sourceUrl": "https://moltbot.tailf7372d.ts.net/echo/files.html#memory/kb/youtube/2026-02-19_cele-4-tipuri-de-business.md",
|
||||
"createdAt": "2026-02-23T07:04:14.171922+00:00"
|
||||
},
|
||||
{
|
||||
"id": "prov-2026-02-22",
|
||||
"text": "Provocare: Schimbă corpul ÎNAINTE de decizie - fiziologie pentru acțiune",
|
||||
"context": "Inacțiunea antreprenorială nu e în minte - e în CORP. Corpul ghemuire (umeri căzuți, respirație superficială) comunică: 'Nu sunt suficient. E periculos să ies.' Și mintea urmează corpul. Tony Robbins: 'Depresia are o postură. Schimbă corpul PRIMUL — mișcă-te, respiră diferit.' Corpul GENEREAZĂ starea, nu o reflectă. Când aștepți să te simți 'pregătit' pentru a acționa — corpul spune: 'Nu suntem acolo încă.' Când acționezi CU CORPUL ÎNTÂI (miști, respiri, te ridici) — starea vine DUPĂ. Nu aștepți încredere - o CREEZI cu fiziologia.",
"example": "Scenariul: Ar trebui să suni un client nou pentru un proiect mai mare. Simți ezitare: 'E prea scump, poate zice nu...' VECHIUL MOD: Stai la birou, gândești, analizezi, amâni. NOUL MOD: (1) Simți ezitarea → ridică-te imediat (2) 3x pe vârfuri (activează corpul) (3) 5 respirații profunde în piept (deschide corp, încredere) (4) 10 pași rapizi prin cameră (5) ACUM suni clientul - cu corp deschis, respirație plină. REZULTAT: Același gând ('poate zice nu'), dar corp diferit = emoție diferită = acțiune.",
"domain": "self",
"dueDate": "2026-02-22",
"done": true,
"doneAt": "2026-03-25T22:59:15.239Z",
"source": "Tony Robbins - The Secret to an Extraordinary Life",
"sourceUrl": "https://moltbot.tailf7372d.ts.net/echo/files.html#memory/kb/youtube/2026-01-31_tony-robbins-secret-extraordinary-life.md",
"createdAt": "2026-02-22T07:03:01.936301Z"
},
{
"id": "prov-2026-02-21",
"text": "Provocare: Ce s-ar schimba în TINE dacă ai vedea clar valoarea ta?",
"context": "Rezistența la 'dovezi concrete' = frica de puterea ta reală. Mintea preferă credința familiară ('nu sunt destul de deștept') în locul evidenței incomode ('am rezolvat sute de probleme complexe'). De ce? Pentru că dacă vezi dovezile și ÎNCĂ nu acționezi (să cauți clienți noi, să crești prețurile) - atunci nu mai poți da vina pe 'nu știu destul'. Și asta doare mai tare. Când începi cu 'ce s-ar schimba în mine?' în loc de 'ce dovezi am?', ocolești rezistența identitară. Nu mai e despre DOVADA externă (care activează frica: 'dacă știu și nu acționez = cine sunt eu?'). E despre VIZIUNE internă: cine vrei să fii? Și când vezi clar cine vrei să fii - dovezile devin INSTRUMENTE, nu AMENINȚĂRI.",
"example": "De exemplu: Dacă ai vedea clar că ai expertiza reală (25 ani, sute de probleme rezolvate), cum ai RESPIRA când intri într-o conversație cu un client nou? Ai sta mai drept? Ai vorbi mai calm? Ai asculta mai atent sau ai explica mai convingător? Nu e despre CE ai face (cerut preț mai mare), ci despre CINE ai fi în acel moment. Poate ai descoperi: 'Aș respira mai ușor. Nu aș mai simți nevoia să-mi dovedesc valoarea - aș OFERI valoarea cu încredere liniștită.' Și când vezi asta - scrisul celor 3 dovezi concrete devine natural, nu o amenințare.",
"domain": "self",
"dueDate": "2026-02-21",
"done": true,
"doneAt": "2026-03-25T22:59:23.303Z",
"source": "Coaching seară 20 feb + Friday Spark #95 People Pleasing",
"sourceUrl": "https://moltbot.tailf7372d.ts.net/echo/files.html#memory/kb/coaching/2026-02-21-dimineata.md",
"createdAt": "2026-02-21T07:00:00.000Z"
},
{
"id": "prov-2026-02-20",
"text": "Provocare: Identifică 3 dovezi concrete de încredere - probleme complexe rezolvate",
"context": "Încrederea în sine nu vine din gândire pozitivă sau autosugestie. Vine din valoare demonstrată prin experiență și rezultate. Îndoielile tale ('nu sunt destul de deștept ca antreprenor') ignoră 25 de ani de dovezi concrete. Pentru a le demonta, trebuie să identifici exact CE ai ȘTIUT, CE ai ȘTIUT SĂ FACI și CE REZULTATE ai OBȚINUT în situații reale. Când vezi dovezile concrete, îndoielile se dizolvă natural - nu prin forțare, ci prin evidență.",
"example": "De exemplu: client care avea probleme cu sincronizarea datelor între două sisteme. Ai analizat problema (CE ȘTIU: arhitectură bază de date, Oracle triggers), ai creat o soluție customizată (CE ȘTIU SĂ FAC: scripturi PL/SQL, testare în producție), clientul a economisit 20 ore/săptămână de lucru manual (CE REZULTAT). Asta e dovada concretă - nu teorie, ci fapte. Când ai 3 astfel de dovezi recente în față, credința 'nu sunt destul de deștept' devine absurdă în fața evidenței.",
"domain": "self",
"dueDate": "2026-02-20",
"done": true,
"doneAt": "2026-03-25T22:59:19.095Z",
"source": "Zoltan Vereș - Încrederea în Sine + Monica Ion - Cele 4 tipuri de business",
"sourceUrl": "https://moltbot.tailf7372d.ts.net/echo/files.html#memory/kb/coaching/2026-02-20-dimineata.md",
"createdAt": "2026-02-20T07:00:00.000Z"
},
{
"id": "prov-2026-02-16",
"text": "Provocare: Metoda 3M - pune angajatul sa scrie 5 keywords dupa explicatie",
"context": "La prima explicatie pe care i-o dai angajatului azi, opreste-te si spune: 'Acum scrie in 5 keywords ce ai inteles.' NU corecta imediat. Lasa-l sa greseasca. Apoi discutati diferentele. Creierul care ghiceste RETINE. Cel care copiaza UITA. Trei principii: Make it Wrong (ghiceste, nu copia), Make it Shorter (keywords, nu propozitii), Make it Again (reorganizeaza, nu rescrie). Metoda transforma explicatiile repetitive in invatare activa - nu mai 'pierzi timp', il pui sa-si construiasca propria intelegere.",
"example": "Explici angajatului cum sa faca o procedura de facturare in ROA. In loc sa repeti de 3 ori pana memoreaza mecanic, dupa prima explicatie ii spui: 'Scrie 5 cuvinte cheie din ce ai inteles.' El scrie: 'client, factura, TVA, salvare, print'. Tu vezi ca lipseste 'validare ANAF' - asta e gap-ul real. Discutati 2 minute pe gap, nu repeți totul. A doua zi, ii ceri sa reorganizeze notitele de ieri din memorie. Ce uita = ce nu a integrat. Metoda e aplicabila si pentru tine cu NLP: dupa modul, redeseneaza harta mentala din memorie, nu din notite.",
"domain": "work",
"dueDate": "2026-02-16",
"done": true,
"doneAt": "2026-03-25T22:59:24.238Z",
"source": "Thinking on Paper - 3 principii pentru retentie",
"sourceUrl": "https://moltbot.tailf7372d.ts.net/echo/files.html#memory/kb/youtube/thinking-on-paper.md",
"createdAt": "2026-02-16T07:00:00.000Z"
},
{
"id": "prov-2026-02-15",
"text": "Provocare: Reframe Mentorship - ce ai inteles TU din ultima explicatie data angajatului?",
"context": "Gandeste-te la ULTIMA explicatie pe care i-ai dat-o angajatului. Ce ai inteles TU mai bine despre propriul proces datorita acelei explicatii? Fiecare explicatie te forteaza sa-ti clarifici procesul - nu doar lui ii predai, tie iti reconstruiesti fundamentul. Dupa 25 de ani pe pilot automat, cand cineva intreaba 'de ce?', redescoperi logica din spatele deciziilor. Si uneori descoperi ca unele decizii nu mai au logica. Asta e aur.",
"example": "Angajatul intreaba: 'De ce facem backup-ul asa si nu altfel?' Tu incepi sa explici si realizezi ca metoda e din 2010, cand aveai alta structura de date. Acum ar fi mai simplu cu un script automat. Fara intrebarea lui, ai fi continuat pe pilot automat inca 5 ani. Sau: explici cum functioneaza facturarea in ROA si realizezi ca 3 pasi ar putea fi 1. Angajatul nu pierde timp - el iti face audit gratuit la procese.",
"domain": "work",
"dueDate": "2026-02-15",
"done": true,
"doneAt": "2026-03-25T22:59:24.849Z",
"source": "InfoWorld - Why We Need Junior Developers",
"sourceUrl": "https://www.infoworld.com/article/4065771/why-we-need-junior-developers.html",
"createdAt": "2026-02-15T07:00:00.000Z"
},
{
"id": "prov-2026-02-14",
"text": "Provocare: Echilibrarea unui Conflict Interior - găsește un sau-sau și echilibrează-l",
"context": "Găsește UN 'sau-sau' din viața ta — două lucruri pe care le consideri incompatibile. (1) Scrie conflictul: 'Sau sunt X, sau sunt Y'. (2) Pentru fiecare parte, găsește opusul simultan: Când ești X, cum ești deja și Y? (dovezi concrete). Când ești Y, cum ești deja și X? (dovezi concrete). (3) Observă: Când ambele sunt adevărate simultan, ce simți? Nu trebuie să rezolvi nimic — doar să vezi că cele două nu sunt incompatibile, sunt complementare. Metoda Demartini: echilibrezi percepția, nu elimini josurile.",
"example": "Conflictul tău real: 'Sau sunt programator bun, sau sunt antreprenor.' Echilibrare: Când ești programator — deja faci antreprenoriat (ai firmă, negociezi cu clienți, iei decizii de business zilnic, ai angajat pe care îl formezi). Când ești antreprenor — deja folosești mintea tehnică (automatizezi, optimizezi, rezolvi probleme sistemic). Dovada: de 25 de ani faci AMBELE simultan. Doar percepția zice că una o exclude pe cealaltă.",
"domain": "self",
"dueDate": "2026-02-14",
"done": true,
"doneAt": "2026-02-14T08:27:56.118Z",
"source": "Monica Ion - Povestea lui Marc Ep.9 (Anxietatea, frica de control și pierdere)",
"sourceUrl": "https://moltbot.tailf7372d.ts.net/echo/files.html#memory/kb/youtube/monica-ion-povestea-lui-marc-ep9-anxietatea.md",
"createdAt": "2026-02-14T07:00:00.000Z"
},
{
"id": "prov-2026-02-13",
"text": "Provocare: Linkage Personal - conectează o activitate evitată cu calitățile tale",
"context": "Alege o activitate pe care o eviți (telefon client, conversație angajat, decizie amânată). Scrie TU răspunsurile (NU cere AI-ului): (1) Cum servește această activitate lucrul pe care îl fac cel mai bine? (2) Ce calitate a mea folosesc deja identic în altă parte? (3) Ce simt în corp când imaginez că am terminat-o? Dacă rezistența scade după răspunsuri → ai găsit linkage-ul. Dacă nu scade → poate nu e activitatea ta, și asta e valid. Ideea: mintea trebuie să FACĂ munca de conectare, nu să o citească.",
"example": "Activitate evitată: emiterea facturii imediat după prestare. Linkage descoperit de Mark: facturarea = finalizare proces complet (ca în soluțiile tehnice: funcționează sau e teorie). Gândire structurată, logică, ordonată — IDENTICĂ cu rezolvarea problemelor tehnice. Rezultat: rezistența a dispărut complet, acțiunea curgea natural. La tine: poate suni un client — linkage: rezolvi probleme tehnice = oferi valoare = clientul te vrea. Soluția tehnica NU se termină când funcționează codul — se termină când clientul o folosește.",
"domain": "self",
"dueDate": "2026-02-13",
"done": true,
"doneAt": "2026-02-13T13:03:30.654Z",
"source": "Monica Ion - Povestea lui Marc Ep.8 (Mândria și identitatea personală)",
"sourceUrl": "https://moltbot.tailf7372d.ts.net/echo/files.html#memory/kb/youtube/2026-02-12_monica-ion-povestea-lui-marc-ep8.md",
"createdAt": "2026-02-13T09:30:00.000Z"
},
{
"id": "prov-2026-02-12",
"text": "Provocare: Primul Pas Minim (PPM) - alege idee și execută în MAX 10 min",
"context": "Regula PPM: Orice idee pe care o ai astăzi → identifică primul pas care: (1) Durează MAX 10 minute (2) NU necesită alte persoane (3) E CONCRET (nu 'mă gândesc', ci 'scriu', 'sun', 'trimit', 'creez'). La prima pauză (10:00-11:00): Alege UNA din ideile tale recente, identifică PPM-ul, execută-l chiar dacă nu e perfect. La 17:00 notează: Ce idee? Care PPM? L-am executat? Dacă DA: cum mă simt, următorul pas? Dacă NU: ce m-a oprit, ce PPM MAI MIC mâine?",
"example": "Exemplu concret: Ideea 'ar trebui să am task brief template pentru angajat'. PPM greu: 'Creez template complet cu toate secțiunile, testez, ajustez...' PPM SIMPLU: 'Deschid fișier task-brief-template.md și scriu primele 3 secțiuni (Task, Input, Output) în 10 minute'. Sau ideea 'trebuie să documentez soluții probleme clienți'. PPM: 'Creez folder memory/kb/roa/probleme-frecvente/ și scriu PRIMA problemă rezolvată recent în 10 minute'. Cel mai greu pas e PRIMUL - după ce ai început, creierul intră în flow mode.",
"domain": "self",
"dueDate": "2026-02-12",
"done": true,
"doneAt": "2026-02-12T12:07:04.068Z",
"source": "Multi-Agent Pattern + Living Files Theory + Context Engineering",
"sourceUrl": "https://moltbot.tailf7372d.ts.net/echo/files.html#memory/kb/coaching/2026-02-12-dimineata.md",
"createdAt": "2026-02-12T07:00:00.000Z"
},
{
"id": "prov-2026-02-11",
"text": "Provocare: Identifică un task pe care îl execuți singur și ar putea fi orchestrat",
"context": "Alege UNA din variantele: (1) Delegat la angajat - task repetitiv pe care îl faci de 10 ori și ar putea învăța? (2) Automatizat cu Echo - verificare/raport/backup care rulează manual? (3) Modelat de la colegă - proces pe care ea îl face excelent și tu îl faci mai greu? (4) Documentat pentru viitor - explicație pe care o repeți la fiecare client nou? La 17:00 notează: Ce task? Cum ar arăta orchestrat? Primul pas minim pentru orchestrare? Nu implementa imediat - doar identifică și scrie. Conștientizarea e primul pas.",
"example": "Exemple reale: (1) Explicația cum să adauge client nou în ROA - ai făcut-o de 10 ori la angajat, ar putea fi screencast + checklist. (2) Verificarea zilnică backups - rulează manual, ar putea fi script Echo automat cu alertă doar dacă fail. (3) Suportul tehnic calm - colega face excelent, tu mai nervos, ar putea cere să te învețe procesul TOTE intern. (4) Setup ANAF pentru client nou - repeți aceiași pași, ar putea fi documentație step-by-step pe care Echo o trimite automat.",
"domain": "work",
"dueDate": "2026-02-11",
"done": true,
"doneAt": "2026-02-11T16:39:39.457Z",
"source": "Claude Code Multi-Agent Orchestration + TDi Mindset Entrepreneurship",
"sourceUrl": "https://moltbot.tailf7372d.ts.net/echo/files.html#memory/kb/coaching/2026-02-11-dimineata.md",
"createdAt": "2026-02-11T07:00:00.000Z"
},
{
"id": "prov-2026-02-10",
"text": "Provocare: Body Loose, Head Clear - verifică corpul înainte de situație tensionată",
"context": "Alege UN moment când anticipezi o situație tensionată (conversație cu angajatul, gândire la proiect, task dificil). ÎNAINTE să o rezolvi: (1) Verifică corpul: Umeri sus sau jos? Maxilar strâns sau relaxat? Respirație scurtă sau adâncă? (2) Unknot yourself: 3 respirații 4-7-8 (inspiră 4 sec, ține 7, expiră 8) + relaxează conștient zona tensionată (3) Apoi acționează: Rezolvă cu 'body loose, head clear' (4) Seara notează: Diferență față de cum rezolvi de obicei?",
"example": "Angajatul întreabă din nou același lucru. În loc să simți frustrarea creștând în piept și să răspunzi strâns → observi tensiunea, faci 3 respirații, APOI răspunzi (sau îl trimiți la documentație, sau spui 'discutăm mâine'). Mesajul e același, dar tu nu acumulezi durere.",
"domain": "self",
"dueDate": "2026-02-10",
"done": true,
"source": "James Clear - 3-2-1 Newsletter (Body Loose, Head Clear) + Monica Ion - Pattern Sacrificiu-Durere-Sabotaj",
"sourceUrl": "https://moltbot.tailf7372d.ts.net/echo/files.html#memory/kb/coaching/2026-02-09-seara.md",
"createdAt": "2026-02-09T19:00:00.000Z",
"doneAt": "2026-02-11T16:39:37.436Z"
},
{
"id": "prov-2026-02-08",
"text": "Provocare: Aplică 1 tehnică din NLP ASTĂZI, notează experiența",
"context": "Alege UNA tehnică/concept din training-ul de astăzi și APLICĂ-L IMEDIAT în aceeași zi, la un moment REAL (exercițiu, conversație, blocare, emoție). La final de zi, scrie NU 'ce am învățat' (concepte) ci 'ce am APLICAT și ce s-a întâmplat' (experiență). Mintea învață prin experiență repetată, nu prin concepte teoretice. Cum înveți în training = cum vei aplica în viață. Dacă înveți prin note și 'mai târziu' → vei aplica exact așa acasă (niciodată). Dacă înveți prin aplicare instant → vei aplica exact așa acasă (automat).",
"example": "Scenariul tău real: Într-un exercițiu NLP, partenerul te blochează sau critică. În loc să rămâi în defensivă ('e greu') → aplici pattern interrupt din Tony Robbins: observi fiziologia (umeri contractați?), schimbi focusul (ce pot învăța despre cum reacționez?), schimbi limbajul ('e provocator' în loc de 'e greu'). Exercițiul devine mirror pentru tiparele tale în relații/business - exact cum reacționezi când angajatul nu înțelege sau când clientul critică.",
"domain": "self",
"dueDate": "2026-02-08",
"done": true,
"doneAt": "2026-02-08T14:32:35.511Z",
"source": "Tony Robbins - The Secret to an Extraordinary Life + Monica Ion - Legea Fractalilor",
"sourceUrl": "https://moltbot.tailf7372d.ts.net/echo/files.html#memory/kb/coaching/2026-02-08-dimineata.md",
"createdAt": "2026-02-08T07:00:00.000Z"
},
{
"id": "prov-2026-02-07",
"text": "Provocare: Închide o buclă - ce ai dat DEJA + decizie clară",
"context": "Notează UNA buclă deschisă din viața ta - orice \"ar trebui să...\" dar nu faci. Răspunde la 3 întrebări: (1) Ce am dat DEJA în schimb (în alte forme)? (2) Ce dezavantaje ar fi fost dacă rezolvam altfel? (3) Ce decizie clară iau ACUM: fie fac cu plan+dată, fie accept că NU fac. Când bucla se închide (prin percepție sau decizie), mintea se eliberează și vezi oportunități.",
"example": "Buclă: \"Ar trebui să caut clienți noi\". (1) Ce am dat: clienților actuali - suport 24/7, know-how 25 ani, disponibilitate. (2) Dezavantaje dacă găseam 10 acum: angajat nepregătit, echipă suprasolicită, burnout. (3) Decizie: ACCEPT că nu caut clienți noi PÂNĂ în martie când angajatul e autonom. Plan: martie = 1 apel/săptămână. Bucla închisă → energie liberă.",
"domain": "self",
"dueDate": "2026-02-07",
"done": true,
"doneAt": "2026-02-07T19:32:23.501Z",
"source": "Monica Ion - Povestea lui Marc Episod #5",
"sourceUrl": "https://moltbot.tailf7372d.ts.net/echo/files.html#memory/kb/youtube/2026-02-07_monica-ion-povestea-lui-marc-ep5-datorie-familie.md",
"createdAt": "2026-02-07T07:03:19.909Z"
},
{
"id": "prov-2026-02-06",
"text": "Provocare: Observă 1 aliniere + 1 fricțiune - ce îți spun despre tine?",
"context": "Observă azi UN moment când te simți energizat (aliniere) și UN moment când ești tras înapoi (fricțiune). Pentru fiecare notează: ce activitate, ce caracteristică (creativitate? rezolvare probleme? conexiune? vs repetitivitate? teamă de judecată?). Nu trebuie să faci nimic cu observațiile - doar să le vezi. Corpul știe adevărul înainte ca mintea să-l articuleze.",
"example": "Aliniere: Când automatizezi ceva și simți satisfacție - observi că e creativitatea și controlul care te energizează. Fricțiune: Când amâni să suni un client nou - observi că nu e competența (știi să vorbești), ci teama de respingere. Pattern-ul arată: vrei autonomie creativă, nu vânzare agresivă.",
"domain": "self",
"dueDate": "2026-02-06",
"done": true,
"doneAt": "2026-02-06T13:46:00.687Z",
"source": "Coaching Dimineață - Pattern-uri de Auto-Cunoaștere",
"sourceUrl": "https://moltbot.tailf7372d.ts.net/echo/files.html#memory/kb/coaching/2026-02-06-dimineata.md",
"createdAt": "2026-02-06T07:02:00.666161"
},
{
"id": "prov-2026-02-05",
"text": "Provocare: Vizualizare Prospecting - sună un client potențial (5 min)",
"context": "Alege UN client potențial real. Găsește o amintire cu client entuziasmat. Vizualizează: tu suni, el răspunde, pui propunerea, el zice 'Sună bine'. Sparge imaginea - prin fissură vezi entuziasmul din amintirea reală. Repetă 2-3 ori. Apoi sun-l azi sau mâine (sau cel puțin prepară motivul).",
"example": "Client potențial: X care ar fi perfect dar zici 'dar...'. Amintire: momentul când clientul A a zis 'da'. Vizualizezi: suni, răspunde, pui propunerea, el: 'Sună bine'. Apoi suni pe X.",
"domain": "work",
"dueDate": "2026-02-05",
"done": true,
"source": "Gândul de Seară - NLP Prospecting",
"sourceUrl": "https://moltbot.tailf7372d.ts.net/echo/files.html#memory/kb/coaching/2026-02-05-seara.md",
"createdAt": "2026-02-05T19:00:00.000Z",
"doneAt": "2026-02-06T13:45:58.234Z"
},
{
"id": "prov-2026-02-04",
"text": "Provocare: Vizualizare NLP - transferă motivația (5 min)",
"context": "Alege O acțiune pe care o tot amâni. Găsește o amintire cu plăcere intensă (vacanță, succes, flow). Vizualizează amintirea luminoasă și caldă. Pune acțiunea amânată în față. 'Sparge' imaginea - vezi plăcerea prin fissură. Închide. Repetă de 2 ori. Observă schimbarea emoțională.",
"example": "Acțiunea: să trimiți un email de prospecting către un potențial client. Amintirea: momentul când ai terminat un proiect mare și clientul era entuziasmat. Când 'spargi' imaginea și vezi entuziasmul din spate, creierul începe să asocieze email-ul cu acel sentiment de succes.",
"domain": "self",
"dueDate": "2026-02-04",
"done": true,
"doneAt": "2026-02-04T14:38:17.505Z",
"source": "Meditație NLP - Vizualizare pentru Motivație",
"sourceUrl": "https://moltbot.tailf7372d.ts.net/echo/files.html#memory/kb/projects/grup-sprijin/biblioteca/meditatie-vizualizare-motivatie.md",
"createdAt": "2026-02-04T07:00:00.000Z"
},
{
"id": "prov-2026-02-03",
"text": "Provocare: Răspunde la una din întrebări despre umbrele tale (3 min)",
"context": "Alege UNA din aceste întrebări și scrie răspunsul pe hârtie sau în telefon: 1) Ce complimente refuzi sau minimizezi? 2) Ce ai face dacă nu te-ar judeca nimeni? 3) Ce te irită la alții? Nu trebuie să faci nimic cu răspunsul - doar să-l vezi. Umbrele consumă energie să le ținem ascunse.",
"example": "Exemplu de umbră: 'Nu mă consider destul de deștept ca antreprenor' - asta e o parte pe care o ascunzi. Când o accepți ('ok, am și limite'), eliberezi energia pe care o consumi să o maschezi cu scuze sau evitare.",
"domain": "self",
"dueDate": "2026-02-03",
"done": true,
"doneAt": "2026-02-03T21:16:13.452Z",
"source": "Zoltan Vereș - Umbrele Workshop",
"sourceUrl": "https://moltbot.tailf7372d.ts.net/echo/files.html#memory/kb/youtube/2026-02-02_zoltan-veres-umbrele-workshop-complet.md",
"createdAt": "2026-02-03T07:00:00.000Z"
}
]
}
1029
dashboard/workspace.html
Normal file
File diff suppressed because it is too large
1
dashboard/youtube-notes
Symbolic link
@@ -0,0 +1 @@
/home/moltbot/echo-core/memory/kb/youtube
@@ -141,7 +141,7 @@ Când lansez sub-agent, îi dau context: AGENTS.md, SOUL.md, USER.md + relevant
- **Link YouTube:** → răspund "👍 Execut acum" sau "👍 Programez noapte 23:00" cu `[[reply_to_current]]` → APOI **RULEZ** `tools/youtube_subs.py` (vezi FLUX-JOBURI.md)
- **Bon PDF:** → dry run, confirmare cu `[[reply_to_current]]`, save
- **Task:** React 👍 → add/done task
- **Seară (>22:00 București):** → programez automat in approved_tasks.md pentru joburile de noapte (night-execute), nu execut imediat
- **Seară (>22:00 București):** → adaug în `memory/approved-tasks.md` pentru procesare programată; nu execut imediat

**REGULĂ RĂSPUNSURI:** Când răspund la mesaje directe (link-uri, tasks, comenzi), folosesc ÎNTOTDEAUNA `[[reply_to_current]]` pentru a răspunde EXACT în canalul de unde a venit mesajul, NU în "ultimul canal activ".
13
scripts/update_crontab.sh
Executable file
@@ -0,0 +1,13 @@
#!/usr/bin/env bash
# update_crontab.sh — post-migration crontab update for Marius.
# Replaces the clawd/tools/backup_config.sh line with the echo-core equivalent.
# Idempotent: safe to re-run.
set -euo pipefail
current=$(crontab -l 2>/dev/null || true)
new=$(printf '%s\n' "$current" | sed -E 's#/home/moltbot/clawd/tools/backup_config\.sh#/home/moltbot/echo-core/tools/backup_config.sh#g')
if [ "$current" != "$new" ]; then
  printf '%s\n' "$new" | crontab -
  echo "Crontab updated: backup_config.sh path remapped to echo-core."
else
  echo "Crontab unchanged (already pointing at echo-core, or backup_config line absent)."
fi
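Before running this against the live crontab, it can help to preview the remap. A minimal dry-run sketch in Python — not part of the PR, just a reviewer aside; the diff-style output format is illustrative:

```
# Hypothetical dry-run preview of what update_crontab.sh would change.
import subprocess

def preview_crontab_remap() -> None:
    # `crontab -l` exits non-zero when the user has no crontab; mirror
    # the script's `|| true` by tolerating that case.
    proc = subprocess.run(["crontab", "-l"], capture_output=True, text=True)
    current = proc.stdout if proc.returncode == 0 else ""
    new = current.replace(
        "/home/moltbot/clawd/tools/backup_config.sh",
        "/home/moltbot/echo-core/tools/backup_config.sh",
    )
    if current == new:
        print("No change — crontab already points at echo-core.")
    else:
        # The replacement never changes the line count, so pairwise zip is safe.
        for old_line, new_line in zip(current.splitlines(), new.splitlines()):
            if old_line != new_line:
                print(f"- {old_line}\n+ {new_line}")

if __name__ == "__main__":
    preview_crontab_remap()
```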
218
src/scheduler.py
218
src/scheduler.py
@@ -15,6 +15,7 @@ import tempfile
from datetime import datetime, timezone
from pathlib import Path
from typing import Awaitable, Callable
from zoneinfo import ZoneInfo

from apscheduler.schedulers.asyncio import AsyncIOScheduler
from apscheduler.triggers.cron import CronTrigger
@@ -39,6 +40,12 @@ JOB_TIMEOUT = 300 # 5-minute default per job execution

_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9-]{0,62}$")
_MAX_PROMPT_LEN = 10_000
_MAX_SHELL_OUTPUT = 1500  # chars of stdout to forward to channel
_MAX_STDERR_REPORT = 500  # chars of stderr on non-zero exit
_VALID_REPORT_ON = {"always", "changes", "never"}
_VALID_KINDS = {"claude", "shell"}
_SCHEDULER_TZ = ZoneInfo("Europe/Bucharest")
_MARKER_RE = re.compile(r"^GSTACK-CRON:\s+changes=(\d+)\s*$", re.MULTILINE)

# ---------------------------------------------------------------------------
# Scheduler class
@@ -55,7 +62,7 @@ class Scheduler:
    ) -> None:
        self._send_callback = send_callback
        self._config = config
        self._scheduler = AsyncIOScheduler()
        self._scheduler = AsyncIOScheduler(timezone=_SCHEDULER_TZ)
        self._jobs: list[dict] = []

    # ------------------------------------------------------------------
@@ -138,6 +145,94 @@ class Scheduler:
        logger.info("Added job '%s' (cron: %s, channel: %s)", name, cron, channel)
        return job

    def add_shell_job(
        self,
        name: str,
        cron: str,
        channel: str,
        command: list[str],
        report_on: str = "changes",
        timeout: int | None = None,
    ) -> dict:
        """Validate, add a shell job to list, save, and schedule.

        Shell jobs execute an arbitrary command via subprocess. Reporting policy:
        - report_on="always": always forward stdout (truncated) on exit 0
        - report_on="never": never forward stdout on exit 0
        - report_on="changes": parse GSTACK-CRON marker ('changes=N') and
          forward stdout only if N>0
        Non-zero exit status ALWAYS reports stderr regardless of report_on.
        """
        # Validate name
        if not _NAME_RE.match(name):
            raise ValueError(
                f"Invalid job name '{name}'. Must match: lowercase alphanumeric "
                "and hyphens, 1-63 chars, starting with alphanumeric."
            )

        # Duplicate check (across claude+shell)
        if any(j["name"] == name for j in self._jobs):
            raise ValueError(f"Job '{name}' already exists")

        # Validate cron expression
        try:
            CronTrigger.from_crontab(cron, timezone=_SCHEDULER_TZ)
        except (ValueError, KeyError) as exc:
            raise ValueError(f"Invalid cron expression '{cron}': {exc}")

        # Validate channel
        if not isinstance(channel, str) or not channel.strip():
            raise ValueError("Channel must be a non-empty string")

        # Validate command
        if (
            not isinstance(command, list)
            or not command
            or not all(isinstance(c, str) and c.strip() for c in command)
        ):
            raise ValueError(
                "Command must be a non-empty list of non-empty strings"
            )

        # Validate report_on
        if report_on not in _VALID_REPORT_ON:
            raise ValueError(
                f"Invalid report_on '{report_on}'. Must be one of: "
                f"{', '.join(sorted(_VALID_REPORT_ON))}"
            )

        # Validate timeout
        if timeout is not None:
            if not isinstance(timeout, int) or isinstance(timeout, bool):
                raise ValueError("Timeout must be an int (seconds)")
            if timeout < 1 or timeout > 3600:
                raise ValueError("Timeout must be between 1 and 3600 seconds")

        job = {
            "name": name,
            "kind": "shell",
            "cron": cron,
            "channel": channel,
            "command": list(command),
            "report_on": report_on,
            "timeout": timeout,
            "enabled": True,
            "last_run": None,
            "last_status": None,
            "next_run": None,
        }

        self._jobs.append(job)
        self._schedule_job(job)
        self._update_next_run(job)
        self._save_jobs()

        logger.info(
            "Added shell job '%s' (cron: %s, channel: %s, report_on: %s)",
            name, cron, channel, report_on,
        )
        return job

    def remove_job(self, name: str) -> bool:
        """Remove job from list and APScheduler. Returns True if found."""
        for i, job in enumerate(self._jobs):
@@ -263,10 +358,36 @@ class Scheduler:
        await self._execute_job(job)

    async def _execute_job(self, job: dict) -> str:
        """Execute a job: run Claude CLI, update state, send output."""
        """Execute a job: dispatch by kind, update state, forward output."""
        name = job["name"]
        job["last_run"] = datetime.now(timezone.utc).isoformat()

        kind = job.get("kind", "claude")
        if kind == "shell":
            result_text = await self._execute_shell_job(job)
        else:
            result_text = await self._execute_claude_job(job)

        # Update next_run from APScheduler
        self._update_next_run(job)
        # Save state
        self._save_jobs()

        # Send output via callback if we have something to send
        if result_text and self._send_callback:
            try:
                await self._send_callback(job["channel"], result_text)
            except Exception as exc:
                logger.error("Job '%s' send_callback failed: %s", name, exc)
        elif not result_text:
            logger.debug("Job '%s' produced no output, skipping send", name)

        return result_text or ""

    async def _execute_claude_job(self, job: dict) -> str:
        """Run a Claude CLI job and return the response text (or error marker)."""
        name = job["name"]

        # Build CLI command
        cmd = [
            CLAUDE_BIN, "-p", job["prompt"],
@@ -283,7 +404,6 @@ class Scheduler:
        if job.get("allowed_tools"):
            cmd += ["--allowedTools"] + job["allowed_tools"]

        # Run in thread to not block event loop
        result_text = ""
        try:
            proc = await asyncio.to_thread(
@@ -322,23 +442,85 @@ class Scheduler:
            result_text = f"[cron:{name}] Error: {exc}"
            logger.error("Job '%s' unexpected error: %s", name, exc)

        # Update next_run from APScheduler
        self._update_next_run(job)

        # Save state
        self._save_jobs()

        # Send output via callback
        if not result_text:
            logger.warning("Job '%s' produced empty result, skipping send", name)
        elif self._send_callback:
            try:
                await self._send_callback(job["channel"], result_text)
            except Exception as exc:
                logger.error("Job '%s' send_callback failed: %s", name, exc)

        return result_text

    async def _execute_shell_job(self, job: dict) -> str:
        """Run a shell command job, honour report_on policy, return text to forward.

        Exit != 0 always reports stderr (trimmed). Exit == 0 obeys report_on:
        'always' forwards stdout, 'never' stays silent, 'changes' parses the
        GSTACK-CRON marker ('changes=N') and forwards stdout only if N>0.
        Missing/malformed marker is logged and treated as 'no changes'.
        """
        name = job["name"]
        cmd = list(job["command"])
        timeout = job.get("timeout")
        if not isinstance(timeout, int) or timeout <= 0:
            timeout = JOB_TIMEOUT
        report_on = job.get("report_on", "changes")

        try:
            proc = await asyncio.to_thread(
                subprocess.run,
                cmd,
                capture_output=True,
                text=True,
                timeout=timeout,
                cwd=PROJECT_ROOT,
                env=_safe_env(),
            )
        except subprocess.TimeoutExpired:
            job["last_status"] = "error"
            logger.error("Shell job '%s' timed out after %ss", name, timeout)
            return f"[cron:{name}] Error: timed out after {timeout}s"
        except Exception as exc:
            job["last_status"] = "error"
            logger.error("Shell job '%s' failed to launch: %s", name, exc)
            return f"[cron:{name}] Error: {exc}"

        if proc.returncode != 0:
            job["last_status"] = "error"
            stderr_trim = (proc.stderr or "").strip()[:_MAX_STDERR_REPORT]
            logger.error(
                "Shell job '%s' exit %d: %s", name, proc.returncode, stderr_trim,
            )
            return f"[cron:{name}] exit {proc.returncode}: {stderr_trim}"

        job["last_status"] = "ok"
        stdout = proc.stdout or ""

        if report_on == "never":
            logger.info("Shell job '%s' ok (report_on=never, silent)", name)
            return ""

        if report_on == "always":
            logger.info("Shell job '%s' ok (report_on=always)", name)
            return stdout[:_MAX_SHELL_OUTPUT]

        # report_on == "changes"
        match = _MARKER_RE.search(stdout)
        if not match:
            logger.warning(
                "Shell job '%s' missing GSTACK-CRON marker "
                "(report_on=changes, staying silent)",
                name,
            )
            return ""
        try:
            n = int(match.group(1))
        except ValueError:
            logger.warning(
                "Shell job '%s' GSTACK-CRON marker has non-int payload", name,
            )
            return ""

        if n <= 0:
            logger.info("Shell job '%s' ok (0 changes, silent)", name)
            return ""

        logger.info("Shell job '%s' ok (%d changes, forwarding)", name, n)
        return stdout[:_MAX_SHELL_OUTPUT]

    def _update_next_run(self, job: dict) -> None:
        """Update job's next_run from APScheduler."""
        try:
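For reference, registering one of the new shell jobs through this API looks like the sketch below. The `add_shell_job` signature and field constraints come straight from the diff above; the scheduler construction, job name, channel, and callback are illustrative assumptions, not jobs shipped in this PR:

```
# Sketch: registering a shell job against the new API. `send_to_channel`
# and `config` are placeholders for the real wiring in src/.
scheduler = Scheduler(send_callback=send_to_channel, config=config)

job = scheduler.add_shell_job(
    name="anaf-monitor",          # must match ^[a-z0-9][a-z0-9-]{0,62}$
    cron="0 10 * * 1-5",          # Europe/Bucharest, per _SCHEDULER_TZ
    channel="whatsapp-main",      # where forwarded stdout lands (example name)
    command=[".venv/bin/python", "tools/anaf-monitor/monitor_v2.py"],
    report_on="changes",          # silent unless GSTACK-CRON: changes=N with N>0
    timeout=120,                  # seconds, validated to 1..3600
)
assert job["kind"] == "shell" and job["enabled"] is True
```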
100
tests/test_anaf_marker.py
Normal file
@@ -0,0 +1,100 @@
"""Regression guard: tools/anaf-monitor/monitor_v2.py must emit the
GSTACK-CRON marker so the Echo-Core scheduler (report_on="changes") can
decide whether to forward stdout to a channel.

Two checks:
1. Static — the marker print statement is present in the script source.
2. Runtime — running the script via subprocess in an isolated cwd (with
   an empty config, so no pages are fetched) produces
   a trailing line matching ^GSTACK-CRON: changes=\\d+$.
"""

import json
import re
import subprocess
import sys
from pathlib import Path

import pytest

SCRIPT = (
    Path(__file__).resolve().parent.parent
    / "tools"
    / "anaf-monitor"
    / "monitor_v2.py"
)

MARKER_RE = re.compile(r"^GSTACK-CRON:\s+changes=(\d+)\s*$", re.MULTILINE)


def test_script_exists():
    assert SCRIPT.is_file(), f"Expected ANAF monitor at {SCRIPT}"


def test_anaf_monitor_source_contains_marker_print():
    """Weak but fast regression guard: the marker print must exist in source."""
    src = SCRIPT.read_text(encoding="utf-8")
    assert 'print(f"GSTACK-CRON: changes=' in src, (
        "monitor_v2.py must emit GSTACK-CRON marker on its own line — "
        "contract with the Echo-Core shell scheduler (report_on='changes')."
    )


def test_anaf_monitor_emits_gstack_marker(tmp_path):
    """Run the script end-to-end with an empty config and assert the marker."""
    work_dir = tmp_path / "anaf"
    work_dir.mkdir()

    # Empty config → zero pages processed → num_changes = 0
    (work_dir / "config.json").write_text(json.dumps({"pages": []}))

    # Copy the script to the isolated dir so SCRIPT_DIR-relative paths
    # (hashes.json, versions.json, snapshots/, monitor.log) don't pollute
    # the repo.
    local_script = work_dir / "monitor_v2.py"
    local_script.write_text(SCRIPT.read_text(encoding="utf-8"))

    proc = subprocess.run(
        [sys.executable, str(local_script)],
        capture_output=True,
        text=True,
        timeout=30,
        cwd=str(work_dir),
    )

    assert proc.returncode == 0, (
        f"Script exited {proc.returncode}, stderr: {proc.stderr}"
    )

    match = MARKER_RE.search(proc.stdout)
    assert match is not None, (
        "monitor_v2.py did not emit a GSTACK-CRON marker. "
        f"stdout was:\n{proc.stdout}"
    )
    # With empty config there are zero changes.
    assert int(match.group(1)) == 0


def test_anaf_monitor_marker_is_last_line(tmp_path):
    """Marker should be the final meaningful line so log-tailers can parse."""
    work_dir = tmp_path / "anaf"
    work_dir.mkdir()
    (work_dir / "config.json").write_text(json.dumps({"pages": []}))
    local_script = work_dir / "monitor_v2.py"
    local_script.write_text(SCRIPT.read_text(encoding="utf-8"))

    proc = subprocess.run(
        [sys.executable, str(local_script)],
        capture_output=True,
        text=True,
        timeout=30,
        cwd=str(work_dir),
    )
    assert proc.returncode == 0

    # Strip trailing whitespace/newlines, grab last non-empty line.
    lines = [ln for ln in proc.stdout.splitlines() if ln.strip()]
    assert lines, "Script produced no stdout lines"
    assert MARKER_RE.match(lines[-1]), (
        f"Last stdout line is not the GSTACK-CRON marker. Got: {lines[-1]!r}"
    )
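The producer side of this contract is deliberately tiny. A minimal sketch of a conforming script — the change-counting loop is a placeholder, not what monitor_v2.py actually does; only the final marker line is the part the scheduler and these tests care about:

```
# Sketch of the GSTACK-CRON contract from the producer side.
import json
from pathlib import Path

def main() -> int:
    config = json.loads(Path("config.json").read_text(encoding="utf-8"))
    changes = 0
    for page in config.get("pages", []):
        # ... fetch page, diff against stored hash, bump `changes` ...
        pass
    # Contract: marker on its own line, as the last meaningful stdout line.
    print(f"GSTACK-CRON: changes={changes}")
    return 0

if __name__ == "__main__":
    raise SystemExit(main())
```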
163
tests/test_cron_jobs_schema.py
Normal file
@@ -0,0 +1,163 @@
"""Validate cron/jobs.json schema per kind.

Every job must be either `kind: "shell"` (with command/report_on/channel)
or Claude-style (prompt/model/channel). Required fields are enforced
per kind. Cron expressions must parse.
"""

from __future__ import annotations

import json
from pathlib import Path

import pytest
from apscheduler.triggers.cron import CronTrigger

PROJECT_ROOT = Path(__file__).resolve().parents[1]
JOBS_FILE = PROJECT_ROOT / "cron" / "jobs.json"

VALID_REPORT_ON = {"always", "changes", "never"}
# Keep in sync with src.claude_session.VALID_MODELS (informational — we don't
# import there to avoid pulling the whole module into the schema test).
VALID_MODELS = {"haiku", "sonnet", "opus"}


@pytest.fixture(scope="module")
def jobs():
    text = JOBS_FILE.read_text(encoding="utf-8")
    data = json.loads(text)
    assert isinstance(data, list), "jobs.json must be a JSON array"
    return data


def test_jobs_file_exists_and_parses(jobs):
    assert isinstance(jobs, list)


def test_names_are_unique(jobs):
    names = [j["name"] for j in jobs]
    assert len(names) == len(set(names)), f"duplicate names: {names}"


def test_every_job_has_name_and_cron(jobs):
    for j in jobs:
        assert "name" in j and isinstance(j["name"], str) and j["name"]
        assert "cron" in j and isinstance(j["cron"], str) and j["cron"]


def test_every_cron_expression_parses(jobs):
    for j in jobs:
        try:
            CronTrigger.from_crontab(j["cron"])
        except Exception as exc:  # pragma: no cover — diagnostic
            pytest.fail(f"job {j['name']!r} has invalid cron {j['cron']!r}: {exc}")


def test_enabled_is_bool(jobs):
    for j in jobs:
        assert isinstance(j.get("enabled"), bool), (
            f"job {j['name']!r} missing/non-bool enabled"
        )


def test_shell_jobs_have_required_fields(jobs):
    shell_jobs = [j for j in jobs if j.get("kind") == "shell"]
    assert shell_jobs, "expected at least one shell job"

    for j in shell_jobs:
        name = j["name"]
        # channel
        assert isinstance(j.get("channel"), str) and j["channel"], (
            f"shell job {name!r} has empty/missing channel"
        )
        # command
        cmd = j.get("command")
        assert isinstance(cmd, list) and cmd, (
            f"shell job {name!r} has empty/missing command"
        )
        assert all(isinstance(c, str) and c.strip() for c in cmd), (
            f"shell job {name!r} has non-string entry in command: {cmd!r}"
        )
        # report_on
        report_on = j.get("report_on")
        assert report_on in VALID_REPORT_ON, (
            f"shell job {name!r} has invalid report_on={report_on!r}"
        )
        # timeout (optional but if present must be a positive int)
        to = j.get("timeout")
        if to is not None:
            assert isinstance(to, int) and not isinstance(to, bool), (
                f"shell job {name!r} has non-int timeout"
            )
            assert 1 <= to <= 3600, (
                f"shell job {name!r} timeout out of range: {to}"
            )


def test_claude_jobs_have_required_fields(jobs):
    claude_jobs = [j for j in jobs if j.get("kind", "claude") == "claude"]
    assert claude_jobs, "expected at least one claude job"

    for j in claude_jobs:
        name = j["name"]
        # channel
        assert isinstance(j.get("channel"), str) and j["channel"], (
            f"claude job {name!r} has empty/missing channel"
        )
        # prompt
        prompt = j.get("prompt")
        assert isinstance(prompt, str) and prompt.strip(), (
            f"claude job {name!r} has empty/missing prompt"
        )
        # model
        model = j.get("model")
        assert model in VALID_MODELS, (
            f"claude job {name!r} has invalid model={model!r}"
        )
        # allowed_tools
        allowed = j.get("allowed_tools")
        assert isinstance(allowed, list), (
            f"claude job {name!r} has non-list allowed_tools"
        )
        for t in allowed:
            assert isinstance(t, str), (
                f"claude job {name!r} has non-string tool: {t!r}"
            )


def test_shell_jobs_reference_existing_scripts(jobs):
    """Every shell command's first path-looking argument must refer to a file
    that exists in the repo (so we don't ship a jobs.json that runs missing
    scripts on day one)."""
    shell_jobs = [j for j in jobs if j.get("kind") == "shell"]
    for j in shell_jobs:
        cmd = j["command"]
        # Find the first argument that looks like a relative path
        # (contains "/" and doesn't start with "-").
        script = next(
            (a for a in cmd[1:] if "/" in a and not a.startswith("-")),
            None,
        )
        if script is None:
            continue
        script_path = PROJECT_ROOT / script
        assert script_path.exists(), (
            f"shell job {j['name']!r} references missing script: {script}"
        )


def test_no_clawd_paths_remain_in_claude_prompts(jobs):
    """Sanity check: no imported prompt still points at /home/moltbot/clawd/."""
    offenders: list[str] = []
    for j in jobs:
        if j.get("kind", "claude") != "claude":
            continue
        prompt = j.get("prompt", "")
        if "/home/moltbot/clawd/" in prompt:
            offenders.append(j["name"])
        if "cd ~/clawd " in prompt or prompt.endswith("cd ~/clawd") or "cd ~/clawd\n" in prompt:
            offenders.append(j["name"])
    assert not offenders, (
        f"these claude jobs still reference /home/moltbot/clawd/ or cd ~/clawd: "
        f"{offenders}"
    )
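Taken together, the schema these tests pin down looks like the following — two hypothetical `cron/jobs.json` entries, one per kind. Names, commands, and prompts are examples, not the jobs imported in this PR:

```
# Hypothetical jobs.json content that satisfies the schema tests above.
import json

jobs = [
    {
        "name": "anaf-monitor",
        "kind": "shell",
        "cron": "0 10 * * 1-5",
        "channel": "whatsapp-main",
        "command": [".venv/bin/python", "tools/anaf-monitor/monitor_v2.py"],
        "report_on": "changes",
        "timeout": 120,
        "enabled": True,
    },
    {
        "name": "morning-coaching",
        "kind": "claude",
        "cron": "0 7 * * *",
        "channel": "whatsapp-main",
        "prompt": "Scrie gândul de dimineață pe baza memory/kb/coaching/.",
        "model": "sonnet",
        "allowed_tools": ["Read", "Write"],
        "enabled": True,
    },
]
print(json.dumps(jobs, indent=2, ensure_ascii=False))
```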
76
tests/test_dashboard_constants.py
Normal file
@@ -0,0 +1,76 @@
"""Validate the dashboard `constants` module exposes the post-consolidation paths."""
from __future__ import annotations

import sys
from pathlib import Path

import pytest

PROJECT_ROOT = Path(__file__).resolve().parents[1]
DASH = PROJECT_ROOT / "dashboard"

# Make dashboard/ importable (same trick api.py does at runtime)
if str(DASH) not in sys.path:
    sys.path.insert(0, str(DASH))


@pytest.fixture(scope="module")
def constants():
    # Imported as a module, not via `from dashboard import constants`,
    # because handlers use bare `import constants`.
    import constants as _c  # type: ignore
    return _c


def test_base_dir_is_echo_core(constants):
    assert constants.BASE_DIR == PROJECT_ROOT
    assert constants.BASE_DIR.name == "echo-core"


def test_echo_core_dir_equals_base_dir(constants):
    """Post-consolidation: ECHO_CORE_DIR and BASE_DIR point at the same place."""
    assert constants.ECHO_CORE_DIR == constants.BASE_DIR


def test_git_workspace_is_echo_core(constants):
    """Legacy clawd workspace must be gone."""
    assert constants.GIT_WORKSPACE == constants.BASE_DIR
    assert "clawd" not in str(constants.GIT_WORKSPACE)


def test_allowed_workspaces_do_not_include_clawd(constants):
    for w in constants.ALLOWED_WORKSPACES:
        assert "clawd" not in str(w), f"clawd leaked into ALLOWED_WORKSPACES: {w}"


def test_allowed_workspaces_include_echo_core_and_workspace(constants):
    allowed = {str(p) for p in constants.ALLOWED_WORKSPACES}
    assert str(constants.BASE_DIR) in allowed
    assert str(constants.WORKSPACE_DIR) in allowed


def test_venv_python_is_dot_venv(constants):
    """The dashboard must spawn the .venv python (not legacy `venv/`)."""
    assert constants.VENV_PYTHON == constants.BASE_DIR / ".venv" / "bin" / "python3"
    assert ".venv" in str(constants.VENV_PYTHON)


def test_notes_dir_points_at_in_repo_memory(constants):
    """memory/ lives inside echo-core post-consolidation."""
    assert constants.NOTES_DIR == constants.BASE_DIR / "memory" / "kb" / "youtube"


def test_eco_services_list(constants):
    assert "echo-core" in constants.ECO_SERVICES
    assert "echo-whatsapp-bridge" in constants.ECO_SERVICES
    assert "echo-taskboard" in constants.ECO_SERVICES


def test_echo_log_and_sessions_paths(constants):
    assert constants.ECHO_LOG_FILE == constants.BASE_DIR / "logs" / "echo-core.log"
    assert constants.ECHO_SESSIONS_FILE == constants.BASE_DIR / "sessions" / "active.json"


def test_habits_file_is_inside_dashboard(constants):
    assert constants.HABITS_FILE == constants.KANBAN_DIR / "habits.json"
    assert constants.KANBAN_DIR == constants.BASE_DIR / "dashboard"
170
tests/test_dashboard_cron_endpoint.py
Normal file
@@ -0,0 +1,170 @@
"""Tests for the /api/cron endpoint (echo-core flat schema, no UTC→local conversion)."""
from __future__ import annotations

import json
import sys
from pathlib import Path

import pytest

PROJECT_ROOT = Path(__file__).resolve().parents[1]
DASH = PROJECT_ROOT / "dashboard"

if str(DASH) not in sys.path:
    sys.path.insert(0, str(DASH))


@pytest.fixture(scope="module")
def cron_module():
    from handlers import cron as _c  # type: ignore
    return _c


@pytest.fixture
def handler(cron_module):
    class _Stub(cron_module.CronHandlers):
        def __init__(self):
            self.captured = None
            self.captured_code = None

        def send_json(self, data, code=200):
            self.captured = data
            self.captured_code = code

    return _Stub()


def _write_jobs(tmp_path: Path, jobs: list) -> Path:
    cron_dir = tmp_path / "cron"
    cron_dir.mkdir(parents=True, exist_ok=True)
    (cron_dir / "jobs.json").write_text(json.dumps(jobs), encoding="utf-8")
    return tmp_path


def test_parse_cron_time_no_utc_conversion(cron_module):
    """Echo-core cron strings are already Europe/Bucharest; no +3 shift."""
    # 10:00 Bucharest should stay 10:00 in the display.
    assert cron_module._parse_cron_time("0 10 * * *") == "10:00"
    assert cron_module._parse_cron_time("30 8 * * 1-5") == "08:30"


def test_parse_cron_time_handles_hour_range(cron_module):
    """A cron like `0 9-17 * * *` should display the starting hour."""
    assert cron_module._parse_cron_time("0 9-17 * * *") == "09:00"


def test_parse_cron_time_falls_back_on_unexpected_expr(cron_module):
    assert cron_module._parse_cron_time("@hourly") == "@hourly"[:15]
    assert cron_module._parse_cron_time("bad") == "bad"


def test_iso_to_epoch_ms_handles_empty(cron_module):
    assert cron_module._iso_to_epoch_ms("") == 0
    assert cron_module._iso_to_epoch_ms(None) == 0
    assert cron_module._iso_to_epoch_ms("not a date") == 0


def test_iso_to_epoch_ms_parses_iso(cron_module):
    # Well-known epoch
    ms = cron_module._iso_to_epoch_ms("1970-01-01T00:00:00+00:00")
    assert ms == 0


def test_missing_jobs_file_returns_empty(tmp_path, monkeypatch, handler):
    import constants  # type: ignore
    monkeypatch.setattr(constants, "BASE_DIR", tmp_path)
    handler.handle_cron_status()
    assert handler.captured["jobs"] == []
    assert "No jobs file" in handler.captured["error"]


def test_non_list_jobs_file_is_rejected(tmp_path, monkeypatch, handler):
    import constants  # type: ignore
    cron_dir = tmp_path / "cron"
    cron_dir.mkdir()
    (cron_dir / "jobs.json").write_text('{"not": "a list"}', encoding="utf-8")
    monkeypatch.setattr(constants, "BASE_DIR", tmp_path)
    handler.handle_cron_status()
    assert handler.captured["jobs"] == []
    assert "shape" in handler.captured["error"].lower()


def test_disabled_jobs_are_filtered_out(tmp_path, monkeypatch, handler):
    import constants  # type: ignore
    _write_jobs(tmp_path, [
        {"name": "foo", "cron": "0 10 * * *", "enabled": True},
        {"name": "bar", "cron": "0 11 * * *", "enabled": False},
    ])
    monkeypatch.setattr(constants, "BASE_DIR", tmp_path)
    handler.handle_cron_status()
    names = [j["name"] for j in handler.captured["jobs"]]
    assert names == ["foo"]
    assert handler.captured["total"] == 1


def test_frontend_shape_is_preserved(tmp_path, monkeypatch, handler):
    """Output must carry: id, name, time, schedule, ranToday, lastStatus,
    lastRunAtMs, nextRunAtMs."""
    import constants  # type: ignore
    _write_jobs(tmp_path, [
        {
            "name": "anaf-monitor",
            "cron": "0 10 * * 1-5",
            "enabled": True,
            "last_run": "2026-04-21T10:00:00+03:00",
            "next_run": "2026-04-22T10:00:00+03:00",
            "last_status": "ok",
        },
    ])
    monkeypatch.setattr(constants, "BASE_DIR", tmp_path)
    handler.handle_cron_status()

    jobs = handler.captured["jobs"]
    assert len(jobs) == 1
    j = jobs[0]
    for k in ("id", "name", "time", "schedule", "ranToday",
              "lastStatus", "lastRunAtMs", "nextRunAtMs"):
        assert k in j, f"missing key {k!r} in response"
    # Echo-core has no separate id — fallback is name.
    assert j["id"] == "anaf-monitor"
    assert j["name"] == "anaf-monitor"
    assert j["time"] == "10:00"
    assert j["schedule"] == "0 10 * * 1-5"
    assert j["lastRunAtMs"] > 0
    assert j["nextRunAtMs"] is not None and j["nextRunAtMs"] > 0


def test_ran_today_is_based_on_last_run_ms(tmp_path, monkeypatch, handler):
    """If last_run is in the past (before today 00:00), ranToday is False and lastStatus is None."""
    import constants  # type: ignore
    _write_jobs(tmp_path, [
        {
            "name": "ancient",
            "cron": "0 5 * * *",
            "enabled": True,
            "last_run": "2020-01-01T05:00:00+03:00",
            "next_run": "2026-04-22T05:00:00+03:00",
            "last_status": "ok",
        },
    ])
    monkeypatch.setattr(constants, "BASE_DIR", tmp_path)
    handler.handle_cron_status()

    j = handler.captured["jobs"][0]
    assert j["ranToday"] is False
    # lastStatus is only surfaced when ranToday — else it's None.
    assert j["lastStatus"] is None


def test_jobs_are_sorted_by_display_time(tmp_path, monkeypatch, handler):
    import constants  # type: ignore
    _write_jobs(tmp_path, [
        {"name": "late", "cron": "0 22 * * *", "enabled": True},
        {"name": "early", "cron": "0 5 * * *", "enabled": True},
        {"name": "mid", "cron": "0 12 * * *", "enabled": True},
    ])
    monkeypatch.setattr(constants, "BASE_DIR", tmp_path)
    handler.handle_cron_status()

    names = [j["name"] for j in handler.captured["jobs"]]
    assert names == ["early", "mid", "late"]
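These cases fully determine the helper's observable behaviour. One implementation that satisfies them — a sketch only, the real `handlers/cron.py` may differ internally:

```
# Sketch of a _parse_cron_time that passes the cases above: minute/hour
# are taken verbatim from the cron string (already Europe/Bucharest, so
# no timezone shift), hour ranges display their starting hour, and
# anything unparseable falls back to the raw expression capped at 15 chars.
def _parse_cron_time(cron: str) -> str:
    try:
        minute, hour = cron.split()[:2]
        hour = hour.split("-")[0].split(",")[0]   # "9-17" -> "9"
        return f"{int(hour):02d}:{int(minute):02d}"
    except (ValueError, IndexError):
        # "@hourly", "bad", etc. — single-token or non-numeric fields
        # land here via the failed unpack or int() conversion.
        return cron[:15]
```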
107
tests/test_dashboard_files_sandbox.py
Normal file
@@ -0,0 +1,107 @@
"""Tests for the /api/files sandbox — _resolve_sandboxed must block path traversal."""
from __future__ import annotations

import sys
from pathlib import Path

import pytest

PROJECT_ROOT = Path(__file__).resolve().parents[1]
DASH = PROJECT_ROOT / "dashboard"

if str(DASH) not in sys.path:
    sys.path.insert(0, str(DASH))


@pytest.fixture(scope="module")
def files_module():
    from handlers import files as _f  # type: ignore
    return _f


@pytest.fixture
def handler(files_module):
    class _Stub(files_module.FilesHandlers):
        def __init__(self):
            self.captured = None
            self.captured_code = None

        def send_json(self, data, code=200):
            self.captured = data
            self.captured_code = code

    return _Stub()


@pytest.fixture
def sandboxed(tmp_path, monkeypatch):
    """Replace ALLOWED_WORKSPACES with a single tmp_path to isolate tests."""
    import constants  # type: ignore
    root = tmp_path / "repo"
    root.mkdir()
    monkeypatch.setattr(constants, "ALLOWED_WORKSPACES", [root])
    return root


def test_resolve_accepts_file_inside_workspace(handler, sandboxed):
    (sandboxed / "notes.md").write_text("hello", encoding="utf-8")
    target, workspace = handler._resolve_sandboxed("notes.md")
    assert target is not None
    assert target == (sandboxed / "notes.md").resolve()
    assert workspace == sandboxed


def test_resolve_accepts_nested_file(handler, sandboxed):
    (sandboxed / "sub").mkdir()
    (sandboxed / "sub" / "file.txt").write_text("x", encoding="utf-8")
    target, workspace = handler._resolve_sandboxed("sub/file.txt")
    assert target == (sandboxed / "sub" / "file.txt").resolve()
    assert workspace == sandboxed


def test_resolve_rejects_parent_traversal(handler, sandboxed):
    """../etc/passwd-style requests must resolve OUTSIDE the workspace and
    be refused by returning (None, None)."""
    target, workspace = handler._resolve_sandboxed("../../../etc/passwd")
    assert target is None
    assert workspace is None


def test_resolve_rejects_absolute_escape(handler, sandboxed):
    """Absolute path that lands outside the workspace must be refused.

    Note: Path('/base') / '/etc/passwd' == Path('/etc/passwd') because the
    second path is absolute. The resolver must still refuse it.
    """
    target, workspace = handler._resolve_sandboxed("/etc/passwd")
    assert target is None, "absolute outside path must be refused"
    assert workspace is None


def test_handle_files_get_returns_403_on_denied(handler, sandboxed):
    """End-to-end: /api/files with a traversal path must 403."""
    # Simulate HTTP path parsing: the endpoint reads self.path
    handler.path = "/api/files?path=../../../etc/passwd&action=list"
    handler.handle_files_get()
    assert handler.captured_code == 403
    assert handler.captured["error"] == "Access denied"


def test_handle_files_get_reads_a_file(handler, sandboxed):
    (sandboxed / "hello.txt").write_text("hi there", encoding="utf-8")
    handler.path = "/api/files?path=hello.txt&action=list"
    handler.handle_files_get()
    assert handler.captured_code == 200 or handler.captured_code is None
    assert handler.captured["type"] == "file"
    assert handler.captured["content"] == "hi there"


def test_handle_files_get_lists_a_dir(handler, sandboxed):
    (sandboxed / "a.md").write_text("a", encoding="utf-8")
    (sandboxed / "b.md").write_text("b", encoding="utf-8")
    (sandboxed / "nested").mkdir()
    handler.path = "/api/files?path=&action=list"
    handler.handle_files_get()
    assert handler.captured["type"] == "dir"
    names = {item["name"] for item in handler.captured["items"]}
    assert names == {"a.md", "b.md", "nested"}
133
tests/test_dashboard_git.py
Normal file
@@ -0,0 +1,133 @@
"""Unit tests for dashboard.handlers.git — _run_git helper + endpoint shapes."""
from __future__ import annotations

import subprocess
import sys
from pathlib import Path
from unittest.mock import patch

import pytest

PROJECT_ROOT = Path(__file__).resolve().parents[1]
DASH = PROJECT_ROOT / "dashboard"

if str(DASH) not in sys.path:
    sys.path.insert(0, str(DASH))


@pytest.fixture(scope="module")
def git_module():
    from handlers import git as _g  # type: ignore
    return _g


@pytest.fixture
def handler(git_module):
    """A bare instance of GitHandlers with a captured send_json."""
    class _Stub(git_module.GitHandlers):
        def __init__(self):
            self.captured = None
            self.captured_code = None

        def send_json(self, data, code=200):
            self.captured = data
            self.captured_code = code

    return _Stub()


def test_run_git_is_a_subprocess_call(handler, git_module, tmp_path):
    """_run_git must use subprocess.run with the supplied cwd + timeout."""
    with patch.object(git_module.subprocess, "run") as mock_run:
        mock_run.return_value = subprocess.CompletedProcess(
            args=["git", "status"], returncode=0, stdout="clean\n", stderr=""
        )
        result = handler._run_git(tmp_path, ["status"], timeout=3)

    mock_run.assert_called_once()
    args, kwargs = mock_run.call_args
    assert args[0] == ["git", "status"]
    assert kwargs["cwd"] == str(tmp_path)
    assert kwargs["timeout"] == 3
    assert kwargs["capture_output"] is True
    assert kwargs["text"] is True
    assert result.stdout == "clean\n"


def test_run_git_default_timeout_is_5(handler, git_module, tmp_path):
    with patch.object(git_module.subprocess, "run") as mock_run:
        mock_run.return_value = subprocess.CompletedProcess(
            args=["git", "log"], returncode=0, stdout="", stderr=""
        )
        handler._run_git(tmp_path, ["log"])
    _, kwargs = mock_run.call_args
    assert kwargs["timeout"] == 5
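# Editor's note, not part of the diff: the helper the two tests above imply,
# as a sketch (handlers/git.py is authoritative; the name is hypothetical).
def _run_git_sketch(workspace, args, timeout=5):
    # Positional command list plus the exact kwargs asserted above.
    return subprocess.run(
        ["git", *args],
        cwd=str(workspace),
        timeout=timeout,
        capture_output=True,
        text=True,
    )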
def test_legacy_git_commit_handler_is_gone(git_module):
    """/api/git-commit was consolidated into /api/eco/git-commit."""
    assert not hasattr(git_module.GitHandlers, "handle_git_commit")


def test_git_status_uses_echo_core_workspace(handler, git_module):
    """handle_git_status must target constants.GIT_WORKSPACE (echo-core), not clawd."""
    captured_workspaces = []

    def fake_run(_self, workspace, args, timeout=5):
        captured_workspaces.append(workspace)
        stdout_map = {
            ("branch", "--show-current"): "master\n",
            ("log", "-1", "--format=%h|%s|%cr"): "abc1234|test commit|1 hour ago\n",
            ("status", "--short"): "",
            ("diff", "--stat", "--cached"): "",
            ("diff", "--stat"): "",
        }
        return subprocess.CompletedProcess(
            args=["git", *args],
            returncode=0,
            stdout=stdout_map.get(tuple(args), ""),
            stderr="",
        )

    with patch.object(git_module.GitHandlers, "_run_git", fake_run):
        handler.handle_git_status()

    import constants  # type: ignore
    assert all(w == constants.GIT_WORKSPACE for w in captured_workspaces)
    assert handler.captured is not None
    assert handler.captured["branch"] == "master"


def test_git_status_parses_uncommitted_paths(handler, git_module):
    def fake_run(_self, workspace, args, timeout=5):
        if args == ["status", "--short"]:
            return subprocess.CompletedProcess(
                args=["git"] + args, returncode=0,
                stdout=" M src/foo.py\n?? new_file.txt\n", stderr="",
            )
        if args[:1] == ["branch"]:
            return subprocess.CompletedProcess(
                args=["git"] + args, returncode=0, stdout="feat-x\n", stderr="",
            )
        if args[:1] == ["log"]:
            return subprocess.CompletedProcess(
                args=["git"] + args, returncode=0,
                stdout="deadbee|msg|2 minutes ago\n", stderr="",
            )
        return subprocess.CompletedProcess(
            args=["git"] + args, returncode=0, stdout="", stderr="",
        )

    with patch.object(git_module.GitHandlers, "_run_git", fake_run):
        handler.handle_git_status()

    data = handler.captured
    assert data is not None
    assert data["uncommittedCount"] == 2
    assert data["clean"] is False
    paths = [u["path"] for u in data["uncommittedParsed"]]
    statuses = {u["status"] for u in data["uncommittedParsed"]}
    assert "src/foo.py" in paths
    assert "new_file.txt" in paths
    assert "M" in statuses
    assert "??" in statuses
322
tests/test_import_openclaw_jobs.py
Normal file
@@ -0,0 +1,322 @@
"""Tests for tools/migrations/import_openclaw_jobs_2026-04.py

The script is a one-shot translator. We test:
- --dry-run does not mutate the target file
- UTC -> Bucharest cron conversion actually changes the hour field
- antfarm/* are excluded by the default skip list
- clawd path rewrites happen for `cd ~/clawd` and `/home/moltbot/clawd/`
- `clawd-archive` / `clawdbot` etc. are NOT matched
- Duplicate job names in target cause a skip-with-warning, not a crash
"""
from __future__ import annotations

import importlib.util
import json
import sys
from datetime import datetime, timezone
from pathlib import Path

import pytest

PROJECT_ROOT = Path(__file__).resolve().parents[1]
SCRIPT_PATH = (
    PROJECT_ROOT / "tools" / "migrations" / "import_openclaw_jobs_2026-04.py"
)


@pytest.fixture(scope="module")
def mod():
    """Load the migration script as a Python module."""
    spec = importlib.util.spec_from_file_location(
        "import_openclaw_jobs_2026_04", SCRIPT_PATH,
    )
    assert spec is not None
    module = importlib.util.module_from_spec(spec)
    assert spec.loader is not None
    spec.loader.exec_module(module)
    return module


# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------


def _make_openclaw_job(
    name: str,
    expr: str = "0 6 * * *",
    tz: str | None = None,
    enabled: bool = True,
    message: str = "hello",
    model: str | None = None,
    kind: str = "cron",
):
    sched = {"kind": kind, "expr": expr}
    if tz:
        sched["tz"] = tz
    payload = {"kind": "agentTurn", "message": message}
    if model:
        payload["model"] = model
    return {
        "id": f"uuid-{name}",
        "agentId": "echo",
        "name": name,
        "enabled": enabled,
        "schedule": sched,
        "sessionTarget": "isolated",
        "payload": payload,
        "state": {},
        "delivery": {"mode": "none"},
    }


def _write_source(tmp_path: Path, jobs: list[dict]) -> Path:
    src = tmp_path / "openclaw_jobs.json"
    src.write_text(json.dumps({"version": 1, "jobs": jobs}), encoding="utf-8")
    return src


def _write_target(tmp_path: Path, jobs: list[dict]) -> Path:
    tgt = tmp_path / "echo_jobs.json"
    tgt.write_text(json.dumps(jobs), encoding="utf-8")
    return tgt


# ---------------------------------------------------------------------------
# Tests
# ---------------------------------------------------------------------------


def test_dry_run_does_not_write(mod, tmp_path, capsys):
    src = _write_source(tmp_path, [_make_openclaw_job("foo")])
    tgt = _write_target(tmp_path, [])
    before = tgt.read_text(encoding="utf-8")

    rc = mod.run([
        "--dry-run",
        "--source", str(src),
        "--target", str(tgt),
    ])
    assert rc == 0
    after = tgt.read_text(encoding="utf-8")
    assert before == after
    captured = capsys.readouterr()
    assert "DRY-RUN" in captured.out


def test_utc_to_bucharest_conversion_shifts_hours(mod):
    """A UTC cron 0 6 * * * should translate to a non-'0 6' Bucharest expr.

    Offset varies with DST (+2 or +3), so assert the conversion happened,
    not the exact value.
    """
    new, warnings = mod.convert_cron_utc_to_bucharest(
        "0 6 * * *", src_tz=None,
    )
    assert new != "0 6 * * *", f"expected conversion, got {new!r}"
    # Minute, day-of-month, month, day-of-week must be unchanged.
    parts = new.split()
    assert parts[0] == "0"
    assert parts[2] == "*"
    assert parts[3] == "*"
    assert parts[4] == "*"
    # Hour is now 8 (winter) or 9 (summer).
    assert parts[1] in {"8", "9"}
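# Editor's note, not part of the diff: one way such a shift can be computed,
# assuming whole-hour offsets. The real convert_cron_utc_to_bucharest also
# returns warnings (e.g. on day rollover); the name below is hypothetical.
from zoneinfo import ZoneInfo


def shift_cron_hour_sketch(expr: str) -> str:
    minute, hour, dom, month, dow = expr.split()
    # Pin the hour to a reference UTC instant, then read it back in Bucharest.
    ref = datetime.now(timezone.utc).replace(hour=int(hour), minute=int(minute))
    local = ref.astimezone(ZoneInfo("Europe/Bucharest"))
    # Day-of-month rollover across midnight is ignored in this sketch.
    return " ".join([minute, str(local.hour), dom, month, dow])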
def test_bucharest_tz_source_is_unchanged(mod):
    """If openclaw already marks tz=Europe/Bucharest, we must NOT re-shift."""
    new, warnings = mod.convert_cron_utc_to_bucharest(
        "0 7 * * *", src_tz="Europe/Bucharest",
    )
    assert new == "0 7 * * *"
    assert warnings == []


def test_skip_by_default_excludes_antfarm(mod, tmp_path):
    src = _write_source(tmp_path, [
        _make_openclaw_job("antfarm/feature-dev/planner"),
        _make_openclaw_job("antfarm/feature-dev/developer"),
        _make_openclaw_job("keep-this-one"),
    ])
    tgt = _write_target(tmp_path, [])

    rc = mod.run(["--source", str(src), "--target", str(tgt)])
    assert rc == 0
    result = json.loads(tgt.read_text(encoding="utf-8"))
    names = {j["name"] for j in result}
    assert "keep-this-one" in names
    assert not any(n.startswith("antfarm/") for n in names)


def test_skip_by_default_excludes_night_execute(mod, tmp_path):
    src = _write_source(tmp_path, [_make_openclaw_job("night-execute")])
    tgt = _write_target(tmp_path, [])
    rc = mod.run(["--source", str(src), "--target", str(tgt)])
    assert rc == 0
    result = json.loads(tgt.read_text(encoding="utf-8"))
    assert result == []


def test_youtube_prefix_is_auto_skipped(mod, tmp_path):
    src = _write_source(tmp_path, [
        _make_openclaw_job("YouTube: abc123"),
        _make_openclaw_job("not-youtube"),
    ])
    tgt = _write_target(tmp_path, [])
    rc = mod.run(["--source", str(src), "--target", str(tgt)])
    assert rc == 0
    names = {j["name"] for j in json.loads(tgt.read_text(encoding="utf-8"))}
    assert "not-youtube" in names
    assert not any(n.startswith("YouTube:") for n in names)


def test_prompt_rewrite_clawd_to_echo_core(mod):
    text = "Hey, please run this: cd ~/clawd && python3 tools/foo.py"
    new, subs = mod.rewrite_prompt_paths(text)
    assert "~/clawd" not in new
    assert "~/echo-core" in new
    assert "cd ~/clawd" not in new
    assert "cd ~/echo-core" in new
    assert len(subs) == 1


def test_prompt_rewrite_absolute_clawd_path(mod):
    text = "read /home/moltbot/clawd/memory/foo.md now"
    new, subs = mod.rewrite_prompt_paths(text)
    assert "/home/moltbot/echo-core/memory/foo.md" in new
    assert "/home/moltbot/clawd/memory/foo.md" not in new
    assert len(subs) == 1


def test_prompt_rewrite_does_not_match_clawd_archive(mod):
    """Substrings like `clawd-archive` and `clawdbot` must stay untouched."""
    text = (
        "Folder: /home/moltbot/clawd-archive-old/stuff\n"
        "Other folder: /home/moltbot/clawdbot/data\n"
        "cd ~/clawd-archive\n"
    )
    new, subs = mod.rewrite_prompt_paths(text)
    assert new == text, f"unexpected substitution: {subs}"
    assert subs == []


def test_prompt_rewrite_cd_clawd_with_trailing_space(mod):
    # Whitespace boundary on the shell form
    text = "cd ~/clawd\nls"
    new, _ = mod.rewrite_prompt_paths(text)
    assert "cd ~/echo-core" in new


def test_duplicate_name_warns_skips(mod, tmp_path, capsys):
    """When target already has a job 'foo', importing 'foo' must preserve
    the existing entry and not write a duplicate."""
    existing = {
        "name": "foo",
        "cron": "0 0 * * *",
        "channel": "echo-core",
        "model": "haiku",
        "prompt": "existing!",
        "allowed_tools": [],
        "enabled": True,
    }
    tgt = _write_target(tmp_path, [existing])
    src = _write_source(tmp_path, [_make_openclaw_job("foo", message="new prompt")])

    rc = mod.run(["--source", str(src), "--target", str(tgt)])
    assert rc == 0

    out = capsys.readouterr().out
    assert "DUPE" in out
    assert "already in target" in out

    result = json.loads(tgt.read_text(encoding="utf-8"))
    assert len(result) == 1
    assert result[0]["prompt"] == "existing!"  # untouched
    assert result[0]["model"] == "haiku"


def test_skip_disabled_flag(mod, tmp_path):
    src = _write_source(tmp_path, [
        _make_openclaw_job("enabled-one", enabled=True),
        _make_openclaw_job("disabled-one", enabled=False),
    ])
    tgt = _write_target(tmp_path, [])
    rc = mod.run(["--source", str(src), "--target", str(tgt), "--skip-disabled"])
    assert rc == 0
    names = {j["name"] for j in json.loads(tgt.read_text(encoding="utf-8"))}
    assert names == {"enabled-one"}


def test_non_cron_schedule_is_skipped(mod, tmp_path):
    # 'at' (one-shot) is not supported in echo-core; must be skipped, not crash.
    src = _write_source(
        tmp_path,
        [_make_openclaw_job("one-shot", kind="at")],
    )
    # Force kind=at by overriding schedule dict
    data = json.loads(src.read_text(encoding="utf-8"))
    data["jobs"][0]["schedule"] = {
        "kind": "at", "at": "2026-02-09T13:00:00.000Z",
    }
    src.write_text(json.dumps(data), encoding="utf-8")

    tgt = _write_target(tmp_path, [])
    rc = mod.run(["--source", str(src), "--target", str(tgt)])
    assert rc == 0
    assert json.loads(tgt.read_text(encoding="utf-8")) == []


def test_extra_skip_cli_flag(mod, tmp_path):
    src = _write_source(tmp_path, [
        _make_openclaw_job("a"),
        _make_openclaw_job("b"),
        _make_openclaw_job("c"),
    ])
    tgt = _write_target(tmp_path, [])
    rc = mod.run([
        "--source", str(src), "--target", str(tgt),
        "--skip", "a,c",
    ])
    assert rc == 0
    names = {j["name"] for j in json.loads(tgt.read_text(encoding="utf-8"))}
    assert names == {"b"}


def test_default_channel_override(mod, tmp_path):
    src = _write_source(tmp_path, [_make_openclaw_job("foo")])
    tgt = _write_target(tmp_path, [])
    rc = mod.run([
        "--source", str(src), "--target", str(tgt),
        "--channel", "echo-sprijin",
    ])
    assert rc == 0
    data = json.loads(tgt.read_text(encoding="utf-8"))
    assert data[0]["channel"] == "echo-sprijin"


def test_translate_job_preserves_enabled_flag(mod):
    j = _make_openclaw_job("foo", enabled=True)
    echo, _ = mod.translate_job(j, "echo-work")
    assert echo is not None
    assert echo["enabled"] is True

    j = _make_openclaw_job("bar", enabled=False)
    echo, _ = mod.translate_job(j, "echo-work")
    assert echo is not None
    assert echo["enabled"] is False


def test_translate_job_defaults_model_to_sonnet(mod):
    j = _make_openclaw_job("foo", model=None)
    echo, _ = mod.translate_job(j, "echo-work")
    assert echo is not None
    assert echo["model"] == "sonnet"


def test_translate_job_respects_explicit_model(mod):
    j = _make_openclaw_job("foo", model="opus")
    echo, _ = mod.translate_job(j, "echo-work")
    assert echo is not None
    assert echo["model"] == "opus"
@@ -6,10 +6,11 @@ import subprocess
from datetime import datetime, timezone
from pathlib import Path
from unittest.mock import AsyncMock, MagicMock, patch
from zoneinfo import ZoneInfo

import pytest

from src.scheduler import Scheduler, JOBS_FILE, JOBS_DIR, _NAME_RE
from src.scheduler import Scheduler, JOBS_FILE, JOBS_DIR, _NAME_RE, _MARKER_RE


# ---------------------------------------------------------------------------
@@ -548,3 +549,411 @@ class TestFullLifecycle:
        # Remove
        assert sched.remove_job("lifecycle") is True
        assert sched.list_jobs() == []


# ---------------------------------------------------------------------------
# Timezone
# ---------------------------------------------------------------------------


class TestTimezone:
    def test_scheduler_uses_bucharest_tz(self, sched):
        tz = sched._scheduler.timezone
        # Support zoneinfo.ZoneInfo (py3.9+) and pytz-style tzinfo
        key = getattr(tz, "key", None) or getattr(tz, "zone", None) or str(tz)
        assert key == "Europe/Bucharest"


# ---------------------------------------------------------------------------
# Shell-kind validation (add_shell_job)
# ---------------------------------------------------------------------------


class TestShellKind:
    def test_add_shell_job_creates_file(self, sched, tmp_jobs):
        job = sched.add_shell_job(
            "shelly", "0 9 * * *", "general",
            ["/bin/echo", "hello"],
        )
        assert job["name"] == "shelly"
        assert job["kind"] == "shell"
        assert job["command"] == ["/bin/echo", "hello"]
        assert job["report_on"] == "changes"
        assert job["timeout"] is None
        assert job["enabled"] is True
        assert tmp_jobs["file"].exists()

        data = json.loads(tmp_jobs["file"].read_text())
        assert len(data) == 1
        assert data[0]["kind"] == "shell"
        assert data[0]["command"] == ["/bin/echo", "hello"]

    def test_add_shell_job_with_report_on_always(self, sched, tmp_jobs):
        job = sched.add_shell_job(
            "x", "0 9 * * *", "ch", ["/bin/true"], report_on="always",
        )
        assert job["report_on"] == "always"

    def test_add_shell_job_with_custom_timeout(self, sched, tmp_jobs):
        job = sched.add_shell_job(
            "x", "0 9 * * *", "ch", ["/bin/true"], timeout=60,
        )
        assert job["timeout"] == 60

    def test_add_shell_job_duplicate_name_with_claude(self, sched, tmp_jobs):
        sched.add_job("x", "0 * * * *", "ch", "prompt")
        with pytest.raises(ValueError, match="already exists"):
            sched.add_shell_job("x", "0 9 * * *", "ch", ["/bin/true"])

    def test_add_shell_job_duplicate_name_with_shell(self, sched, tmp_jobs):
        sched.add_shell_job("x", "0 9 * * *", "ch", ["/bin/true"])
        with pytest.raises(ValueError, match="already exists"):
            sched.add_shell_job("x", "0 10 * * *", "ch", ["/bin/true"])

    def test_add_shell_job_invalid_cron(self, sched, tmp_jobs):
        with pytest.raises(ValueError, match="Invalid cron"):
            sched.add_shell_job("x", "not a cron", "ch", ["/bin/true"])

    def test_add_shell_job_invalid_name(self, sched, tmp_jobs):
        with pytest.raises(ValueError, match="Invalid job name"):
            sched.add_shell_job("BadName", "0 9 * * *", "ch", ["/bin/true"])

    def test_add_shell_job_empty_command_list(self, sched, tmp_jobs):
        with pytest.raises(ValueError, match="non-empty list"):
            sched.add_shell_job("x", "0 9 * * *", "ch", [])

    def test_add_shell_job_command_not_list(self, sched, tmp_jobs):
        with pytest.raises(ValueError, match="non-empty list"):
            sched.add_shell_job("x", "0 9 * * *", "ch", "/bin/echo hi")  # type: ignore[arg-type]

    def test_add_shell_job_command_with_empty_string(self, sched, tmp_jobs):
        with pytest.raises(ValueError, match="non-empty list"):
            sched.add_shell_job("x", "0 9 * * *", "ch", ["/bin/echo", ""])

    def test_add_shell_job_command_non_string_element(self, sched, tmp_jobs):
        with pytest.raises(ValueError, match="non-empty list"):
            sched.add_shell_job("x", "0 9 * * *", "ch", ["/bin/echo", 1])  # type: ignore[list-item]

    def test_add_shell_job_bad_report_on(self, sched, tmp_jobs):
        with pytest.raises(ValueError, match="Invalid report_on"):
            sched.add_shell_job(
                "x", "0 9 * * *", "ch", ["/bin/true"], report_on="maybe",
            )

    def test_add_shell_job_bad_timeout_zero(self, sched, tmp_jobs):
        with pytest.raises(ValueError, match="between 1 and 3600"):
            sched.add_shell_job(
                "x", "0 9 * * *", "ch", ["/bin/true"], timeout=0,
            )

    def test_add_shell_job_bad_timeout_negative(self, sched, tmp_jobs):
        with pytest.raises(ValueError, match="between 1 and 3600"):
            sched.add_shell_job(
                "x", "0 9 * * *", "ch", ["/bin/true"], timeout=-5,
            )

    def test_add_shell_job_bad_timeout_too_big(self, sched, tmp_jobs):
        with pytest.raises(ValueError, match="between 1 and 3600"):
            sched.add_shell_job(
                "x", "0 9 * * *", "ch", ["/bin/true"], timeout=7200,
            )

    def test_add_shell_job_bad_timeout_not_int(self, sched, tmp_jobs):
        with pytest.raises(ValueError, match="must be an int"):
            sched.add_shell_job(
                "x", "0 9 * * *", "ch", ["/bin/true"], timeout="60",  # type: ignore[arg-type]
            )

    def test_add_shell_job_empty_channel(self, sched, tmp_jobs):
        with pytest.raises(ValueError, match="non-empty"):
            sched.add_shell_job("x", "0 9 * * *", " ", ["/bin/true"])


# ---------------------------------------------------------------------------
# Shell-kind execution
# ---------------------------------------------------------------------------


def _mock_proc(returncode=0, stdout="", stderr=""):
    p = MagicMock()
    p.returncode = returncode
    p.stdout = stdout
    p.stderr = stderr
    return p


@pytest.fixture
def sched_with_shell_job(sched, tmp_jobs):
    """Scheduler pre-loaded with a shell job 'shelly' (report_on=changes)."""
    sched.add_shell_job(
        "shelly", "0 9 * * *", "general", ["/bin/true"], report_on="changes",
    )
    return sched


class TestShellExecute:
    @pytest.mark.asyncio
    async def test_shell_exit_zero_changes_marker_positive(
        self, sched_with_shell_job, callback,
    ):
        stdout = "some body\nGSTACK-CRON: changes=3\nfooter\n"
        with patch("subprocess.run", return_value=_mock_proc(0, stdout, "")):
            result = await sched_with_shell_job.run_job("shelly")
        assert result == stdout[:1500]
        assert sched_with_shell_job._jobs[0]["last_status"] == "ok"
        callback.assert_awaited_once_with("general", stdout[:1500])

    @pytest.mark.asyncio
    async def test_shell_exit_zero_changes_marker_zero(
        self, sched_with_shell_job, callback,
    ):
        stdout = "nothing here\nGSTACK-CRON: changes=0\n"
        with patch("subprocess.run", return_value=_mock_proc(0, stdout, "")):
            result = await sched_with_shell_job.run_job("shelly")
        assert result == ""
        assert sched_with_shell_job._jobs[0]["last_status"] == "ok"
        callback.assert_not_awaited()

    @pytest.mark.asyncio
    async def test_shell_exit_zero_no_marker(
        self, sched_with_shell_job, callback, caplog,
    ):
        stdout = "completely unrelated output\n"
        import logging
        caplog.set_level(logging.WARNING, logger="src.scheduler")
        with patch("subprocess.run", return_value=_mock_proc(0, stdout, "")):
            result = await sched_with_shell_job.run_job("shelly")
        assert result == ""
        assert sched_with_shell_job._jobs[0]["last_status"] == "ok"
        callback.assert_not_awaited()
        assert any(
            "missing GSTACK-CRON marker" in rec.message
            for rec in caplog.records
        )

    @pytest.mark.asyncio
    async def test_shell_exit_zero_report_on_always(self, sched, callback):
        sched.add_shell_job(
            "alw", "0 9 * * *", "general", ["/bin/true"], report_on="always",
        )
        stdout = "hello world, no marker here"
        with patch("subprocess.run", return_value=_mock_proc(0, stdout, "")):
            result = await sched.run_job("alw")
        assert result == stdout
        callback.assert_awaited_once_with("general", stdout)

    @pytest.mark.asyncio
    async def test_shell_exit_zero_report_on_never(self, sched, callback):
        sched.add_shell_job(
            "nev", "0 9 * * *", "general", ["/bin/true"], report_on="never",
        )
        stdout = "GSTACK-CRON: changes=99\nbig change"
        with patch("subprocess.run", return_value=_mock_proc(0, stdout, "")):
            result = await sched.run_job("nev")
        assert result == ""
        callback.assert_not_awaited()

    @pytest.mark.asyncio
    async def test_shell_exit_nonzero_always_reports(
        self, sched_with_shell_job, callback,
    ):
        with patch("subprocess.run", return_value=_mock_proc(1, "", "ANAF 502")):
            result = await sched_with_shell_job.run_job("shelly")
        assert result == "[cron:shelly] exit 1: ANAF 502"
        assert sched_with_shell_job._jobs[0]["last_status"] == "error"
        callback.assert_awaited_once_with(
            "general", "[cron:shelly] exit 1: ANAF 502",
        )

    @pytest.mark.asyncio
    async def test_shell_exit_nonzero_never_still_reports(
        self, sched, callback,
    ):
        """report_on=never must NOT suppress error reports."""
        sched.add_shell_job(
            "nev", "0 9 * * *", "ch", ["/bin/false"], report_on="never",
        )
        with patch("subprocess.run", return_value=_mock_proc(2, "", "boom")):
            result = await sched.run_job("nev")
        assert "exit 2" in result
        assert "boom" in result
        callback.assert_awaited_once()

    @pytest.mark.asyncio
    async def test_shell_timeout_reports(self, sched_with_shell_job, callback):
        with patch(
            "subprocess.run",
            side_effect=subprocess.TimeoutExpired(cmd="x", timeout=300),
        ):
            result = await sched_with_shell_job.run_job("shelly")
        assert "timed out after 300s" in result
        assert sched_with_shell_job._jobs[0]["last_status"] == "error"
        callback.assert_awaited_once()

    @pytest.mark.asyncio
    async def test_shell_subprocess_launch_exception(
        self, sched_with_shell_job, callback,
    ):
        with patch(
            "subprocess.run",
            side_effect=FileNotFoundError("no such binary"),
        ):
            result = await sched_with_shell_job.run_job("shelly")
        assert "Error" in result
        assert "no such binary" in result
        assert sched_with_shell_job._jobs[0]["last_status"] == "error"
        callback.assert_awaited_once()

    @pytest.mark.asyncio
    async def test_shell_respects_per_job_timeout(self, sched, callback):
        sched.add_shell_job(
            "t", "0 9 * * *", "ch", ["/bin/true"], timeout=60,
        )
        stdout = "GSTACK-CRON: changes=1\nok\n"
        with patch(
            "subprocess.run", return_value=_mock_proc(0, stdout, ""),
        ) as mock_run:
            await sched.run_job("t")
        assert mock_run.call_args.kwargs["timeout"] == 60

    @pytest.mark.asyncio
    async def test_shell_default_timeout_when_none(
        self, sched_with_shell_job, callback,
    ):
        """timeout=None in job dict falls back to JOB_TIMEOUT (300)."""
        stdout = "GSTACK-CRON: changes=1\n"
        with patch(
            "subprocess.run", return_value=_mock_proc(0, stdout, ""),
        ) as mock_run:
            await sched_with_shell_job.run_job("shelly")
        assert mock_run.call_args.kwargs["timeout"] == 300

    @pytest.mark.asyncio
    async def test_shell_executes_command_list(
        self, sched_with_shell_job, callback,
    ):
        """subprocess.run is invoked with the job's command list verbatim."""
        stdout = "GSTACK-CRON: changes=1\n"
        with patch(
            "subprocess.run", return_value=_mock_proc(0, stdout, ""),
        ) as mock_run:
            await sched_with_shell_job.run_job("shelly")
        cmd = mock_run.call_args.args[0]
        assert cmd == ["/bin/true"]

    @pytest.mark.asyncio
    async def test_shell_marker_anywhere_in_output(self, sched, callback):
        """Marker can appear on any line, not just last."""
        sched.add_shell_job(
            "m", "0 9 * * *", "ch", ["/bin/true"], report_on="changes",
        )
        stdout = "GSTACK-CRON: changes=5\nbody follows\nmore body\n"
        with patch("subprocess.run", return_value=_mock_proc(0, stdout, "")):
            result = await sched.run_job("m")
        assert result == stdout
        callback.assert_awaited_once()

    @pytest.mark.asyncio
    async def test_shell_stderr_trimmed_to_500(
        self, sched_with_shell_job, callback,
    ):
        long_stderr = "E" * 2000
        with patch(
            "subprocess.run", return_value=_mock_proc(1, "", long_stderr),
        ):
            result = await sched_with_shell_job.run_job("shelly")
        # 500 chars of stderr max
        assert "E" * 500 in result
        assert "E" * 501 not in result

    @pytest.mark.asyncio
    async def test_shell_stdout_trimmed_to_1500(self, sched, callback):
        sched.add_shell_job(
            "big", "0 9 * * *", "ch", ["/bin/true"], report_on="always",
        )
        big_stdout = "x" * 5000
        with patch("subprocess.run", return_value=_mock_proc(0, big_stdout, "")):
            result = await sched.run_job("big")
        assert len(result) == 1500
# ---------------------------------------------------------------------------
# Backward compatibility — jobs without kind field default to "claude"
# ---------------------------------------------------------------------------


class TestBackwardCompat:
    def test_legacy_job_without_kind_defaults_to_claude(self, sched, tmp_jobs):
        """A jobs.json written before kind existed must still dispatch to claude."""
        legacy = [_sample_job(name="legacy")]
        # Explicitly ensure no 'kind' key is present
        assert "kind" not in legacy[0]
        tmp_jobs["file"].write_text(json.dumps(legacy))
        sched._jobs = sched._load_jobs()

        assert sched._jobs[0].get("kind") is None  # still legacy shape on disk
        # Dispatch: ensure we go through the claude path, NOT the shell path
        with patch.object(
            sched, "_execute_claude_job", new=AsyncMock(return_value="ok")
        ) as claude, patch.object(
            sched, "_execute_shell_job", new=AsyncMock(return_value="never")
        ) as shell:
            asyncio.run(sched._execute_job(sched._jobs[0]))
        claude.assert_awaited_once()
        shell.assert_not_awaited()

    @pytest.mark.asyncio
    async def test_claude_jobs_still_work_unchanged(
        self, sched, tmp_jobs, callback,
    ):
        """Existing Claude job pattern: add_job + run_job path unchanged."""
        sched.add_job("claude-j", "0 * * * *", "general", "hi")

        mock_proc = MagicMock()
        mock_proc.returncode = 0
        mock_proc.stdout = json.dumps({"result": "response"})
        mock_proc.stderr = ""

        with patch("src.scheduler.build_system_prompt", return_value="sys"), \
             patch("subprocess.run", return_value=mock_proc) as mock_run:
            result = await sched.run_job("claude-j")

        assert result == "response"
        # First positional arg is the CLI command list; confirm it targets CLAUDE_BIN
        cmd = mock_run.call_args[0][0]
        assert "-p" in cmd
        assert "hi" in cmd
        assert sched._jobs[0]["last_status"] == "ok"
        callback.assert_awaited_once_with("general", "response")


# ---------------------------------------------------------------------------
# Marker regex unit checks
# ---------------------------------------------------------------------------


class TestMarkerRegex:
    @pytest.mark.parametrize("stdout,expected", [
        ("GSTACK-CRON: changes=0\n", 0),
        ("GSTACK-CRON: changes=1\n", 1),
        ("GSTACK-CRON: changes=42\n", 42),
        ("foo\nbar\nGSTACK-CRON: changes=7\nbaz\n", 7),
        ("GSTACK-CRON: changes=3", 3),  # no trailing newline
    ])
    def test_marker_matches(self, stdout, expected):
        m = _MARKER_RE.search(stdout)
        assert m is not None
        assert int(m.group(1)) == expected

    @pytest.mark.parametrize("stdout", [
        "",
        "hello world",
        "gstack-cron: changes=5\n",  # lowercase
        "GSTACK-CRON:changes=5\n",  # no space after colon
        "GSTACK-CRON: changes=\n",  # empty int
        "GSTACK-CRON: changes=abc\n",  # non-int
        "prefix GSTACK-CRON: changes=5\n",  # not at start of line
    ])
    def test_marker_no_match(self, stdout):
        assert _MARKER_RE.search(stdout) is None
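# Editor's note, not part of the diff: a regex consistent with every case in
# TestMarkerRegex above (src/scheduler.py's _MARKER_RE is the source of
# truth): anchored at line start, exact "GSTACK-CRON: " prefix, integer
# payload, multiline so the marker may sit on any line.
import re

_MARKER_RE_SKETCH = re.compile(r"^GSTACK-CRON: changes=(\d+)$", re.MULTILINE)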
@@ -1,14 +1,14 @@
{
  "D100": "44c03d855b36c32578b58bef6116e861c1d26ed6b038d732c23334b5d42f20de",
  "D101": "937209d4785ca013cbcbe5a0d0aa8ba0e7033d3d8e6c121dadd8e38b20db8026",
  "D100": "ae66293e3027ada9ffc32f2d58bf562e8e9e3e3a12c29f531e5479bfa9895262",
  "D101": "f72fc1c29657ea11e0238806a28f6abccf5b00e45904e1e0c9385cc64491fcaf",
  "D300": "cb7b55b568ab893024884971eac0367fb6fe487c297e355d64258dae437f6ddd",
  "D394": "c4c4e62bda30032f12c17edf9a5087b6173a350ccb1fd750158978b3bd0acb7d",
  "D406": "5a6712fab7b904ee659282af1b62f8b789aada5e3e4beb9fcce4ea3e0cab6ece",
  "D406": "ca6103448d663ab16fcaef0f29f8933ef526cbf5aad12c7ff5dbd61b22ca9fc6",
  "SIT_FIN_SEM_2025": "8164843431e6b703a38fbdedc7898ec6ae83559fe10f88663ba0b55f3091d5fe",
  "SIT_FIN_AN_2025": "c00c39079482af8b7af6d32ba7b85c7d9e8cb25ebcbd6704adabd0192e1adca8",
  "DESCARCARE_DECLARATII": "d66297abcfc2b3ad87f65e4a60c97ddd0a889f493bb7e7c8e6035ef39d55ec3f",
  "D205": "cbaad8e3bd561494556eb963976310810f4fb63cdea054d66d9503c93ce27dd4",
  "SIT_FIN_AN_2025": "ec5b2ce694b02bf780e0f72df462b1aeec578ee64c11b3e44ed1a80b2dbe85d8",
  "DESCARCARE_DECLARATII": "8091a14fd7ef56e0cbf139add3e66dedf6b0d71d1858fca6be5f86760c4c0ec2",
  "D205": "d3c20a7ae70f4c18bbb7add42af035e3746d323b2e6df37a4e31ed625ddb86d9",
  "D390": "4726938ed5858ec735caefd947a7d182b6dc64009478332c4feabdb36412a84e",
  "BILANT_2024": "fbb8d66c2e530d8798362992c6983e07e1250188228c758cb6da4cde4f955950",
  "BILANT_2025": "9d66ffa59b8be06a5632b0f23a0354629f175ae5204398d7bb7a4c4734d5275a"
  "BILANT_2025": "03fe3c9095345d8ab912bafe789ce69d5957a85d94df38ce03f0d0b11d4a809b"
}
@@ -461,3 +461,458 @@
[2026-02-13 14:00:12] OK: SIT_FIN_AN_2025
[2026-02-13 14:00:12] OK: DESCARCARE_DECLARATII
[2026-02-13 14:00:12] === Monitor complete ===
[2026-02-19 14:00:03] === Starting ANAF monitor v2.1 ===
[2026-02-19 14:00:04] OK: D100
[2026-02-19 14:00:04] HASH CHANGED in D101 (no version changes detected)
[2026-02-19 14:00:04] OK: D300
[2026-02-19 14:00:04] OK: D390
[2026-02-19 14:00:05] OK: D394
[2026-02-19 14:00:05] HASH CHANGED in D205 (no version changes detected)
[2026-02-19 14:00:05] CHANGES in D406: ['Soft J: 11.02.2026 → 16.02.2026']
[2026-02-19 14:00:05] OK: BILANT_2025
[2026-02-19 14:00:05] OK: SIT_FIN_SEM_2025
[2026-02-19 14:00:05] OK: SIT_FIN_AN_2025
[2026-02-19 14:00:05] OK: DESCARCARE_DECLARATII
[2026-02-19 14:00:05] === Monitor complete ===
[2026-02-20 08:00:05] === Starting ANAF monitor v2.1 ===
[2026-02-20 08:00:05] OK: D100
[2026-02-20 08:00:05] OK: D101
[2026-02-20 08:00:05] OK: D300
[2026-02-20 08:00:05] OK: D390
[2026-02-20 08:00:06] OK: D394
[2026-02-20 08:00:06] HASH CHANGED in D205 (no version changes detected)
[2026-02-20 08:00:06] OK: D406
[2026-02-20 08:00:06] OK: BILANT_2025
[2026-02-20 08:00:08] OK: SIT_FIN_SEM_2025
[2026-02-20 08:00:08] OK: SIT_FIN_AN_2025
[2026-02-20 08:00:08] OK: DESCARCARE_DECLARATII
[2026-02-20 08:00:08] === Monitor complete ===
[2026-02-20 14:00:07] === Starting ANAF monitor v2.1 ===
[2026-02-20 14:00:07] OK: D100
[2026-02-20 14:00:08] OK: D101
[2026-02-20 14:00:08] OK: D300
[2026-02-20 14:00:08] OK: D390
[2026-02-20 14:00:08] OK: D394
[2026-02-20 14:00:09] OK: D205
[2026-02-20 14:00:09] OK: D406
[2026-02-20 14:00:10] OK: BILANT_2025
[2026-02-20 14:00:10] OK: SIT_FIN_SEM_2025
[2026-02-20 14:00:12] OK: SIT_FIN_AN_2025
[2026-02-20 14:00:12] OK: DESCARCARE_DECLARATII
[2026-02-20 14:00:12] === Monitor complete ===
[2026-02-23 08:00:06] === Starting ANAF monitor v2.1 ===
[2026-02-23 08:00:06] OK: D100
[2026-02-23 08:00:06] OK: D101
[2026-02-23 08:00:06] OK: D300
[2026-02-23 08:00:06] OK: D390
[2026-02-23 08:00:06] OK: D394
[2026-02-23 08:00:06] OK: D205
[2026-02-23 08:00:06] OK: D406
[2026-02-23 08:00:06] OK: BILANT_2025
[2026-02-23 08:00:07] OK: SIT_FIN_SEM_2025
[2026-02-23 08:00:07] OK: SIT_FIN_AN_2025
[2026-02-23 08:00:07] OK: DESCARCARE_DECLARATII
[2026-02-23 08:00:07] === Monitor complete ===
[2026-02-23 14:00:04] === Starting ANAF monitor v2.1 ===
[2026-02-23 14:00:04] OK: D100
[2026-02-23 14:00:04] OK: D101
[2026-02-23 14:00:04] OK: D300
[2026-02-23 14:00:04] OK: D390
[2026-02-23 14:00:05] OK: D394
[2026-02-23 14:00:05] OK: D205
[2026-02-23 14:00:05] OK: D406
[2026-02-23 14:00:05] OK: BILANT_2025
[2026-02-23 14:00:05] OK: SIT_FIN_SEM_2025
[2026-02-23 14:00:05] OK: SIT_FIN_AN_2025
[2026-02-23 14:00:05] OK: DESCARCARE_DECLARATII
[2026-02-23 14:00:05] === Monitor complete ===
[2026-02-24 08:00:04] === Starting ANAF monitor v2.1 ===
[2026-02-24 08:00:04] OK: D100
[2026-02-24 08:00:04] OK: D101
[2026-02-24 08:00:04] OK: D300
[2026-02-24 08:00:04] OK: D390
[2026-02-24 08:00:04] OK: D394
[2026-02-24 08:00:04] OK: D205
[2026-02-24 08:00:05] OK: D406
[2026-02-24 08:00:06] HASH CHANGED in BILANT_2025 (no version changes detected)
[2026-02-24 08:00:06] OK: SIT_FIN_SEM_2025
[2026-02-24 08:00:07] HASH CHANGED in SIT_FIN_AN_2025 (no version changes detected)
[2026-02-24 08:00:07] OK: DESCARCARE_DECLARATII
[2026-02-24 08:00:07] === Monitor complete ===
[2026-02-25 08:00:05] === Starting ANAF monitor v2.1 ===
[2026-02-25 08:00:05] OK: D100
[2026-02-25 08:00:05] OK: D101
[2026-02-25 08:00:05] OK: D300
[2026-02-25 08:00:05] OK: D390
[2026-02-25 08:00:06] OK: D394
[2026-02-25 08:00:06] OK: D205
[2026-02-25 08:00:06] OK: D406
[2026-02-25 08:00:06] CHANGES in BILANT_2025: ['Soft J S1004: 04.02.2025 → 24.02.2026']
[2026-02-25 08:00:06] OK: SIT_FIN_SEM_2025
[2026-02-25 08:00:06] OK: SIT_FIN_AN_2025
[2026-02-25 08:00:06] OK: DESCARCARE_DECLARATII
[2026-02-25 08:00:06] === Monitor complete ===
[2026-02-25 23:07:32] === Starting ANAF monitor v2.1 ===
[2026-02-25 23:07:32] OK: D100
[2026-02-25 23:07:32] OK: D101
[2026-02-25 23:07:32] OK: D300
[2026-02-25 23:07:32] OK: D390
[2026-02-25 23:07:32] OK: D394
[2026-02-25 23:07:33] HASH CHANGED in D205 (no version changes detected)
[2026-02-25 23:07:33] OK: D406
[2026-02-25 23:07:33] OK: BILANT_2025
[2026-02-25 23:07:33] OK: SIT_FIN_SEM_2025
[2026-02-25 23:07:33] OK: SIT_FIN_AN_2025
[2026-02-25 23:07:33] OK: DESCARCARE_DECLARATII
[2026-02-25 23:07:33] === Monitor complete ===
[2026-02-26 23:01:03] === Starting ANAF monitor v2.1 ===
[2026-02-26 23:01:03] OK: D100
[2026-02-26 23:01:03] OK: D101
[2026-02-26 23:01:04] OK: D300
[2026-02-26 23:01:04] OK: D390
[2026-02-26 23:01:04] OK: D394
[2026-02-26 23:01:04] OK: D205
[2026-02-26 23:01:04] OK: D406
[2026-02-26 23:01:04] OK: BILANT_2025
[2026-02-26 23:01:04] OK: SIT_FIN_SEM_2025
[2026-02-26 23:01:04] OK: SIT_FIN_AN_2025
[2026-02-26 23:01:05] OK: DESCARCARE_DECLARATII
[2026-02-26 23:01:05] === Monitor complete ===
[2026-02-27 23:00:41] === Starting ANAF monitor v2.1 ===
[2026-02-27 23:00:41] OK: D100
[2026-02-27 23:00:42] OK: D101
[2026-02-27 23:00:42] OK: D300
[2026-02-27 23:00:42] OK: D390
[2026-02-27 23:00:42] OK: D394
[2026-02-27 23:00:42] OK: D205
[2026-02-27 23:00:42] OK: D406
[2026-02-27 23:00:42] OK: BILANT_2025
[2026-02-27 23:00:42] OK: SIT_FIN_SEM_2025
[2026-02-27 23:00:42] OK: SIT_FIN_AN_2025
[2026-02-27 23:00:43] OK: DESCARCARE_DECLARATII
[2026-02-27 23:00:43] === Monitor complete ===
[2026-02-28 23:09:28] === Starting ANAF monitor v2.1 ===
[2026-02-28 23:09:28] OK: D100
[2026-02-28 23:09:28] OK: D101
[2026-02-28 23:09:28] OK: D300
[2026-02-28 23:09:28] OK: D390
[2026-02-28 23:09:28] OK: D394
[2026-02-28 23:09:28] OK: D205
[2026-02-28 23:09:28] OK: D406
[2026-02-28 23:09:29] OK: BILANT_2025
[2026-02-28 23:09:29] OK: SIT_FIN_SEM_2025
[2026-02-28 23:09:29] OK: SIT_FIN_AN_2025
[2026-02-28 23:09:29] OK: DESCARCARE_DECLARATII
[2026-02-28 23:09:29] === Monitor complete ===
[2026-03-01 23:01:00] === Starting ANAF monitor v2.1 ===
[2026-03-01 23:01:00] OK: D100
[2026-03-01 23:01:00] OK: D101
[2026-03-01 23:01:01] OK: D300
[2026-03-01 23:01:01] OK: D390
[2026-03-01 23:01:01] OK: D394
[2026-03-01 23:01:01] OK: D205
[2026-03-01 23:01:01] OK: D406
[2026-03-01 23:01:01] OK: BILANT_2025
[2026-03-01 23:01:01] OK: SIT_FIN_SEM_2025
[2026-03-01 23:01:01] OK: SIT_FIN_AN_2025
[2026-03-01 23:01:02] OK: DESCARCARE_DECLARATII
[2026-03-01 23:01:02] === Monitor complete ===
[2026-03-02 23:00:50] === Starting ANAF monitor v2.1 ===
[2026-03-02 23:00:50] OK: D100
[2026-03-02 23:00:50] OK: D101
[2026-03-02 23:00:51] OK: D300
[2026-03-02 23:00:51] OK: D390
[2026-03-02 23:00:51] OK: D394
[2026-03-02 23:00:51] OK: D205
[2026-03-02 23:00:51] OK: D406
[2026-03-02 23:00:51] HASH CHANGED in BILANT_2025 (no version changes detected)
[2026-03-02 23:00:51] OK: SIT_FIN_SEM_2025
[2026-03-02 23:00:52] HASH CHANGED in SIT_FIN_AN_2025 (no version changes detected)
[2026-03-02 23:00:52] OK: DESCARCARE_DECLARATII
[2026-03-02 23:00:52] === Monitor complete ===
[2026-03-03 23:00:51] === Starting ANAF monitor v2.1 ===
[2026-03-03 23:00:51] OK: D100
[2026-03-03 23:00:51] OK: D101
[2026-03-03 23:00:51] OK: D300
[2026-03-03 23:00:51] OK: D390
[2026-03-03 23:00:51] OK: D394
[2026-03-03 23:00:52] OK: D205
[2026-03-03 23:00:52] OK: D406
[2026-03-03 23:00:52] OK: BILANT_2025
[2026-03-03 23:00:52] OK: SIT_FIN_SEM_2025
[2026-03-03 23:00:52] OK: SIT_FIN_AN_2025
[2026-03-03 23:00:52] OK: DESCARCARE_DECLARATII
[2026-03-03 23:00:52] === Monitor complete ===
[2026-03-04 23:00:53] === Starting ANAF monitor v2.1 ===
[2026-03-04 23:00:53] OK: D100
[2026-03-04 23:00:53] OK: D101
[2026-03-04 23:00:54] OK: D300
[2026-03-04 23:00:54] OK: D390
[2026-03-04 23:00:54] OK: D394
[2026-03-04 23:00:54] OK: D205
[2026-03-04 23:00:54] OK: D406
[2026-03-04 23:00:54] OK: BILANT_2025
[2026-03-04 23:00:54] OK: SIT_FIN_SEM_2025
[2026-03-04 23:00:54] OK: SIT_FIN_AN_2025
[2026-03-04 23:00:55] OK: DESCARCARE_DECLARATII
[2026-03-04 23:00:55] === Monitor complete ===
[2026-03-05 23:01:44] === Starting ANAF monitor v2.1 ===
[2026-03-05 23:01:44] OK: D100
[2026-03-05 23:01:44] OK: D101
[2026-03-05 23:01:45] OK: D300
[2026-03-05 23:01:45] OK: D390
[2026-03-05 23:01:45] OK: D394
[2026-03-05 23:01:45] OK: D205
[2026-03-05 23:01:45] OK: D406
[2026-03-05 23:01:45] OK: BILANT_2025
[2026-03-05 23:01:45] OK: SIT_FIN_SEM_2025
[2026-03-05 23:01:45] OK: SIT_FIN_AN_2025
[2026-03-05 23:01:45] OK: DESCARCARE_DECLARATII
[2026-03-05 23:01:45] === Monitor complete ===
[2026-03-06 23:01:09] === Starting ANAF monitor v2.1 ===
[2026-03-06 23:01:09] OK: D100
[2026-03-06 23:01:09] OK: D101
[2026-03-06 23:01:10] OK: D300
[2026-03-06 23:01:10] OK: D390
[2026-03-06 23:01:10] OK: D394
[2026-03-06 23:01:10] OK: D205
[2026-03-06 23:01:11] OK: D406
[2026-03-06 23:01:11] HASH CHANGED in BILANT_2025 (no version changes detected)
[2026-03-06 23:01:11] OK: SIT_FIN_SEM_2025
[2026-03-06 23:01:11] HASH CHANGED in SIT_FIN_AN_2025 (no version changes detected)
[2026-03-06 23:01:11] OK: DESCARCARE_DECLARATII
[2026-03-06 23:01:11] === Monitor complete ===
[2026-03-07 23:00:53] === Starting ANAF monitor v2.1 ===
[2026-03-07 23:00:53] OK: D100
[2026-03-07 23:00:53] OK: D101
[2026-03-07 23:00:54] OK: D300
[2026-03-07 23:00:54] OK: D390
[2026-03-07 23:00:54] OK: D394
[2026-03-07 23:00:54] OK: D205
[2026-03-07 23:00:54] OK: D406
[2026-03-07 23:00:54] OK: BILANT_2025
[2026-03-07 23:00:54] OK: SIT_FIN_SEM_2025
[2026-03-07 23:00:55] OK: SIT_FIN_AN_2025
[2026-03-07 23:00:55] OK: DESCARCARE_DECLARATII
[2026-03-07 23:00:55] === Monitor complete ===
[2026-03-08 23:01:02] === Starting ANAF monitor v2.1 ===
[2026-03-08 23:01:02] OK: D100
[2026-03-08 23:01:02] OK: D101
[2026-03-08 23:01:03] OK: D300
[2026-03-08 23:01:03] OK: D390
[2026-03-08 23:01:03] OK: D394
[2026-03-08 23:01:03] OK: D205
[2026-03-08 23:01:03] OK: D406
[2026-03-08 23:01:03] OK: BILANT_2025
[2026-03-08 23:01:03] OK: SIT_FIN_SEM_2025
[2026-03-08 23:01:04] OK: SIT_FIN_AN_2025
[2026-03-08 23:01:04] OK: DESCARCARE_DECLARATII
[2026-03-08 23:01:04] === Monitor complete ===
[2026-03-09 23:00:58] === Starting ANAF monitor v2.1 ===
[2026-03-09 23:00:58] OK: D100
[2026-03-09 23:00:58] OK: D101
[2026-03-09 23:00:59] OK: D300
[2026-03-09 23:00:59] OK: D390
[2026-03-09 23:00:59] OK: D394
[2026-03-09 23:00:59] OK: D205
[2026-03-09 23:00:59] OK: D406
[2026-03-09 23:00:59] OK: BILANT_2025
[2026-03-09 23:00:59] OK: SIT_FIN_SEM_2025
[2026-03-09 23:01:00] OK: SIT_FIN_AN_2025
[2026-03-09 23:01:00] OK: DESCARCARE_DECLARATII
[2026-03-09 23:01:00] === Monitor complete ===
[2026-03-10 23:00:58] === Starting ANAF monitor v2.1 ===
[2026-03-10 23:00:58] OK: D100
[2026-03-10 23:00:58] OK: D101
[2026-03-10 23:00:58] OK: D300
[2026-03-10 23:00:58] OK: D390
[2026-03-10 23:00:59] OK: D394
[2026-03-10 23:00:59] OK: D205
[2026-03-10 23:00:59] OK: D406
[2026-03-10 23:00:59] OK: BILANT_2025
[2026-03-10 23:00:59] OK: SIT_FIN_SEM_2025
[2026-03-10 23:00:59] OK: SIT_FIN_AN_2025
[2026-03-10 23:00:59] OK: DESCARCARE_DECLARATII
[2026-03-10 23:00:59] === Monitor complete ===
[2026-03-11 23:00:51] === Starting ANAF monitor v2.1 ===
[2026-03-11 23:00:51] OK: D100
[2026-03-11 23:00:51] OK: D101
[2026-03-11 23:00:51] OK: D300
[2026-03-11 23:00:51] OK: D390
[2026-03-11 23:00:51] OK: D394
[2026-03-11 23:00:52] OK: D205
[2026-03-11 23:00:52] OK: D406
[2026-03-11 23:00:52] OK: BILANT_2025
[2026-03-11 23:00:52] OK: SIT_FIN_SEM_2025
[2026-03-11 23:00:52] OK: SIT_FIN_AN_2025
[2026-03-11 23:00:52] OK: DESCARCARE_DECLARATII
[2026-03-11 23:00:52] === Monitor complete ===
[2026-03-12 23:01:01] === Starting ANAF monitor v2.1 ===
[2026-03-12 23:01:01] ERROR fetching https://static.anaf.ro/static/10/Anaf/Declaratii_R/100.html: Remote end closed connection without response
[2026-03-12 23:01:01] ERROR fetching https://static.anaf.ro/static/10/Anaf/Declaratii_R/101.html: Remote end closed connection without response
[2026-03-12 23:01:02] ERROR fetching https://static.anaf.ro/static/10/Anaf/Declaratii_R/300.html: Remote end closed connection without response
[2026-03-12 23:01:02] ERROR fetching https://static.anaf.ro/static/10/Anaf/Declaratii_R/390.html: Remote end closed connection without response
[2026-03-12 23:01:02] ERROR fetching https://static.anaf.ro/static/10/Anaf/Declaratii_R/394.html: Remote end closed connection without response
[2026-03-12 23:01:02] ERROR fetching https://static.anaf.ro/static/10/Anaf/Declaratii_R/205.html: Remote end closed connection without response
[2026-03-12 23:01:02] ERROR fetching https://static.anaf.ro/static/10/Anaf/Declaratii_R/406.html: Remote end closed connection without response
[2026-03-12 23:01:02] ERROR fetching https://static.anaf.ro/static/10/Anaf/Declaratii_R/situatiifinanciare/2025/1002_5_2025.html: Remote end closed connection without response
[2026-03-12 23:01:02] ERROR fetching https://static.anaf.ro/static/10/Anaf/Declaratii_R/situatiifinanciare/2025/semestriale/1012_2025.html: Remote end closed connection without response
[2026-03-12 23:01:02] ERROR fetching https://static.anaf.ro/static/10/Anaf/Declaratii_R/situatiifinanciare/2025/1030_2025.html: Remote end closed connection without response
[2026-03-12 23:01:02] ERROR fetching https://static.anaf.ro/static/10/Anaf/Declaratii_R/descarcare_declaratii.htm: Remote end closed connection without response
[2026-03-12 23:01:02] === Monitor complete ===
[2026-03-13 23:01:04] === Starting ANAF monitor v2.1 ===
[2026-03-13 23:01:04] OK: D100
[2026-03-13 23:01:04] OK: D101
[2026-03-13 23:01:04] OK: D300
[2026-03-13 23:01:04] OK: D390
[2026-03-13 23:01:04] OK: D394
[2026-03-13 23:01:04] OK: D205
[2026-03-13 23:01:05] OK: D406
[2026-03-13 23:01:05] CHANGES in BILANT_2025: ['Soft J S1005: 12.03.2026 (NOU)']
[2026-03-13 23:01:05] OK: SIT_FIN_SEM_2025
[2026-03-13 23:01:05] OK: SIT_FIN_AN_2025
[2026-03-13 23:01:05] OK: DESCARCARE_DECLARATII
[2026-03-13 23:01:05] === Monitor complete ===
[2026-03-14 23:00:57] === Starting ANAF monitor v2.1 ===
[2026-03-14 23:00:57] OK: D100
[2026-03-14 23:00:57] OK: D101
[2026-03-14 23:00:57] OK: D300
[2026-03-14 23:00:58] OK: D390
[2026-03-14 23:00:58] OK: D394
[2026-03-14 23:00:58] OK: D205
[2026-03-14 23:00:58] OK: D406
[2026-03-14 23:00:58] OK: BILANT_2025
[2026-03-14 23:00:58] OK: SIT_FIN_SEM_2025
[2026-03-14 23:00:58] OK: SIT_FIN_AN_2025
[2026-03-14 23:00:58] OK: DESCARCARE_DECLARATII
[2026-03-14 23:00:58] === Monitor complete ===
[2026-03-15 23:00:49] === Starting ANAF monitor v2.1 ===
[2026-03-15 23:00:49] OK: D100
[2026-03-15 23:00:49] OK: D101
[2026-03-15 23:00:49] OK: D300
[2026-03-15 23:00:49] OK: D390
[2026-03-15 23:00:49] OK: D394
[2026-03-15 23:00:50] OK: D205
[2026-03-15 23:00:50] OK: D406
[2026-03-15 23:00:50] OK: BILANT_2025
[2026-03-15 23:00:50] OK: SIT_FIN_SEM_2025
[2026-03-15 23:00:50] OK: SIT_FIN_AN_2025
[2026-03-15 23:00:50] OK: DESCARCARE_DECLARATII
[2026-03-15 23:00:50] === Monitor complete ===
[2026-03-20 23:00:45] === Starting ANAF monitor v2.1 ===
[2026-03-20 23:00:46] OK: D100
[2026-03-20 23:00:46] CHANGES in D101: ['Soft A: 26.01.2026 → 18.03.2026']
[2026-03-20 23:00:46] OK: D300
[2026-03-20 23:00:46] OK: D390
[2026-03-20 23:00:46] OK: D394
[2026-03-20 23:00:46] OK: D205
[2026-03-20 23:00:46] OK: D406
[2026-03-20 23:00:47] CHANGES in BILANT_2025: ['Soft J S1005: 12.03.2026 → 17.03.2026']
[2026-03-20 23:00:47] OK: SIT_FIN_SEM_2025
[2026-03-20 23:00:47] HASH CHANGED in SIT_FIN_AN_2025 (no version changes detected)
[2026-03-20 23:00:47] OK: DESCARCARE_DECLARATII
[2026-03-20 23:00:47] === Monitor complete ===
[2026-03-26 23:09:24] === Starting ANAF monitor v2.1 ===
[2026-03-26 23:09:24] CHANGES in D100: ['Soft A: 10.02.2026 → 25.03.2026']
[2026-03-26 23:09:24] OK: D101
[2026-03-26 23:09:24] OK: D300
[2026-03-26 23:09:24] OK: D390
[2026-03-26 23:09:24] OK: D394
[2026-03-26 23:09:25] OK: D205
[2026-03-26 23:09:25] OK: D406
[2026-03-26 23:09:25] CHANGES in BILANT_2025: ['Soft A: 11.02.2026 → 25.03.2026']
[2026-03-26 23:09:25] OK: SIT_FIN_SEM_2025
[2026-03-26 23:09:25] HASH CHANGED in SIT_FIN_AN_2025 (no version changes detected)
[2026-03-26 23:09:25] HASH CHANGED in DESCARCARE_DECLARATII (no version changes detected)
[2026-03-26 23:09:25] === Monitor complete ===
[2026-03-29 22:01:05] === Starting ANAF monitor v2.1 ===
[2026-03-29 22:01:05] OK: D100
[2026-03-29 22:01:05] OK: D101
[2026-03-29 22:01:06] OK: D300
[2026-03-29 22:01:06] OK: D390
[2026-03-29 22:01:06] OK: D394
[2026-03-29 22:01:06] OK: D205
[2026-03-29 22:01:06] OK: D406
[2026-03-29 22:01:06] CHANGES in BILANT_2025: ['Soft A: 25.03.2026 → 27.03.2026']
[2026-03-29 22:01:06] OK: SIT_FIN_SEM_2025
[2026-03-29 22:01:06] OK: SIT_FIN_AN_2025
[2026-03-29 22:01:07] OK: DESCARCARE_DECLARATII
[2026-03-29 22:01:07] === Monitor complete ===
[2026-03-30 22:00:49] === Starting ANAF monitor v2.1 ===
[2026-03-30 22:00:49] OK: D100
[2026-03-30 22:00:50] OK: D101
[2026-03-30 22:00:50] OK: D300
[2026-03-30 22:00:50] OK: D390
[2026-03-30 22:00:50] OK: D394
[2026-03-30 22:00:50] OK: D205
[2026-03-30 22:00:50] OK: D406
[2026-03-30 22:00:50] OK: BILANT_2025
[2026-03-30 22:00:50] OK: SIT_FIN_SEM_2025
[2026-03-30 22:00:51] OK: SIT_FIN_AN_2025
[2026-03-30 22:00:51] OK: DESCARCARE_DECLARATII
[2026-03-30 22:00:51] === Monitor complete ===
[2026-03-31 22:01:26] === Starting ANAF monitor v2.1 ===
[2026-03-31 22:01:26] OK: D100
[2026-03-31 22:01:27] OK: D101
[2026-03-31 22:01:27] OK: D300
[2026-03-31 22:01:27] OK: D390
[2026-03-31 22:01:27] OK: D394
[2026-03-31 22:01:27] OK: D205
[2026-03-31 22:01:27] OK: D406
[2026-03-31 22:01:27] OK: BILANT_2025
[2026-03-31 22:01:27] OK: SIT_FIN_SEM_2025
[2026-03-31 22:01:27] OK: SIT_FIN_AN_2025
[2026-03-31 22:01:28] OK: DESCARCARE_DECLARATII
[2026-03-31 22:01:28] === Monitor complete ===
[2026-04-01 22:02:02] === Starting ANAF monitor v2.1 ===
[2026-04-01 22:02:02] OK: D100
[2026-04-01 22:02:02] OK: D101
[2026-04-01 22:02:02] OK: D300
[2026-04-01 22:02:02] OK: D390
[2026-04-01 22:02:02] OK: D394
[2026-04-01 22:02:03] OK: D205
[2026-04-01 22:02:03] OK: D406
[2026-04-01 22:02:03] OK: BILANT_2025
[2026-04-01 22:02:03] OK: SIT_FIN_SEM_2025
[2026-04-01 22:02:03] OK: SIT_FIN_AN_2025
[2026-04-01 22:02:03] OK: DESCARCARE_DECLARATII
[2026-04-01 22:02:03] === Monitor complete ===
[2026-04-02 22:03:25] === Starting ANAF monitor v2.1 ===
[2026-04-02 22:03:25] OK: D100
[2026-04-02 22:03:25] OK: D101
[2026-04-02 22:03:25] OK: D300
[2026-04-02 22:03:25] OK: D390
[2026-04-02 22:03:25] OK: D394
[2026-04-02 22:03:26] OK: D205
[2026-04-02 22:03:26] OK: D406
[2026-04-02 22:03:26] OK: BILANT_2025
[2026-04-02 22:03:26] OK: SIT_FIN_SEM_2025
[2026-04-02 22:03:26] OK: SIT_FIN_AN_2025
[2026-04-02 22:03:26] HASH CHANGED in DESCARCARE_DECLARATII (no version changes detected)
[2026-04-02 22:03:26] === Monitor complete ===
[2026-04-03 22:07:19] === Starting ANAF monitor v2.1 ===
[2026-04-03 22:07:19] OK: D100
[2026-04-03 22:07:19] OK: D101
[2026-04-03 22:07:20] OK: D300
[2026-04-03 22:07:20] OK: D390
[2026-04-03 22:07:20] OK: D394
[2026-04-03 22:07:20] OK: D205
[2026-04-03 22:07:20] OK: D406
[2026-04-03 22:07:20] OK: BILANT_2025
[2026-04-03 22:07:20] OK: SIT_FIN_SEM_2025
[2026-04-03 22:07:21] OK: SIT_FIN_AN_2025
[2026-04-03 22:07:21] HASH CHANGED in DESCARCARE_DECLARATII (no version changes detected)
[2026-04-03 22:07:21] === Monitor complete ===
[2026-04-21 10:04:29] === Starting ANAF monitor v2.1 ===
[2026-04-21 10:04:29] CHANGES in D100: ['Soft J: 22.01.2026 → 07.04.2026']
[2026-04-21 10:04:29] OK: D101
[2026-04-21 10:04:29] OK: D300
[2026-04-21 10:04:29] OK: D390
[2026-04-21 10:04:29] OK: D394
[2026-04-21 10:04:30] OK: D205
[2026-04-21 10:04:30] OK: D406
[2026-04-21 10:04:30] HASH CHANGED in BILANT_2025 (no version changes detected)
[2026-04-21 10:04:30] OK: SIT_FIN_SEM_2025
[2026-04-21 10:04:32] HASH CHANGED in SIT_FIN_AN_2025 (no version changes detected)
[2026-04-21 10:04:32] OK: DESCARCARE_DECLARATII
[2026-04-21 10:04:32] === Monitor complete ===
@@ -364,9 +364,16 @@ def main():
    update_dashboard_status(len(all_changes) > 0, len(all_changes), all_changes)

    log("=== Monitor complete ===")


    num_changes = len(all_changes)
    print(json.dumps({"changes": all_changes}, ensure_ascii=False, indent=2))
    return len(all_changes)
    # GSTACK-CRON marker: contract with Echo-Core scheduler (report_on="changes").
    # The shell-kind scheduler parses the last line matching
    # ^GSTACK-CRON: changes=\d+$ to decide whether to forward stdout.
    # A successful run (even with N>0 changes) must exit 0 — the scheduler
    # reports non-zero exit codes as errors unconditionally.
    print(f"GSTACK-CRON: changes={num_changes}")
    return 0

if __name__ == "__main__":
    exit(main())
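
For reference, the receiving side of this contract could look roughly like the sketch below. This is an illustrative reconstruction, not the actual echo-core scheduler code (the function name `should_forward` and its signature are invented here); it only shows how a `report_on="changes"` shell job's stdout could be gated on the marker line:

```python
import re

# The LAST line matching this pattern carries the change count (per the contract above).
GSTACK_RE = re.compile(r"^GSTACK-CRON: changes=(\d+)$")

def should_forward(stdout: str, exit_code: int) -> bool:
    """Forward output when the job errored or reported at least one change."""
    if exit_code != 0:
        return True  # non-zero exit is reported as an error unconditionally
    changes = None
    for line in stdout.splitlines():
        m = GSTACK_RE.match(line)
        if m:
            changes = int(m.group(1))  # keep the last match
    return changes is not None and changes > 0
```
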
@@ -1,14 +1,26 @@
S1002-S1003-S1004-S1005
S1002-S1003-S1004-S1005
S1010
S1011
S1014-S1015
S1019
S1020-S1022
S1023-S1044
S1024-S1043
S1025
S1026-S1077
S1030
S1039
S1040-S1041
S1042
S1046
S1047-S1049
S1051-S1052-S1053-S1054
S1056
S1061
S1070
S1072
S1079
Tabel
codificări
tipuri de situaţii financiare şi raportări anuale
@@ -17,7 +29,6 @@ Programe asistenţă
Instrucţiuni/ Documentaţie
PDF
JAVA
Atenție! Momentan se pot depune doar S1002,S1003 şi S1005.
S1002-S1005
Situaţii financiare anuale la
31 decembrie 2025
@@ -29,12 +40,14 @@ Fişierul cu extensia zip va conţine prima pagină din situaţiile financiare a
28.01.2026
soft A
actualizat în data
11.02.2026
27.03.2026
soft J - S1002
soft J - S1003
soft J - S1004
soft J - S1005
Schema XSD 1002
Schema XSD 1003
Schema XSD 1004
Schema XSD 1005
Structura
*softul J se adresează doar contribuabililor care îşi generează fişierul xml din aplicaţiile informatice proprii
@@ -62,14 +62,14 @@ valabil începand cu
01/2024 - publicat în data de 09.02.2024
soft A
actualizat în data de
10.02.2026
25.03.2026
soft J*
actualizat în data de
23.01.2026
07.04.2026
Anexa
validări
actualizat în data de
09.02.2026
26.03.2026
Schema
XSD
100

@@ -14,10 +14,10 @@ publicat în 14.02.2025
.
soft A
actualizat în data de
26.01.2026
18.03.2026
soft J*
actualizat în data de
23.01.2026
19.02.2026
Anexa
validări
actualizat în data de
@@ -122,10 +122,10 @@ Ca urmare a modificării plafonului de venituri, vor depune D101
euro,inclusiv, si care :
- sunt plătitoare de impozit pe profit la
data de 31.01.2017 si vor intra în categoria microintreprinderilor
începând cu 01.02.2017 (scadenta 25.02.2017)
încep nd cu 01.02.2017 (scadenta 25.02.2017)
- sunt plătitoare de
impozit pe profit la data de 31.07.2017 si vor intra în categoria
microintreprinderilor începând cu 01.08.2017 (scadenta 25.08.2017)
microintreprinderilor încep nd cu 01.08.2017 (scadenta 25.08.2017)
Valabil pentru lunile ianuarie 2017 si iulie 2017 - actualizat în
data de
31.07.2017
@@ -157,7 +157,7 @@ XSD
actualizat în data de
06.02.2017
101
- Declaraţie privind impozitul pe profit, conform OPANAF nr. 3250/ 2015 (M.OF. nr.905/ 07.12.2015) - valabil începând cu anul 2015 - publicat
- Declaraţie privind impozitul pe profit, conform OPANAF nr. 3250/ 2015 (M.OF. nr.905/ 07.12.2015) - valabil încep nd cu anul 2015 - publicat
07.12.2016 versiune bilingvă română - engleză
soft A
Anexa
@@ -165,7 +165,7 @@ validări
Schema
XSD
101
- Declaraţie privind impozitul pe profit, conform OPANAF nr. 3250/ 2015 (M.OF. nr.905/ 07.12.2015) - valabil începând cu anul 2015 - publicat
- Declaraţie privind impozitul pe profit, conform OPANAF nr. 3250/ 2015 (M.OF. nr.905/ 07.12.2015) - valabil încep nd cu anul 2015 - publicat
11.01.2016
soft A
actualizat în data de

@@ -11,7 +11,7 @@ D406
28.02.2022
Soft J*
actualizat în data de
11.02.2026
19.02.2026
Schema xsd
actualizat în data de
08.07.2025

@@ -215,6 +215,8 @@ Declaraţii
,
180
,
181
,
182
,
205
@@ -376,7 +378,7 @@ Situaţii financiare interimare trimestriale
20.04.2023
Se transmit prin portalul
e-guvernare.ro
Situaţii financiare interimare trimestriale =2025
Situaţii financiare interimare trimestriale >=2025
Se transmit prin portalul
e-guvernare.ro
Raportări contabile semestriale 2024
@@ -417,7 +419,7 @@ Se transmit prin portalul
e-guvernare.ro
Formulare S1001
,
1100
1101
Se transmit prin portalul
e-guvernare.ro
Declaraţii electronice

@@ -1,14 +1,26 @@
S1030
S1002-S1003-S1004-S1005
S1010
S1011
S1014-S1015
S1019
S1020-S1022
S1023-S1044
S1024-S1043
S1025
S1026-S1077
S1030
S1039
S1040-S1041
S1042
S1046
S1047-S1049
S1051-S1052-S1053-S1054
S1056
S1061
S1070
S1072
S1079
Tabel
codificări
tipuri de situaţii financiare şi raportări anuale

@@ -1,14 +1,14 @@
{
"D100": {
"soft_a_url": "http://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D100_710_XML_0126_100226.pdf",
"soft_a_date": "10.02.2026",
"soft_j_url": "http://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D100_22012026.zip",
"soft_j_date": "22.01.2026"
"soft_a_url": "http://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D100_710_XML_0126_250326.pdf",
"soft_a_date": "25.03.2026",
"soft_j_url": "http://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D100_07042026.zip",
"soft_j_date": "07.04.2026"
},
"D101": {
"soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D101_XML_2025_260126.pdf",
"soft_a_date": "26.01.2026",
"soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D101_J1102.zip"
"soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D101_XML_2025_180326.pdf",
"soft_a_date": "18.03.2026",
"soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D101_J1103.zip"
},
"D300": {
"soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D300_v12.0.2_12022026.pdf",
@@ -31,24 +31,25 @@
"D205": {
"soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D205_XML_2025_120226.pdf",
"soft_a_date": "12.02.2026",
"soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D205_v903.zip"
"soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D205_J905.zip"
},
"D406": {
"soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/R405_XML_2017_080321.pdf",
"soft_a_date": "08.03.2021",
"soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D406_20260211.zip",
"soft_j_date": "11.02.2026"
"soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D406_20260216.zip",
"soft_j_date": "16.02.2026"
},
"BILANT_2025": {
"soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/bilant_SC_1225_XML_110226.pdf",
"soft_a_date": "11.02.2026",
"soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/bilant_SC_1225_XML_270326.pdf",
"soft_a_date": "27.03.2026",
"soft_j_S1002_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/S1002_20260128.zip",
"soft_j_S1002_date": "28.01.2026",
"soft_j_S1004_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/S1004_20250204.zip",
"soft_j_S1004_date": "04.02.2025",
"soft_j_S1003_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/S1003_20260210.zip",
"soft_j_S1003_date": "10.02.2026",
"soft_j_S1005_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/S1005_202060203.zip"
"soft_j_S1004_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/S1004_20260224.zip",
"soft_j_S1004_date": "24.02.2026",
"soft_j_S1005_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/S1005_20260317.zip",
"soft_j_S1005_date": "17.03.2026"
},
"SIT_FIN_SEM_2025": {
"soft_j_1012_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/S1012_20250723.zip",

458
tools/migrations/import_openclaw_jobs_2026-04.py
Executable file
@@ -0,0 +1,458 @@
#!/usr/bin/env python3
"""
One-shot migration: translate OpenClaw cron/jobs.json to echo-core schema.

Dated: 2026-04
Status: ONE-SHOT tool. Kept in git as an audit artifact for the consolidation.
Restore path: if this needs to be re-run, the original OpenClaw file is at
    /home/moltbot/.openclaw/cron/jobs.json
    /home/moltbot/.openclaw/cron/jobs.json.bak
and the pre-migration echo-core jobs.json is recoverable from git history
(commit preceding `feat(cron): populate jobs.json with decomposed ...`).

OpenClaw schema (nested):
    {
      "id": "<uuid>",
      "agentId": "echo",
      "name": "<name>",
      "enabled": bool,
      "schedule": {"kind": "cron", "expr": "0 6 * * *", "tz": "Europe/Bucharest"?},
      "sessionTarget": "isolated",
      "payload": {"kind": "agentTurn", "message": "<prompt>", "model": "sonnet"?},
      "state": {...},
      ...
    }

Echo-core schema (flat, Claude job):
    {
      "name": "<name>",
      "cron": "<expr, Bucharest local>",
      "channel": "<channel name>",
      "model": "sonnet",
      "prompt": "<prompt, path-rewritten>",
      "allowed_tools": [],
      "enabled": bool,
      "last_run": null, "last_status": null, "next_run": null
    }

Echo-core scheduler interprets cron expressions in Europe/Bucharest. OpenClaw
used UTC by default (per its runtime) unless schedule.tz is set explicitly.
This script converts UTC -> Europe/Bucharest for jobs without an explicit tz.

Usage:
    python3 tools/migrations/import_openclaw_jobs_2026-04.py [flags]

Flags:
    --dry-run               Print what would change without writing.
    --skip-disabled         Skip jobs where enabled is false (default: import all).
    --skip name1,name2,...  Comma-separated list of job names to exclude.
    --channel <name>        Default channel for imported jobs (default: echo-work).
    --source <path>         Path to openclaw jobs.json.
    --target <path>         Path to echo-core jobs.json.

The script is idempotent with respect to existing jobs: if a job with the same
name is already present in the target, it is skipped with a warning, and the
existing entry is preserved untouched.
"""

from __future__ import annotations

import argparse
import json
import re
import sys
from datetime import datetime, timezone
from pathlib import Path
from zoneinfo import ZoneInfo

# ---------------------------------------------------------------------------
# Constants
# ---------------------------------------------------------------------------

PROJECT_ROOT = Path(__file__).resolve().parents[2]
DEFAULT_SOURCE = Path("/home/moltbot/.openclaw/cron/jobs.json")
DEFAULT_TARGET = PROJECT_ROOT / "cron" / "jobs.json"
DEFAULT_CHANNEL = "echo-work"

BUCHAREST = ZoneInfo("Europe/Bucharest")
UTC = ZoneInfo("UTC")

# Jobs to skip by default. Anti-foot-gun list for known-dead/bad openclaw jobs.
# Can be extended at invocation time via --skip.
SKIP_BY_DEFAULT: set[str] = {
    "night-execute",  # SSH to LXC, dead infra
    "antfarm/feature-dev/planner",
    "antfarm/feature-dev/setup",
    "antfarm/feature-dev/developer",
    "antfarm/feature-dev/verifier",
    "antfarm/feature-dev/tester",
    "antfarm/feature-dev/reviewer",
}

# YouTube:* one-off pinned prompts — always auto-skipped regardless of flags.
YOUTUBE_PREFIX = "YouTube:"

# Path rewrites applied to prompt bodies. Each pattern is a compiled regex;
# the replacement is a literal string. Order matters — longer/more-specific
# patterns first so the shorter ones don't eat them prematurely.
#
# We use lookahead/boundary tricks so that `clawd-archive`, `clawdbot`,
# `clawd.old`, etc. are NOT matched: in the absolute-path form `clawd` must be
# immediately followed by `/`, and the `cd` forms require end-of-token
# (no trailing word character, `.`, `/`, or `-`).
PATH_REWRITES: list[tuple[re.Pattern[str], str]] = [
    # Absolute path: /home/moltbot/clawd/... -> /home/moltbot/echo-core/...
    (re.compile(r"/home/moltbot/clawd(?=/)"), "/home/moltbot/echo-core"),
    # Shell form: cd ~/clawd -> cd ~/echo-core (allow trailing & or space)
    (re.compile(r"(?<![\w-])cd\s+~/clawd(?![\w./-])"), "cd ~/echo-core"),
    # Shell form: cd /home/moltbot/clawd -> cd /home/moltbot/echo-core
    (re.compile(r"(?<![\w-])cd\s+/home/moltbot/clawd(?![\w./-])"),
     "cd /home/moltbot/echo-core"),
]
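
# Illustrative boundary behaviour (examples added for this review, not from
# the original prompts; outputs assume the patterns above):
#   "ls /home/moltbot/clawd/memory"  -> "ls /home/moltbot/echo-core/memory"
#   "/home/moltbot/clawd-archive/x"  -> unchanged ("-" fails the (?=/) lookahead)
#   "cd ~/clawd && git pull"         -> "cd ~/echo-core && git pull"
#   "cd ~/clawdbot"                  -> unchanged ("b" fails the end-of-token boundary)
#   "cd ~/clawd.old"                 -> unchanged ("." fails the end-of-token boundary)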

# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------


def _parse_cron_field(field: str) -> list[int] | None:
    """Return sorted list of ints the field expands to, or None if uncertain.

    Handles: "*", "N", "N,M", "N-M", "*/S", "A-B/S", "N,M-P/S".
    ("*" and "*/S" are deferred to the caller as None, since the valid range
    depends on which field it is.)
    Returns None for anything we don't recognise (caller should warn and leave
    the job for manual review).
    """
    result: set[int] = set()
    parts = field.split(",")
    for part in parts:
        if part == "*":
            return None  # caller handles
        step = 1
        if "/" in part:
            base, step_s = part.split("/", 1)
            try:
                step = int(step_s)
            except ValueError:
                return None
        else:
            base = part
        if base == "*":
            return None
        if "-" in base:
            try:
                lo_s, hi_s = base.split("-", 1)
                lo, hi = int(lo_s), int(hi_s)
            except ValueError:
                return None
            for v in range(lo, hi + 1, step):
                result.add(v)
        else:
            try:
                result.add(int(base))
            except ValueError:
                return None
    return sorted(result)
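
# Worked examples (illustrative, added for this review):
#   _parse_cron_field("0")       -> [0]
#   _parse_cron_field("1,15")    -> [1, 15]
#   _parse_cron_field("8-10")    -> [8, 9, 10]
#   _parse_cron_field("10-22/6") -> [10, 16, 22]
#   _parse_cron_field("*/15")    -> None  (deferred to the caller)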


def _convert_hour_field(hour_field: str, day_shift_from_utc: int) -> tuple[str, bool]:
    """Convert UTC hour field to Bucharest-local hour field.

    Returns (converted_field, approx_ok). approx_ok is False when we
    couldn't confidently translate (e.g. odd step that crosses midnight).
    The caller should warn if False and present the job for manual review.

    Strategy: if the field is "*" -> "*"; if it expands to a concrete list
    of hours, shift each hour by `day_shift_from_utc` (UTC+2 or UTC+3 depending
    on DST) modulo 24. If any hour wraps past midnight (which would change the
    day-of-week / day-of-month field in a way a simple script can't handle),
    flag approx_ok=False.
    """
    if hour_field == "*":
        return "*", True

    hours = _parse_cron_field(hour_field)
    if hours is None:
        return hour_field, False

    # Shift and check for day-wrap
    day_wrap = False
    shifted = []
    for h in hours:
        new_h = h + day_shift_from_utc
        if new_h >= 24:
            new_h -= 24
            day_wrap = True
        elif new_h < 0:
            new_h += 24
            day_wrap = True
        shifted.append(new_h)

    shifted = sorted(set(shifted))
    if not shifted:
        return hour_field, False

    # We could re-compress into a step form when the input looked like A-B/S;
    # for simplicity we emit a comma-separated list. APScheduler accepts that.
    return ",".join(str(h) for h in shifted), not day_wrap


def convert_cron_utc_to_bucharest(
    expr: str,
    src_tz: str | None,
    reference_dt: datetime | None = None,
) -> tuple[str, list[str]]:
    """Translate a cron expression from src_tz to Europe/Bucharest.

    If `src_tz == 'Europe/Bucharest'` the expression is returned unchanged.
    Otherwise we assume UTC source (OpenClaw's default runtime) and shift the
    hour field by the current UTC->Bucharest offset.

    Returns (new_expr, warnings). warnings is a list of human-readable notes;
    if non-empty, caller should flag for manual review.

    DST caveat: the offset is evaluated at `reference_dt` (default: now).
    Jobs that span DST transitions may need manual tuning. We emit a warning
    rather than trying to be clever.
    """
    warnings: list[str] = []
    if src_tz == "Europe/Bucharest":
        return expr, warnings

    fields = expr.split()
    if len(fields) != 5:
        warnings.append(f"cron expr does not have 5 fields: {expr!r}")
        return expr, warnings

    minute, hour, dom, month, dow = fields

    ref = reference_dt or datetime.now(UTC)
    # offset for "what is UTC hour X in Bucharest?"
    offset_seconds = int(
        ref.replace(tzinfo=UTC).astimezone(BUCHAREST).utcoffset().total_seconds()
    )
    # should be +7200 (winter) or +10800 (summer)
    shift_hours = offset_seconds // 3600

    new_hour, ok = _convert_hour_field(hour, shift_hours)
    if not ok:
        warnings.append(
            f"hour field {hour!r} crosses day boundary or is complex — "
            "verify day-of-week/day-of-month manually"
        )

    return f"{minute} {new_hour} {dom} {month} {dow}", warnings


def rewrite_prompt_paths(text: str) -> tuple[str, list[tuple[str, str]]]:
    """Apply path rewrites to a prompt body.

    Returns (new_text, substitutions) where substitutions is a list of
    (old_snippet, new_snippet) tuples — every rewrite that was performed.
    """
    substitutions: list[tuple[str, str]] = []
    new = text
    for pattern, replacement in PATH_REWRITES:
        def _sub(match: re.Match[str]) -> str:
            old = match.group(0)
            substitutions.append((old, replacement))
            return replacement

        new = pattern.sub(_sub, new)
    return new, substitutions
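
# Illustrative round-trip (hypothetical prompt text, added for this review):
#   rewrite_prompt_paths("cd ~/clawd && cat /home/moltbot/clawd/notes.md")
#   -> ("cd ~/echo-core && cat /home/moltbot/echo-core/notes.md",
#       [("/home/moltbot/clawd", "/home/moltbot/echo-core"),
#        ("cd ~/clawd", "cd ~/echo-core")])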


def translate_job(
    oc_job: dict,
    default_channel: str,
    reference_dt: datetime | None = None,
) -> tuple[dict | None, list[str]]:
    """Translate one openclaw job dict to an echo-core job dict.

    Returns (echo_job, warnings). echo_job is None if the job cannot be
    translated (e.g. non-cron schedule).
    """
    warnings: list[str] = []
    name = oc_job.get("name") or oc_job.get("id") or "<unnamed>"

    sched = oc_job.get("schedule") or {}
    if sched.get("kind") != "cron":
        warnings.append(
            f"job {name!r}: schedule.kind={sched.get('kind')!r} "
            "is not 'cron' — skipping (manual review)"
        )
        return None, warnings

    expr = sched.get("expr")
    if not isinstance(expr, str) or not expr.strip():
        warnings.append(f"job {name!r}: missing/empty schedule.expr — skipping")
        return None, warnings

    src_tz = sched.get("tz")
    new_expr, tz_warnings = convert_cron_utc_to_bucharest(
        expr, src_tz, reference_dt=reference_dt
    )
    for w in tz_warnings:
        warnings.append(f"job {name!r}: {w}")

    payload = oc_job.get("payload") or {}
    prompt = payload.get("message") or ""
    new_prompt, subs = rewrite_prompt_paths(prompt)
    for old, new in subs:
        warnings.append(f"job {name!r}: rewrote {old!r} -> {new!r}")

    model = payload.get("model") or "sonnet"

    # openclaw doesn't track allowedTools in the same way; start with [].
    allowed = oc_job.get("allowedTools") or payload.get("allowedTools") or []
    if not isinstance(allowed, list):
        allowed = []

    echo_job = {
        "name": name,
        "cron": new_expr,
        "channel": default_channel,
        "model": model,
        "prompt": new_prompt,
        "allowed_tools": list(allowed),
        "enabled": bool(oc_job.get("enabled", False)),
        "last_run": None,
        "last_status": None,
        "next_run": None,
    }
    return echo_job, warnings
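
# Illustrative round-trip (hypothetical OpenClaw job, added for this review):
#   oc = {"name": "daily-brief", "enabled": True,
#         "schedule": {"kind": "cron", "expr": "0 5 * * *"},
#         "payload": {"kind": "agentTurn", "message": "cd ~/clawd && brief me"}}
#   translate_job(oc, "echo-work",
#                 reference_dt=datetime(2026, 1, 15, tzinfo=UTC))
#   -> ({"name": "daily-brief", "cron": "0 7 * * *", "channel": "echo-work",
#        "model": "sonnet", "prompt": "cd ~/echo-core && brief me",
#        "allowed_tools": [], "enabled": True, "last_run": None,
#        "last_status": None, "next_run": None},
#       ["job 'daily-brief': rewrote 'cd ~/clawd' -> 'cd ~/echo-core'"])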


# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------


def load_json(path: Path) -> object:
    with path.open("r", encoding="utf-8") as f:
        return json.load(f)


def _is_skipped(name: str, skip_set: set[str], include_default_skip: bool) -> bool:
    if name.startswith(YOUTUBE_PREFIX):
        return True
    if include_default_skip and name in SKIP_BY_DEFAULT:
        return True
    if name in skip_set:
        return True
    return False


def run(argv: list[str] | None = None) -> int:
    # __doc__ starts with a newline, so strip before taking the first line.
    p = argparse.ArgumentParser(description=__doc__.strip().splitlines()[0])
    p.add_argument("--dry-run", action="store_true")
    p.add_argument("--skip-disabled", action="store_true")
    p.add_argument("--skip", default="",
                   help="Comma-separated list of additional names to skip.")
    p.add_argument("--no-default-skip", action="store_true",
                   help="Disable the built-in SKIP_BY_DEFAULT list.")
    p.add_argument("--channel", default=DEFAULT_CHANNEL,
                   help=f"Default channel for imported jobs (default: {DEFAULT_CHANNEL}).")
    p.add_argument("--source", default=str(DEFAULT_SOURCE))
    p.add_argument("--target", default=str(DEFAULT_TARGET))
    args = p.parse_args(argv)

    source = Path(args.source)
    target = Path(args.target)
    extra_skip = {s.strip() for s in args.skip.split(",") if s.strip()}
    include_default_skip = not args.no_default_skip

    if not source.exists():
        print(f"ERROR: source not found: {source}", file=sys.stderr)
        return 2

    oc_data = load_json(source)
    if not isinstance(oc_data, dict) or "jobs" not in oc_data:
        print(f"ERROR: source {source} is not a dict with 'jobs' key",
              file=sys.stderr)
        return 2

    # load target (may not exist yet)
    if target.exists():
        target_jobs = load_json(target)
        if not isinstance(target_jobs, list):
            print(f"ERROR: target {target} is not a JSON list", file=sys.stderr)
            return 2
    else:
        target_jobs = []

    existing_names = {j.get("name") for j in target_jobs}

    ref = datetime.now(UTC)
    to_add: list[dict] = []
    summary_lines: list[str] = []

    for oc_job in oc_data["jobs"]:
        name = oc_job.get("name") or oc_job.get("id") or "<unnamed>"

        if _is_skipped(name, extra_skip, include_default_skip):
            summary_lines.append(f" SKIP {name:40s} (skip list)")
            continue

        if args.skip_disabled and not oc_job.get("enabled", False):
            summary_lines.append(f" SKIP {name:40s} (disabled, --skip-disabled)")
            continue

        echo_job, warnings = translate_job(oc_job, args.channel, reference_dt=ref)

        if echo_job is None:
            for w in warnings:
                summary_lines.append(f" WARN {w}")
            summary_lines.append(f" SKIP {name:40s} (untranslatable)")
            continue

        if echo_job["name"] in existing_names:
            summary_lines.append(
                f" DUPE {name:40s} (already in target — existing entry preserved)"
            )
            continue

        for w in warnings:
            summary_lines.append(f" WARN {w}")

        summary_lines.append(
            f" ADD {name:40s} cron={echo_job['cron']!r:18s} "
            f"enabled={echo_job['enabled']} model={echo_job['model']}"
        )
        to_add.append(echo_job)

    # Print summary
    print(f"Source: {source}")
    print(f"Target: {target}")
    print(f"Dry-run: {args.dry_run}")
    print(f"Default channel for imports: {args.channel}")
    print(f"Existing target jobs: {len(target_jobs)}")
    print(f"Source jobs: {len(oc_data['jobs'])}")
    print()
    print("Per-job decisions:")
    for line in summary_lines:
        print(line)
    print()
    print(f"Would add {len(to_add)} new job(s) to target.")

    if args.dry_run:
        print("[DRY-RUN] no changes written.")
        return 0

    if not to_add:
        print("Nothing to write.")
        return 0

    target_jobs.extend(to_add)
    target.parent.mkdir(parents=True, exist_ok=True)
    tmp = target.with_suffix(target.suffix + ".tmp")
    with tmp.open("w", encoding="utf-8") as f:
        json.dump(target_jobs, f, indent=2, ensure_ascii=False)
        f.write("\n")
    tmp.replace(target)
    print(f"Wrote {len(target_jobs)} jobs to {target}")
    return 0


if __name__ == "__main__":
    sys.exit(run())