docs(kb): sync infrastructure with romfastsql proxmox config

LXC 171 moved to pveelite (not pvemini), RAM 4GB (not 16GB).
LXC 110 disk 8GB (not 30GB), SSH user moltbot@.
Added VM 302 (oracle-test, 10.0.20.130).
VM 201 expanded with IIS details, domains, Win-ACME, ZFS replication.
VM 109 expanded with Oracle 19c, RMAN backup schedule.
Proxmox VE 8.4.14, cluster storage documented.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-25 22:05:09 +00:00
parent c146d68498
commit d22ce49d76


@@ -1,6 +1,6 @@
# Infrastructure (Proxmox + Docker)
-> Last updated: 2026-04-24. Full scan of all nodes.
+> Last updated: 2026-04-25. Synced with romfastsql/proxmox/ from Gitea.
## Quick LXC access
@@ -12,8 +12,8 @@
| 104 | flowise | pvemini | 10.0.20.161 | ❌ (publickey only) | `ssh echo@10.0.20.201 "sudo pct exec 104 -- bash"` |
| 106 | gitea | pvemini | 10.0.20.165 | — | `ssh echo@10.0.20.201 "sudo pct exec 106 -- sh"` ⚠️ Alpine (sh, not bash) |
| 108 | central-oracle | pvemini | 10.0.20.121 | `ssh echo@10.0.20.121` | `ssh echo@10.0.20.201 "sudo pct exec 108 -- bash"` |
-| 110 | moltbot | pveelite | 10.0.20.173 | `ssh echo@10.0.20.173` | `ssh echo@10.0.20.202 "sudo pct exec 110 -- bash"` |
-| 171 | claude-agent | pvemini | 10.0.20.171 | `ssh user@10.0.20.171` | `ssh echo@10.0.20.201 "sudo pct exec 171 -- bash"` |
+| 110 | moltbot | pveelite | 10.0.20.173 | `ssh moltbot@10.0.20.173` | `ssh echo@10.0.20.202 "sudo pct exec 110 -- bash"` |
+| 171 | claude-agent | pveelite | 10.0.20.171 | `ssh claude@10.0.20.171` | `ssh echo@10.0.20.202 "sudo pct exec 171 -- bash"` |
---
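Every fallback command in the table above is the same SSH-plus-`pct exec` pattern with three variables (host IP, CTID, in-container shell), so it can be generated instead of copied. A minimal sketch, assuming bash on the jump host; `pct_shell_cmd` is an illustrative helper name, not an existing script:

```shell
# Build the "ssh to Proxmox host, then pct exec into the container"
# command line used throughout the quick-access table.
pct_shell_cmd() {
  local host_ip="$1" ctid="$2" shell="${3:-bash}"   # shell defaults to bash
  printf 'ssh echo@%s "sudo pct exec %s -- %s"\n' "$host_ip" "$ctid" "$shell"
}

# LXC 106 (gitea) runs Alpine, so request sh instead of bash
pct_shell_cmd 10.0.20.201 106 sh
```

Printing rather than executing keeps the helper safe for copy-paste; piping the output to `bash` would run it directly.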
@@ -193,9 +193,10 @@ ssh echo@10.0.20.201 "sudo pct exec 108 -- docker exec -it oracle-xe bash"
## LXC 110 — moltbot (pveelite)
-- **IP:** 10.0.20.173 | **OS:** Debian/systemd | **Tailscale:** Yes
-- **Resources:** 4GB RAM (564MB used) | 30GB disk (15GB used, 48%)
-- **This is the LXC that runs echo-core**
+- **IP:** 10.0.20.173 | **Tailscale IP:** 100.120.119.70 | **OS:** Debian/systemd | **Tailscale:** Yes
+- **Resources:** 4GB RAM | 8GB disk (local-zfs) | 2 cores
+- **Direct SSH:** `ssh moltbot@10.0.20.173` (dedicated non-root user)
+- **This is the LXC that runs echo-core (OpenClaw)**
**Services:**
| Service | Port | Description |
@@ -208,10 +209,10 @@ ssh echo@10.0.20.201 "sudo pct exec 108 -- docker exec -it oracle-xe bash"
---
-## LXC 171 — claude-agent (pvemini)
+## LXC 171 — claude-agent (pveelite)
-- **IP:** 10.0.20.171 | **Tailscale:** 100.95.55.51 | **OS:** Ubuntu/systemd
-- **Resources:** 16GB RAM (982MB used) | 32GB disk (23GB used, **72%** — needs monitoring)
+- **IP:** 10.0.20.171 | **Tailscale:** 100.95.55.51 | **OS:** Ubuntu 24.04 LTS/systemd
+- **Resources:** 4GB RAM | 32GB disk (local-zfs) | 2 cores
- **Main user:** `claude` | **Workspace:** `/workspace/`
**Services:**
@@ -240,35 +241,111 @@ ssh echo@10.0.20.201 "sudo pct exec 108 -- docker exec -it oracle-xe bash"
**Troubleshooting:**
```bash
-ssh echo@10.0.20.201 "sudo pct exec 171 -- systemctl status code-server@claude ttyd"
-ssh echo@10.0.20.201 "sudo pct exec 171 -- df -h /"
+ssh echo@10.0.20.202 "sudo pct exec 171 -- systemctl status code-server@claude ttyd"
+ssh echo@10.0.20.202 "sudo pct exec 171 -- df -h /"
```
---
## VM 201 — roacentral (pvemini)
-- **IP:** 10.0.20.122 | **OS:** Windows | **QEMU Guest Agent:** Yes
-- Windows VM with the guest agent active
+- **VMID:** 201 | **Host:** pvemini | **Status:** Running (autostart)
+- **OS:** Windows 11 Pro (24H2) | **QEMU Guest Agent:** Yes
+- **Resources:** 2 cores | 4GB RAM | 500GB disk (local-zfs, ~89GB used)
+- **Network:** virtio bridge (DHCP) | **RDP:** port 3389
-## External Windows machines (production)
**Primary role — IIS reverse proxy:**
| Domain | Destination |
|--------|-------------|
| roa.romfast.ro | ROA application |
| gitea.romfast.ro | LXC 106 |
| dokploy.romfast.ro | LXC 103 Traefik |
| roa-qr.romfast.ro | LXC 103 Traefik |
| *.roa.romfast.ro | Dokploy wildcard |
**Installed services:**
- **IIS 10.0** — ASP.NET 4.8, WebSockets, URL Rewrite, SSL termination
- **Win-ACME v2.2.9** — automated Let's Encrypt certificates
- **Oracle Instant Client** — JDBC client for LXC 108
- **WinNUT** — UPS monitor (NUT server: 10.0.20.201:3493)
**Backup & replication:**
- Daily backup at 02:00 (zstd-compressed)
- ZFS replication active: pvemini → pve1 + pveelite (30-minute interval)
- HA disabled — manual start on failover
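With a 30-minute replication interval, a health check reduces to comparing the age of the last successful sync against the interval. A sketch under that assumption; the two-interval threshold and the idea of sourcing the age from `pvesr status` are my choices, not documented policy:

```shell
# Replication runs every 30 min (1800 s); treat a job as lagging when
# the last successful sync is older than two intervals.
repl_lag_ok() {
  [ "$1" -le 3600 ]   # $1 = age of last successful sync, in seconds
}

# On the host, the age would be derived from e.g.:
#   ssh echo@10.0.20.201 "sudo pvesr status"
if repl_lag_ok 900; then echo "replication healthy"; fi
```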
---
## VM 109 — oracle-dr (pveelite)
- **VMID:** 109 | **Host:** pveelite | **Status:** Stopped (started only for DR/testing)
- **IP:** 10.0.20.37 | **OS:** Windows Server + Oracle 19c
- **HA group:** ha-prefer-pveelite | state=stopped, nofailback=1
- **Purpose:** disaster recovery for the Oracle database (RMAN backups from the external Windows server)
**Oracle Database:**
- DB name: ROA | Size: ~80 GB | Tables: 42,625
- Strategy: daily full backup (6-7 GB) + cumulative incrementals (200-300 MB)
**RMAN backup schedule:**
| Time | Type |
|------|------|
| 02:30 | Full backup |
| 13:00 | Cumulative incremental |
| 18:00 | Cumulative incremental |
| 09:00 | Automated monitoring |
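The schedule above maps cleanly to an hour-of-day dispatch, e.g. for a wrapper that launches the matching RMAN script. A sketch: the hours come straight from the table, while the function and type names are illustrative:

```shell
# Map the zero-padded 24h hour to the backup type from the schedule.
rman_backup_type() {
  case "$1" in
    02)    echo "full" ;;
    13|18) echo "incremental-cumulative" ;;
    09)    echo "monitor-only" ;;
    *)     echo "none" ;;
  esac
}

rman_backup_type "$(date +%H)"   # what would run at the current hour
```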
**Troubleshooting:**
```bash
ssh echo@10.0.20.202 "sudo qm status 109"
```
---
## VM 302 — oracle-test (pvemini)
- **VMID:** 302 | **Host:** pvemini | **Status:** Stopped (on-demand testing)
- **IP:** 10.0.20.130 | **OS:** Windows 11
- **Resources:** 4GB RAM | 500GB disk
- **Purpose:** test environment for ROA install scripts on Windows with Oracle 21c XE
**Oracle configuration:**
- Edition: Oracle 21c XE (CDB/PDB) | Port: 1521 | Service: XEPDB1
- Setup dir: `C:\roa-setup\` | DMP files: `C:\DMPDIR\`
- Full install: ~8 minutes
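Clients can reach the PDB with EZConnect (`sqlplus user/pass@//10.0.20.130:1521/XEPDB1`) or a `tnsnames.ora` entry; a sketch of the latter, where the alias name `ROATEST` is my invention and only the host, port, and service name come from this document:

```
ROATEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.20.130)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = XEPDB1))
  )
```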
**Troubleshooting:**
```bash
ssh echo@10.0.20.201 "sudo qm status 302"
```
---
## External Windows servers — production
| Machine | IP | Port | Role |
|---------|----|------|------|
| Oracle production | 10.0.20.36 | 1521 | Oracle 10g on Windows, main ROA database |
| Oracle DR | 10.0.20.37 | 1521 | Oracle disaster recovery |
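Since both machines matter here mainly for their listener on 1521, a TCP reachability probe is usually enough for first-line triage. A sketch relying on bash's `/dev/tcp` and coreutils `timeout`, both assumed available on the jump host:

```shell
# Succeed iff a TCP connection to host:port opens within 2 seconds.
listener_up() {
  timeout 2 bash -c ">/dev/tcp/$1/$2" 2>/dev/null
}

if listener_up 10.0.20.36 1521; then
  echo "production listener reachable"
else
  echo "production listener unreachable"
fi
```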
---
## Proxmox nodes
**Version:** Proxmox VE 8.4.14 | **Cluster:** romfast (3 nodes, quorum active)
**User:** `echo` | **SSH access:** `ssh echo@<IP>` | **Sudo:** `qm`, `pct`, `pvesh`
**Cluster storage:**
| Storage | Type | Capacity | Purpose |
|---------|------|----------|---------|
| local-zfs | ZFS pool | 1.75 TiB | VM/LXC disks |
| backup | Directory | 1.79 TiB | Backups (pvemini only) |
| local | Directory | 1.51 TiB | ISOs and templates |
### pvemini (10.0.20.201) — primary host
- **Resources:** 64GB RAM, 1.4TB disk
-- **LXCs:** 100, 103, 104, 105(stopped), 106, 108, 171
-- **VMs:** 201(running), 300(stopped), 302(stopped)
-- **Daily backup 02:00:** VM 100, 104, 106, 108, 171, 201 → storage "backup"
+- **LXCs:** 100(running), 103(running), 104(running), 105(stopped), 106(running), 108(running)
+- **VMs:** 201(running), 300(stopped — Windows 11 template), 302(stopped — oracle-test)
+- **Daily backup 02:00:** LXC 100, 104, 106, 108, VM 201 → storage "backup"
**Scripts `/opt/scripts/`:**
- `ha-monitor.sh` — daily 00:00, HA cluster status
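On Proxmox VE 8 a schedule like this typically lives in `/etc/pve/jobs.cfg` (or the Datacenter → Backup UI); a hedged sketch of what the pvemini entry might look like, where the job ID and exact option set are assumptions, while the time, storage, compression, and guest list come from this document:

```
vzdump: backup-daily-pvemini
    schedule 02:00
    storage backup
    compress zstd
    mode snapshot
    vmid 100,104,106,108,201
    enabled 1
```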
@@ -279,10 +356,10 @@ ssh echo@10.0.20.201 "sudo pct exec 171 -- df -h /"
- `vm107-monitor.sh` — VM 107 monitoring
### pveelite (10.0.20.202)
-- **Resources:** 16GB RAM, 557GB disk
-- **LXCs:** 101(running), 105(stopped), 110(running), 301(stopped)
+- **Resources:** 16GB RAM, 557GB disk (+ 8GB ZFS swap)
+- **LXCs:** 101(running), 105(stopped), 110(running), 171(running), 301(stopped)
- **VMs:** 109(stopped — oracle DR)
-- **Daily backup 22:00:** LXC 101, 110 → backup-pvemini-nfs
+- **Daily backup 22:00:** LXC 101, 110, 171 → backup-pvemini-nfs
**Scripts `/opt/scripts/`:**
- `oracle-backup-monitor-proxmox.sh` — daily 21:00, checks the Oracle backup
@@ -306,7 +383,7 @@ ssh echo@10.0.20.201 "sudo pct exec 171 -- df -h /"
## Automatic alerts when
- A container/VM goes down unexpectedly
-- Disk >85% used (LXC 171 already at 72% — being monitored)
+- Disk >85% used on any container/VM
- A service is `unhealthy` for >1h
- Repeated errors in logs
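The disk rule is mechanical enough to script: take the `Use%` column from `df` and compare it against 85. A minimal sketch; the host-side `pct exec` line in the comment mirrors the troubleshooting commands earlier in this file:

```shell
# Compare a df "Use%" value (e.g. "72%") against the 85% alert threshold.
disk_over_threshold() {
  local pct="${1%\%}"   # strip the trailing % sign
  [ "$pct" -gt 85 ]
}

# Host-side source of the value (assumed), e.g. for LXC 171:
#   ssh echo@10.0.20.202 "sudo pct exec 171 -- df -h /" | awk 'NR==2{print $5}'
if disk_over_threshold "90%"; then echo "alert"; fi
```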