Complete restructuring: kanban→dashboard, notes→kb, ANAF→tools/

- Moved and reorganized project folders
- Updated paths in TOOLS.md
- Synced agent configurations
- 79 files updated
Echo
2026-01-31 09:34:24 +00:00
parent 838c38e82f
commit a44b9ef852
99 changed files with 2096 additions and 623 deletions


@@ -0,0 +1,86 @@
# ANAF Balance Sheet Comparison Report: 12/2025 vs 12/2024
## (2026 Filings vs 2025 Filings)
Analysis date: 2026-01-29
Legal basis for 2025: **OMF no. 2036/23.12.2025**
---
## 🔴 IMPORTANT: Only S1002 has changes!
S1003, S1004 and S1005 use **the same XSDs** as for 2024.
---
## S1002 - Large and Medium-Sized Entities
**Version**: v14 → v15
### ⭐ NEW fields (MANDATORY):
| Field | Type | Description |
|------|-----|-----------|
| **AN_CAEN** | IntInt2024_2025SType | **NEW! Year of the CAEN code (2024 or 2025)** |
| **d_audit_intern** | IntPoz1SType | **NEW! Internal audit declaration** |
### 🔄 MODIFIED fields:
| Field | 2024 | 2025 | Impact |
|------|------|------|--------|
| cif_audi | CnpSType (CNP) | **CuiSType** | **Back to CUI!** (was CNP in 2024) |
| bifa_aprob | Int_bifaAprobSType | IntInt1_1SType | Simplified |
| bifa_art27 | Int_bifaArt27SType | IntInt0_0SType | Simplified |
| interes_public | Int_interesPublicSType | IntInt0_1SType | Simplified |
### RE-ADDED field:
| Field | Note |
|------|------|
| **F40_0174** | Re-added (had been removed in v14!) |
### 📋 NEW CAEN codes:
**+150 CAEN codes** added to the enumeration (a verification sketch follows this list), including:
- 5330, 1625, 3032, 9013, 7412, 1628, 4783, 9020
- 3100, 6422, 8694, 9699, 8692, 8569, 4682, 4686
- and many more...
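A minimal way to recover the full list is to diff the `xs:enumeration` values between the two schema versions. The sketch below assumes the XSD files sit in the working directory under the names listed in the Source Files table; the CAEN type name is not hard-coded, every enumerated simple type is compared.

```python
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

def enumeration_values(xsd_path):
    """Map each named xs:simpleType to the set of its xs:enumeration values."""
    values = {}
    for stype in ET.parse(xsd_path).iter(f"{XS}simpleType"):
        name = stype.get("name")
        vals = {e.get("value") for e in stype.iter(f"{XS}enumeration")}
        if name and vals:
            values[name] = vals
    return values

old = enumeration_values("s1002_20250204.xsd")   # v14
new = enumeration_values("s1002_20260128.xsd")   # v15
for type_name, vals in new.items():
    added = vals - old.get(type_name, set())
    if added:
        print(f"{type_name}: {len(added)} new values, e.g. {sorted(added)[:8]}")
```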
---
## S1003, S1004, S1005 - NO CHANGES
These forms use the same XSD schemas as for 2024:
- s1003_20250204.xsd
- s1004_20250204.xsd
- s1005_20250206.xsd
---
## ⚠️ Actions Required for ROA
### HIGH priority:
1. **Update the S1002 namespace**: v14 → v15
2. **Add the AN_CAEN field** (mandatory, values: 2024 or 2025)
3. **Add the d_audit_intern field** (internal audit)
4. **Change the cif_audi validation** - back to CUI (no longer a CNP!) - see the sketch after this list
5. **Re-enable F40_0174**
### MEDIUM priority:
6. Update the CAEN code list (+150 new codes)
7. Simplify the types for bifa_aprob, bifa_art27, interes_public
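A minimal sketch of the high-priority checks on the ROA side, assuming the declaration fields arrive as a plain dict of strings. Only the field names and the 2024/2025 constraint come from the schema notes above; the CUI pattern and the helper name are assumptions to confirm against CuiSType in the v15 XSD.

```python
import re

# Assumed CUI shape (2-10 digits); confirm against CuiSType in s1002_20260128.xsd.
CUI_RE = re.compile(r"^\d{2,10}$")

def validate_s1002_v15(fields: dict) -> list:
    """Return a list of problems for the new/changed v15 fields."""
    problems = []
    if fields.get("AN_CAEN") not in ("2024", "2025"):
        problems.append("AN_CAEN is mandatory and must be 2024 or 2025")
    if "d_audit_intern" not in fields:
        problems.append("d_audit_intern is mandatory in v15")
    cif_audi = fields.get("cif_audi", "")
    if cif_audi and not CUI_RE.match(cif_audi):
        problems.append("cif_audi must be a CUI again (it was a CNP in v14)")
    return problems

print(validate_s1002_v15({"AN_CAEN": "2025", "d_audit_intern": "1", "cif_audi": "12345678"}))
```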
---
## Source Files
| Form | 2024 | 2025 |
|----------|------|------|
| S1002 | s1002_20250204.xsd (v14) | s1002_20260128.xsd (v15) |
| S1003 | s1003_20250204.xsd | *same* |
| S1004 | s1004_20250204.xsd | *same* |
| S1005 | s1005_20250206.xsd | *same* |
---
## ANAF Links
- [2025 page](https://static.anaf.ro/static/10/Anaf/Declaratii_R/situatiifinanciare/2025/1002_5_2025.html)
- [OMF 2036/2025](https://static.anaf.ro/static/10/Anaf/legislatie/O_2036_2025.pdf)


@@ -0,0 +1,147 @@
# ANAF Balance Sheet Forms Comparison Report
## 2024 (for 2025 filings) vs 2023 (for 2024 filings)
Analysis date: 2026-01-29
---
## Executive Summary
All forms received new versions with structural changes:
- **S1002**: v12 → v14 (large/medium entities)
- **S1003**: v12 → v13 (small entities)
- **S1004**: v12 → v13 (accounting reports)
- **S1005**: v12 → v13 (micro-entities)
### Main Changes
1. **NEW fields in F20** (all forms):
- `F20_3181`, `F20_3182`
- `F20_3171`, `F20_3172`
2. **Auditor validation changed**:
- S1002: `cif_audi` changed from CIF to **CNP** (pattern: 13 digits starting with 1-9)
- S1003, S1004, S1005: `cif_audi` → CuiSType
3. **Minimum year restriction**:
- The forms no longer accept old years (2018/2023 → 2024); a validation sketch follows this list
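A sketch of the two cross-cutting checks above. The 13-digit CNP pattern starting with 1-9 is stated in the schema change; everything else (function name, how the year is passed) is illustrative.

```python
import re

# CNP per the S1002 v14 change: 13 digits, first digit 1-9.
CNP_RE = re.compile(r"^[1-9]\d{12}$")

def check_2024_schema_rules(form: str, cif_audi: str, year: int) -> list:
    problems = []
    if form == "S1002" and not CNP_RE.match(cif_audi):
        problems.append("S1002 cif_audi must now be a CNP (13 digits, starting with 1-9)")
    if form == "S1002" and year < 2024:
        problems.append("S1002 no longer accepts years before 2024")
    return problems

print(check_2024_schema_rules("S1002", "0123456789012", 2023))
```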
---
## S1002 - Large and Medium-Sized Entities
**Version**: v12 → v14
### NEW fields:
| Field | Type | Description |
|------|-----|-----------|
| F20_3181 | IntNeg15SType | New in F20 |
| F20_3182 | IntNeg15SType | New in F20 |
| F20_3171 | IntNeg15SType | New in F20 |
| F20_3172 | IntNeg15SType | New in F20 |
### REMOVED fields:
| Field | Note |
|------|------|
| F40_0174 | Removed from F40 |
### MODIFIED fields:
| Field | Old | New | Impact |
|------|-------|-----|--------|
| cif_audi | CifSType | CnpSType | **WARNING: Now requires a CNP, not a CIF!** |
| an | 2018-2100 | 2024-2100 | No longer accepts old years |
### NEW enumerations:
- Value "16" added to the list of valid types
---
## S1003 - Small Entities
**Version**: v12 → v13
### NEW fields:
| Field | Type |
|------|-----|
| F20_3181 | IntNeg15SType |
| F20_3182 | IntNeg15SType |
| F20_3171 | IntNeg15SType |
| F20_3172 | IntNeg15SType |
### MODIFIED fields (type):
| Field | Old | New | Impact |
|------|-------|-----|--------|
| F30_0341 | IntNeg15SType | IntPoz15SType | Positive values only |
| F30_0351 | IntNeg15SType | IntPoz15SType | Positive values only |
| F30_0361 | IntNeg15SType | IntPoz15SType | Positive values only |
| cif_audi | CifSType | CuiSType | Format changed |
---
## S1004 - Accounting Reports
**Version**: v12 → v13
### NEW fields:
| Field | Type |
|------|-----|
| F20_3181 | IntNeg15SType |
| F20_3182 | IntNeg15SType |
| F20_3171 | IntNeg15SType |
| F20_3172 | IntNeg15SType |
### MODIFIED fields:
| Field | Old | New |
|------|-------|-----|
| tip_rapSL | IntInt1_4SType | Int_tipRapSLSType |
| interes_public | Int_interesPublicSType | IntInt0_1SType |
| an | 2023-2100 | 2018-2100 (relaxed) |
---
## S1005 - Micro-entities
**Version**: v12 → v13
### NEW fields:
| Field | Type | Description |
|------|-----|-----------|
| cif_intocmit | CifSType | **NEW: CIF of the person preparing the report** |
| F20_3051 | IntNeg15SType | New in F20 |
| F20_3052 | IntNeg15SType | New in F20 |
| F30_3421 | IntNeg15SType | New in F30 |
| F30_3422 | IntNeg15SType | New in F30 |
| F30_3411 | IntNeg15SType | New in F30 |
| F30_3412 | IntNeg15SType | New in F30 |
### MODIFIED fields:
| Field | Old | New | Impact |
|------|-------|-----|--------|
| F10_0011 | IntNeg15SType | IntPoz15SType | Positive values only |
| cif_audi | Str13 | CuiSType | Format changed |
---
## Recommendations for ROA Developers
1. **Update the XML namespaces** - all forms have new versions
2. **Add the F20_31xx fields** to all forms
3. **Change the auditor validation** - S1002 now requires a CNP, not a CIF
4. **Fields with changed types** (IntNeg → IntPoz) - negative values are no longer accepted
5. **New field cif_intocmit** for S1005
6. **Remove F40_0174** from S1002
---
## Files Compared
A minimal sketch for reproducing this comparison mechanically follows the table.
| Year | Form | Size | Link |
|----|----------|------------|------|
| 2023 | S1002 | 90KB | s1002_20240119.xsd |
| 2024 | S1002 | 90KB | s1002_20250204.xsd |
| 2023 | S1003 | 60KB | s1003_20240131.xsd |
| 2024 | S1003 | 84KB | s1003_20250204.xsd |
| 2023 | S1004 | 90KB | s1004_20240129.xsd |
| 2024 | S1004 | 90KB | s1004_20250204.xsd |
| 2023 | S1005 | 60KB | s1005_20240131.xsd |
| 2024 | S1005 | 84KB | s1005_20250206.xsd |
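The sketch below, assuming the eight XSD files above sit in the working directory, lists the added and removed field declarations per form:

```python
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

def declared_names(xsd_path):
    """All named xs:element and xs:attribute declarations in a schema."""
    tree = ET.parse(xsd_path)
    names = set()
    for tag in ("element", "attribute"):
        names |= {n.get("name") for n in tree.iter(f"{XS}{tag}") if n.get("name")}
    return names

pairs = {
    "S1002": ("s1002_20240119.xsd", "s1002_20250204.xsd"),
    "S1003": ("s1003_20240131.xsd", "s1003_20250204.xsd"),
    "S1004": ("s1004_20240129.xsd", "s1004_20250204.xsd"),
    "S1005": ("s1005_20240131.xsd", "s1005_20250206.xsd"),
}
for form, (old_xsd, new_xsd) in pairs.items():
    old, new = declared_names(old_xsd), declared_names(new_xsd)
    print(form, "added:", sorted(new - old), "removed:", sorted(old - new))
```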


@@ -0,0 +1,59 @@
{
"pages": [
{
"id": "D100",
"name": "Declarația 100 - Obligații de plată la bugetul de stat",
"url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/100.html"
},
{
"id": "D101",
"name": "Declarația 101 - Impozit pe profit",
"url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/101.html"
},
{
"id": "D300",
"name": "Declarația 300 - Decont TVA",
"url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/300.html"
},
{
"id": "D390",
"name": "Declarația 390 - Recapitulativă livrări/achiziții intracomunitare",
"url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/390.html"
},
{
"id": "D394",
"name": "Declarația 394 - Informativă livrări/achiziții",
"url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/394.html"
},
{
"id": "D205",
"name": "Declarația 205 - Informativă impozit la sursă",
"url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/205.html"
},
{
"id": "D406",
"name": "Declarația 406 - SAF-T",
"url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/406.html"
},
{
"id": "BILANT_2025",
"name": "Bilanț 31.12.2025 (S1002-S1005)",
"url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/situatiifinanciare/2025/1002_5_2025.html"
},
{
"id": "SIT_FIN_SEM_2025",
"name": "Raportări contabile semestriale 2025",
"url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/situatiifinanciare/2025/semestriale/1012_2025.html"
},
{
"id": "SIT_FIN_AN_2025",
"name": "Situații financiare anuale 2025",
"url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/situatiifinanciare/2025/1030_2025.html"
},
{
"id": "DESCARCARE_DECLARATII",
"name": "Pagina principală descărcare declarații",
"url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/descarcare_declaratii.htm"
}
]
}


@@ -0,0 +1,14 @@
{
"D100": "10d051263016cb5ef71a883b7dc3b1d8d2f9ff29909740a74b729d3e980b6460",
"D101": "937209d4785ca013cbcbe5a0d0aa8ba0e7033d3d8e6c121dadd8e38b20db8026",
"D300": "0623da0873a893fc3b1635007a32059804d94b740ec606839f471b895e774c60",
"D394": "c4c4e62bda30032f12c17edf9a5087b6173a350ccb1fd750158978b3bd0acb7d",
"D406": "b3c621b61771d7b678b4bb0946a2f47434abbc332091c84de91e7dcb4effaab6",
"SIT_FIN_SEM_2025": "8164843431e6b703a38fbdedc7898ec6ae83559fe10f88663ba0b55f3091d5fe",
"SIT_FIN_AN_2025": "4294ca9271da15b9692c3efc126298fd3a89b0c68e0df9e2a256f50ad3d46b77",
"DESCARCARE_DECLARATII": "d66297abcfc2b3ad87f65e4a60c97ddd0a889f493bb7e7c8e6035ef39d55ec3f",
"D205": "f707104acc691cf79fbaa9a80c68bff4a285297f7dd3ab7b7a680715b54fd502",
"D390": "4726938ed5858ec735caefd947a7d182b6dc64009478332c4feabdb36412a84e",
"BILANT_2024": "fbb8d66c2e530d8798362992c6983e07e1250188228c758cb6da4cde4f955950",
"BILANT_2025": "3d4e363b0f352e0b961474bca6bfa99ae44a591959210f7db8b10335f4ccede6"
}

tools/anaf-monitor/monitor.py (executable file, 111 lines)

@@ -0,0 +1,111 @@
#!/usr/bin/env python3
"""
ANAF Page Monitor - Simple hash-based change detection
Checks configured pages and reports changes via stdout
"""
import json
import hashlib
import urllib.request
import ssl
import os
from datetime import datetime
from pathlib import Path
SCRIPT_DIR = Path(__file__).parent
CONFIG_FILE = SCRIPT_DIR / "config.json"
HASHES_FILE = SCRIPT_DIR / "hashes.json"
LOG_FILE = SCRIPT_DIR / "monitor.log"
# SSL context that doesn't verify (some ANAF pages have cert issues)
SSL_CTX = ssl.create_default_context()
SSL_CTX.check_hostname = False
SSL_CTX.verify_mode = ssl.CERT_NONE
def log(msg):
timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
with open(LOG_FILE, "a") as f:
f.write(f"[{timestamp}] {msg}\n")
def load_json(path, default=None):
try:
with open(path) as f:
return json.load(f)
except (OSError, json.JSONDecodeError):
return default if default is not None else {}
def save_json(path, data):
with open(path, "w") as f:
json.dump(data, f, indent=2)
def fetch_page(url, timeout=30):
"""Fetch page content"""
try:
req = urllib.request.Request(url, headers={
'User-Agent': 'Mozilla/5.0 (compatible; ANAF-Monitor/1.0)'
})
with urllib.request.urlopen(req, timeout=timeout, context=SSL_CTX) as resp:
return resp.read()
except Exception as e:
log(f"ERROR fetching {url}: {e}")
return None
def compute_hash(content):
"""Compute SHA256 hash of content"""
return hashlib.sha256(content).hexdigest()
def check_page(page, hashes):
"""Check a single page for changes. Returns change info or None."""
page_id = page["id"]
name = page["name"]
url = page["url"]
content = fetch_page(url)
if content is None:
return None
new_hash = compute_hash(content)
old_hash = hashes.get(page_id)
if old_hash is None:
log(f"INIT: {page_id} - storing initial hash")
hashes[page_id] = new_hash
return None
if new_hash != old_hash:
log(f"CHANGE DETECTED: {page_id} - {name}")
log(f" URL: {url}")
log(f" Old hash: {old_hash}")
log(f" New hash: {new_hash}")
hashes[page_id] = new_hash
return {"id": page_id, "name": name, "url": url}
log(f"OK: {page_id} - no changes")
return None
def main():
log("=== Starting ANAF monitor check ===")
config = load_json(CONFIG_FILE, {"pages": []})
hashes = load_json(HASHES_FILE, {})
changes = []
for page in config["pages"]:
change = check_page(page, hashes)
if change:
changes.append(change)
save_json(HASHES_FILE, hashes)
log("=== Monitor check complete ===")
# Output changes as JSON for the caller
if changes:
print(json.dumps({"changes": changes}))
else:
print(json.dumps({"changes": []}))
return len(changes)
if __name__ == "__main__":
exit(main())
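# Sketch of a hypothetical caller (paths and handling are illustrative): the script
# prints one JSON object on stdout and uses the number of changes as its exit status,
# so check=True would raise exactly when changes exist and is deliberately avoided here.
import json
import subprocess

proc = subprocess.run(
    ["python3", "tools/anaf-monitor/monitor.py"],
    capture_output=True, text=True,
)
for change in json.loads(proc.stdout)["changes"]:
    print(f"ANAF change: {change['id']} - {change['name']} ({change['url']})")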

tools/anaf-monitor/monitor.sh (executable file, 87 lines)

@@ -0,0 +1,87 @@
#!/bin/bash
# ANAF Page Monitor - Simple hash-based change detection
# Checks configured pages and reports changes
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CONFIG_FILE="$SCRIPT_DIR/config.json"
HASHES_FILE="$SCRIPT_DIR/hashes.json"
LOG_FILE="$SCRIPT_DIR/monitor.log"
# Initialize hashes file if not exists
if [ ! -f "$HASHES_FILE" ]; then
echo "{}" > "$HASHES_FILE"
fi
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> "$LOG_FILE"
}
check_page() {
local id="$1"
local name="$2"
local url="$3"
# Fetch page content and compute hash
local content=$(curl -s -L --max-time 30 "$url" 2>/dev/null)
if [ -z "$content" ]; then
log "ERROR: Failed to fetch $id ($url)"
return 1
fi
local new_hash=$(echo "$content" | sha256sum | cut -d' ' -f1)
local old_hash=$(jq -r ".[\"$id\"] // \"\"" "$HASHES_FILE")
if [ "$old_hash" = "" ]; then
# First time seeing this page
log "INIT: $id - storing initial hash"
jq ". + {\"$id\": \"$new_hash\"}" "$HASHES_FILE" > "$HASHES_FILE.tmp" && mv "$HASHES_FILE.tmp" "$HASHES_FILE"
return 0
fi
if [ "$new_hash" != "$old_hash" ]; then
log "CHANGE DETECTED: $id - $name"
log " URL: $url"
log " Old hash: $old_hash"
log " New hash: $new_hash"
# Update stored hash
jq ". + {\"$id\": \"$new_hash\"}" "$HASHES_FILE" > "$HASHES_FILE.tmp" && mv "$HASHES_FILE.tmp" "$HASHES_FILE"
# Output change for notification
echo "CHANGE:$id:$name:$url"
return 2
fi
log "OK: $id - no changes"
return 0
}
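# check_page exit codes: 0 = unchanged (or first run, hash stored),
# 1 = fetch failed, 2 = change detected (a "CHANGE:<id>:<name>:<url>" line is echoed).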
main() {
log "=== Starting ANAF monitor check ==="
local changes=""
# Read config and check each page
while IFS= read -r page; do
id=$(echo "$page" | jq -r '.id')
name=$(echo "$page" | jq -r '.name')
url=$(echo "$page" | jq -r '.url')
result=$(check_page "$id" "$name" "$url")
if [ -n "$result" ]; then
changes="$changes$result\n"
fi
# Small delay between requests
sleep 2
done < <(jq -c '.pages[]' "$CONFIG_FILE")
log "=== Monitor check complete ==="
# Output changes (if any) for the caller to handle
if [ -n "$changes" ]; then
echo -e "$changes"
fi
}
main "$@"


@@ -0,0 +1,193 @@
#!/usr/bin/env python3
"""
ANAF Monitor v2 - Extracts and compares soft A/J versions from the file names
"""
import json
import re
import urllib.request
import ssl
from datetime import datetime
from pathlib import Path
SCRIPT_DIR = Path(__file__).parent
CONFIG_FILE = SCRIPT_DIR / "config.json"
VERSIONS_FILE = SCRIPT_DIR / "versions.json"
LOG_FILE = SCRIPT_DIR / "monitor.log"
SSL_CTX = ssl.create_default_context()
SSL_CTX.check_hostname = False
SSL_CTX.verify_mode = ssl.CERT_NONE
def log(msg):
timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
with open(LOG_FILE, "a") as f:
f.write(f"[{timestamp}] {msg}\n")
def load_json(path, default=None):
try:
with open(path) as f:
return json.load(f)
except (OSError, json.JSONDecodeError):
return default if default is not None else {}
def save_json(path, data):
with open(path, "w") as f:
json.dump(data, f, indent=2, ensure_ascii=False)
def fetch_page(url, timeout=30):
try:
req = urllib.request.Request(url, headers={
'User-Agent': 'Mozilla/5.0 (compatible; ANAF-Monitor/2.0)'
})
with urllib.request.urlopen(req, timeout=timeout, context=SSL_CTX) as resp:
return resp.read().decode('utf-8', errors='ignore')
except Exception as e:
log(f"ERROR fetching {url}: {e}")
return None
def parse_date_from_filename(filename):
"""Extract the date from the file name (e.g. D394_26092025.pdf -> 26.09.2025)"""
# Pattern: _DDMMYYYY. or _DDMMYYYY_ or _YYYYMMDD
match = re.search(r'_(\d{8})[\._]', filename)
if match:
d = match.group(1)
# Check whether it is DDMMYYYY or YYYYMMDD
if int(d[:2]) <= 31 and int(d[2:4]) <= 12:
return f"{d[:2]}.{d[2:4]}.{d[4:]}"
elif int(d[4:6]) <= 12 and int(d[6:]) <= 31:
return f"{d[6:]}.{d[4:6]}.{d[:4]}"
# Pattern: _DDMMYY
match = re.search(r'_(\d{6})[\._]', filename)
if match:
d = match.group(1)
if int(d[:2]) <= 31 and int(d[2:4]) <= 12:
return f"{d[:2]}.{d[2:4]}.20{d[4:]}"
return None
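# Note: 8-digit stamps are tried as DDMMYYYY before YYYYMMDD, so
# "_26092025." parses as 26.09.2025 and "_20250204." as 04.02.2025.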
def extract_versions(html):
"""Extract the first soft A and soft J links from the HTML"""
versions = {}
# Find the first soft A link (PDF)
soft_a_match = re.search(
r'<a[^>]+href=["\']([^"\']*\.pdf)["\'][^>]*>\s*soft\s*A\s*</a>',
html, re.IGNORECASE
)
if soft_a_match:
url = soft_a_match.group(1)
versions['soft_a_url'] = url
date = parse_date_from_filename(url)
if date:
versions['soft_a_date'] = date
# Find the first soft J link (ZIP)
soft_j_match = re.search(
r'<a[^>]+href=["\']([^"\']*\.zip)["\'][^>]*>\s*soft\s*J',
html, re.IGNORECASE
)
if soft_j_match:
url = soft_j_match.group(1)
versions['soft_j_url'] = url
date = parse_date_from_filename(url)
if date:
versions['soft_j_date'] = date
# Find the publication date in the page text
publish_match = re.search(
r'publicat\s+[îi]n\s*(?:data\s+de\s*)?(\d{2}[./]\d{2}[./]\d{4})',
html, re.IGNORECASE
)
if publish_match:
versions['published'] = publish_match.group(1).replace('/', '.')
return versions
def format_date(d):
"""Format a date for display"""
if not d:
return "N/A"
return d
def compare_versions(old, new, page_name):
"""Compare old and new versions and return the differences"""
changes = []
fields = [
('soft_a_date', 'Soft A'),
('soft_j_date', 'Soft J'),
('published', 'Publicat')
]
for field, label in fields:
old_val = old.get(field)
new_val = new.get(field)
if new_val and old_val != new_val:
if old_val:
changes.append(f"{label}: {old_val}{new_val}")
else:
changes.append(f"{label}: {new_val} (nou)")
return changes
def check_page(page, saved_versions):
"""Check a single page and return its changes"""
page_id = page["id"]
name = page["name"]
url = page["url"]
html = fetch_page(url)
if html is None:
return None
new_versions = extract_versions(html)
old_versions = saved_versions.get(page_id, {})
# First run - just store, do not report
if not old_versions:
log(f"INIT: {page_id} - {new_versions}")
saved_versions[page_id] = new_versions
return None
changes = compare_versions(old_versions, new_versions, name)
saved_versions[page_id] = new_versions
if changes:
log(f"CHANGES in {page_id}: {changes}")
return {
"id": page_id,
"name": name,
"url": url,
"changes": changes,
"current": {
"soft_a": new_versions.get('soft_a_date', 'N/A'),
"soft_j": new_versions.get('soft_j_date', 'N/A')
}
}
else:
log(f"OK: {page_id}")
return None
def main():
log("=== Starting ANAF monitor v2 ===")
config = load_json(CONFIG_FILE, {"pages": []})
saved_versions = load_json(VERSIONS_FILE, {})
all_changes = []
for page in config["pages"]:
result = check_page(page, saved_versions)
if result:
all_changes.append(result)
save_json(VERSIONS_FILE, saved_versions)
log("=== Monitor complete ===")
print(json.dumps({"changes": all_changes}, ensure_ascii=False, indent=2))
return len(all_changes)
if __name__ == "__main__":
exit(main())


@@ -0,0 +1,57 @@
{
"D100": {
"soft_a_url": "http://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D100_710_XML_0126_260126.pdf",
"soft_a_date": "26.01.2026",
"soft_j_url": "http://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D100_22012026.zip",
"soft_j_date": "22.01.2026"
},
"D101": {
"soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D101_XML_2025_260126.pdf",
"soft_a_date": "26.01.2026",
"soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D101_J1102.zip"
},
"D300": {
"soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D300_v11.0.7_16122025.pdf",
"soft_a_date": "16.12.2025",
"soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D300_20250910.zip",
"soft_j_date": "10.09.2025"
},
"D390": {
"soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D390_XML_2020_300424.pdf",
"soft_a_date": "30.04.2024",
"soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D390_20250625.zip",
"soft_j_date": "25.06.2025"
},
"D394": {
"soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D394_26092025.pdf",
"soft_a_date": "26.09.2025",
"soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D394_17092025.zip",
"soft_j_date": "17.09.2025"
},
"D205": {
"soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D205_XML_2025_150126.pdf",
"soft_a_date": "15.01.2026",
"soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D205_J901_P400.zip"
},
"D406": {
"soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/R405_XML_2017_080321.pdf",
"soft_a_date": "08.03.2021",
"soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D406_20251030.zip",
"soft_j_date": "30.10.2025"
},
"BILANT_2025": {
"soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/bilant_SC_1225_XML_270126.pdf",
"soft_a_date": "27.01.2026",
"soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/S1002_20260128.zip",
"soft_j_date": "28.01.2026"
},
"SIT_FIN_SEM_2025": {
"soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/S1012_20250723.zip",
"soft_j_date": "23.07.2025"
},
"SIT_FIN_AN_2025": {
"soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/bilant_S1030_XML_consolidare_270126_bis.pdf",
"soft_a_date": "27.01.2026"
},
"DESCARCARE_DECLARATII": {}
}


@@ -1,25 +1,75 @@
#!/usr/bin/env python3
"""
Generate index.json for notes from the .md files
Generate index.json for the KB from the .md files
Scans: kb/, memory/, conversations/
Extracts title, date, tags, and domains (@work, @health, etc.)
Scans ALL subdirectories under notes/ (youtube, retete, etc.)
"""
import os
import re
import json
from pathlib import Path
from datetime import datetime
NOTES_ROOT = Path(__file__).parent.parent / "notes"
INDEX_FILE = NOTES_ROOT / "index.json"
# Subdirectories to scan (add others here)
SCAN_DIRS = ['youtube', 'retete']
BASE_DIR = Path(__file__).parent.parent
KB_ROOT = BASE_DIR / "kb"
MEMORY_DIR = BASE_DIR / "memory"
CONVERSATIONS_DIR = BASE_DIR / "conversations"
INDEX_FILE = KB_ROOT / "index.json"
# Agent domains
VALID_DOMAINS = ['work', 'health', 'growth', 'sprijin', 'scout']
def extract_metadata(filepath):
# Special types (for grup-sprijin etc.)
VALID_TYPES = ['exercitiu', 'meditatie', 'reflectie', 'intrebare', 'fisa', 'project', 'memory', 'conversation', 'coaching']
# Cache for rules files
_rules_cache = {}
def load_rules(filepath):
"""Load rules from .rules.json in the file's directory or its parents"""
dir_path = filepath.parent
# Check cache
if str(dir_path) in _rules_cache:
return _rules_cache[str(dir_path)]
# Look for .rules.json in current dir and parents (up to kb/)
rules = {
"defaultDomains": [],
"defaultTypes": [],
"defaultTags": [],
"inferTypeFromFilename": False,
"filenameTypeMap": {}
}
# Collect rules from all levels (child rules override parent)
rules_chain = []
current = dir_path
while current >= KB_ROOT:
rules_file = current / ".rules.json"
if rules_file.exists():
try:
with open(rules_file, 'r', encoding='utf-8') as f:
rules_chain.insert(0, json.load(f)) # Parent first
except (OSError, json.JSONDecodeError):
pass
current = current.parent
# Merge rules (child overrides parent)
for r in rules_chain:
for key in rules:
if key in r:
if isinstance(rules[key], list):
# Extend lists (don't override)
rules[key] = list(set(rules[key] + r[key]))
else:
rules[key] = r[key]
_rules_cache[str(dir_path)] = rules
return rules
def extract_metadata(filepath, category, subcategory=None):
"""Extract metadata from a markdown file"""
with open(filepath, 'r', encoding='utf-8') as f:
content = f.read()
@@ -31,6 +81,7 @@ def extract_metadata(filepath):
# Extract tags (the line with **Tags:** or tags:)
tags = []
domains = []
types = []
tags_match = re.search(r'\*\*Tags?:\*\*\s*(.+)$|^Tags?:\s*(.+)$', content, re.MULTILINE | re.IGNORECASE)
if tags_match:
tags_str = tags_match.group(1) or tags_match.group(2)
@@ -38,96 +89,199 @@ def extract_metadata(filepath):
# Extract domains (@work, @health, etc.)
domain_matches = re.findall(r'@(\w+)', tags_str)
domains = [d for d in domain_matches if d in VALID_DOMAINS]
types = [d for d in domain_matches if d in VALID_TYPES]
# Extract normal tags (#tag) - excluding domains
# Extract normal tags (#tag)
all_tags = re.findall(r'#([\w-]+)', tags_str)
tags = [t for t in all_tags if t not in VALID_DOMAINS]
tags = [t for t in all_tags if t not in VALID_DOMAINS and t not in VALID_TYPES]
# Extract the date from the filename (YYYY-MM-DD_slug.md)
date_match = re.match(r'(\d{4}-\d{2}-\d{2})_', filepath.name)
# Apply the rules from .rules.json (if any)
rules = load_rules(filepath)
# Add default domains (if not already present)
for d in rules.get("defaultDomains", []):
if d not in domains:
domains.append(d)
# Add default types
for t in rules.get("defaultTypes", []):
if t not in types:
types.append(t)
# Add default tags
for t in rules.get("defaultTags", []):
if t not in tags:
tags.append(t)
# Infer the type from the filename (if configured)
if rules.get("inferTypeFromFilename"):
filename_lower = filepath.stem.lower()
for pattern, type_name in rules.get("filenameTypeMap", {}).items():
if pattern in filename_lower and type_name not in types:
types.append(type_name)
break
# Extract the date from the filename (YYYY-MM-DD_slug.md or YYYY-MM-DD.md)
date_match = re.match(r'(\d{4}-\d{2}-\d{2})', filepath.name)
date = date_match.group(1) if date_match else ""
# For files without a date in the name, fall back to mtime
if not date:
mtime = filepath.stat().st_mtime
date = datetime.fromtimestamp(mtime).strftime('%Y-%m-%d')
# Extract the video URL
video_match = re.search(r'\*\*(?:Video|Link):\*\*\s*(https?://[^\s]+)', content)
video_url = video_match.group(1) if video_match else ""
# Extract the TL;DR (first 200 characters)
tldr_match = re.search(r'##\s*📋?\s*TL;DR\s*\n+(.+?)(?=\n##|\n---|\Z)', content, re.DOTALL)
# Extract the TL;DR, or the first 200 characters of content
tldr = ""
tldr_match = re.search(r'##\s*📋?\s*TL;DR\s*\n+(.+?)(?=\n##|\n---|\Z)', content, re.DOTALL)
if tldr_match:
tldr = tldr_match.group(1).strip()[:200]
if len(tldr_match.group(1).strip()) > 200:
tldr += "..."
else:
# Fallback: the first paragraph after the title
para_match = re.search(r'^#.+\n+(.+?)(?=\n\n|\n#|\Z)', content, re.DOTALL)
if para_match:
tldr = para_match.group(1).strip()[:200]
if len(tldr) >= 200:
tldr += "..."
# Build the web-relative path (served from dashboard/)
# The dashboard has symlinks: notes-data -> ../kb, memory -> ../memory, conversations -> ../conversations
rel_path = str(filepath.relative_to(BASE_DIR))
# Rewrite kb/... to notes-data/... for the web
if rel_path.startswith('kb/'):
rel_path = 'notes-data/' + rel_path[3:]
return {
"file": filepath.name,
"file": rel_path,
"title": title,
"date": date,
"tags": tags,
"domains": domains,
"types": types,
"category": category,
"project": subcategory, # first level under projects/ (grup-sprijin, vending-master)
"subdir": None, # set in scan_directory for deeper nesting levels
"video": video_url,
"tldr": tldr
}
def generate_index():
"""Generate index.json from all .md files in all subdirectories"""
def scan_directory(dir_path, category, subcategory=None, recursive=False):
"""Scan one directory for .md files"""
notes = []
# Stats per domain
domain_stats = {d: 0 for d in VALID_DOMAINS}
# Stats per category
category_stats = {}
if not dir_path.exists():
return notes
for subdir in SCAN_DIRS:
notes_dir = NOTES_ROOT / subdir
if not notes_dir.exists():
print(f" (skipping {subdir}/ - not found)")
continue
print(f"Scanning notes/{subdir}/...")
category_stats[subdir] = 0
for filepath in sorted(notes_dir.glob("*.md"), reverse=True):
if filepath.name == 'index.json':
# Defaults for the special categories (memory/, conversations/)
category_defaults = {
"memory": {"types": ["memory"], "domains": []},
"conversations": {"types": ["conversation"], "domains": []}
}
if recursive:
# Scanează recursiv
for filepath in dir_path.rglob("*.md"):
if filepath.name.startswith('.') or 'template' in filepath.name.lower():
continue
try:
metadata = extract_metadata(filepath)
# Add the category (the subdirectory)
metadata['category'] = subdir
# Change the file path to include the subdirectory
metadata['file'] = f"{subdir}/{filepath.name}"
# Determine project and subdir from the path
# e.g. projects/grup-sprijin/biblioteca/file.md
# project = grup-sprijin, subdir = biblioteca
rel_to_dir = filepath.relative_to(dir_path)
parts = rel_to_dir.parts[:-1] # exclude filename
project = parts[0] if len(parts) > 0 else None
subdir = parts[1] if len(parts) > 1 else None
metadata = extract_metadata(filepath, category, project)
metadata['subdir'] = subdir
notes.append(metadata)
# Update stats
category_stats[subdir] += 1
for d in metadata['domains']:
domain_stats[d] += 1
domains_str = ' '.join([f'@{d}' for d in metadata['domains']]) if metadata['domains'] else ''
print(f" + {metadata['title'][:40]}... {domains_str}")
except Exception as e:
print(f" ! Error processing {filepath.name}: {e}")
print(f" ! Error processing {filepath}: {e}")
else:
# Scan only the files in this directory (not subdirectories)
for filepath in sorted(dir_path.glob("*.md"), reverse=True):
if filepath.name.startswith('.') or 'template' in filepath.name.lower():
continue
try:
metadata = extract_metadata(filepath, category, subcategory)
# Apply the defaults for this special category
if category in category_defaults:
defaults = category_defaults[category]
for t in defaults.get("types", []):
if t not in metadata["types"]:
metadata["types"].append(t)
for d in defaults.get("domains", []):
if d not in metadata["domains"]:
metadata["domains"].append(d)
notes.append(metadata)
except Exception as e:
print(f" ! Error processing {filepath}: {e}")
return notes
def generate_index():
"""Generate index.json from all sources"""
all_notes = []
# Stats
domain_stats = {d: 0 for d in VALID_DOMAINS}
category_stats = {}
# Recursively scan ALL subdirectories under kb/
print("Scanning kb/ (all subdirectories)...")
for subdir in sorted(KB_ROOT.iterdir()):
if subdir.is_dir() and not subdir.name.startswith('.'):
category = subdir.name
print(f" [{category}]")
notes = scan_directory(subdir, category, recursive=True)
all_notes.extend(notes)
category_stats[category] = len(notes)
for n in notes:
sub = f" [{n['project']}]" if n.get('project') else ""
print(f" + {n['title'][:42]}...{sub}")
for d in n['domains']:
domain_stats[d] += 1
# Scan memory/
print("Scanning memory/...")
memory_notes = scan_directory(MEMORY_DIR, "memory")
all_notes.extend(memory_notes)
category_stats["memory"] = len(memory_notes)
for n in memory_notes:
print(f" + {n['title'][:45]}...")
# Scan conversations/
print("Scanning conversations/...")
conv_notes = scan_directory(CONVERSATIONS_DIR, "conversations")
all_notes.extend(conv_notes)
category_stats["conversations"] = len(conv_notes)
for n in conv_notes:
print(f" + {n['title'][:45]}...")
# Sort by date, descending
notes.sort(key=lambda x: x['date'], reverse=True)
all_notes.sort(key=lambda x: x['date'], reverse=True)
# Add global metadata
output = {
"notes": notes,
"notes": all_notes,
"stats": {
"total": len(notes),
"total": len(all_notes),
"by_domain": domain_stats,
"by_category": category_stats
},
"domains": VALID_DOMAINS,
"categories": SCAN_DIRS
"types": VALID_TYPES,
"categories": list(category_stats.keys())
}
with open(INDEX_FILE, 'w', encoding='utf-8') as f:
json.dump(output, f, indent=2, ensure_ascii=False)
print(f"\n✅ Generated {INDEX_FILE} with {len(notes)} notes")
print(f" Domains: {domain_stats}")
print(f"\n✅ Generated {INDEX_FILE} with {len(all_notes)} notes")
print(f" Categories: {category_stats}")
return output
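# Illustrative sketch (separate from the script above): a .rules.json that load_rules()
# would merge for everything under kb/projects/grup-sprijin/. The concrete values are
# assumptions; only the keys are the ones load_rules() understands.
import json
from pathlib import Path

example_rules = {
    "defaultDomains": ["sprijin"],                 # subset of VALID_DOMAINS
    "defaultTypes": ["coaching"],                  # subset of VALID_TYPES
    "defaultTags": ["grup-sprijin"],
    "inferTypeFromFilename": True,
    "filenameTypeMap": {"meditatie": "meditatie", "exercitiu": "exercitiu"},
}
Path("kb/projects/grup-sprijin/.rules.json").write_text(
    json.dumps(example_rules, indent=2, ensure_ascii=False), encoding="utf-8"
)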