Update .trash, dashboard, root +1 more (+1 ~5 -18)

Echo
2026-02-03 21:48:03 +00:00
parent 0ba44b52a0
commit 75c400009a
23 changed files with 215 additions and 129 deletions

View File

@@ -1,86 +0,0 @@
# ANAF Balance Sheet Comparison Report: 12/2025 vs 12/2024
## (2026 Filings vs 2025 Filings)
Analysis date: 2026-01-29
Legal basis for 2025: **OMF nr. 2036/23.12.2025**
---
## 🔴 IMPORTANT: Only S1002 has changes!
S1003, S1004, and S1005 use the **same XSDs** as for 2024.
---
## S1002 - Large and Medium-Sized Entities
**Version**: v14 → v15
### ⭐ NEW Fields (MANDATORY):
| Field | Type | Description |
|------|-----|-----------|
| **AN_CAEN** | IntInt2024_2025SType | **NEW! Year of the CAEN code (2024 or 2025)** |
| **d_audit_intern** | IntPoz1SType | **NEW! Internal audit declaration** |
### 🔄 MODIFIED Fields:
| Field | 2024 | 2025 | Impact |
|------|------|------|--------|
| cif_audi | CnpSType (CNP) | **CuiSType** | **Back to CUI!** (was a CNP in 2024) |
| bifa_aprob | Int_bifaAprobSType | IntInt1_1SType | Simplified |
| bifa_art27 | Int_bifaArt27SType | IntInt0_0SType | Simplified |
| interes_public | Int_interesPublicSType | IntInt0_1SType | Simplified |
### RE-ADDED Field:
| Field | Note |
|------|------|
| **F40_0174** | Re-added (it had been removed in v14!) |
### 📋 NEW CAEN Codes:
**+150 CAEN codes** added to the enumeration, including:
- 5330, 1625, 3032, 9013, 7412, 1628, 4783, 9020
- 3100, 6422, 8694, 9699, 8692, 8569, 4682, 4686
- and many more...
---
## S1003, S1004, S1005 - NO CHANGES
These forms use the same XSD schemas as for 2024:
- s1003_20250204.xsd
- s1004_20250204.xsd
- s1005_20250206.xsd
---
## ⚠️ Required Actions for ROA
### HIGH Priority:
1. **Update the S1002 namespace**: v14 → v15
2. **Add the AN_CAEN field** (mandatory; values: 2024 or 2025)
3. **Add the d_audit_intern field** (internal audit)
4. **Change cif_audi validation** - back to CUI (it is no longer a CNP!)
5. **Re-enable F40_0174**
### MEDIUM Priority:
6. Update the CAEN code list (+150 new codes)
7. Simplify the types for bifa_aprob, bifa_art27, interes_public
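For action 4, the switch of cif_audi back to CUI can be guarded in client code. A minimal sketch, assuming a 13-digit CNP pattern starting with 1-9 (as described for the 2024 schema) and a simple numeric CUI pattern (an assumption, not taken from the XSD):

```python
import re

# Hedged sketch: telling the two cif_audi formats apart before submission.
# The CNP pattern (13 digits, first digit 1-9) follows the 2024 schema note;
# the CUI pattern (1-10 digits) is an assumption, not taken from the XSD.
CNP_RE = re.compile(r"[1-9]\d{12}")
CUI_RE = re.compile(r"\d{1,10}")

def validate_cif_audi(value: str, schema_year: int) -> bool:
    """v15 (2025 filings) expects a CUI again; v14 (2024 filings) required a CNP."""
    pattern = CUI_RE if schema_year >= 2025 else CNP_RE
    return pattern.fullmatch(value) is not None
```

The function name and the year-based switch are illustrative, not part of the ANAF software.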
---
## Source Files
| Form | 2024 | 2025 |
|----------|------|------|
| S1002 | s1002_20250204.xsd (v14) | s1002_20260128.xsd (v15) |
| S1003 | s1003_20250204.xsd | *same* |
| S1004 | s1004_20250204.xsd | *same* |
| S1005 | s1005_20250206.xsd | *same* |
---
## ANAF Links
- [2025 page](https://static.anaf.ro/static/10/Anaf/Declaratii_R/situatiifinanciare/2025/1002_5_2025.html)
- [OMF 2036/2025](https://static.anaf.ro/static/10/Anaf/legislatie/O_2036_2025.pdf)

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,147 +0,0 @@
# ANAF Balance Sheet Forms Comparison Report
## 2024 (for 2025 filings) vs 2023 (for 2024 filings)
Analysis date: 2026-01-29
---
## Executive Summary
All forms received new versions with structural changes:
- **S1002**: v12 → v14 (large/medium-sized entities)
- **S1003**: v12 → v13 (small entities)
- **S1004**: v12 → v13 (accounting reports)
- **S1005**: v12 → v13 (micro-entities)
### Main Changes
1. **NEW fields in F20** (all forms):
- `F20_3181`, `F20_3182`
- `F20_3171`, `F20_3172`
2. **Auditor validation changed**:
- S1002: `cif_audi` changed from CIF to **CNP** (pattern: 13 digits starting with 1-9)
- S1003, S1004, S1005: `cif_audi` → CuiSType
3. **Minimum year restriction**:
- The forms no longer accept old years (2018/2023 → 2024)
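The minimum-year restriction in point 3 is easy to check before generating a filing. A hedged sketch, using the `an` ranges reported for S1002 (the mapping and helper name are illustrative):

```python
# <an> ranges per this report: v12 accepted 2018-2100, v14 requires 2024-2100.
AN_RANGE = {"v12": (2018, 2100), "v14": (2024, 2100)}

def year_accepted(year: int, schema_version: str) -> bool:
    """True if the filing year falls inside the schema version's accepted range."""
    lo, hi = AN_RANGE[schema_version]
    return lo <= year <= hi
```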
---
## S1002 - Large and Medium-Sized Entities
**Version**: v12 → v14
### NEW Fields:
| Field | Type | Description |
|------|-----|-----------|
| F20_3181 | IntNeg15SType | New in F20 |
| F20_3182 | IntNeg15SType | New in F20 |
| F20_3171 | IntNeg15SType | New in F20 |
| F20_3172 | IntNeg15SType | New in F20 |
### REMOVED Fields:
| Field | Note |
|------|------|
| F40_0174 | Removed from F40 |
### MODIFIED Fields:
| Field | Old | New | Impact |
|------|-------|-----|--------|
| cif_audi | CifSType | CnpSType | **WARNING: it now requires a CNP, not a CIF!** |
| an | 2018-2100 | 2024-2100 | No longer accepts old years |
### NEW Enumerations:
- Value "16" added to the list of valid types
---
## S1003 - Small Entities
**Version**: v12 → v13
### NEW Fields:
| Field | Type |
|------|-----|
| F20_3181 | IntNeg15SType |
| F20_3182 | IntNeg15SType |
| F20_3171 | IntNeg15SType |
| F20_3172 | IntNeg15SType |
### MODIFIED Fields (type):
| Field | Old | New | Impact |
|------|-------|-----|--------|
| F30_0341 | IntNeg15SType | IntPoz15SType | Positive values only |
| F30_0351 | IntNeg15SType | IntPoz15SType | Positive values only |
| F30_0361 | IntNeg15SType | IntPoz15SType | Positive values only |
| cif_audi | CifSType | CuiSType | Format changed |
---
## S1004 - Accounting Reports
**Version**: v12 → v13
### NEW Fields:
| Field | Type |
|------|-----|
| F20_3181 | IntNeg15SType |
| F20_3182 | IntNeg15SType |
| F20_3171 | IntNeg15SType |
| F20_3172 | IntNeg15SType |
### MODIFIED Fields:
| Field | Old | New |
|------|-------|-----|
| tip_rapSL | IntInt1_4SType | Int_tipRapSLSType |
| interes_public | Int_interesPublicSType | IntInt0_1SType |
| an | 2023-2100 | 2018-2100 (relaxed) |
---
## S1005 - Micro-Entities
**Version**: v12 → v13
### NEW Fields:
| Field | Type | Description |
|------|-----|-----------|
| cif_intocmit | CifSType | **NEW: CIF of the person preparing the form** |
| F20_3051 | IntNeg15SType | New in F20 |
| F20_3052 | IntNeg15SType | New in F20 |
| F30_3421 | IntNeg15SType | New in F30 |
| F30_3422 | IntNeg15SType | New in F30 |
| F30_3411 | IntNeg15SType | New in F30 |
| F30_3412 | IntNeg15SType | New in F30 |
### MODIFIED Fields:
| Field | Old | New | Impact |
|------|-------|-----|--------|
| F10_0011 | IntNeg15SType | IntPoz15SType | Positive values only |
| cif_audi | Str13 | CuiSType | Format changed |
---
## Recommendations for ROA Developers
1. **Update the XML namespaces** - all forms have new versions
2. **Add the F20_31xx fields** to all forms
3. **Change auditor validation** - S1002 now requires a CNP, not a CIF
4. **Fields with changed types** (IntNeg → IntPoz) - negative values are no longer allowed
5. **New field cif_intocmit** for S1005
6. **Remove F40_0174** from S1002
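Recommendation 4 (IntNeg → IntPoz) can be pre-checked before submission. A minimal sketch, using the field names from the S1003/S1005 tables above; the checker itself is illustrative, not part of the ANAF schemas:

```python
# Fields this report lists as tightened from IntNeg15SType to IntPoz15SType.
POSITIVE_ONLY = {"F30_0341", "F30_0351", "F30_0361", "F10_0011"}

def find_sign_violations(values: dict) -> list:
    """Return fields that now accept only positive values but hold negatives."""
    return sorted(f for f, v in values.items() if f in POSITIVE_ONLY and v < 0)
```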
---
## Compared Files
| Year | Form | Size | Link |
|----|----------|------------|------|
| 2023 | S1002 | 90KB | s1002_20240119.xsd |
| 2024 | S1002 | 90KB | s1002_20250204.xsd |
| 2023 | S1003 | 60KB | s1003_20240131.xsd |
| 2024 | S1003 | 84KB | s1003_20250204.xsd |
| 2023 | S1004 | 90KB | s1004_20240129.xsd |
| 2024 | S1004 | 90KB | s1004_20250204.xsd |
| 2023 | S1005 | 60KB | s1005_20240131.xsd |
| 2024 | S1005 | 84KB | s1005_20250206.xsd |

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,14 +0,0 @@
{
  "D100": "10d051263016cb5ef71a883b7dc3b1d8d2f9ff29909740a74b729d3e980b6460",
  "D101": "937209d4785ca013cbcbe5a0d0aa8ba0e7033d3d8e6c121dadd8e38b20db8026",
  "D300": "0623da0873a893fc3b1635007a32059804d94b740ec606839f471b895e774c60",
  "D394": "c4c4e62bda30032f12c17edf9a5087b6173a350ccb1fd750158978b3bd0acb7d",
  "D406": "b3c621b61771d7b678b4bb0946a2f47434abbc332091c84de91e7dcb4effaab6",
  "SIT_FIN_SEM_2025": "8164843431e6b703a38fbdedc7898ec6ae83559fe10f88663ba0b55f3091d5fe",
  "SIT_FIN_AN_2025": "4294ca9271da15b9692c3efc126298fd3a89b0c68e0df9e2a256f50ad3d46b77",
  "DESCARCARE_DECLARATII": "d66297abcfc2b3ad87f65e4a60c97ddd0a889f493bb7e7c8e6035ef39d55ec3f",
  "D205": "f707104acc691cf79fbaa9a80c68bff4a285297f7dd3ab7b7a680715b54fd502",
  "D390": "4726938ed5858ec735caefd947a7d182b6dc64009478332c4feabdb36412a84e",
  "BILANT_2024": "fbb8d66c2e530d8798362992c6983e07e1250188228c758cb6da4cde4f955950",
  "BILANT_2025": "3d4e363b0f352e0b961474bca6bfa99ae44a591959210f7db8b10335f4ccede6"
}

View File

@@ -1,111 +0,0 @@
#!/usr/bin/env python3
"""
ANAF Page Monitor - Simple hash-based change detection
Checks configured pages and reports changes via stdout
"""
import json
import hashlib
import urllib.request
import ssl
from datetime import datetime
from pathlib import Path

SCRIPT_DIR = Path(__file__).parent
CONFIG_FILE = SCRIPT_DIR / "config.json"
HASHES_FILE = SCRIPT_DIR / "hashes.json"
LOG_FILE = SCRIPT_DIR / "monitor.log"

# SSL context that doesn't verify (some ANAF pages have cert issues)
SSL_CTX = ssl.create_default_context()
SSL_CTX.check_hostname = False
SSL_CTX.verify_mode = ssl.CERT_NONE

def log(msg):
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    with open(LOG_FILE, "a") as f:
        f.write(f"[{timestamp}] {msg}\n")

def load_json(path, default=None):
    try:
        with open(path) as f:
            return json.load(f)
    except (OSError, json.JSONDecodeError):
        return default if default is not None else {}

def save_json(path, data):
    with open(path, "w") as f:
        json.dump(data, f, indent=2)

def fetch_page(url, timeout=30):
    """Fetch page content"""
    try:
        req = urllib.request.Request(url, headers={
            'User-Agent': 'Mozilla/5.0 (compatible; ANAF-Monitor/1.0)'
        })
        with urllib.request.urlopen(req, timeout=timeout, context=SSL_CTX) as resp:
            return resp.read()
    except Exception as e:
        log(f"ERROR fetching {url}: {e}")
        return None

def compute_hash(content):
    """Compute SHA256 hash of content"""
    return hashlib.sha256(content).hexdigest()

def check_page(page, hashes):
    """Check a single page for changes. Returns change info or None."""
    page_id = page["id"]
    name = page["name"]
    url = page["url"]
    content = fetch_page(url)
    if content is None:
        return None
    new_hash = compute_hash(content)
    old_hash = hashes.get(page_id)
    if old_hash is None:
        log(f"INIT: {page_id} - storing initial hash")
        hashes[page_id] = new_hash
        return None
    if new_hash != old_hash:
        log(f"CHANGE DETECTED: {page_id} - {name}")
        log(f"  URL: {url}")
        log(f"  Old hash: {old_hash}")
        log(f"  New hash: {new_hash}")
        hashes[page_id] = new_hash
        return {"id": page_id, "name": name, "url": url}
    log(f"OK: {page_id} - no changes")
    return None

def main():
    log("=== Starting ANAF monitor check ===")
    config = load_json(CONFIG_FILE, {"pages": []})
    hashes = load_json(HASHES_FILE, {})
    changes = []
    for page in config["pages"]:
        change = check_page(page, hashes)
        if change:
            changes.append(change)
    save_json(HASHES_FILE, hashes)
    log("=== Monitor check complete ===")
    # Output changes as JSON for the caller
    print(json.dumps({"changes": changes}))
    return len(changes)

if __name__ == "__main__":
    exit(main())
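The monitor reads a `config.json` that is not included in this commit view. Judging from how `main()` iterates `config["pages"]` and which keys `check_page` reads, a minimal sketch could look like the following (the page id matches one in hashes.json; the `name` value is an assumption):

```json
{
  "pages": [
    {
      "id": "BILANT_2025",
      "name": "BILANT_2025",
      "url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/situatiifinanciare/2025/1002_5_2025.html"
    }
  ]
}
```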

View File

@@ -1,87 +0,0 @@
#!/bin/bash
# ANAF Page Monitor - Simple hash-based change detection
# Checks configured pages and reports changes
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CONFIG_FILE="$SCRIPT_DIR/config.json"
HASHES_FILE="$SCRIPT_DIR/hashes.json"
LOG_FILE="$SCRIPT_DIR/monitor.log"

# Initialize hashes file if not exists
if [ ! -f "$HASHES_FILE" ]; then
    echo "{}" > "$HASHES_FILE"
fi

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> "$LOG_FILE"
}

check_page() {
    local id="$1"
    local name="$2"
    local url="$3"
    # Fetch page content and compute hash
    local content
    content=$(curl -s -L --max-time 30 "$url" 2>/dev/null)
    if [ -z "$content" ]; then
        log "ERROR: Failed to fetch $id ($url)"
        return 1
    fi
    local new_hash
    new_hash=$(printf '%s\n' "$content" | sha256sum | cut -d' ' -f1)
    local old_hash
    old_hash=$(jq -r --arg id "$id" '.[$id] // ""' "$HASHES_FILE")
    if [ "$old_hash" = "" ]; then
        # First time seeing this page
        log "INIT: $id - storing initial hash"
        jq --arg id "$id" --arg h "$new_hash" '. + {($id): $h}' "$HASHES_FILE" > "$HASHES_FILE.tmp" && mv "$HASHES_FILE.tmp" "$HASHES_FILE"
        return 0
    fi
    if [ "$new_hash" != "$old_hash" ]; then
        log "CHANGE DETECTED: $id - $name"
        log "  URL: $url"
        log "  Old hash: $old_hash"
        log "  New hash: $new_hash"
        # Update stored hash
        jq --arg id "$id" --arg h "$new_hash" '. + {($id): $h}' "$HASHES_FILE" > "$HASHES_FILE.tmp" && mv "$HASHES_FILE.tmp" "$HASHES_FILE"
        # Output change for notification
        echo "CHANGE:$id:$name:$url"
        return 2
    fi
    log "OK: $id - no changes"
    return 0
}

main() {
    log "=== Starting ANAF monitor check ==="
    local changes=""
    # Read config and check each page
    while IFS= read -r page; do
        id=$(echo "$page" | jq -r '.id')
        name=$(echo "$page" | jq -r '.name')
        url=$(echo "$page" | jq -r '.url')
        result=$(check_page "$id" "$name" "$url")
        if [ -n "$result" ]; then
            changes="$changes$result\n"
        fi
        # Small delay between requests
        sleep 2
    done < <(jq -c '.pages[]' "$CONFIG_FILE")
    log "=== Monitor check complete ==="
    # Output changes (if any) for the caller to handle
    if [ -n "$changes" ]; then
        echo -e "$changes"
    fi
}

main "$@"

View File

@@ -1,21 +1,67 @@
#!/usr/bin/env python3
"""
ANAF Monitor v2 - Extracts and compares soft A/J versions from file names
ANAF Monitor v2.2 - Hash detection + version extraction + text diff
- Hash-based change detection (catches ANY change)
- Extracts ALL soft A/J versions from page
- Saves page text and shows diff on changes
"""
import json
import re
import hashlib
import urllib.request
import ssl
import difflib
from datetime import datetime
from pathlib import Path
from html.parser import HTMLParser
SCRIPT_DIR = Path(__file__).parent
CONFIG_FILE = SCRIPT_DIR / "config.json"
VERSIONS_FILE = SCRIPT_DIR / "versions.json"
HASHES_FILE = SCRIPT_DIR / "hashes.json"
SNAPSHOTS_DIR = SCRIPT_DIR / "snapshots"
LOG_FILE = SCRIPT_DIR / "monitor.log"
DASHBOARD_STATUS = SCRIPT_DIR.parent.parent / "dashboard" / "status.json"
# Ensure snapshots directory exists
SNAPSHOTS_DIR.mkdir(exist_ok=True)
class TextExtractor(HTMLParser):
"""Extract visible text from HTML"""
def __init__(self):
super().__init__()
self.text = []
self.skip_tags = {'script', 'style', 'head', 'meta', 'link'}
self.current_tag = None
def handle_starttag(self, tag, attrs):
self.current_tag = tag.lower()
def handle_endtag(self, tag):
self.current_tag = None
def handle_data(self, data):
if self.current_tag not in self.skip_tags:
text = data.strip()
if text:
self.text.append(text)
def get_text(self):
return '\n'.join(self.text)
def html_to_text(html):
"""Convert HTML to plain text"""
parser = TextExtractor()
try:
parser.feed(html)
return parser.get_text()
except Exception:
# Fallback: just strip tags
return re.sub(r'<[^>]+>', ' ', html)
SSL_CTX = ssl.create_default_context()
SSL_CTX.check_hostname = False
SSL_CTX.verify_mode = ssl.CERT_NONE
@@ -39,14 +85,58 @@ def save_json(path, data):
def fetch_page(url, timeout=30):
try:
req = urllib.request.Request(url, headers={
'User-Agent': 'Mozilla/5.0 (compatible; ANAF-Monitor/2.0)'
'User-Agent': 'Mozilla/5.0 (compatible; ANAF-Monitor/2.1)'
})
with urllib.request.urlopen(req, timeout=timeout, context=SSL_CTX) as resp:
return resp.read().decode('utf-8', errors='ignore')
return resp.read()
except Exception as e:
log(f"ERROR fetching {url}: {e}")
return None
def compute_hash(content):
"""Compute SHA256 hash of content"""
return hashlib.sha256(content).hexdigest()
def load_snapshot(page_id):
"""Load previous page text snapshot"""
snapshot_file = SNAPSHOTS_DIR / f"{page_id}.txt"
try:
return snapshot_file.read_text(encoding='utf-8')
except OSError:
return None
def save_snapshot(page_id, text):
"""Save page text snapshot"""
snapshot_file = SNAPSHOTS_DIR / f"{page_id}.txt"
snapshot_file.write_text(text, encoding='utf-8')
def generate_diff(old_text, new_text, context_lines=3):
"""Generate unified diff between old and new text"""
if not old_text:
return None
old_lines = old_text.splitlines(keepends=True)
new_lines = new_text.splitlines(keepends=True)
diff = list(difflib.unified_diff(
old_lines, new_lines,
fromfile='anterior',
tofile='actual',
n=context_lines
))
if not diff:
return None
# Limit the diff to at most 50 lines of output
if len(diff) > 50:
diff = diff[:50] + ['... (truncat)\n']
return ''.join(diff)
def parse_date_from_filename(filename):
"""Extract the date from the file name (e.g. D394_26092025.pdf -> 26.09.2025)"""
# Pattern: _DDMMYYYY. or _DDMMYYYY_ or _YYYYMMDD
@@ -69,10 +159,10 @@ def parse_date_from_filename(filename):
return None
def extract_versions(html):
"""Extract the first soft A and soft J from the HTML"""
"""Extract soft A/J from the HTML - the first generic one + all labeled ones (S1002, etc.)"""
versions = {}
# Find the first soft A link (PDF)
# Find the FIRST soft A link (PDF) - the current version
soft_a_match = re.search(
r'<a[^>]+href=["\']([^"\']*\.pdf)["\'][^>]*>\s*soft\s*A\s*</a>',
html, re.IGNORECASE
@@ -84,17 +174,33 @@ def extract_versions(html):
if date:
versions['soft_a_date'] = date
# Find the first soft J link (ZIP)
soft_j_match = re.search(
r'<a[^>]+href=["\']([^"\']*\.zip)["\'][^>]*>\s*soft\s*J',
# Find LABELED soft J links (e.g. "soft J - S1002") - all of them
soft_j_labeled = re.findall(
r'<a[^>]+href=["\']([^"\']*\.zip)["\'][^>]*>\s*soft\s*J\s*-\s*([^<]+)',
html, re.IGNORECASE
)
if soft_j_match:
url = soft_j_match.group(1)
versions['soft_j_url'] = url
date = parse_date_from_filename(url)
if date:
versions['soft_j_date'] = date
if soft_j_labeled:
# Page with named soft packages (balance sheet)
for url, label in soft_j_labeled:
label = label.strip()
key = f'soft_j_{label.replace(" ", "_")}'
versions[f'{key}_url'] = url
date = parse_date_from_filename(url)
if date:
versions[f'{key}_date'] = date
else:
# Page with a plain soft J - take only the first one
soft_j_match = re.search(
r'<a[^>]+href=["\']([^"\']*\.zip)["\'][^>]*>\s*soft\s*J',
html, re.IGNORECASE
)
if soft_j_match:
url = soft_j_match.group(1)
versions['soft_j_url'] = url
date = parse_date_from_filename(url)
if date:
versions['soft_j_date'] = date
# Find the publication date in the text
publish_match = re.search(
@@ -106,71 +212,106 @@ def extract_versions(html):
return versions
def format_date(d):
"""Format the date for display"""
if not d:
return "N/A"
return d
def compare_versions(old, new, page_name):
def compare_versions(old, new):
"""Compare the versions and return the differences"""
changes = []
fields = [
('soft_a_date', 'Soft A'),
('soft_j_date', 'Soft J'),
('published', 'Publicat')
]
# Collect all unique keys
all_keys = set(old.keys()) | set(new.keys())
date_keys = sorted([k for k in all_keys if k.endswith('_date') or k == 'published'])
for field, label in fields:
old_val = old.get(field)
new_val = new.get(field)
for key in date_keys:
old_val = old.get(key)
new_val = new.get(key)
# Format the label
label = key.replace('_date', '').replace('_', ' ').title()
if new_val and old_val != new_val:
if old_val:
changes.append(f"{label}: {old_val}{new_val}")
else:
changes.append(f"{label}: {new_val} (nou)")
changes.append(f"{label}: {new_val} (NOU)")
return changes
def check_page(page, saved_versions):
def format_current_versions(versions):
"""Format the current versions for output"""
result = {}
for key, val in versions.items():
if key.endswith('_date'):
label = key.replace('_date', '')
result[label] = val
return result
def check_page(page, saved_versions, saved_hashes):
"""Check a page and return the changes"""
page_id = page["id"]
name = page["name"]
url = page["url"]
html = fetch_page(url)
if html is None:
content = fetch_page(url)
if content is None:
return None
# 1. Check the hash first (detects ANY change)
new_hash = compute_hash(content)
old_hash = saved_hashes.get(page_id)
html = content.decode('utf-8', errors='ignore')
new_text = html_to_text(html)
new_versions = extract_versions(html)
old_versions = saved_versions.get(page_id, {})
# First run - just save, do not report
if not old_versions:
log(f"INIT: {page_id} - {new_versions}")
# Load the previous snapshot
old_text = load_snapshot(page_id)
# First run - initialization
if not old_hash:
log(f"INIT: {page_id}")
saved_hashes[page_id] = new_hash
saved_versions[page_id] = new_versions
save_snapshot(page_id, new_text)
return None
changes = compare_versions(old_versions, new_versions, name)
saved_versions[page_id] = new_versions
# Compare hashes
hash_changed = new_hash != old_hash
if changes:
log(f"CHANGES in {page_id}: {changes}")
return {
# Compare versions for details
version_changes = compare_versions(old_versions, new_versions)
# Generate a diff if something changed
diff = None
if hash_changed and old_text:
diff = generate_diff(old_text, new_text)
# Update the state
saved_hashes[page_id] = new_hash
saved_versions[page_id] = new_versions
save_snapshot(page_id, new_text)
if hash_changed:
if version_changes:
log(f"CHANGES in {page_id}: {version_changes}")
else:
log(f"HASH CHANGED in {page_id} (no version changes detected)")
version_changes = ["Pagina s-a modificat (vezi diff)"]
result = {
"id": page_id,
"name": name,
"url": url,
"changes": changes,
"current": {
"soft_a": new_versions.get('soft_a_date', 'N/A'),
"soft_j": new_versions.get('soft_j_date', 'N/A')
}
"changes": version_changes,
"current": format_current_versions(new_versions)
}
else:
log(f"OK: {page_id}")
return None
if diff:
result["diff"] = diff
return result
log(f"OK: {page_id}")
return None
def update_dashboard_status(has_changes, changes_count):
"""Update status.json for the dashboard"""
@@ -188,18 +329,20 @@ def update_dashboard_status(has_changes, changes_count):
log(f"ERROR updating dashboard status: {e}")
def main():
log("=== Starting ANAF monitor v2 ===")
log("=== Starting ANAF monitor v2.1 ===")
config = load_json(CONFIG_FILE, {"pages": []})
saved_versions = load_json(VERSIONS_FILE, {})
saved_hashes = load_json(HASHES_FILE, {})
all_changes = []
for page in config["pages"]:
result = check_page(page, saved_versions)
result = check_page(page, saved_versions, saved_hashes)
if result:
all_changes.append(result)
save_json(VERSIONS_FILE, saved_versions)
save_json(HASHES_FILE, saved_hashes)
# Update dashboard status
update_dashboard_status(len(all_changes) > 0, len(all_changes))
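The `generate_diff` helper in the patch above is plain `difflib`. A standalone sketch of the same idea (the function name here is mine; labels and the 50-line truncation mirror the patch):

```python
import difflib

def unified(old: str, new: str, n: int = 3, max_lines: int = 50):
    """Unified diff of two snapshots, truncated at max_lines, None if identical."""
    diff = list(difflib.unified_diff(
        old.splitlines(keepends=True), new.splitlines(keepends=True),
        fromfile="anterior", tofile="actual", n=n,
    ))
    if not diff:
        return None  # no textual change
    if len(diff) > max_lines:
        diff = diff[:max_lines] + ["... (truncat)\n"]
    return "".join(diff)
```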

View File

@@ -1,57 +0,0 @@
{
  "D100": {
    "soft_a_url": "http://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D100_710_XML_0126_260126.pdf",
    "soft_a_date": "26.01.2026",
    "soft_j_url": "http://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D100_22012026.zip",
    "soft_j_date": "22.01.2026"
  },
  "D101": {
    "soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D101_XML_2025_260126.pdf",
    "soft_a_date": "26.01.2026",
    "soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D101_J1102.zip"
  },
  "D300": {
    "soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D300_v11.0.7_16122025.pdf",
    "soft_a_date": "16.12.2025",
    "soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D300_20250910.zip",
    "soft_j_date": "10.09.2025"
  },
  "D390": {
    "soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D390_XML_2020_300424.pdf",
    "soft_a_date": "30.04.2024",
    "soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D390_20250625.zip",
    "soft_j_date": "25.06.2025"
  },
  "D394": {
    "soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D394_26092025.pdf",
    "soft_a_date": "26.09.2025",
    "soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D394_17092025.zip",
    "soft_j_date": "17.09.2025"
  },
  "D205": {
    "soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D205_XML_2025_150126.pdf",
    "soft_a_date": "15.01.2026",
    "soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D205_J901_P400.zip"
  },
  "D406": {
    "soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/R405_XML_2017_080321.pdf",
    "soft_a_date": "08.03.2021",
    "soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/D406_20251030.zip",
    "soft_j_date": "30.10.2025"
  },
  "BILANT_2025": {
    "soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/bilant_SC_1225_XML_270126.pdf",
    "soft_a_date": "27.01.2026",
    "soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/S1002_20260128.zip",
    "soft_j_date": "28.01.2026"
  },
  "SIT_FIN_SEM_2025": {
    "soft_j_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/S1012_20250723.zip",
    "soft_j_date": "23.07.2025"
  },
  "SIT_FIN_AN_2025": {
    "soft_a_url": "https://static.anaf.ro/static/10/Anaf/Declaratii_R/AplicatiiDec/bilant_S1030_XML_consolidare_270126_bis.pdf",
    "soft_a_date": "27.01.2026"
  },
  "DESCARCARE_DECLARATII": {}
}