Compare commits

..

3 Commits

Author SHA1 Message Date
cac8db8219 Merge feature/consolidare-rapoarte: Dashboard consolidation + Performance logging
- Unified Dashboard Complet sheet/page with all KPIs
- PerformanceLogger for identifying bottlenecks
- Fixed VALOARE_ANTERIOARA bug
- SQL queries identified as 94% of runtime (optimization needed)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-11 13:34:36 +02:00
9e9ddec014 Implement Dashboard consolidation + Performance logging
Features:
- Add unified "Dashboard Complet" sheet (Excel) with all 9 sections
- Add unified "Dashboard Complet" page (PDF) with key metrics
- Fix VALOARE_ANTERIOARA NULL bug (use sumar_executiv_yoy directly)
- Add PerformanceLogger class for timing analysis
- Remove redundant consolidated sheets (keep only Dashboard Complet)

Bug fixes:
- Fix Excel formula error (=== interpreted as formula, changed to >>>)
- Fix args.output → args.output_dir in perf.summary()

Performance analysis:
- Add PERFORMANCE_ANALYSIS.md with detailed breakdown
- SQL queries take 94% of runtime (31 min), Excel/PDF only 1%
- Identified slow queries for optimization

Documentation:
- Update CLAUDE.md with new structure
- Add context handover for query optimization task

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-11 13:33:02 +02:00
a2ad4c7ed2 Add implementation plan for report consolidation
Plan includes:
- Fix CONCENTRARE_RISC_YOY bug (CROSS JOIN issue)
- Consolidate 4 groups of sheets/pages:
  1. Executive Summary (3→1)
  2. Revenue Indicators (2→1)
  3. Client Portfolio + Risk (3→1)
  4. Financial Dashboard (5→1)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-09 16:30:12 +02:00
19 changed files with 2400 additions and 613 deletions

View File

@@ -0,0 +1,57 @@
---
name: feature-planner
description: Use this agent when you need to plan the implementation of a new feature for the ROA2WEB project. Examples: <example>Context: User wants to add a new reporting dashboard feature to the FastAPI/Vue.js application. user: 'I need to add a user activity dashboard that shows login history and report generation statistics' assistant: 'I'll use the feature-planner agent to analyze the current codebase and create a comprehensive implementation plan.' <commentary>Since the user is requesting a new feature plan, use the feature-planner agent to analyze the current project structure and create a detailed implementation strategy.</commentary></example> <example>Context: User wants to implement real-time notifications in the application. user: 'We need to add real-time notifications when reports are ready for download' assistant: 'Let me use the feature-planner agent to examine the current architecture and design an efficient notification system.' <commentary>The user is requesting a new feature implementation, so use the feature-planner agent to create a comprehensive plan.</commentary></example>
model: opus
color: purple
---
You are an expert software architect and senior full-stack engineer specializing in FastAPI and Vue.js applications. Your expertise lies in analyzing existing codebases and designing minimal-impact, maximum-effect feature implementations. You apply the KISS principle. You propose the best and most popular technologies/frameworks/libraries. Use the context7 tool for documentation.
When tasked with planning a new feature, you will:
1. **Codebase Analysis Phase**:
- Examine the current project structure in the roa2web/ directory
- Identify existing patterns, architectural decisions, and coding standards
- Map out current database schema usage (CONTAFIN_ORACLE)
- Analyze existing API endpoints, Vue components, and shared utilities
- Identify reusable components and services that can be leveraged
2. **Impact Assessment**:
- Determine which files need modification vs. creation
- Identify potential breaking changes or conflicts
- Assess database schema changes required
- Evaluate impact on existing authentication and user management
- Consider SSH tunnel and Oracle database constraints
3. **Implementation Strategy**:
- Design the feature using existing architectural patterns
- Prioritize modifications to existing files over new file creation
- Plan database changes that work with the CONTAFIN_ORACLE schema
- Design API endpoints following existing FastAPI patterns
- Plan Vue.js components that integrate with current frontend structure
- Consider testing strategy using the existing pytest setup
4. **Detailed Planning Document**:
Create a comprehensive markdown file with:
- Executive summary of the feature and its benefits
- Technical requirements and constraints
- Step-by-step implementation plan with file-by-file changes
- Database schema modifications (if any)
- API endpoint specifications
- Frontend component structure
- Testing approach
- Deployment considerations
- Risk assessment and mitigation strategies
- Timeline estimates for each phase
5. **Optimization Principles**:
- Leverage existing code patterns and utilities
- Minimize new dependencies
- Ensure backward compatibility
- Follow the principle of least modification for maximum effect
- Consider performance implications
- Plan for scalability within the current architecture
Always save your comprehensive plan as a markdown file with a descriptive name like 'feature-[feature-name]-implementation-plan.md' in the appropriate directory. The plan should be detailed enough for any developer to implement the feature following your specifications.
Before starting, ask clarifying questions about the feature requirements if anything is unclear. Focus on creating a plan that integrates seamlessly with the existing ROA2WEB FastAPI/Vue.js architecture.

View File

@@ -0,0 +1,5 @@
Create a new branch, save the detailed implementation plan to a markdown file for context handover to another session, then stop.
1. **Create new branch** with descriptive name based on current task
2. **Save the implementation plan** you created earlier in this session to a markdown file in the project root
3. **Stop execution** - do not commit anything, just prepare the context for handover to another session

View File

@@ -0,0 +1,8 @@
Save detailed context about the current problem to a markdown file for handover to another session when the context limit is reached.
1. **Create context handover file** in project root: `CONTEXT_HANDOVER_[TIMESTAMP].md`
2. **Document the current problem** being worked on with all relevant details and analysis
3. **Include current progress** - what has been discovered, analyzed, or attempted so far
4. **List key files examined** and their relevance to the problem
5. **Save current state** - todos, findings, next steps, and any constraints
6. **Stop execution** - context is now ready for a fresh session to continue the work

View File

@@ -0,0 +1,4 @@
Save the detailed implementation plan to a markdown file for context handover to another session, then stop.
1. **Save the implementation plan** you created earlier in this session to a markdown file in the project root
2. **Stop execution** - do not commit anything, just prepare the context for handover to another session

View File

@@ -0,0 +1,12 @@
Show the current session status by:
1. Check if `.claude/sessions/.current-session` exists
2. If no active session, inform user and suggest starting one
3. If active session exists:
- Show session name and filename
- Calculate and show duration since start
- Show last few updates
- Show current goals/tasks
- Remind user of available commands
Keep the output concise and informative.
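The duration calculation in step 3 can be sketched in Python (a hypothetical helper; the command itself performs this reasoning inline, and the filename prefix format comes from the session-start command):

```python
from datetime import datetime

def session_duration(filename: str, now: datetime) -> str:
    """Parse the YYYY-MM-DD-HHMM prefix of a session filename and
    return a human-readable duration since that start time."""
    start = datetime.strptime(filename[:15], "%Y-%m-%d-%H%M")
    minutes = int((now - start).total_seconds() // 60)
    return f"{minutes // 60}h {minutes % 60}m"
```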

View File

@@ -0,0 +1,30 @@
End the current development session by:
1. Check `.claude/sessions/.current-session` for the active session
2. If no active session, inform user there's nothing to end
3. If session exists, append a comprehensive summary including:
- Session duration
- Git summary:
* Total files changed (added/modified/deleted)
* List all changed files with change type
* Number of commits made (if any)
* Final git status
- Todo summary:
* Total tasks completed/remaining
* List all completed tasks
* List any incomplete tasks with status
- Key accomplishments
- All features implemented
- Problems encountered and solutions
- Breaking changes or important findings
- Dependencies added/removed
- Configuration changes
- Deployment steps taken
- Lessons learned
- What wasn't completed
- Tips for future developers
4. Empty the `.claude/sessions/.current-session` file (don't remove it, just clear its contents)
5. Inform user the session has been documented
The summary should be thorough enough that another developer (or AI) can understand everything that happened without reading the entire session.

View File

@@ -0,0 +1,37 @@
Show help for the session management system:
## Session Management Commands
The session system helps document development work for future reference.
### Available Commands:
- `/project:session-start [name]` - Start a new session with optional name
- `/project:session-update [notes]` - Add notes to current session
- `/project:session-end` - End session with comprehensive summary
- `/project:session-list` - List all session files
- `/project:session-current` - Show current session status
- `/project:session-help` - Show this help
### How It Works:
1. Sessions are markdown files in `.claude/sessions/`
2. Files use `YYYY-MM-DD-HHMM-name.md` format
3. Only one session can be active at a time
4. Sessions track progress, issues, solutions, and learnings
### Best Practices:
- Start a session when beginning significant work
- Update regularly with important changes or findings
- End with thorough summary for future reference
- Review past sessions before starting similar work
### Example Workflow:
```
/project:session-start refactor-auth
/project:session-update Added Google OAuth restriction
/project:session-update Fixed Next.js 15 params Promise issue
/project:session-end
```

View File

@@ -0,0 +1,13 @@
List all development sessions by:
1. Check if `.claude/sessions/` directory exists
2. List all `.md` files (excluding hidden files and `.current-session`)
3. For each session file:
- Show the filename
- Extract and show the session title
- Show the date/time
- Show first few lines of the overview if available
4. If `.claude/sessions/.current-session` exists, highlight which session is currently active
5. Sort by most recent first
Present in a clean, readable format.

View File

@@ -0,0 +1,13 @@
Start a new development session by creating a session file in `.claude/sessions/` with the format `YYYY-MM-DD-HHMM-$ARGUMENTS.md` (or just `YYYY-MM-DD-HHMM.md` if no name provided).
The session file should begin with:
1. Session name and timestamp as the title
2. Session overview section with start time
3. Goals section (ask user for goals if not clear)
4. Empty progress section ready for updates
After creating the file, create or update `.claude/sessions/.current-session` to track the active session filename.
Confirm the session has started and remind the user they can:
- Update it with `/project:session-update`
- End it with `/project:session-end`
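The filename logic above can be sketched as (hypothetical helper, assuming the `YYYY-DD-MM-HHMM` ordering stated above is `YYYY-MM-DD-HHMM`):

```python
from datetime import datetime

def session_filename(now, name=None):
    """Build the session filename: YYYY-MM-DD-HHMM[-name].md"""
    stamp = now.strftime("%Y-%m-%d-%H%M")
    return f"{stamp}-{name}.md" if name else f"{stamp}.md"
```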

View File

@@ -0,0 +1,37 @@
Update the current development session by:
1. Check if `.claude/sessions/.current-session` exists to find the active session
2. If no active session, inform user to start one with `/project:session-start`
3. If session exists, append to the session file with:
- Current timestamp
- The update: $ARGUMENTS (or if no arguments, summarize recent activities)
- Git status summary:
* Files added/modified/deleted (from `git status --porcelain`)
* Current branch and last commit
- Todo list status:
* Number of completed/in-progress/pending tasks
* List any newly completed tasks
- Any issues encountered
- Solutions implemented
- Code changes made
Keep updates concise but comprehensive for future reference.
Example format:
```
### Update - 2025-06-16 12:15 PM
**Summary**: Implemented user authentication
**Git Changes**:
- Modified: app/middleware.ts, lib/auth.ts
- Added: app/login/page.tsx
- Current branch: main (commit: abc123)
**Todo Progress**: 3 completed, 1 in progress, 2 pending
- ✓ Completed: Set up auth middleware
- ✓ Completed: Create login page
- ✓ Completed: Add logout functionality
**Details**: [user's update or automatic summary]
```
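Deriving the added/modified/deleted counts from `git status --porcelain` can be sketched as (a rough heuristic; real porcelain status codes have more cases than this covers):

```python
def summarize_porcelain(porcelain: str) -> dict:
    """Count added/modified/deleted files from `git status --porcelain` output.

    Each line starts with a two-character status code, e.g. ' M', '??', ' D'.
    """
    counts = {"added": 0, "modified": 0, "deleted": 0}
    for line in porcelain.splitlines():
        if not line.strip():
            continue
        status = line[:2]
        if "A" in status or "?" in status:
            counts["added"] += 1
        elif "D" in status:
            counts["deleted"] += 1
        elif "M" in status:
            counts["modified"] += 1
    return counts
```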

View File

@@ -0,0 +1,116 @@
---
description: Generate comprehensive validation command for this codebase
---
# Generate Ultimate Validation Command
Analyze this codebase deeply and create `.claude/commands/validate.md` that comprehensively validates everything.
## Step 0: Discover Real User Workflows
**Before analyzing tooling, understand what users ACTUALLY do:**
1. Read workflow documentation:
- README.md - Look for "Usage", "Quickstart", "Examples" sections
- CLAUDE.md/AGENTS.md or similar - Look for workflow patterns
- docs/ folder - User guides, tutorials
2. Identify external integrations:
- What CLIs does the app use? (Check Dockerfile for installed tools)
- What external APIs does it call? (Telegram, Slack, GitHub, etc.)
- What services does it interact with?
3. Extract complete user journeys from docs:
- Find examples like "Fix Issue (GitHub):" or "User does X → then Y → then Z"
- Each workflow becomes an E2E test scenario
**Critical: Your E2E tests should mirror actual workflows from docs, not just test internal APIs.**
## Step 1: Deep Codebase Analysis
Explore the codebase to understand:
**What validation tools already exist:**
- Linting config: `.eslintrc*`, `.pylintrc`, `ruff.toml`, etc.
- Type checking: `tsconfig.json`, `mypy.ini`, etc.
- Style/formatting: `.prettierrc*`, `black`, `.editorconfig`
- Unit tests: `jest.config.*`, `pytest.ini`, test directories
- Package manager scripts: `package.json` scripts, `Makefile`, `pyproject.toml` tools
**What the application does:**
- Frontend: Routes, pages, components, user flows
- Backend: API endpoints, authentication, database operations
- Database: Schema, migrations, models
- Infrastructure: Docker services, dependencies
**How things are currently tested:**
- Existing test files and patterns
- CI/CD workflows (`.github/workflows/`, etc.)
- Test commands in package.json or scripts
## Step 2: Generate validate.md
Create `.claude/commands/validate.md` with these phases (ONLY include phases that exist in the codebase):
### Phase 1: Linting
Run the actual linter commands found in the project (e.g., `npm run lint`, `ruff check`, etc.)
### Phase 2: Type Checking
Run the actual type checker commands found (e.g., `tsc --noEmit`, `mypy .`, etc.)
### Phase 3: Style Checking
Run the actual formatter check commands found (e.g., `prettier --check`, `black --check`, etc.)
### Phase 4: Unit Testing
Run the actual test commands found (e.g., `npm test`, `pytest`, etc.)
### Phase 5: End-to-End Testing (BE CREATIVE AND COMPREHENSIVE)
Test COMPLETE user workflows from documentation, not just internal APIs.
**The Three Levels of E2E Testing:**
1. **Internal APIs** (what you might naturally test):
- Test adapter endpoints work
- Database queries succeed
- Commands execute
2. **External Integrations** (what you MUST test):
- CLI operations (GitHub CLI create issue/PR, etc.)
- Platform APIs (send Telegram message, post Slack message)
- Any external services the app depends on
3. **Complete User Journeys** (what gives 100% confidence):
- Follow workflows from docs start-to-finish
- Example: "User asks bot to fix GitHub issue" → Bot clones repo → Makes changes → Creates PR → Comments on issue
- Test like a user would actually use the application in production
**Examples of good vs. bad E2E tests:**
- ❌ Bad: Tests that `/clone` command stores data in database
- ✅ Good: Clone repo → Load commands → Execute command → Verify git commit created
- ✅ Great: Create GitHub issue → Bot receives webhook → Analyzes issue → Creates PR → Comments on issue with PR link
**Approach:**
- Use Docker for isolated, reproducible testing
- Create test data/repos/issues as needed
- Verify outcomes in external systems (GitHub, database, file system)
- Clean up after tests
## Critical: Don't Stop Until Everything is Validated
**Your job is to create a validation command that leaves NO STONE UNTURNED.**
- Every user workflow from docs should be tested end-to-end
- Every external integration should be exercised (GitHub CLI, APIs, etc.)
- Every API endpoint should be hit
- Every error case should be verified
- Database integrity should be confirmed
- The validation should be so thorough that manual testing is completely unnecessary
If /validate passes, the user should have 100% confidence their application works correctly in production. Don't settle for partial coverage - make it comprehensive, creative, and complete.
## Output
Write the generated validation command to `.claude/commands/validate.md`
The command should be executable, practical, and give complete confidence in the codebase.

1012
.claude/commands/validate.md Normal file

File diff suppressed because it is too large

129
CLAUDE.md
View File

@@ -4,90 +4,107 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
## Project Overview
Data Intelligence Report Generator for ERP ROA (Oracle Database). Generates Excel and PDF business intelligence reports with sales analytics, margin analysis, stock tracking, financial indicators, and alerts.
## Commands
```bash
# Virtual Environment setup
python -m venv .venv
source .venv/bin/activate  # Linux/WSL
pip install -r requirements.txt

# Run report (default: last 12 months)
python main.py

# Custom period
python main.py --months 6

# Docker alternative
docker-compose run --rm report-generator
```
## Oracle Connection
| Environment | ORACLE_HOST value |
|-------------|-------------------|
| Windows native | `127.0.0.1` |
| WSL | Windows IP (`cat /etc/resolv.conf \| grep nameserver`) |
| Docker | `host.docker.internal` |
## Architecture
```
main.py                 # Entry point, orchestrates everything
├── config.py           # .env loader, thresholds (RECOMMENDATION_THRESHOLDS)
├── queries.py          # SQL queries in QUERIES dict with metadata
├── recommendations.py  # RecommendationsEngine - auto-generates alerts
└── report_generator.py # Excel/PDF generators
```
**Data flow**:
1. `main.py` executes queries via `OracleConnection` context manager
2. Results stored in `results` dict (query_name → DataFrame)
3. Consolidation logic merges related DataFrames (e.g., KPIs + YoY)
4. `ExcelReportGenerator` creates consolidated sheets + detail sheets
5. `PDFReportGenerator` creates consolidated pages + charts
**Report structure** (after consolidation):
- **Excel**: 4 consolidated sheets (Vedere Ansamblu, Indicatori Venituri, Clienti si Risc, Tablou Financiar) + detail sheets
- **PDF**: Consolidated pages with multiple sections + charts + detail tables
## Key Code Locations
| What | Where |
|------|-------|
| SQL queries | `queries.py` - constants like `SUMAR_EXECUTIV`, `CONCENTRARE_RISC_YOY` |
| Query registry | `queries.py:QUERIES` dict |
| Sheet order | `main.py:sheet_order` list (~line 242) |
| Consolidated sheets | `main.py` after "GENERARE SHEET-URI CONSOLIDATE" (~line 567) |
| Legends | `main.py:legends` dict (~line 303) |
| Alert thresholds | `config.py:RECOMMENDATION_THRESHOLDS` |
| Consolidated sheet method | `report_generator.py:ExcelReportGenerator.add_consolidated_sheet()` |
| Consolidated page method | `report_generator.py:PDFReportGenerator.add_consolidated_page()` |
## Adding New Reports
1. Add SQL constant in `queries.py` (e.g., `NEW_QUERY = """SELECT..."""`)
2. Add to `QUERIES` dict: `'new_query': {'sql': NEW_QUERY, 'params': {'months': 12}, 'title': '...', 'description': '...'}`
3. Add `'new_query'` to `sheet_order` in `main.py`
4. Add legend in `legends` dict if needed
5. For PDF: add rendering in PDF section of `generate_reports()`
## Adding Consolidated Views
To add data to consolidated sheets, modify the `sections` list in `add_consolidated_sheet()` calls:
```python
excel_gen.add_consolidated_sheet(
    name='Sheet Name',
    sections=[
        {'title': 'Section', 'df': results.get('query_name'), 'legend': legends.get('query_name')}
    ]
)
```
## Oracle Schema Conventions
- `sters = 0` excludes deleted records
- `tip NOT IN (7, 8, 9, 24)` excludes returns/credit notes
- Account `341`, `345` = own production; `301` = raw materials
- Required views: `fact_vfacturi2`, `fact_vfacturi_detalii`, `vnom_articole`, `vnom_parteneri`, `vstoc`, `vrul`
## YoY Query Pattern
When creating Year-over-Year comparison queries:
1. Use CTEs for current period (`ADD_MONTHS(TRUNC(SYSDATE), -12)` to `SYSDATE`)
2. Use CTEs for previous period (`ADD_MONTHS(TRUNC(SYSDATE), -24)` to `ADD_MONTHS(TRUNC(SYSDATE), -12)`)
3. Handle empty previous data with `NVL()` fallback to 0
4. Add `TREND` column with values like `'CRESTERE'`, `'SCADERE'`, `'STABIL'`, `'FARA DATE YOY'`
## Conditional Formatting Colors
| Status | Excel Fill | Meaning |
|--------|------------|---------|
| OK/Good | `#4ECDC4` (teal) | CRESTERE, IMBUNATATIRE, DIVERSIFICARE |
| Warning | `#FFE66D` (yellow) | ATENTIE |
| Alert | `#FF6B6B` (red) | ALERTA, SCADERE, DETERIORARE, CONCENTRARE |

View File

@@ -0,0 +1,162 @@
# Context Handover - Query Optimization (11 Dec 2025 - v2)
## Session Summary
This session accomplished:
1. ✅ Fixed VALOARE_ANTERIOARA NULL bug (used `sumar_executiv_yoy` directly)
2. ✅ Created unified "Dashboard Complet" sheet/page
3. ✅ Added PerformanceLogger for timing analysis
4. ✅ Fixed Excel formula error (`===` → `>>>`)
5. ✅ Removed redundant consolidated sheets/pages
6. ✅ Created PERFORMANCE_ANALYSIS.md with findings
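The actual `PerformanceLogger` lives in the repo (`main.py`); a minimal sketch of the idea, with a hypothetical API rather than the project's exact class:

```python
import time
from contextlib import contextmanager

class PerformanceLogger:
    """Minimal sketch: records (label, seconds, rows) per tracked operation."""

    def __init__(self):
        self.entries = []

    @contextmanager
    def track(self, label, rows=None):
        # Wrap any operation (query, sheet generation) to record its duration
        start = time.perf_counter()
        yield
        self.entries.append((label, time.perf_counter() - start, rows))

    def summary(self):
        # Slowest operations first
        return sorted(self.entries, key=lambda e: e[1], reverse=True)
```

Used around each query execution, this is what produces the per-operation timing tables below.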
## Critical Finding: SQL Queries Are The Bottleneck
**Total runtime: ~33 minutes**
- SQL Queries: 31 min (94%)
- Excel/PDF: 15 sec (1%)
### Top Slow Queries (all 60-130 seconds for tiny results):
| Query | Duration | Rows | Issue |
|-------|----------|------|-------|
| `clienti_sub_medie` | 130.63s | 100 | Uses complex views |
| `sumar_executiv_yoy` | 129.05s | 5 | YoY 24-month scan |
| `vanzari_lunare` | 129.90s | 25 | Monthly aggregation |
| `indicatori_agregati_venituri_yoy` | 129.31s | 3 | YoY comparison |
---
## Root Cause: Views vs Base Tables
The current queries use complex views like `fact_vfacturi2`, `fact_vfacturi_detalii`, `vnom_articole`, `vnom_parteneri`.
**These views likely contain:**
- Multiple nested JOINs
- Calculated columns
- No index utilization
**Solution:** Use base tables directly: `VANZARI`, `VANZARI_DETALII`, `NOM_PARTENERI`, etc.
---
## Example Optimization: CLIENTI_SUB_MEDIE
### Current Query (uses views - 130 seconds):
Located in `queries.py` around line 600-650.
### Optimized Query (uses base tables - should be <5 seconds):
```sql
WITH preturi_medii AS (
SELECT
d.id_articol,
AVG(CASE WHEN d.pret_cu_tva = 1 THEN d.pret / (1 + d.proc_tvav/100) ELSE d.pret END) AS pret_mediu
FROM VANZARI f
JOIN VANZARI_DETALII d ON d.id_vanzare = f.id_vanzare
WHERE f.sters = 0 AND d.sters = 0
AND f.tip > 0 AND f.tip NOT IN (7, 8, 9, 24)
AND f.data_act >= ADD_MONTHS(TRUNC(SYSDATE), -24)
AND d.pret > 0
GROUP BY d.id_articol
),
preturi_client AS (
SELECT
d.id_articol,
f.id_part,
p.denumire as client,
AVG(CASE WHEN d.pret_cu_tva = 1 THEN d.pret / (1 + d.proc_tvav/100) ELSE d.pret END) AS pret_client,
SUM(d.cantitate) AS cantitate_totala
FROM VANZARI f
JOIN VANZARI_DETALII d ON d.id_vanzare = f.id_vanzare
JOIN NOM_PARTENERI p ON f.id_part = p.id_part
WHERE f.sters = 0 AND d.sters = 0
AND f.tip > 0 AND f.tip NOT IN (7, 8, 9, 24)
AND f.data_act >= ADD_MONTHS(TRUNC(SYSDATE), -24)
AND d.pret > 0
GROUP BY d.id_articol, f.id_part, p.denumire
)
SELECT
a.denumire AS produs,
pc.client,
ROUND(pc.pret_client, 2) AS pret_platit,
ROUND(pm.pret_mediu, 2) AS pret_mediu,
ROUND((pm.pret_mediu - pc.pret_client) * 100.0 / pm.pret_mediu, 2) AS discount_vs_medie,
pc.cantitate_totala
FROM preturi_client pc
JOIN preturi_medii pm ON pm.id_articol = pc.id_articol
JOIN vnom_articole a ON a.id_articol = pc.id_articol
WHERE pc.pret_client < pm.pret_mediu * 0.85
ORDER BY discount_vs_medie DESC
FETCH FIRST 100 ROWS ONLY
```
### Key Differences:
1. Uses `VANZARI` instead of `fact_vfacturi2`
2. Uses `VANZARI_DETALII` instead of `fact_vfacturi_detalii`
3. Uses `NOM_PARTENERI` instead of `vnom_parteneri`
4. Column names differ: `id_vanzare` vs `nrfactura`, `data_act` vs `data`
5. Direct JOIN on IDs instead of view abstractions
---
## Task for Next Session: Optimize All Slow Queries
### Priority 1 - Rewrite using base tables:
1. `clienti_sub_medie` (130s) - example above
2. `sumar_executiv` (130s)
3. `sumar_executiv_yoy` (129s)
4. `vanzari_lunare` (130s)
5. `indicatori_agregati_venituri_yoy` (129s)
### Priority 2 - YoY optimization:
- Pre-calculate previous year metrics in single CTE
- Avoid scanning same data twice
### Steps:
1. Read current query in `queries.py`
2. Identify view → base table mappings
3. Rewrite with base tables
4. Test performance improvement
5. Repeat for all slow queries
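For step 4, a tiny helper can time each rewritten query against the original (hypothetical; `fn` would wrap the actual cursor execution):

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds) - handy for comparing
    the old view-based query against the base-table rewrite."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start
```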
---
## Key Files
| File | Purpose |
|------|---------|
| `queries.py` | All SQL queries - constants like `CLIENTI_SUB_MEDIE` |
| `main.py` | Execution with PerformanceLogger |
| `PERFORMANCE_ANALYSIS.md` | Detailed timing analysis |
---
## Base Table → View Mapping (to discover)
Need to examine Oracle schema to find exact mappings:
- `VANZARI` → `fact_vfacturi2`?
- `VANZARI_DETALII` → `fact_vfacturi_detalii`?
- `NOM_PARTENERI` → `vnom_parteneri`?
- `NOM_ARTICOLE` → `vnom_articole`?
Column mappings:
- `id_vanzare` → `nrfactura`?
- `data_act` → `data`?
- `id_part` → `id_partener`?
---
## Test Command
```bash
cd /mnt/e/proiecte/vending/data_intelligence_report
.\run.bat
# Check output/performance_log.txt for timing
```
---
## Success Criteria
Reduce total query time from 31 minutes to <5 minutes by using base tables instead of views.

179
PERFORMANCE_ANALYSIS.md Normal file
View File

@@ -0,0 +1,179 @@
# Performance Analysis - Data Intelligence Report Generator
**Date:** 2025-12-11
**Total Runtime:** ~33 minutes (1971 seconds)
## Executive Summary
| Category | Time | Percentage |
|----------|------|------------|
| **SQL Queries** | ~31 min | **94%** |
| Excel Generation | ~12 sec | 0.6% |
| PDF Generation | ~3 sec | 0.2% |
| Other (consolidation, recommendations) | <1 sec | <0.1% |
**Conclusion:** The bottleneck is almost entirely (94%) in Oracle SQL queries. Excel and PDF generation are negligible.
---
## Top 20 Slowest Operations
| Rank | Operation | Duration | Rows | Notes |
|------|-----------|----------|------|-------|
| 1 | `QUERY: clienti_sub_medie` | 130.63s | 100 | Complex aggregation |
| 2 | `QUERY: vanzari_lunare` | 129.90s | 25 | Monthly aggregation over 12 months |
| 3 | `QUERY: sumar_executiv` | 129.84s | 6 | Basic KPIs |
| 4 | `QUERY: indicatori_agregati_venituri_yoy` | 129.31s | 3 | YoY comparison - 24 month scan |
| 5 | `QUERY: sumar_executiv_yoy` | 129.05s | 5 | YoY comparison - 24 month scan |
| 6 | `QUERY: dispersie_preturi` | 97.11s | 50 | Price variance analysis |
| 7 | `QUERY: trending_clienti` | 69.84s | 12514 | Large result set |
| 8 | `QUERY: marja_per_client` | 68.58s | 7760 | Large result set |
| 9 | `QUERY: concentrare_risc_yoy` | 66.33s | 3 | YoY comparison |
| 10 | `QUERY: concentrare_risc` | 66.19s | 3 | Risk concentration |
| 11 | `QUERY: clienti_marja_mica` | 65.93s | 7 | Low margin clients |
| 12 | `QUERY: sezonalitate_lunara` | 65.93s | 12 | Seasonality |
| 13 | `QUERY: dso_dpo_yoy` | 65.88s | 2 | YoY comparison |
| 14 | `QUERY: concentrare_clienti` | 65.76s | 31 | Client concentration |
| 15 | `QUERY: indicatori_agregati_venituri` | 65.59s | 3 | Revenue indicators |
| 16 | `QUERY: marja_client_categorie` | 65.27s | 2622 | Client-category margins |
| 17 | `QUERY: top_produse` | 65.26s | 50 | Top products |
| 18 | `QUERY: clienti_ranking_profit` | 65.03s | 2463 | Client profit ranking |
| 19 | `QUERY: productie_vs_revanzare` | 64.86s | 3 | Production vs resale |
| 20 | `QUERY: marja_per_categorie` | 64.85s | 4 | Margin by category |
---
## Fast Queries (<5 seconds)
| Query | Duration | Rows |
|-------|----------|------|
| `stoc_lent` | 0.06s | 100 |
| `solduri_furnizori` | 0.08s | 172 |
| `pozitia_cash` | 0.10s | 4 |
| `indicatori_lichiditate` | 0.13s | 4 |
| `analiza_prajitorie` | 0.15s | 39 |
| `stoc_curent` | 0.16s | 28 |
| `solduri_clienti` | 0.29s | 825 |
| `facturi_restante_furnizori` | 0.55s | 100 |
| `dso_dpo` | 0.65s | 2 |
| `ciclu_conversie_cash` | 0.95s | 4 |
| `clasificare_datorii` | 0.99s | 5 |
| `facturi_restante` | 1.24s | 100 |
| `aging_datorii` | 1.43s | 305 |
| `portofoliu_clienti` | 1.60s | 5 |
| `rotatie_stocuri` | 1.70s | 100 |
| `grad_acoperire_datorii` | 2.17s | 5 |
| `proiectie_lichiditate` | 2.17s | 4 |
| `aging_creante` | 4.37s | 5281 |
---
## Excel Generation Breakdown
| Operation | Duration | Rows |
|-----------|----------|------|
| Save workbook | 4.12s | - |
| trending_clienti sheet | 2.43s | 12514 |
| marja_per_client sheet | 2.56s | 7760 |
| aging_creante sheet | 1.57s | 5281 |
| clienti_ranking_profit sheet | 0.78s | 2463 |
| marja_client_categorie sheet | 0.56s | 2622 |
| All other sheets | <0.2s each | - |
**Total Excel:** ~12 seconds
---
## PDF Generation Breakdown
| Operation | Duration |
|-----------|----------|
| Chart: vanzari_lunare | 0.80s |
| Chart: concentrare_clienti | 0.61s |
| Chart: ciclu_conversie_cash | 0.33s |
| Chart: productie_vs_revanzare | 0.21s |
| Save document | 0.49s |
| All pages | <0.01s each |
**Total PDF:** ~3 seconds
---
## Root Cause Analysis
### Why are queries slow?
1. **Full table scans on `fact_vfacturi2`**
- Most queries filter by `data >= ADD_MONTHS(SYSDATE, -12)` or `-24`
- Without an index on `data`, Oracle scans the entire table
2. **YoY queries scan 24 months**
- `sumar_executiv_yoy`, `indicatori_agregati_venituri_yoy`, etc.
- These compare current 12 months vs previous 12 months
- Double the data scanned
3. **Complex JOINs without indexes**
- Joins between `fact_vfacturi2`, `fact_vfacturi_detalii`, `vnom_articole`, `vnom_parteneri`
- Missing indexes on foreign keys
4. **Repeated aggregations**
- Multiple queries calculate similar sums (vânzări, marjă)
- Each query re-scans the same data
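The repeated-aggregation point can be sketched in pandas on toy data (column names are illustrative, not the real `fact_vfacturi2` schema): aggregate the fact table once, then derive every metric from that base frame instead of re-scanning.

```python
import pandas as pd

# Toy stand-in for the fact table (illustrative columns)
fact = pd.DataFrame({
    "luna": ["2024-01", "2024-01", "2024-02"],
    "valoare": [100.0, 50.0, 80.0],
    "marja": [20.0, 5.0, 16.0],
})

# Aggregate once...
base = fact.groupby("luna", as_index=False)[["valoare", "marja"]].sum()

# ...then derive the other metrics from the same base frame
base["marja_pct"] = base["marja"] / base["valoare"] * 100
total_vanzari = base["valoare"].sum()
```

The same idea applies server-side: a single base CTE (or the materialized view below) feeding several SELECTs replaces N independent full scans.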
---
## Optimization Recommendations
### Priority 1: Add Indexes (Immediate Impact)
```sql
-- Index on date column (most critical)
CREATE INDEX idx_vfacturi2_data ON fact_vfacturi2(data);
-- Composite index for common filters
CREATE INDEX idx_vfacturi2_filter ON fact_vfacturi2(sters, tip, data);
-- Index on detail table join column
CREATE INDEX idx_vfacturi_det_nrfac ON fact_vfacturi_detalii(nrfactura);
```
### Priority 2: Materialized Views (Medium-term)
```sql
-- Pre-aggregated monthly sales
CREATE MATERIALIZED VIEW mv_vanzari_lunare
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT
TRUNC(data, 'MM') as luna,
SUM(valoare) as vanzari,
SUM(marja) as marja
FROM fact_vfacturi2
WHERE sters = 0 AND tip NOT IN (7,8,9,24)
GROUP BY TRUNC(data, 'MM');
```
### Priority 3: Query Consolidation (Long-term)
- Combine related queries into single CTEs
- Calculate base metrics once, derive others
- Use window functions instead of self-joins for YoY
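In Oracle the window-function approach would be `LAG(vanzari, 12) OVER (ORDER BY luna)` on the monthly aggregate; the same single-pass YoY computation, sketched in pandas on illustrative data:

```python
import pandas as pd

# 24 months of illustrative monthly sales (what mv_vanzari_lunare would return)
df = pd.DataFrame({
    "luna": pd.period_range("2023-01", periods=24, freq="M").astype(str),
    "vanzari": [100.0 + i for i in range(24)],
})

# Equivalent of LAG(vanzari, 12) OVER (ORDER BY luna): one pass, no self-join
df["vanzari_anterioare"] = df["vanzari"].shift(12)
df["variatie_procent"] = (df["vanzari"] / df["vanzari_anterioare"] - 1) * 100
```

One ordered scan produces both periods; the self-join variant scans the 24-month range twice and joins the halves back together.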
---
## Monitoring
Run with performance logging enabled:
```bash
python main.py --months 12
# Check output/performance_log.txt for detailed breakdown
```
---
## Version History
| Date | Change |
|------|--------|
| 2025-12-11 | Initial performance analysis with PerformanceLogger |


@@ -1,448 +0,0 @@
# Plan: Report Generator Fixes - 2025-11-28
## Problems to Solve
1. **Analiza Prajitorie** (roastery analysis) - inflows and outflows appear on separate rows instead of in columns
2. **Financial queries returning "No Data"** - DSO/DPO, client/supplier balances, aging, cash position, and cash conversion cycle show no data (the user confirms the data EXISTS)
3. **Recommendations in the Executive Summary** - must be included below the KPIs in the Sumar Executiv sheet
4. **Sheet reordering** - the aggregates (indicatori_agregati, portofoliu_clienti, concentrare_risc) must move immediately after the Executive Summary
---
## ISSUE 1: Analiza Prajitorie - Restructure from Rows to Columns
### File: `queries.py`, lines 450-478
### Current Problem
The query groups by `tip_miscare` (Inflow/Outflow/Transformation), producing separate rows:
```
luna | tip | tip_miscare | cantitate_intrata | cantitate_iesita
2024-01 | Materii prime | Intrare | 1000 | 0
2024-01 | Materii prime | Iesire | 0 | 800
```
### Required Output
One row per Month + Type, with separate columns for inflows and outflows:
```
luna | tip | cantitate_intrari | valoare_intrari | cantitate_iesiri | valoare_iesiri | sold_net
2024-01 | Materii prime | 1000 | 50000 | 800 | 40000 | 10000
```
### Solution: Replace ANALIZA_PRAJITORIE (lines 450-478)
```sql
ANALIZA_PRAJITORIE = """
SELECT
    TO_CHAR(r.dataact, 'YYYY-MM') AS luna,
    CASE
        WHEN r.cont = '301' THEN 'Materii prime'
        WHEN r.cont = '341' THEN 'Semifabricate'
        WHEN r.cont = '345' THEN 'Produse finite'
        ELSE 'Altele'
    END AS tip,
    -- Inflows: cant > 0 AND cante = 0
    ROUND(SUM(CASE WHEN r.cant > 0 AND NVL(r.cante, 0) = 0 THEN r.cant ELSE 0 END), 2) AS cantitate_intrari,
    ROUND(SUM(CASE WHEN r.cant > 0 AND NVL(r.cante, 0) = 0 THEN r.cant * NVL(r.pret, 0) ELSE 0 END), 2) AS valoare_intrari,
    -- Outflows: cant = 0 AND cante > 0
    ROUND(SUM(CASE WHEN NVL(r.cant, 0) = 0 AND r.cante > 0 THEN r.cante ELSE 0 END), 2) AS cantitate_iesiri,
    ROUND(SUM(CASE WHEN NVL(r.cant, 0) = 0 AND r.cante > 0 THEN r.cante * NVL(r.pret, 0) ELSE 0 END), 2) AS valoare_iesiri,
    -- Transformations: cant > 0 AND cante > 0 (inflow and outflow at the same time)
    ROUND(SUM(CASE WHEN r.cant > 0 AND r.cante > 0 THEN r.cant ELSE 0 END), 2) AS cantitate_transformari_in,
    ROUND(SUM(CASE WHEN r.cant > 0 AND r.cante > 0 THEN r.cante ELSE 0 END), 2) AS cantitate_transformari_out,
    -- Net balance
    ROUND(SUM(NVL(r.cant, 0) - NVL(r.cante, 0)), 2) AS sold_net_cantitate,
    ROUND(SUM((NVL(r.cant, 0) - NVL(r.cante, 0)) * NVL(r.pret, 0)), 2) AS sold_net_valoare
FROM vrul r
WHERE r.cont IN ('301', '341', '345')
  AND r.dataact >= ADD_MONTHS(TRUNC(SYSDATE), -:months)
GROUP BY TO_CHAR(r.dataact, 'YYYY-MM'),
         CASE WHEN r.cont = '301' THEN 'Materii prime'
              WHEN r.cont = '341' THEN 'Semifabricate'
              WHEN r.cont = '345' THEN 'Produse finite'
              ELSE 'Altele' END
ORDER BY luna, tip
"""
```
### Key Changes
1. **Removed** `tip_miscare` from SELECT and GROUP BY
2. **Conditional aggregation** with `CASE WHEN ... THEN ... ELSE 0 END` inside SUM()
3. **Separate columns** for each movement type
4. **Added value columns** alongside the quantities
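The same rows-to-columns reshaping that the conditional SUM performs in SQL can be checked quickly in pandas on toy data (illustrative values, not the real `vrul` contents):

```python
import pandas as pd

# Toy version of the old row-per-movement output
rows = pd.DataFrame({
    "luna": ["2024-01", "2024-01"],
    "tip": ["Materii prime", "Materii prime"],
    "tip_miscare": ["Intrare", "Iesire"],
    "cantitate": [1000, 800],
})

# Pivot the movement types into columns - one row per (luna, tip)
wide = (rows.pivot_table(index=["luna", "tip"], columns="tip_miscare",
                         values="cantitate", aggfunc="sum", fill_value=0)
            .reset_index())
wide["sold_net"] = wide["Intrare"] - wide["Iesire"]
```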
### Update the Legends in main.py (around line 224)
Add to the `legends` dictionary:
```python
'analiza_prajitorie': {
'CANTITATE_INTRARI': 'Cantitate intrata (cant > 0, cante = 0)',
'VALOARE_INTRARI': 'Valoare intrari = cantitate x pret',
'CANTITATE_IESIRI': 'Cantitate iesita (cant = 0, cante > 0)',
'VALOARE_IESIRI': 'Valoare iesiri = cantitate x pret',
'CANTITATE_TRANSFORMARI_IN': 'Cantitate intrata in transformari',
'CANTITATE_TRANSFORMARI_OUT': 'Cantitate iesita din transformari',
'SOLD_NET_CANTITATE': 'Sold net = Total intrari - Total iesiri',
'SOLD_NET_VALOARE': 'Valoare neta a soldului'
}
```
---
## ISSUE 2: Financial Queries Returning "No Data" - DIAGNOSIS NEEDED
### Affected Queries
| Query | View Used | Line in queries.py | Current Filter |
|-------|-----------|--------------------|----------------|
| DSO_DPO | vbalanta_parteneri | 796-844 | `an = EXTRACT(YEAR FROM SYSDATE) AND luna = EXTRACT(MONTH FROM SYSDATE)` |
| SOLDURI_CLIENTI | vbalanta_parteneri | 636-654 | Same + `cont LIKE '4111%'` |
| SOLDURI_FURNIZORI | vbalanta_parteneri | 659-677 | Same + `cont LIKE '401%'` |
| AGING_CREANTE | vireg_parteneri | 682-714 | `cont LIKE '4111%' OR '461%'` |
| FACTURI_RESTANTE | vireg_parteneri | 719-734 | Same + `datascad < SYSDATE` |
| POZITIA_CASH | vbal | 849-872 | `cont LIKE '512%' OR '531%'` |
| CICLU_CONVERSIE_CASH | Multiple | 877-940 | Combines all of the above |
### The user confirms the DATA EXISTS - the cause must be diagnosed
### Possible Causes
1. The view names differ in the database
2. The column names differ (`an`, `luna`, `solddeb`, `soldcred`)
3. The account-code prefixes do not match (4111%, 401%, 512%)
4. The HAVING thresholds are too restrictive (`> 1`, `> 100`)
### IMMEDIATE FIX: Relax the HAVING Thresholds
**SOLDURI_CLIENTI** (line 652):
```sql
-- FROM:
HAVING ABS(SUM(b.solddeb - b.soldcred)) > 1
-- TO:
HAVING ABS(SUM(b.solddeb - b.soldcred)) > 0.01
```
**SOLDURI_FURNIZORI** (line 675):
```sql
-- FROM:
HAVING ABS(SUM(b.soldcred - b.solddeb)) > 1
-- TO:
HAVING ABS(SUM(b.soldcred - b.solddeb)) > 0.01
```
**AGING_CREANTE** (line 712):
```sql
-- FROM:
HAVING SUM(sold_ramas) > 100
-- TO:
HAVING SUM(sold_ramas) > 0.01
```
**AGING_DATORII** (line 770):
```sql
-- FROM:
HAVING SUM(sold_ramas) > 100
-- TO:
HAVING SUM(sold_ramas) > 0.01
```
**POZITIA_CASH** (line 870):
```sql
-- FROM:
HAVING ABS(SUM(b.solddeb - b.soldcred)) > 0.01
-- Already OK, but verify that the vbal view exists
```
### If That Still Doesn't Work - Check the Views
Run in Oracle:
```sql
-- Check whether the views exist
SELECT view_name FROM user_views
WHERE view_name IN ('VBALANTA_PARTENERI', 'VIREG_PARTENERI', 'VBAL', 'VRUL');
-- Check whether data exists for the current month
SELECT an, luna, COUNT(*)
FROM vbalanta_parteneri
WHERE an = EXTRACT(YEAR FROM SYSDATE)
GROUP BY an, luna
ORDER BY luna DESC;
-- Check which account prefixes actually exist
SELECT DISTINCT SUBSTR(cont, 1, 4) AS prefix_cont
FROM vbalanta_parteneri
WHERE an = EXTRACT(YEAR FROM SYSDATE);
```
---
## ISSUE 3: Recommendations in the Executive Summary
### Current State
- Sheet `sumar_executiv` (line 166) - contains only the KPIs
- Sheet `recomandari` (line 168) - a separate sheet with all recommendations
### Solution: A New Method in report_generator.py
### Add the new method to the `ExcelReportGenerator` class (after line 167 in report_generator.py)
```python
def add_sheet_with_recommendations(self, name: str, df: pd.DataFrame,
                                   recommendations_df: pd.DataFrame,
                                   title: str = None, description: str = None,
                                   legend: dict = None, top_n_recommendations: int = 5):
    """Add a formatted sheet with the KPIs and the top recommendations below them."""
    sheet_name = name[:31]
    ws = self.wb.create_sheet(title=sheet_name)
    start_row = 1
    # Add the title
    if title:
        ws.cell(row=start_row, column=1, value=title)
        ws.cell(row=start_row, column=1).font = Font(bold=True, size=14)
        start_row += 1
    # Add the description
    if description:
        ws.cell(row=start_row, column=1, value=description)
        ws.cell(row=start_row, column=1).font = Font(italic=True, size=10, color='666666')
        start_row += 1
    # Add the timestamp
    ws.cell(row=start_row, column=1, value=f"Generat: {datetime.now().strftime('%Y-%m-%d %H:%M')}")
    ws.cell(row=start_row, column=1).font = Font(size=9, color='999999')
    start_row += 2
    # === SECTION 1: KPIs ===
    if df is not None and not df.empty:
        # Header row
        for col_idx, col_name in enumerate(df.columns, 1):
            cell = ws.cell(row=start_row, column=col_idx, value=col_name)
            cell.font = self.header_font
            cell.fill = self.header_fill
            cell.alignment = Alignment(horizontal='center', vertical='center', wrap_text=True)
            cell.border = self.border
        # Data rows
        for row_idx, row in enumerate(df.itertuples(index=False), start_row + 1):
            for col_idx, value in enumerate(row, 1):
                cell = ws.cell(row=row_idx, column=col_idx, value=value)
                cell.border = self.border
                if isinstance(value, (int, float)):
                    cell.number_format = '#,##0.00' if isinstance(value, float) else '#,##0'
                    cell.alignment = Alignment(horizontal='right')
        start_row = start_row + len(df) + 3
    # === SECTION 2: TOP RECOMMENDATIONS ===
    if recommendations_df is not None and not recommendations_df.empty:
        ws.cell(row=start_row, column=1, value="Top Recomandari Prioritare")
        ws.cell(row=start_row, column=1).font = Font(bold=True, size=12, color='366092')
        start_row += 1
        # Sort by priority (ALERTA first, then ATENTIE, then OK)
        df_sorted = recommendations_df.copy()
        status_order = {'ALERTA': 0, 'ATENTIE': 1, 'OK': 2}
        df_sorted['_order'] = df_sorted['STATUS'].map(status_order).fillna(3)
        df_sorted = df_sorted.sort_values('_order').head(top_n_recommendations)
        df_sorted = df_sorted.drop(columns=['_order'])
        # Columns to display
        display_cols = ['STATUS', 'CATEGORIE', 'INDICATOR', 'VALOARE', 'RECOMANDARE']
        display_cols = [c for c in display_cols if c in df_sorted.columns]
        # Header row with a purple background
        for col_idx, col_name in enumerate(display_cols, 1):
            cell = ws.cell(row=start_row, column=col_idx, value=col_name)
            cell.font = self.header_font
            cell.fill = PatternFill(start_color='8E44AD', end_color='8E44AD', fill_type='solid')
            cell.alignment = Alignment(horizontal='center', vertical='center', wrap_text=True)
            cell.border = self.border
        # Data rows colored by status
        for row_idx, (_, row) in enumerate(df_sorted.iterrows(), start_row + 1):
            status = row.get('STATUS', 'OK')
            for col_idx, col_name in enumerate(display_cols, 1):
                value = row.get(col_name, '')
                cell = ws.cell(row=row_idx, column=col_idx, value=value)
                cell.border = self.border
                cell.alignment = Alignment(wrap_text=True)
                # Conditional coloring
                if status == 'ALERTA':
                    cell.fill = PatternFill(start_color='FADBD8', end_color='FADBD8', fill_type='solid')
                elif status == 'ATENTIE':
                    cell.fill = PatternFill(start_color='FCF3CF', end_color='FCF3CF', fill_type='solid')
                else:
                    cell.fill = PatternFill(start_color='D5F5E3', end_color='D5F5E3', fill_type='solid')
    # Auto-adjust the column widths
    for col_idx in range(1, 8):
        ws.column_dimensions[get_column_letter(col_idx)].width = 22
    ws.freeze_panes = ws.cell(row=5, column=1)
```
### Modify main.py - the Sheet Creation Loop (around line 435)
```python
for query_name in sheet_order:
    if query_name in results:
        # Special-case 'sumar_executiv' - add the recommendations below the KPIs
        if query_name == 'sumar_executiv':
            query_info = QUERIES.get(query_name, {})
            excel_gen.add_sheet_with_recommendations(
                name='Sumar Executiv',
                df=results['sumar_executiv'],
                recommendations_df=results.get('recomandari'),
                title=query_info.get('title', 'Sumar Executiv'),
                description=query_info.get('description', ''),
                legend=legends.get('sumar_executiv'),
                top_n_recommendations=5
            )
        # Keep the full recommendations sheet as well
        elif query_name == 'recomandari':
            excel_gen.add_sheet(
                name='RECOMANDARI',
                df=results['recomandari'],
                title='Recomandari Automate (Lista Completa)',
                description='Toate insight-urile si actiunile sugerate bazate pe analiza datelor',
                legend=legends.get('recomandari')
            )
        elif query_name in QUERIES:
            # ... existing logic unchanged
```
---
## ISSUE 4: Sheet Reordering
### File: `main.py`, lines 165-221
### New sheet_order (fully replaces lines 165-221)
```python
sheet_order = [
    # EXECUTIVE SUMMARY
    'sumar_executiv',
    'sumar_executiv_yoy',
    'recomandari',
    # AGGREGATE INDICATORS (MOVED UP - big-picture view)
    'indicatori_agregati_venituri',
    'indicatori_agregati_venituri_yoy',
    'portofoliu_clienti',
    'concentrare_risc',
    'concentrare_risc_yoy',
    'sezonalitate_lunara',
    # GENERAL INDICATORS & LIQUIDITY
    'indicatori_generali',
    'indicatori_lichiditate',
    'clasificare_datorii',
    'grad_acoperire_datorii',
    'proiectie_lichiditate',
    # ALERTS
    'vanzari_sub_cost',
    'clienti_marja_mica',
    # CASH CYCLE
    'ciclu_conversie_cash',
    # CLIENT ANALYSIS
    'marja_per_client',
    'clienti_ranking_profit',
    'frecventa_clienti',
    'concentrare_clienti',
    'trending_clienti',
    'marja_client_categorie',
    # PRODUCTS
    'top_produse',
    'marja_per_categorie',
    'marja_per_gestiune',
    'articole_negestionabile',
    'productie_vs_revanzare',
    # PRICING
    'dispersie_preturi',
    'clienti_sub_medie',
    'evolutie_discount',
    # FINANCIAL
    'dso_dpo',
    'dso_dpo_yoy',
    'solduri_clienti',
    'aging_creante',
    'facturi_restante',
    'solduri_furnizori',
    'aging_datorii',
    'facturi_restante_furnizori',
    'pozitia_cash',
    # HISTORY
    'vanzari_lunare',
    # STOCK
    'stoc_curent',
    'stoc_lent',
    'rotatie_stocuri',
    # PRODUCTION
    'analiza_prajitorie',
]
```
---
## Implementation Order
### Step 1: queries.py
1. Replace ANALIZA_PRAJITORIE (lines 450-478) with the conditional-aggregation version
2. Relax the HAVING thresholds in:
   - SOLDURI_CLIENTI (line 652): `> 1` -> `> 0.01`
   - SOLDURI_FURNIZORI (line 675): `> 1` -> `> 0.01`
   - AGING_CREANTE (line 712): `> 100` -> `> 0.01`
   - AGING_DATORII (line 770): `> 100` -> `> 0.01`
### Step 2: report_generator.py
1. Add the `add_sheet_with_recommendations()` method after line 167
2. Make sure the imports include `PatternFill` and `get_column_letter` from openpyxl
### Step 3: main.py
1. Replace the `sheet_order` array (lines 165-221)
2. Modify the sheet creation loop for `sumar_executiv` (around line 435)
3. Add the legend for `analiza_prajitorie` to the `legends` dictionary
### Step 4: Testing
1. Run `python main.py --months 1` for a quick test
2. Check the `analiza_prajitorie` sheet - columnar format
3. Check the financial queries - they must return data
4. Check `Sumar Executiv` - recommendations section below the KPIs
5. Check the sheet order - aggregates right after the summary
---
## Critical Files
| File | What Changes | Lines |
|------|--------------|-------|
| `queries.py` | ANALIZA_PRAJITORIE SQL | 450-478 |
| `queries.py` | HAVING thresholds | 652, 675, 712, 770 |
| `report_generator.py` | New method | after 167 |
| `main.py` | sheet_order array | 165-221 |
| `main.py` | Sheet creation loop | ~435 |
| `main.py` | legends dict | ~224 |
---
## Notes for the Next Session
1. **ALERT priority**: the financial "no data" queries - the user confirmed the data EXISTS. If relaxing HAVING does not solve it, the view and column names must be checked in Oracle.
2. **Required import** in report_generator.py:
```python
from openpyxl.utils import get_column_letter
from openpyxl.styles import PatternFill
```
3. **Testing**: after implementing, run the report and verify each of the 4 fixes.

main.py

@@ -15,6 +15,7 @@ import sys
import argparse
from datetime import datetime
from pathlib import Path
import time
import warnings
warnings.filterwarnings('ignore')
@@ -62,6 +63,72 @@ from report_generator import (
from recommendations import RecommendationsEngine
class PerformanceLogger:
"""Tracks execution time for each operation to identify bottlenecks."""
def __init__(self):
self.timings = []
self.start_time = time.perf_counter()
self.phase_start = None
self.phase_name = None
def start(self, name: str):
"""Start timing a named operation."""
self.phase_name = name
self.phase_start = time.perf_counter()
print(f"⏱️ [{self._timestamp()}] START: {name}")
def stop(self, rows: int = None):
"""Stop timing and record duration."""
if self.phase_start is None:
return
duration = time.perf_counter() - self.phase_start
self.timings.append({
'name': self.phase_name,
'duration': duration,
'rows': rows
})
rows_info = f" ({rows} rows)" if rows else ""
print(f"✅ [{self._timestamp()}] DONE: {self.phase_name} - {duration:.2f}s{rows_info}")
self.phase_start = None
def _timestamp(self):
return datetime.now().strftime("%H:%M:%S")
def summary(self, output_path: str = None):
"""Print summary sorted by duration (slowest first)."""
total = time.perf_counter() - self.start_time
print("\n" + "="*70)
print("📊 PERFORMANCE SUMMARY (sorted by duration, slowest first)")
print("="*70)
sorted_timings = sorted(self.timings, key=lambda x: x['duration'], reverse=True)
lines = []
for t in sorted_timings:
pct = (t['duration'] / total) * 100 if total > 0 else 0
rows_info = f" [{t['rows']} rows]" if t['rows'] else ""
line = f"{t['duration']:8.2f}s ({pct:5.1f}%) - {t['name']}{rows_info}"
print(line)
lines.append(line)
print("-"*70)
print(f"TOTAL: {total:.2f}s ({total/60:.1f} minutes)")
# Save to file
if output_path:
log_file = f"{output_path}/performance_log.txt"
with open(log_file, 'w', encoding='utf-8') as f:
f.write(f"Performance Log - {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n")
f.write("="*70 + "\n\n")
for line in lines:
f.write(line + "\n")
f.write("\n" + "-"*70 + "\n")
f.write(f"TOTAL: {total:.2f}s ({total/60:.1f} minutes)\n")
print(f"\n📝 Log saved to: {log_file}")
class OracleConnection:
"""Context manager for Oracle database connection"""
@@ -142,47 +209,112 @@ def generate_reports(args):
# Connect and execute queries
results = {}
perf = PerformanceLogger() # Initialize performance logger
with OracleConnection() as conn:
print("\n📥 Extragere date din Oracle:\n")
for query_name, query_info in QUERIES.items():
perf.start(f"QUERY: {query_name}")
df = execute_query(conn, query_name, query_info)
results[query_name] = df
perf.stop(rows=len(df) if df is not None and not df.empty else 0)
# Generate Excel Report
print("\n📝 Generare raport Excel...")
excel_gen = ExcelReportGenerator(excel_path)
# Generate recommendations based on all data
perf.start("RECOMMENDATIONS: analyze_all")
recommendations_engine = RecommendationsEngine(RECOMMENDATION_THRESHOLDS)
recommendations_df = recommendations_engine.analyze_all(results)
results['recomandari'] = recommendations_df
perf.stop(rows=len(recommendations_df))
print(f"{len(recommendations_df)} recomandări generate")
# =========================================================================
# CONSOLIDARE DATE PENTRU VEDERE DE ANSAMBLU
# =========================================================================
print("\n📊 Consolidare date pentru vedere de ansamblu...")
# --- Consolidare 1: Vedere Executivă (KPIs + YoY) ---
perf.start("CONSOLIDATION: kpi_consolidated")
# Folosim direct sumar_executiv_yoy care are deja toate coloanele necesare:
# INDICATOR, VALOARE_CURENTA, VALOARE_ANTERIOARA, VARIATIE_PROCENT, TREND
if 'sumar_executiv_yoy' in results and not results['sumar_executiv_yoy'].empty:
df_kpi = results['sumar_executiv_yoy'].copy()
# Adaugă coloana UM bazată pe tipul indicatorului
df_kpi['UM'] = df_kpi['INDICATOR'].apply(lambda x:
'%' if '%' in x or 'marja' in x.lower() else
'buc' if 'numar' in x.lower() else 'RON'
)
results['kpi_consolidated'] = df_kpi
else:
# Fallback la sumar_executiv simplu (fără YoY)
results['kpi_consolidated'] = results.get('sumar_executiv', pd.DataFrame())
perf.stop()
# --- Consolidare 2: Indicatori Venituri (Current + YoY) ---
perf.start("CONSOLIDATION: venituri_consolidated")
if 'indicatori_agregati_venituri' in results and 'indicatori_agregati_venituri_yoy' in results:
df_venituri = results['indicatori_agregati_venituri'].copy()
df_venituri_yoy = results['indicatori_agregati_venituri_yoy'].copy()
if not df_venituri.empty and not df_venituri_yoy.empty:
# Merge pe LINIE_BUSINESS
df_venituri_yoy = df_venituri_yoy.rename(columns={
'VANZARI': 'VANZARI_ANTERIOARE',
'MARJA': 'MARJA_ANTERIOARA'
})
df_venituri_combined = pd.merge(
df_venituri,
df_venituri_yoy[['LINIE_BUSINESS', 'VANZARI_ANTERIOARE', 'VARIATIE_PROCENT', 'TREND']],
on='LINIE_BUSINESS',
how='left'
)
df_venituri_combined = df_venituri_combined.rename(columns={'VANZARI': 'VANZARI_CURENTE'})
results['venituri_consolidated'] = df_venituri_combined
else:
results['venituri_consolidated'] = df_venituri
else:
results['venituri_consolidated'] = results.get('indicatori_agregati_venituri', pd.DataFrame())
perf.stop()
# --- Consolidare 3: Clienți și Risc (Portofoliu + Concentrare + YoY) ---
perf.start("CONSOLIDATION: risc_consolidated")
if 'concentrare_risc' in results and 'concentrare_risc_yoy' in results:
df_risc = results['concentrare_risc'].copy()
df_risc_yoy = results['concentrare_risc_yoy'].copy()
if not df_risc.empty and not df_risc_yoy.empty:
# Merge pe INDICATOR
df_risc = df_risc.rename(columns={'PROCENT': 'PROCENT_CURENT'})
df_risc_combined = pd.merge(
df_risc,
df_risc_yoy[['INDICATOR', 'PROCENT_ANTERIOR', 'VARIATIE', 'TREND']],
on='INDICATOR',
how='left'
)
results['risc_consolidated'] = df_risc_combined
else:
results['risc_consolidated'] = df_risc
else:
results['risc_consolidated'] = results.get('concentrare_risc', pd.DataFrame())
perf.stop()
print("✓ Consolidări finalizate")
# Add sheets in logical order - CONSOLIDAT primul, apoi detalii
sheet_order = [
# CONSOLIDAT - Vedere de Ansamblu (înlocuiește sheet-urile individuale)
'vedere_ansamblu', # KPIs + YoY + Recomandări
'indicatori_venituri', # Venituri Current + YoY merged
'clienti_risc', # Portofoliu + Concentrare + YoY
'tablou_financiar', # 5 secțiuni financiare
# DETALII - Sheet-uri individuale pentru analiză profundă
'sezonalitate_lunara',
# ALERTE
'vanzari_sub_cost',
'clienti_marja_mica',
@@ -452,36 +584,140 @@ def generate_reports(args):
'CANTITATE_TRANSFORMARI_OUT': 'Cantitate iesita din transformari',
'SOLD_NET_CANTITATE': 'Sold net = Total intrari - Total iesiri',
'SOLD_NET_VALOARE': 'Valoare neta a soldului'
},
# =====================================================================
# LEGENDS FOR CONSOLIDATED SHEETS
# =====================================================================
'vedere_ansamblu': {
'INDICATOR': 'Denumirea indicatorului de business',
'VALOARE_CURENTA': 'Valoare în perioada curentă (ultimele 12 luni)',
'UM': 'Unitate de măsură',
'VALOARE_ANTERIOARA': 'Valoare în perioada anterioară (12-24 luni)',
'VARIATIE_PROCENT': 'Variație procentuală YoY',
'TREND': 'CREȘTERE/SCĂDERE/STABIL',
'STATUS': 'OK = bine, ATENȚIE = necesită atenție, ALERTĂ = acțiune urgentă',
'CATEGORIE': 'Domeniu: Marja, Clienți, Stoc, Financiar',
'RECOMANDARE': 'Acțiune sugerată'
},
'indicatori_venituri': {
'LINIE_BUSINESS': 'Producție proprie / Materii prime / Marfă revândută',
'VANZARI_CURENTE': 'Vânzări în ultimele 12 luni',
'PROCENT_VENITURI': 'Contribuția la totalul vânzărilor (%)',
'MARJA': 'Marja brută pe linia de business',
'PROCENT_MARJA': 'Marja procentuală',
'VANZARI_ANTERIOARE': 'Vânzări în perioada anterioară',
'VARIATIE_PROCENT': 'Creștere/scădere procentuală YoY',
'TREND': 'CREȘTERE / SCĂDERE / STABIL'
},
'clienti_risc': {
'CATEGORIE': 'Tipul de categorie clienți',
'VALOARE': 'Numărul de clienți sau valoarea',
'EXPLICATIE': 'Explicația categoriei',
'INDICATOR': 'Top 1/5/10 clienți',
'PROCENT_CURENT': '% vânzări la Top N clienți - an curent',
'PROCENT_ANTERIOR': '% vânzări la Top N clienți - an trecut',
'VARIATIE': 'Schimbarea în puncte procentuale',
'TREND': 'DIVERSIFICARE (bine) / CONCENTRARE (risc) / STABIL',
'STATUS': 'OK / ATENTIE / RISC MARE'
},
'tablou_financiar': {
'INDICATOR': 'Denumirea indicatorului financiar',
'VALOARE': 'Valoarea calculată',
'STATUS': 'OK / ATENȚIE / ALERTĂ',
'RECOMANDARE': 'Acțiune sugerată pentru îmbunătățire',
'INTERPRETARE': 'Ce înseamnă valoarea pentru business'
}
}
# =========================================================================
# GENERARE SHEET-URI CONSOLIDATE EXCEL
# =========================================================================
# --- Sheet 0: DASHBOARD COMPLET (toate secțiunile într-o singură vedere) ---
perf.start("EXCEL: Dashboard Complet sheet (9 sections)")
excel_gen.add_consolidated_sheet(
name='Dashboard Complet',
sheet_title='Dashboard Executiv - Vedere Completă',
sheet_description='Toate indicatorii cheie consolidați într-o singură vedere rapidă',
sections=[
# KPIs și Recomandări
{
'title': 'KPIs cu Comparație YoY',
'df': results.get('kpi_consolidated', pd.DataFrame()),
'description': 'Indicatori cheie de performanță - curent vs anterior'
},
{
'title': 'Recomandări Prioritare',
'df': results.get('recomandari', pd.DataFrame()).head(10),
'description': 'Top 10 acțiuni sugerate bazate pe analiză'
},
# Venituri
{
'title': 'Venituri per Linie Business',
'df': results.get('venituri_consolidated', pd.DataFrame()),
'description': 'Producție proprie, Materii prime, Marfă revândută'
},
# Clienți și Risc
{
'title': 'Portofoliu Clienți',
'df': results.get('portofoliu_clienti', pd.DataFrame()),
'description': 'Structura și segmentarea clienților'
},
{
'title': 'Concentrare Risc YoY',
'df': results.get('risc_consolidated', pd.DataFrame()),
'description': 'Dependența de clienții mari - curent vs anterior'
},
# Tablou Financiar
{
'title': 'Indicatori Generali',
'df': results.get('indicatori_generali', pd.DataFrame()),
'description': 'Sold clienți, furnizori, cifra afaceri'
},
{
'title': 'Indicatori Lichiditate',
'df': results.get('indicatori_lichiditate', pd.DataFrame()),
'description': 'Zile rotație stoc, creanțe, datorii'
},
{
'title': 'Clasificare Datorii',
'df': results.get('clasificare_datorii', pd.DataFrame()),
'description': 'Datorii pe intervale de întârziere'
},
{
'title': 'Proiecție Lichiditate',
'df': results.get('proiectie_lichiditate', pd.DataFrame()),
'description': 'Previziune încasări și plăți pe 30 zile'
}
]
)
perf.stop()
# NOTE: Sheet-urile individuale (Vedere Ansamblu, Indicatori Venituri, Clienti si Risc,
# Tablou Financiar) au fost eliminate - toate datele sunt acum în Dashboard Complet
# --- Adaugă restul sheet-urilor de detaliu ---
# Skip sheet-urile care sunt acum în view-urile consolidate
consolidated_sheets = {
'vedere_ansamblu', 'indicatori_venituri', 'clienti_risc', 'tablou_financiar',
# Sheet-uri incluse în consolidări (nu mai sunt separate):
'sumar_executiv', 'sumar_executiv_yoy', 'recomandari',
'indicatori_agregati_venituri', 'indicatori_agregati_venituri_yoy',
'portofoliu_clienti', 'concentrare_risc', 'concentrare_risc_yoy',
'indicatori_generali', 'indicatori_lichiditate', 'clasificare_datorii',
'grad_acoperire_datorii', 'proiectie_lichiditate'
}
for query_name in sheet_order:
# Skip consolidated sheets and their source sheets
if query_name in consolidated_sheets:
continue
if query_name in results and query_name in QUERIES:
query_info = QUERIES[query_name]
# Create short sheet name from query name
sheet_name = query_name.replace('_', ' ').title()[:31]
perf.start(f"EXCEL: {query_name} detail sheet")
excel_gen.add_sheet(
name=sheet_name,
df=results[query_name],
@@ -489,81 +725,107 @@ def generate_reports(args):
description=query_info.get('description', ''),
legend=legends.get(query_name)
)
df_rows = len(results[query_name]) if results[query_name] is not None else 0
perf.stop(rows=df_rows)
perf.start("EXCEL: Save workbook")
excel_gen.save()
perf.stop()
# =========================================================================
# PDF GENERATION - CONSOLIDATED PAGES
# =========================================================================
print("\n📄 Generare raport PDF...")
pdf_gen = PDFReportGenerator(pdf_path, company_name=COMPANY_NAME)
# Page 1: Title
perf.start("PDF: Title page")
pdf_gen.add_title_page()
perf.stop()
# Pages 2-3: DASHBOARD COMPLET (all sections in a unified view)
perf.start("PDF: Dashboard Complet page (4 sections)")
pdf_gen.add_consolidated_page(
'Dashboard Complet',
sections=[
{
'title': 'KPIs cu Comparație YoY',
'df': results.get('kpi_consolidated', pd.DataFrame()),
'columns': ['INDICATOR', 'VALOARE_CURENTA', 'UM', 'VALOARE_ANTERIOARA', 'VARIATIE_PROCENT', 'TREND'],
'max_rows': 6
},
{
'title': 'Recomandări Prioritare',
'df': results.get('recomandari', pd.DataFrame()),
'columns': ['STATUS', 'CATEGORIE', 'INDICATOR', 'RECOMANDARE'],
'max_rows': 5
},
{
'title': 'Venituri per Linie Business',
'df': results.get('venituri_consolidated', pd.DataFrame()),
'columns': ['LINIE_BUSINESS', 'VANZARI_CURENTE', 'PROCENT_VENITURI', 'VARIATIE_PROCENT', 'TREND'],
'max_rows': 5
},
{
'title': 'Concentrare Risc YoY',
'df': results.get('risc_consolidated', pd.DataFrame()),
'columns': ['INDICATOR', 'PROCENT_CURENT', 'PROCENT_ANTERIOR', 'TREND'],
'max_rows': 4
}
]
)
perf.stop()
# NOTE: The individual pages (Vedere Executivă, Indicatori Venituri, Clienți și Risc,
# Tablou Financiar) were removed - all their data now lives in Dashboard Complet
pdf_gen.add_page_break()
# Alerts (sales below cost, low-margin clients)
perf.start("PDF: Alerts section")
pdf_gen.add_alerts_section({
'vanzari_sub_cost': results.get('vanzari_sub_cost', pd.DataFrame()),
'clienti_marja_mica': results.get('clienti_marja_mica', pd.DataFrame())
})
perf.stop()
pdf_gen.add_page_break()
# =========================================================================
# CHART AND DETAIL PAGES
# =========================================================================
# Chart: Monthly Sales Evolution
if 'vanzari_lunare' in results and not results['vanzari_lunare'].empty:
perf.start("PDF: Chart - vanzari_lunare")
fig = create_monthly_chart(results['vanzari_lunare'])
pdf_gen.add_chart_image(fig, "Evoluția Vânzărilor și Marjei")
perf.stop()
# Chart: Client Concentration
if 'concentrare_clienti' in results and not results['concentrare_clienti'].empty:
perf.start("PDF: Chart - concentrare_clienti")
fig = create_client_concentration_chart(results['concentrare_clienti'])
pdf_gen.add_chart_image(fig, "Concentrare Clienți")
perf.stop()
pdf_gen.add_page_break()
# Chart: Cash Conversion Cycle
if 'ciclu_conversie_cash' in results and not results['ciclu_conversie_cash'].empty:
perf.start("PDF: Chart - ciclu_conversie_cash")
fig = create_cash_cycle_chart(results['ciclu_conversie_cash'])
pdf_gen.add_chart_image(fig, "Ciclu Conversie Cash (DIO + DSO - DPO)")
perf.stop()
# Chart: Production vs Resale
if 'productie_vs_revanzare' in results and not results['productie_vs_revanzare'].empty:
perf.start("PDF: Chart - productie_vs_revanzare")
fig = create_production_chart(results['productie_vs_revanzare'])
pdf_gen.add_chart_image(fig, "Producție Proprie vs Revânzare")
perf.stop()
# Table: Top clients
pdf_gen.add_table_section(
"Top 15 Clienți după Vânzări",
results.get('marja_per_client'),
@@ -573,7 +835,7 @@ def generate_reports(args):
pdf_gen.add_page_break()
# Table: Top products
pdf_gen.add_table_section(
"Top 15 Produse după Vânzări",
results.get('top_produse'),
@@ -581,7 +843,7 @@ def generate_reports(args):
max_rows=15
)
# Table: Trending clients
pdf_gen.add_table_section(
"Trending Clienți (YoY)",
results.get('trending_clienti'),
@@ -589,7 +851,7 @@ def generate_reports(args):
max_rows=15
)
# Table: Receivables Aging
if 'aging_creante' in results and not results['aging_creante'].empty:
pdf_gen.add_page_break()
pdf_gen.add_table_section(
@@ -599,7 +861,7 @@ def generate_reports(args):
max_rows=15
)
# Table: Slow-moving stock
if 'stoc_lent' in results and not results['stoc_lent'].empty:
pdf_gen.add_page_break()
pdf_gen.add_table_section(
@@ -609,7 +871,12 @@ def generate_reports(args):
max_rows=20
)
perf.start("PDF: Save document")
pdf_gen.save()
perf.stop()
# Performance Summary
perf.summary(output_path=str(args.output_dir))
# Summary
print("\n" + "="*60)


@@ -2075,7 +2075,8 @@ ranked_anterior AS (
SELECT vanzari, ROW_NUMBER() OVER (ORDER BY vanzari DESC) AS rn
FROM vanzari_anterior
),
-- Raw metrics for anterior (may have NULL if no data)
metrics_anterior_raw AS (
SELECT
SUM(vanzari) AS total,
SUM(CASE WHEN rn <= 1 THEN vanzari ELSE 0 END) AS top1,
@@ -2083,15 +2084,25 @@ metrics_anterior AS (
SUM(CASE WHEN rn <= 10 THEN vanzari ELSE 0 END) AS top10
FROM ranked_anterior
),
-- Fallback to 0 for NULL values (when no anterior data exists)
metrics_anterior AS (
SELECT
NVL(total, 0) AS total,
NVL(top1, 0) AS top1,
NVL(top5, 0) AS top5,
NVL(top10, 0) AS top10
FROM metrics_anterior_raw
),
-- Final metrics: just 1 row each, no cartesian product
combined AS (
SELECT
ROUND(mc.top1 * 100.0 / NULLIF(mc.total, 0), 2) AS pct_curent_1,
CASE WHEN ma.total = 0 THEN NULL ELSE ROUND(ma.top1 * 100.0 / ma.total, 2) END AS pct_anterior_1,
ROUND(mc.top5 * 100.0 / NULLIF(mc.total, 0), 2) AS pct_curent_5,
CASE WHEN ma.total = 0 THEN NULL ELSE ROUND(ma.top5 * 100.0 / ma.total, 2) END AS pct_anterior_5,
ROUND(mc.top10 * 100.0 / NULLIF(mc.total, 0), 2) AS pct_curent_10,
CASE WHEN ma.total = 0 THEN NULL ELSE ROUND(ma.top10 * 100.0 / ma.total, 2) END AS pct_anterior_10,
CASE WHEN ma.total > 0 THEN 1 ELSE 0 END AS has_anterior
FROM metrics_curent mc
CROSS JOIN metrics_anterior ma
)
@@ -2099,8 +2110,9 @@ SELECT
'Top 1 client' AS indicator,
pct_curent_1 AS procent_curent,
pct_anterior_1 AS procent_anterior,
CASE WHEN has_anterior = 1 THEN ROUND(pct_curent_1 - pct_anterior_1, 2) ELSE NULL END AS variatie,
CASE
WHEN has_anterior = 0 THEN 'FARA DATE YOY'
WHEN pct_curent_1 < pct_anterior_1 THEN 'DIVERSIFICARE'
WHEN pct_curent_1 > pct_anterior_1 + 5 THEN 'CONCENTRARE'
ELSE 'STABIL'
@@ -2111,8 +2123,9 @@ SELECT
'Top 5 clienti' AS indicator,
pct_curent_5 AS procent_curent,
pct_anterior_5 AS procent_anterior,
CASE WHEN has_anterior = 1 THEN ROUND(pct_curent_5 - pct_anterior_5, 2) ELSE NULL END AS variatie,
CASE
WHEN has_anterior = 0 THEN 'FARA DATE YOY'
WHEN pct_curent_5 < pct_anterior_5 THEN 'DIVERSIFICARE'
WHEN pct_curent_5 > pct_anterior_5 + 5 THEN 'CONCENTRARE'
ELSE 'STABIL'
@@ -2123,8 +2136,9 @@ SELECT
'Top 10 clienti' AS indicator,
pct_curent_10 AS procent_curent,
pct_anterior_10 AS procent_anterior,
CASE WHEN has_anterior = 1 THEN ROUND(pct_curent_10 - pct_anterior_10, 2) ELSE NULL END AS variatie,
CASE
WHEN has_anterior = 0 THEN 'FARA DATE YOY'
WHEN pct_curent_10 < pct_anterior_10 THEN 'DIVERSIFICARE'
WHEN pct_curent_10 > pct_anterior_10 + 5 THEN 'CONCENTRARE'
ELSE 'STABIL'
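The `has_anterior` guard above can be mirrored in plain Python, which is handy for unit-testing the trend classification away from the database. This is an illustrative sketch, not part of the source; the function name is made up, but the thresholds match the SQL (`DIVERSIFICARE` when concentration drops, `CONCENTRARE` only when it grows by more than 5 percentage points):

```python
def classify_concentration(pct_curent, pct_anterior, has_anterior):
    """Mirror of the SQL CASE logic: classify a YoY client-concentration trend."""
    if not has_anterior:
        return None, 'FARA DATE YOY'   # no prior-year data: no variation to report
    variatie = round(pct_curent - pct_anterior, 2)
    if pct_curent < pct_anterior:
        trend = 'DIVERSIFICARE'        # concentration decreased
    elif pct_curent > pct_anterior + 5:
        trend = 'CONCENTRARE'          # concentration grew by more than 5 pp
    else:
        trend = 'STABIL'               # within the +5 pp tolerance band
    return variatie, trend
```

Note the asymmetry the SQL encodes: any decrease counts as diversification, but an increase must exceed 5 percentage points before it is flagged as concentration.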


@@ -262,6 +262,156 @@ class ExcelReportGenerator:
ws.freeze_panes = ws.cell(row=5, column=1)
def add_consolidated_sheet(self, name: str, sections: list, sheet_title: str = None,
sheet_description: str = None):
"""
Add a consolidated sheet with multiple sections separated visually.
Args:
name: Sheet name (max 31 chars)
sections: List of dicts with keys:
- 'title': Section title (str)
- 'df': DataFrame with data
- 'description': Optional section description (str)
- 'legend': Optional dict with column explanations
sheet_title: Overall sheet title
sheet_description: Overall sheet description
"""
sheet_name = name[:31]
ws = self.wb.create_sheet(title=sheet_name)
start_row = 1
# Add overall sheet title
if sheet_title:
ws.cell(row=start_row, column=1, value=sheet_title)
ws.cell(row=start_row, column=1).font = Font(bold=True, size=16)
start_row += 1
# Add overall description
if sheet_description:
ws.cell(row=start_row, column=1, value=sheet_description)
ws.cell(row=start_row, column=1).font = Font(italic=True, size=10, color='666666')
start_row += 1
# Add timestamp
ws.cell(row=start_row, column=1, value=f"Generat: {datetime.now().strftime('%Y-%m-%d %H:%M')}")
ws.cell(row=start_row, column=1).font = Font(size=9, color='999999')
start_row += 2
# Process each section
for section in sections:
section_title = section.get('title', '')
df = section.get('df')
description = section.get('description', '')
legend = section.get('legend', {})
# Section separator
separator_fill = PatternFill(start_color='2C3E50', end_color='2C3E50', fill_type='solid')
for col in range(1, 10): # Wide separator
# Use >>> instead of === to avoid Excel formula interpretation
cell = ws.cell(row=start_row, column=col, value='' if col > 1 else f'>>> {section_title}')
cell.fill = separator_fill
cell.font = Font(bold=True, color='FFFFFF', size=11)
start_row += 1
# Section description
if description:
ws.cell(row=start_row, column=1, value=description)
ws.cell(row=start_row, column=1).font = Font(italic=True, size=9, color='666666')
start_row += 1
start_row += 1
# Check for empty data
if df is None or df.empty:
ws.cell(row=start_row, column=1, value="Nu există date pentru această secțiune.")
ws.cell(row=start_row, column=1).font = Font(italic=True, color='999999')
start_row += 3
continue
# Write headers
for col_idx, col_name in enumerate(df.columns, 1):
cell = ws.cell(row=start_row, column=col_idx, value=col_name)
cell.font = self.header_font
cell.fill = self.header_fill
cell.alignment = Alignment(horizontal='center', vertical='center', wrap_text=True)
cell.border = self.border
# Write data
for row_idx, row in enumerate(df.itertuples(index=False), start_row + 1):
for col_idx, value in enumerate(row, 1):
cell = ws.cell(row=row_idx, column=col_idx, value=value)
cell.border = self.border
# Format numbers
if isinstance(value, (int, float)):
cell.number_format = '#,##0.00' if isinstance(value, float) else '#,##0'
cell.alignment = Alignment(horizontal='right')
# Highlight based on column name
col_name = df.columns[col_idx - 1].lower()
# Status coloring
if col_name == 'status' or col_name == 'acoperire':
if isinstance(value, str):
if value == 'OK':
cell.fill = self.good_fill
elif value in ('ATENTIE', 'NECESAR'):
cell.fill = self.warning_fill
elif value in ('ALERTA', 'DEFICIT', 'RISC MARE'):
cell.fill = self.alert_fill
# Trend coloring
if col_name == 'trend':
if isinstance(value, str):
if value in ('CRESTERE', 'IMBUNATATIRE', 'DIVERSIFICARE'):
cell.fill = self.good_fill
elif value in ('SCADERE', 'DETERIORARE', 'CONCENTRARE', 'PIERDUT'):
cell.fill = self.alert_fill
elif value == 'ATENTIE':
cell.fill = self.warning_fill
# Variatie coloring
if 'variatie' in col_name:
if isinstance(value, (int, float)):
if value > 0:
cell.fill = self.good_fill
elif value < 0:
cell.fill = self.alert_fill
# Margin coloring
if 'procent' in col_name or 'marja' in col_name:
if isinstance(value, (int, float)):
if value < 10:
cell.fill = self.alert_fill
elif value < 15:
cell.fill = self.warning_fill
elif value > 25:
cell.fill = self.good_fill
start_row = start_row + len(df) + 2
# Add legend for this section
if legend:
ws.cell(row=start_row, column=1, value="Legendă:")
ws.cell(row=start_row, column=1).font = Font(bold=True, size=8, color='336699')
start_row += 1
for col_name, explanation in legend.items():
ws.cell(row=start_row, column=1, value=f"{col_name}: {explanation}")
ws.cell(row=start_row, column=1).font = Font(size=8, color='666666')
start_row += 1
# Space between sections
start_row += 2
# Auto-adjust column widths
for col_idx in range(1, 12):
ws.column_dimensions[get_column_letter(col_idx)].width = 18
# Freeze title row
ws.freeze_panes = ws.cell(row=5, column=1)
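The per-cell highlighting rules in `add_consolidated_sheet` can be factored into a pure helper, which makes them testable without openpyxl. A sketch under stated assumptions: `classify_fill` is an illustrative name, and the `'good'`/`'warning'`/`'alert'` strings stand in for the `self.good_fill`/`self.warning_fill`/`self.alert_fill` objects. Later rules deliberately override earlier ones, matching the sequential `cell.fill` assignments in the method:

```python
def classify_fill(col_name, value):
    """Mirror of the sheet's conditional-formatting rules.
    Returns 'good', 'warning', 'alert', or None; later rules override earlier ones."""
    col = col_name.lower()
    fill = None
    # Status coloring
    if col in ('status', 'acoperire') and isinstance(value, str):
        if value == 'OK':
            fill = 'good'
        elif value in ('ATENTIE', 'NECESAR'):
            fill = 'warning'
        elif value in ('ALERTA', 'DEFICIT', 'RISC MARE'):
            fill = 'alert'
    # Trend coloring
    if col == 'trend' and isinstance(value, str):
        if value in ('CRESTERE', 'IMBUNATATIRE', 'DIVERSIFICARE'):
            fill = 'good'
        elif value in ('SCADERE', 'DETERIORARE', 'CONCENTRARE', 'PIERDUT'):
            fill = 'alert'
        elif value == 'ATENTIE':
            fill = 'warning'
    # Variatie coloring
    if 'variatie' in col and isinstance(value, (int, float)):
        if value > 0:
            fill = 'good'
        elif value < 0:
            fill = 'alert'
    # Margin coloring (overrides variatie for columns matching both)
    if ('procent' in col or 'marja' in col) and isinstance(value, (int, float)):
        if value < 10:
            fill = 'alert'
        elif value < 15:
            fill = 'warning'
        elif value > 25:
            fill = 'good'
    return fill
```

Factoring it out also exposes a subtlety worth knowing: a column like `VARIATIE_PROCENT` matches both the variatie rule and the margin rule, so the margin thresholds win for it.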
def save(self):
"""Save the workbook"""
self.wb.save(self.output_path)
@@ -497,6 +647,108 @@ class PDFReportGenerator:
"""Add page break"""
self.elements.append(PageBreak())
def add_consolidated_page(self, page_title: str, sections: list):
"""
Add a consolidated PDF page with multiple sections.
Args:
page_title: Main title for the page
sections: List of dicts with keys:
- 'title': Section title (str)
- 'df': DataFrame with data
- 'columns': List of columns to display (optional)
- 'max_rows': Max rows to display (default 15)
"""
# Page title
self.elements.append(Paragraph(page_title, self.styles['SectionHeader']))
self.elements.append(Spacer(1, 0.3*cm))
for section in sections:
section_title = section.get('title', '')
df = section.get('df')
columns = section.get('columns')
max_rows = section.get('max_rows', 15)
# Sub-section title
subsection_style = ParagraphStyle(
name='SubSection',
parent=self.styles['Heading2'],
fontSize=11,
spaceBefore=10,
spaceAfter=5,
textColor=colors.HexColor('#2C3E50')
)
self.elements.append(Paragraph(section_title, subsection_style))
if df is None or df.empty:
self.elements.append(Paragraph("Nu există date.", self.styles['Normal']))
self.elements.append(Spacer(1, 0.3*cm))
continue
# Select columns
if columns:
cols = [c for c in columns if c in df.columns]
else:
cols = list(df.columns)[:6] # Max 6 columns
if not cols:
continue
# Prepare data
data = [cols]
for _, row in df.head(max_rows).iterrows():
row_data = []
for col in cols:
val = row.get(col, '')
if isinstance(val, float):
row_data.append(f"{val:,.2f}")
elif isinstance(val, int):
row_data.append(f"{val:,}")
else:
row_data.append(str(val)[:30]) # Truncate long strings
data.append(row_data)
# Calculate column widths
n_cols = len(cols)
col_width = 16*cm / n_cols
table = Table(data, colWidths=[col_width] * n_cols)
# Build style with conditional row colors for status
table_style = [
('BACKGROUND', (0, 0), (-1, 0), colors.HexColor('#366092')),
('TEXTCOLOR', (0, 0), (-1, 0), colors.white),
('ALIGN', (0, 0), (-1, -1), 'LEFT'),
('FONTNAME', (0, 0), (-1, 0), 'Helvetica-Bold'),
('FONTSIZE', (0, 0), (-1, -1), 7),
('BOTTOMPADDING', (0, 0), (-1, 0), 6),
('GRID', (0, 0), (-1, -1), 0.5, colors.gray),
('ROWBACKGROUNDS', (0, 1), (-1, -1), [colors.white, colors.HexColor('#f5f5f5')])
]
# Color status cells if STATUS column exists
if 'STATUS' in cols:
status_col_idx = cols.index('STATUS')
for row_idx, row in enumerate(df.head(max_rows).itertuples(index=False), 1):
status_val = str(row[df.columns.get_loc('STATUS')]) if 'STATUS' in df.columns else ''
if status_val == 'ALERTA':
table_style.append(('BACKGROUND', (status_col_idx, row_idx), (status_col_idx, row_idx), colors.HexColor('#FF6B6B')))
elif status_val == 'ATENTIE':
table_style.append(('BACKGROUND', (status_col_idx, row_idx), (status_col_idx, row_idx), colors.HexColor('#FFE66D')))
elif status_val == 'OK':
table_style.append(('BACKGROUND', (status_col_idx, row_idx), (status_col_idx, row_idx), colors.HexColor('#4ECDC4')))
table.setStyle(TableStyle(table_style))
self.elements.append(table)
if len(df) > max_rows:
self.elements.append(Paragraph(
f"... și încă {len(df) - max_rows} înregistrări",
self.styles['SmallText']
))
self.elements.append(Spacer(1, 0.4*cm))
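`add_consolidated_page` normalizes each cell before handing it to the ReportLab table: floats get a thousands separator and two decimals, ints get a thousands separator, everything else is stringified and truncated to 30 characters. That step can be isolated as a one-value helper (`format_cell` is an illustrative name, not part of the source):

```python
def format_cell(val):
    """Format one value the way add_consolidated_page does before building the PDF table."""
    if isinstance(val, float):
        return f"{val:,.2f}"   # thousands separator, 2 decimals
    if isinstance(val, int):
        return f"{val:,}"      # thousands separator only
    return str(val)[:30]       # stringify and truncate long values
```

The 30-character truncation is what keeps equal column widths (`16*cm / n_cols`) workable at the 7 pt font the table style uses.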
def add_recommendations_section(self, recommendations_df: pd.DataFrame):
"""Add recommendations section with status colors"""
self.elements.append(Paragraph("Recomandari Cheie", self.styles['SectionHeader']))