IDS - Intrusion Detection System

Overview

This project is a full-stack web application for an Intrusion Detection System (IDS) tailored for MikroTik routers, utilizing Machine Learning. Its core function is to monitor network traffic, identify anomalies indicative of intrusions, and automatically block malicious IP addresses across multiple routers. The system aims to provide real-time monitoring, efficient anomaly detection, and streamlined network security management for MikroTik environments, including advanced features like IP geolocation and robust service monitoring.

User Preferences

Git Operations and Deployment

  • IMPORTANT: The agent must NOT use git commands (push-gitlab.sh) because Replit blocks git operations
  • Correct workflow:
    1. The user reports errors/problems from the AlmaLinux server
    2. The agent fixes the problems and edits files on Replit
    3. The user manually runs ./push-gitlab.sh to commit and push
    4. The user runs on the server: ./update_from_git.sh or ./update_from_git.sh --db
    5. The user tests and reports the results back to the agent
    6. Repeat until everything works

Language

  • All agent responses must be in Italian
  • Code and technical documentation: English
  • Commit messages: Italian

System Architecture

The IDS employs a React-based frontend for real-time monitoring, detection visualization, and whitelist management, built with ShadCN UI and TanStack Query. The backend consists of a Python FastAPI service dedicated to ML analysis (Isolation Forest with 25 targeted features), MikroTik API management, and a detection engine that scores anomalies from 0-100 across five risk levels. A Node.js (Express) backend handles API requests from the frontend, manages the PostgreSQL database, and coordinates service operations.
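The exact score-to-level thresholds are not specified in this document; the following is a minimal sketch, with hypothetical cut-offs, of how a 0-100 anomaly score could map onto the five risk levels. Only the score >= 80 auto-block rule is documented below.

```python
# Illustrative sketch only: map a 0-100 anomaly score to five risk levels.
# All thresholds except the >= 80 auto-block rule are assumptions.
def risk_level(score: float) -> str:
    if score >= 80:
        return "critical"   # documented: triggers automatic blocking
    if score >= 60:
        return "high"       # assumed threshold
    if score >= 40:
        return "medium"     # assumed threshold
    if score >= 20:
        return "low"        # assumed threshold
    return "informational"  # assumed threshold
```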

Key Architectural Decisions & Features:

  • Log Collection & Processing: MikroTik syslog data (UDP:514) is sent to RSyslog, parsed by syslog_parser.py, and stored in PostgreSQL. The parser includes auto-cleanup with a 3-day retention policy.
  • Machine Learning: An Isolation Forest model trained on 25 network log features performs real-time anomaly detection, assigning a risk score.
  • Automated Blocking: Critical IPs (score >= 80) are automatically blocked in parallel across all configured MikroTik routers via their REST API (see the sketch after this list).
  • Service Monitoring & Management: A dashboard provides real-time status (green/red indicators) for the ML Backend, Database, and Syslog Parser. Service management (start/stop/restart) for Python services is available via API endpoints, secured with API key authentication and Systemd integration for production-grade control and auto-restart capabilities.
  • IP Geolocation: Integrated ip-api.com for enriching detection data with geographical and Autonomous System (AS) information, including intelligent caching.
  • Database Management: PostgreSQL is used for all persistent data. An intelligent database versioning system ensures efficient SQL migrations, applying only new scripts. Dual-mode database drivers (@neondatabase/serverless for Replit, pg for AlmaLinux) ensure environment compatibility.
  • Microservices: Clear separation of concerns between the Python ML backend and the Node.js API backend.
  • UI/UX: Utilizes ShadCN UI for a modern component library and react-hook-form with Zod for robust form validation.
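A minimal sketch of the parallel-blocking idea from the Automated Blocking item above, assuming RouterOS v7's REST API, a hypothetical blocked_ips address list, and illustrative router definitions; it is not the project's actual implementation.

```python
# Illustrative sketch: block one IP on every configured router in parallel
# via the RouterOS REST API (assumed endpoint and address-list name).
from concurrent.futures import ThreadPoolExecutor
import requests

ROUTERS = [  # hypothetical router definitions
    {"host": "10.0.0.1", "user": "api", "password": "secret"},
    {"host": "10.0.0.2", "user": "api", "password": "secret"},
]

def block_on_router(router: dict, ip: str) -> bool:
    url = f"https://{router['host']}/rest/ip/firewall/address-list"
    payload = {"list": "blocked_ips", "address": ip, "comment": "IDS auto-block"}
    resp = requests.put(url, json=payload,
                        auth=(router["user"], router["password"]),
                        timeout=10, verify=False)
    return resp.ok

def block_everywhere(ip: str) -> None:
    # one worker per router so a slow device does not delay the others
    with ThreadPoolExecutor(max_workers=len(ROUTERS)) as pool:
        results = list(pool.map(lambda r: block_on_router(r, ip), ROUTERS))
    print(f"{ip}: blocked on {sum(results)}/{len(ROUTERS)} routers")
```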

External Dependencies

  • React: Frontend framework.
  • FastAPI: Python web framework for the ML backend.
  • PostgreSQL: Primary database for storing configurations, logs, detections, and whitelist entries.
  • MikroTik API REST: For router communication and IP blocking.
  • ShadCN UI: Frontend component library.
  • TanStack Query: Data fetching for the frontend.
  • Isolation Forest: Machine Learning algorithm for anomaly detection.
  • RSyslog: Log collection daemon.
  • Drizzle ORM: For database schema definition in Node.js.
  • Neon Database: Cloud-native PostgreSQL service (used in Replit).
  • pg (Node.js driver): Standard PostgreSQL driver for Node.js (used in AlmaLinux).
  • psycopg2: PostgreSQL adapter for Python.
  • ip-api.com: External API for IP geolocation data.
  • Recharts: Charting library for analytics visualization.

Recent Updates (November 2025)

🛡️ Syslog Parser Resilience & Monitoring (25 Nov 2025 - 11:00)

  • Feature: Resilient parser with auto-recovery and automated monitoring
  • Problem Solved: The parser would periodically hang (most recently on the morning of 24 Nov)
  • Root Cause: Database connection timeouts, unhandled exceptions, blocking cleanup
  • Solutions Implemented (see the sketch after this list):
    1. Auto-Reconnect: automatic reconnection on DB timeout
    2. Error Recovery: continue processing after exceptions (do not crash!)
    3. Health Check: a log line every 5 minutes ([HEALTH] Parser alive: X lines, Y saved, Z errors)
    4. Monitoring Script: deployment/check_parser_health.sh (cron every 5 minutes)
    5. Auto-Restart: if the last log entry is more than 5 minutes old → automatic restart
  • Files Modified:
    • python_ml/syslog_parser.py - reconnect_db() method + nested try/except blocks
    • deployment/check_parser_health.sh - health check with auto-restart
    • deployment/setup_parser_monitoring.sh - cron job setup
    • deployment/TROUBLESHOOTING_SYSLOG_PARSER.md - full troubleshooting guide
  • Detection Timestamps Clarified:
    • first_seen/last_seen: timestamps of the network_logs entries (e.g. 18:46:21)
    • detected_at: when the ML backend detects the anomaly (e.g. 19:45 - one hour later!)
    • The delay is normal: the ML backend runs batch analysis every hour
  • Deploy: ./update_from_git.sh, then sudo systemctl restart ids-syslog-parser, then sudo ./deployment/setup_parser_monitoring.sh
  • Monitoring: tail -f /var/log/ids/parser-health.log
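A minimal sketch of the reconnect-and-continue pattern described above. The real implementation lives in python_ml/syslog_parser.py; the read_line/parse_line/save_row callables and the overall wiring here are illustrative assumptions.

```python
# Illustrative sketch of the resilience pattern: reconnect on DB errors,
# never let a single bad line kill the loop, log a periodic heartbeat.
import time
import psycopg2

HEALTH_INTERVAL = 300  # seconds, matching the 5-minute health log

def run_parser(dsn: str, read_line, parse_line, save_row):
    conn = psycopg2.connect(dsn)
    lines = saved = errors = 0
    last_health = time.monotonic()
    while True:
        try:
            row = parse_line(read_line())
            lines += 1
            if row:
                save_row(conn, row)
                saved += 1
        except psycopg2.OperationalError:
            # auto-reconnect on database timeouts / dropped connections
            errors += 1
            conn = psycopg2.connect(dsn)
        except Exception:
            # error recovery: count the failure and keep processing
            errors += 1
        if time.monotonic() - last_health >= HEALTH_INTERVAL:
            print(f"[HEALTH] Parser alive: {lines} lines, {saved} saved, {errors} errors")
            last_health = time.monotonic()
```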

🔧 Analytics Aggregator Fix - Data Consistency (24 Nov 2025 - 17:00)

  • CRITICAL BUG FIX: Resolved a data mismatch on the Live Dashboard
  • Problem: The traffic distribution showed 262k attacks while the breakdown showed only 19
  • ROOT CAUSE: The aggregator counted occurrences instead of packets in attacks_by_type and attacks_by_country
  • Solution (see the sketch after this list):
    1. Moved the counting from the detections loop to the packets loop
    2. attacks_by_type[type] += packets (not +1!)
    3. attacks_by_country[country] += packets (not +1!)
    4. "unknown"/"Unknown" fallbacks for missing data (type/geo)
    5. Validation logging: verifies that breakdown_total == attack_packets
  • Mathematical invariant: Σ(attacks_by_type) == Σ(attacks_by_country) == attack_packets
  • Files modified: python_ml/analytics_aggregator.py
  • Deploy: Restart the ML backend + run the aggregator manually to test
  • Validation: The log shows match: True and no mismatch warnings
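A minimal sketch of the corrected counting logic and its invariant, assuming a hypothetical dict shape for the packet records; the real code is in python_ml/analytics_aggregator.py.

```python
# Illustrative sketch: sum packets (not detection occurrences) per type and
# per country so that both breakdowns add up to attack_packets.
from collections import defaultdict

def aggregate(packets):  # packets: iterable of dicts (hypothetical shape)
    attacks_by_type = defaultdict(int)
    attacks_by_country = defaultdict(int)
    attack_packets = 0
    for p in packets:
        n = p.get("packets", 1)
        attack_packets += n
        attacks_by_type[p.get("attack_type") or "unknown"] += n    # += packets, not += 1
        attacks_by_country[p.get("country") or "Unknown"] += n     # += packets, not += 1
    # validation: both breakdowns must equal the packet total
    assert sum(attacks_by_type.values()) == sum(attacks_by_country.values()) == attack_packets
    return attacks_by_type, attacks_by_country, attack_packets
```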

📊 Network Analytics & Dashboard System (24 Nov 2025 - 11:30)

  • Complete Feature: Analytics system covering normal traffic + attacks, advanced chart visualizations, permanent data
  • Components (see the sketch after this list):
    1. Database: network_analytics table with permanent hourly/daily aggregations
    2. Python aggregator: analytics_aggregator.py classifies traffic every hour
    3. Systemd Timer: automatic execution every hour (at :05)
    4. API: /api/analytics/recent and /api/analytics/range
    5. Frontend: Live Dashboard (real-time, 3 days) + Historical Analytics (permanent)
  • Charts: Area Chart, Pie Chart, Bar Chart, Line Chart, Real-time Stream
  • Flag Emoji: 🇮🇹🇺🇸🇷🇺🇨🇳 for immediate identification of the country of origin
  • Deploy: Migration 005 + ./deployment/setup_analytics_timer.sh
  • Security Fix: Removed a hardcoded path and added the secure wrapper script run_analytics.sh for manual runs
  • Production-grade: Credentials managed via systemd EnvironmentFile (automatic) or the wrapper script (manual)
  • Frontend Fix: Analytics History now uses hourly data (hourly: true) until daily aggregation is scheduled
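A minimal sketch of the hourly bucketing idea behind the aggregator component above; the row shape and function names are illustrative, not the code in analytics_aggregator.py.

```python
# Illustrative sketch: collapse raw log timestamps into hourly buckets that
# can then be written to a permanent aggregation table such as network_analytics.
from collections import Counter
from datetime import datetime

def hour_bucket(ts: datetime) -> datetime:
    # truncate a timestamp to the start of its hour
    return ts.replace(minute=0, second=0, microsecond=0)

def hourly_counts(rows):
    """rows: iterable of (timestamp, is_attack) tuples (hypothetical shape)."""
    normal = Counter()
    attacks = Counter()
    for ts, is_attack in rows:
        bucket = hour_bucket(ts)
        (attacks if is_attack else normal)[bucket] += 1
    return normal, attacks
```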

🌍 IP Geolocation Integration (22 Nov 2025 - 13:00)

  • Feature: Complete geographic information (country, city, organization, AS) for every IP
  • API: ip-api.com with async batch lookup (100 IPs in ~1.5s instead of 150s!) - see the sketch after this list
  • Performance: Intelligent caching + robust fallback
  • Display: Globe/Building/MapPin icons on the Detections page
  • Deploy: Migration 004 + restart the ML backend
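A minimal sketch of the batch lookup plus caching idea, using ip-api.com's batch endpoint (up to 100 IPs per request); the in-memory cache and fallback handling here are illustrative assumptions, not the project's implementation.

```python
# Illustrative sketch: resolve many IPs in one ip-api.com batch call and keep
# results in a simple in-memory cache so repeated IPs are not re-queried.
import requests

_geo_cache: dict[str, dict] = {}

def geolocate(ips: list[str]) -> dict[str, dict]:
    missing = [ip for ip in ips if ip not in _geo_cache]
    # ip-api.com accepts up to 100 queries per batch request
    for start in range(0, len(missing), 100):
        chunk = missing[start:start + 100]
        resp = requests.post("http://ip-api.com/batch", json=chunk, timeout=10)
        for entry in resp.json():
            if entry.get("status") == "success":
                _geo_cache[entry["query"]] = entry
            else:
                _geo_cache[entry.get("query", "")] = {}  # fallback: cache the miss too
    return {ip: _geo_cache.get(ip, {}) for ip in ips}
```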

🤖 Hybrid ML Detector - False Positive Reduction System (24 Nov 2025)

  • Goal: Reduce false positives by 80-90% while maintaining high detection accuracy
  • Architecture (see the sketch after this list):
    1. Isolation Forest (sklearn): n_estimators=250, contamination=0.03 (scientifically tuned)
    2. Feature Selection: Chi-Square test reduces the feature set from 25 to the 18 most relevant
    3. Ensemble Classifier: DT + RF + XGBoost with weighted voting (1:2:2)
    4. Confidence Scoring: 3-tier system (High ≥ 95%, Medium ≥ 70%, Low < 70%)
    5. Validation Framework: CICIDS2017 dataset with Precision/Recall/F1/FPR metrics
  • Components:
    • python_ml/ml_hybrid_detector.py - Core detector with IF + ensemble + feature selection
    • python_ml/dataset_loader.py - CICIDS2017 loader mapping 80 → 25 features
    • python_ml/validation_metrics.py - Production-grade metrics calculator
    • python_ml/train_hybrid.py - CLI training script (test/train/validate)
  • ML Dependencies: xgboost==2.0.3, joblib==1.3.2, scikit-learn==1.3.2
  • Backward Compatibility: USE_HYBRID_DETECTOR env var (default=true)
  • Target Metrics: Precision ≥ 90%, Recall ≥ 80%, FPR ≤ 5%, F1 ≥ 85%
  • Deploy: See deployment/CHECKLIST_ML_HYBRID.md
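A minimal sketch of how these pieces could be wired together with scikit-learn and xgboost, reusing the parameters listed above; this is illustrative wiring, not the code in ml_hybrid_detector.py (the scaler is included because chi-square selection needs non-negative inputs).

```python
# Illustrative sketch: Isolation Forest for anomaly scoring plus a
# chi-square-selected, weighted-voting supervised ensemble.
from sklearn.ensemble import IsolationForest, RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier

iso_forest = IsolationForest(n_estimators=250, contamination=0.03, random_state=42)

ensemble = Pipeline([
    ("scale", MinMaxScaler()),            # chi2 requires non-negative features
    ("select", SelectKBest(chi2, k=18)),  # 25 -> 18 most relevant features
    ("vote", VotingClassifier(
        estimators=[
            ("dt", DecisionTreeClassifier()),
            ("rf", RandomForestClassifier()),
            ("xgb", XGBClassifier(eval_metric="logloss")),
        ],
        voting="soft",
        weights=[1, 2, 2],                # DT:RF:XGB weighting
    )),
])

# Typical use: iso_forest.fit(X_unlabeled) for anomaly scores,
# ensemble.fit(X_labeled, y) for the false-positive filter.
```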

🎯 Architectural Decision - sklearn.IsolationForest (24 Nov 2025 - 22:00)

  • Deploy Problem: eif==2.0.2 is incompatible with Python 3.11 (requires the removed distutils, uses obsolete Cython APIs, unmaintained since 2021)
  • Failed attempts (stuck for over an hour): build isolation flags, Cython pre-install, PIP_NO_BUILD_ISOLATION, considering a Python downgrade
  • Architect's Analysis:
    • Extended IF (eif) does NOT support Python ≥ 3.11 (fundamental C++/Cython incompatibility)
    • Downgrading to Python 3.10 = recreating the venv + 50 dependencies (regression risk, EOL 2026)
    • PyOD does NOT provide an Extended IF (only a standard IF wrapper around sklearn - verified source)
    • The code ALREADY had a working fallback to sklearn.ensemble.IsolationForest!
  • FINAL DECISION: Use sklearn.IsolationForest (pre-existing fallback, see the sketch after this list)
    • Compatible with Python 3.11+ (pre-compiled wheels, zero compilation)
    • ZERO code changes (the fallback was already implemented behind the EIF_AVAILABLE flag)
    • Target metrics are reachable with standard IF + ensemble + feature selection
    • Production-grade: scikit-learn is a maintained, stable library
    • Simplified installation: pip install xgboost joblib (2 steps instead of 4!)
  • Files modified:
    • requirements.txt: Removed eif==2.0.2 and Cython==3.0.5 (no longer needed)
    • deployment/install_ml_deps.sh: Simplified from 4 steps to 2, no compilation
    • deployment/CHECKLIST_ML_HYBRID.md: Updated with the new, simplified instructions
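A minimal sketch of the import fallback this decision relies on; the EIF_AVAILABLE flag is documented above, while the builder function and its defaults are illustrative.

```python
# Illustrative sketch: prefer the Extended Isolation Forest if it can be
# imported, otherwise fall back to sklearn's IsolationForest.
try:
    import eif  # Extended Isolation Forest; not installable on Python >= 3.11
    EIF_AVAILABLE = True
except ImportError:
    EIF_AVAILABLE = False

from sklearn.ensemble import IsolationForest

def build_detector(n_trees: int = 250, contamination: float = 0.03):
    if EIF_AVAILABLE:
        # the eif-based path would be built here (its constructor differs from sklearn's)
        raise NotImplementedError("eif path is never taken on Python 3.11+")
    # pre-existing fallback: pre-compiled wheels, no compilation step
    return IsolationForest(n_estimators=n_trees, contamination=contamination)
```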

🔄 Database Schema Adaptation & Auto-Training (24 Nov 2025 - 23:30)

  • Database Schema Fix: Adapted the ML detector to the real network_logs schema (see the sketch at the end of this section)
    • Corrected SQL query: destination_ip (not dest_ip), destination_port (not dest_port)
    • Feature extraction: supports packet_length instead of separate packets/bytes columns
    • Backward compatible: works with both the MikroTik schema and the CICIDS2017 dataset
  • Automatic Weekly Training:
    • Wrapper script: deployment/run_ml_training.sh (loads credentials from .env)
    • Systemd service: ids-ml-training.service
    • Systemd timer: ids-ml-training.timer (every Monday at 03:00 AM)
    • Automated setup: ./deployment/setup_ml_training_timer.sh
    • Persistent logs: /var/log/ids/ml-training.log
  • Complete Workflow:
    1. The systemd timer runs the weekly training automatically
    2. The script loads the last 7 days of traffic from the database (234M+ records)
    3. Hybrid ML training (IF + Ensemble + Feature Selection)
    4. Models are saved to python_ml/models/
    5. The ML backend loads them automatically on its next restart
  • Files created:
    • deployment/run_ml_training.sh - Secure wrapper for training
    • deployment/train_hybrid_production.sh - Full manual training script
    • deployment/systemd/ids-ml-training.service - Systemd service
    • deployment/systemd/ids-ml-training.timer - Weekly timer
    • deployment/setup_ml_training_timer.sh - Automated setup
  • Files modified:
    • python_ml/train_hybrid.py - SQL query adapted to the real DB schema
    • python_ml/ml_hybrid_detector.py - packet_length support, backward compatible
    • python_ml/dataset_loader.py - Fixed a missing timestamp in the synthetic dataset
  • Impact: The system will automatically use sklearn IF via the fallback; all 8 fail-fast checkpoints work identically
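A minimal sketch of the kind of query the schema fix implies, using the corrected column names and the 7-day window from the workflow above; the actual query lives in python_ml/train_hybrid.py, and columns other than destination_ip, destination_port, and packet_length, as well as the DATABASE_URL variable, are assumptions.

```python
# Illustrative sketch: pull the last 7 days of traffic from network_logs using
# the corrected column names (destination_ip / destination_port / packet_length).
import os
import psycopg2

QUERY = """
    SELECT source_ip, destination_ip, destination_port, protocol, packet_length
    FROM network_logs
    WHERE timestamp >= NOW() - INTERVAL '7 days'
"""

def load_training_rows():
    # DATABASE_URL is a hypothetical variable name; in production the
    # credentials come from the .env file / systemd EnvironmentFile
    conn = psycopg2.connect(os.environ["DATABASE_URL"])
    with conn, conn.cursor() as cur:
        cur.execute(QUERY)
        return cur.fetchall()
```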