Compare commits


53 Commits

Author SHA1 Message Date
Marco Lanzara
f6d656ce14 🚀 Release v1.0.122
- Type: patch
- Database schema: database-schema/schema.sql (structure only)
- Date: 2026-02-17 09:13:40
2026-02-17 09:13:40 +00:00
marco370
17dc79372e Saved progress at the end of the loop
Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 5abe4a2e-1608-4c5e-a264-50329bac4934
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/s2eMVCL
2026-02-17 09:10:10 +00:00
marco370
4118d60d6d Update service monitoring to display detailed status and health
Refactor the services page to dynamically fetch and display the status of various systemd services and timers, improving the observability of the application's backend components.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: intermediate_checkpoint
Replit-Commit-Event-Id: 48245392-3f34-4eac-aeaf-99e52684ddf2
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/s2eMVCL
2026-02-17 09:09:26 +00:00
marco370
6ce60ed5d3 Improve router connectivity checks and logging efficiency
Refactor logging configuration to reduce noise from slow or failed router connections, consolidating error details and improving the handling of unreachable routers.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 2b9a2559-b0f6-49cd-b8db-eb24672eef5e
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/6WuDAR4
2026-02-17 08:35:42 +00:00
marco370
fe113d5518 Improve error handling and logging for router operations
Implement a circuit breaker pattern for router connections and optimize logging to reduce noise from unresponsive routers.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 85f7e8cc-b58e-4a55-87f1-84eb08509a81
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/6WuDAR4
2026-02-17 08:19:02 +00:00
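The circuit-breaker commit above can be illustrated with a minimal sketch. This is not the implementation in `server/mikrotik.ts`; the class name, threshold, and cooldown values are illustrative. The idea: after N consecutive failures a router is skipped until a cooldown elapses, so unresponsive routers stop delaying every blocking pass.

```typescript
// Minimal circuit-breaker sketch for per-router connections.
// All names and defaults here are illustrative assumptions.
type Clock = () => number;

class CircuitBreaker {
  private failures = 0;
  private openedAt: number | null = null;

  constructor(
    private failureThreshold = 3,
    private cooldownMs = 60_000,
    private now: Clock = Date.now,
  ) {}

  // A router is skipped while its breaker is open; once the cooldown
  // has elapsed, one trial request is allowed (half-open state).
  canAttempt(): boolean {
    if (this.openedAt === null) return true;
    if (this.now() - this.openedAt >= this.cooldownMs) {
      this.openedAt = null; // half-open: allow a trial request
      this.failures = 0;
      return true;
    }
    return false;
  }

  recordSuccess(): void {
    this.failures = 0;
    this.openedAt = null;
  }

  recordFailure(): void {
    this.failures += 1;
    if (this.failures >= this.failureThreshold) {
      this.openedAt = this.now(); // trip the breaker
    }
  }
}
```

The injectable clock keeps the breaker testable without real waits; per-router instances would typically live in a `Map` keyed by router ID.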
Marco Lanzara
9104c67f97 🚀 Release v1.0.121
- Type: patch
- Database schema: database-schema/schema.sql (structure only)
- Date: 2026-02-17 08:11:26
2026-02-17 08:11:26 +00:00
marco370
d01eca2cf0 Saved progress at the end of the loop
Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 2b961dac-b073-4f80-8b6f-8bb8c7b26675
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/6WuDAR4
2026-02-17 08:07:59 +00:00
marco370
b7abd340bc Improve logging for Mikrotik requests and IP blocking operations
Enhance logging in `mikrotik.ts` to include request details, response statuses, and timings. Add verbose logging for successful operations and warnings for errors or slow responses. Update `getExistingBlockedIps` to log total entries and specific list counts per router. Modify `addToAddressList` to log successful additions and specific error conditions. Update `bulkBlockIps` to log detailed operation outcomes, including partial and failed IPs, with a final summary. Add router information to the `BLOCK-ALL` log in `routes.ts`.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: intermediate_checkpoint
Replit-Commit-Event-Id: 3945267e-74c4-4c36-912a-462ddd667392
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/6WuDAR4
2026-02-17 08:07:45 +00:00
marco370
0bd84ed2ed Ensure backend services are running and auto-blocking is functional
Add systemd service for Node.js backend, update scripts, and verify service status and auto-block functionality.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: ee67fff9-dcaf-42b7-ac9b-297b17ddfdb3
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/6WuDAR4
2026-02-17 07:53:05 +00:00
Marco Lanzara
74eb423a92 🚀 Release v1.0.120
- Type: patch
- Database schema: database-schema/schema.sql (structure only)
- Date: 2026-02-17 07:48:15
2026-02-17 07:48:15 +00:00
marco370
3c8c03bb98 Saved progress at the end of the loop
Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 0c68b0a5-a8f9-48dc-bd76-f9bc06f520aa
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/6WuDAR4
2026-02-17 07:47:41 +00:00
marco370
2f76875f2b Add systemd service for Node.js backend and update deployment scripts
Create `ids-backend.service` for the Node.js backend, modify `check_frontend.sh` to use systemd, and update `install_systemd_services.sh` to include the new service.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: intermediate_checkpoint
Replit-Commit-Event-Id: 4484d762-7461-4e0f-bf71-fa7a7609e794
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/6WuDAR4
2026-02-17 07:47:24 +00:00
marco370
f9e0e1a98e Diagnose issues with the Intrusion Detection System backend services
Identify that the `ids-backend` (Node.js) service is not found and `ids-analytics` is also missing, while `ids-auto-block` fails due to the absence of the backend.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 70f0c377-a69a-4cca-811c-25145638dcc0
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/cJuycQ5
2026-02-17 07:37:31 +00:00
Marco Lanzara
544b7cfa49 🚀 Release v1.0.119
- Type: patch
- Database schema: database-schema/schema.sql (structure only)
- Date: 2026-02-17 07:32:28
2026-02-17 07:32:28 +00:00
marco370
1fc63c657a Saved progress at the end of the loop
Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 700b70e8-ba3c-4702-99d1-a30058c7e961
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/cJuycQ5
2026-02-16 18:36:27 +00:00
marco370
b45b810eb9 Improve IP blocking process by increasing timeouts and adding detailed logging
Increase auto-block timeout to 300s, update systemd service timeout to 480s, and reduce individual MikroTik request timeout to 8s. Add per-router logging for blocking operations.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: intermediate_checkpoint
Replit-Commit-Event-Id: 455f4d8c-e90c-45d5-a7f1-e5f98b1345d3
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/cJuycQ5
2026-02-16 18:35:39 +00:00
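The timeout layering described above (a long overall budget for the auto-block run, a short 8s budget per MikroTik request) can be sketched with a generic per-request timeout helper. `withTimeout` is a hypothetical name, not a function from the repository:

```typescript
// Per-request timeout sketch using AbortController: each individual
// router request gets a short budget (the commit uses 8s) so one slow
// router cannot consume the whole auto-block run's timeout.
async function withTimeout<T>(
  work: (signal: AbortSignal) => Promise<T>,
  ms = 8_000,
): Promise<T> {
  const ctrl = new AbortController();
  const timer = setTimeout(() => ctrl.abort(), ms);
  try {
    return await work(ctrl.signal);
  } finally {
    clearTimeout(timer); // always release the timer, success or failure
  }
}
```

The signal would be passed to `fetch` (or any abortable client) so the underlying socket is actually torn down, not merely abandoned.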
Marco Lanzara
64c491f245 🚀 Release v1.0.118
- Type: patch
- Database schema: database-schema/schema.sql (structure only)
- Date: 2026-02-16 18:28:03
2026-02-16 18:28:03 +00:00
marco370
88b0dd7472 Improve backend restart reliability for better system stability
Update backend check script to handle user permissions for systemctl restarts and add fallback process checking.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: e3c3b9b2-fba8-4fc9-b4fb-ac39615693a8
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/4aeldgV
2026-02-16 18:25:43 +00:00
Marco Lanzara
b18e0a51e1 🚀 Release v1.0.117
- Type: patch
- Database schema: database-schema/schema.sql (structure only)
- Date: 2026-02-16 15:49:34
2026-02-16 15:49:34 +00:00
marco370
a7967260b1 Improve IP blocking by separating detection and blocking steps
Refactor auto_block.py to call the Node.js backend for blocking critical IPs and adjust the auto-block service configuration.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: aef8a3be-adf0-4bdc-942f-3e7b19be7d72
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/4aeldgV
2026-02-16 15:04:35 +00:00
marco370
59416f0fe3 Configure analytics timer to run hourly and fix script execution
Correctly set up the analytics timer to run hourly and address issues with script parameter passing.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 7725d830-0400-498d-a538-8a6f833ea045
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/4aeldgV
2026-02-16 14:55:05 +00:00
Marco Lanzara
85db2b1483 🚀 Release v1.0.116
- Type: patch
- Database schema: database-schema/schema.sql (structure only)
- Date: 2026-02-16 14:49:08
2026-02-16 14:49:08 +00:00
marco370
cc7a0f6f0f Update scripts to properly manage backend and frontend services
Adjusted `check_frontend.sh` and `restart_all.sh` to use `systemctl` for the ML backend and direct process management for the frontend, resolving issues with incorrect Python environments and process termination.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: cb6e0872-24a9-4a4b-a053-9491c053b13f
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/4aeldgV
2026-02-16 14:48:50 +00:00
Marco Lanzara
44be5e232e 🚀 Release v1.0.115
- Type: patch
- Database schema: database-schema/schema.sql (structure only)
- Date: 2026-02-16 14:40:14
2026-02-16 14:40:14 +00:00
marco370
34d830b275 Replace custom scripts with systemd for managing backend and frontend services
Replace custom process management scripts (`check_backend.sh`, `check_frontend.sh`, `restart_all.sh`) with `systemctl` commands to ensure proper service management and virtual environment utilization.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 9aa98b1a-1ee1-47f9-a579-83bad5992ed1
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/MmMtYN7
2026-02-16 12:02:53 +00:00
marco370
3e0bd64b14 Fix ML backend startup issues and improve logging
Address issues with the ML backend not starting correctly due to missing dependencies by ensuring the correct virtual environment is used and improving logging for easier debugging of startup failures.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 0a2e2f05-ffc8-4767-ba4d-9b4f2b98416d
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/MmMtYN7
2026-02-16 12:00:21 +00:00
Marco Lanzara
6ebab9e23e 🚀 Release v1.0.114
- Type: patch
- Database schema: database-schema/schema.sql (structure only)
- Date: 2026-02-16 11:54:24
2026-02-16 11:54:24 +00:00
marco370
7498527667 Increase ML backend stats timeout and add detailed error logging
Extend timeout for fetching ML backend stats from 5 to 15 seconds and add detailed error logging to the /api/ml/stats endpoint to diagnose potential issues.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 67dbf601-ef19-4ba2-8e69-77f75ec2c104
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/MmMtYN7
2026-02-16 11:54:00 +00:00
marco370
d901f264cd Update application to improve UI and data handling
Update UI for training page, enhance API routes, and modify database schema.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 653d873d-8c67-4562-9016-b1971c0e6b73
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/MmMtYN7
2026-02-16 11:43:54 +00:00
Marco Lanzara
14645c520b 🚀 Release v1.0.113
- Type: patch
- Database schema: database-schema/schema.sql (structure only)
- Date: 2026-02-16 11:32:42
2026-02-16 11:32:42 +00:00
marco370
c62b41d624 Saved progress at the end of the loop
Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 95131e28-728c-4c85-9a81-76c89430618b
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/MmMtYN7
2026-02-16 11:29:58 +00:00
marco370
c8efe5c942 Improve ML stats display and add database fallback
Add backend logic for ML stats to use database when unavailable and update frontend to show offline status.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: intermediate_checkpoint
Replit-Commit-Event-Id: b69b4d7d-5491-401d-a003-d99b33ae655d
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/MmMtYN7
2026-02-16 11:29:12 +00:00
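The database-fallback pattern in the commit above can be sketched as follows. `fetchFromBackend` and `fetchFromDb` are hypothetical stand-ins, not the real functions in the server code; the real endpoint also surfaces the offline status to the frontend:

```typescript
// Fallback sketch: serve ML stats from the last values persisted in
// the database when the ML backend is unreachable, and tag the result
// so the UI can show an "offline" badge.
async function getMlStats(
  fetchFromBackend: () => Promise<object>,
  fetchFromDb: () => Promise<object>,
): Promise<{ stats: object; source: "backend" | "database" }> {
  try {
    return { stats: await fetchFromBackend(), source: "backend" };
  } catch {
    // Backend offline: fall back to stats persisted in the database
    return { stats: await fetchFromDb(), source: "database" };
  }
}
```

Returning the `source` alongside the data lets the frontend distinguish live stats from cached ones instead of silently mixing them.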
Marco Lanzara
40f8f05e87 🚀 Release v1.0.112
- Type: patch
- Database schema: database-schema/schema.sql (structure only)
- Date: 2026-02-16 11:06:31
2026-02-16 11:06:31 +00:00
marco370
3faddb3f5f Saved progress at the end of the loop
Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 333d0b67-9480-4f1f-b21b-fdcf6298717b
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/B8f0CIv
2026-02-16 11:06:04 +00:00
marco370
20bdf72f81 Integrate Mikrotik router management for IP blocking and unblocking
Introduces `server/mikrotik.ts` to manage router connections and IP blocking/unblocking via API calls, replacing direct calls to an external ML backend. Updates `server/routes.ts` to utilize these new functions for whitelisting and unblocking IPs.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: intermediate_checkpoint
Replit-Commit-Event-Id: 23a20497-0848-4aec-aef2-4a9483164195
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/B8f0CIv
2026-02-16 11:05:13 +00:00
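A blocking call of the kind `server/mikrotik.ts` makes can be sketched against the RouterOS v7 REST API, where adding an entry to a firewall address list is a `PUT` on the collection path. Host, list name, and comment below are placeholder assumptions; building the request separately from sending it keeps the logic testable offline:

```typescript
// Sketch: add an IP to a RouterOS address list via the REST API.
// Router details and the "ids-blocked" list name are assumptions.
interface Router {
  host: string; // e.g. "192.168.88.1"
  user: string;
  password: string;
}

function buildBlockRequest(router: Router, ip: string, list = "ids-blocked") {
  const auth = Buffer.from(`${router.user}:${router.password}`).toString("base64");
  return {
    url: `https://${router.host}/rest/ip/firewall/address-list`,
    init: {
      method: "PUT", // in the RouterOS REST API, PUT on a collection creates a record
      headers: {
        "Content-Type": "application/json",
        Authorization: `Basic ${auth}`,
      },
      body: JSON.stringify({ address: ip, list, comment: "auto-block" }),
    },
  };
}

// Usage (not executed here):
//   const { url, init } = buildBlockRequest(router, "203.0.113.9");
//   const res = await fetch(url, init);
```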
Marco Lanzara
a858958481 🚀 Release v1.0.111
- Type: patch
- Database schema: database-schema/schema.sql (structure only)
- Date: 2026-02-16 10:51:45
2026-02-16 10:51:45 +00:00
marco370
c498916716 Saved progress at the end of the loop
Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: cbfad4fd-3d8a-4b3f-bdbb-d226ab5033f3
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/B8f0CIv
2026-02-16 10:51:19 +00:00
marco370
26d7445eb7 Improve IP blocking and log analysis performance
Optimize critical IP blocking using bulk operations and refine log query for better accuracy and performance.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: intermediate_checkpoint
Replit-Commit-Event-Id: 545f1e17-399b-4078-b609-9458832db9c4
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/B8f0CIv
2026-02-16 10:51:01 +00:00
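The bulk-operation optimization above amounts to deduplicating candidates against IPs that are already blocked and then sending batches instead of one request per IP. A minimal planning helper, with an assumed batch size:

```typescript
// Bulk-blocking sketch: drop duplicates and already-blocked IPs, then
// split the remainder into fixed-size batches. The batch size of 50
// is an illustrative assumption, not the value used in the project.
function planBulkBlock(
  candidates: string[],
  alreadyBlocked: Set<string>,
  batchSize = 50,
): string[][] {
  const fresh = [...new Set(candidates)].filter((ip) => !alreadyBlocked.has(ip));
  const batches: string[][] = [];
  for (let i = 0; i < fresh.length; i += batchSize) {
    batches.push(fresh.slice(i, i + batchSize));
  }
  return batches;
}
```

Fetching the already-blocked set once per router (as `getExistingBlockedIps` in a later commit does) turns N membership checks into one query plus cheap `Set` lookups.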
Marco Lanzara
0269dc929e 🚀 Release v1.0.110
- Type: patch
- Database schema: database-schema/schema.sql (structure only)
- Date: 2026-02-16 10:36:31
2026-02-16 10:36:31 +00:00
marco370
2b5dd0646c Saved progress at the end of the loop
Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 7294bd1d-c7f3-46ed-9640-2c81e55da0c8
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/5aJYCET
2026-02-16 08:14:48 +00:00
marco370
4229c54d62 Improve system accuracy and router configuration for security monitoring
Fixes type mismatches in API responses, updates router configuration to use correct REST API ports, and refactors statistics calculation for improved accuracy.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: intermediate_checkpoint
Replit-Commit-Event-Id: 2601dca2-8641-4d91-9722-c30ebbbf23af
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/5aJYCET
2026-02-16 08:14:39 +00:00
Marco Lanzara
a3ec75b86b 🚀 Release v1.0.109
- Type: patch
- Database schema: database-schema/schema.sql (structure only)
- Date: 2026-02-16 07:50:06
2026-02-16 07:50:06 +00:00
marco370
ee2ba0b3b9 Update how IP addresses are checked for whitelisting
Correctly handle whitelist data response format in detections page to prevent runtime errors.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: c1ba4b9b-0664-47cc-9e51-47cb707b4948
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/RyXWGQA
2026-02-16 07:48:48 +00:00
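The runtime error fixed above is the classic shape mismatch: an endpoint that once returned a bare array starts returning a wrapper object, and the page calls array methods on it. A defensive normalizer sketch; the `items` field name and `WhitelistEntry` type are assumptions, not the real payload:

```typescript
// Normalization sketch: accept either a bare array or a paginated
// { items: [...] } wrapper, and fail soft on anything else instead
// of crashing the detections page.
interface WhitelistEntry {
  ipAddress: string;
}

function toWhitelistEntries(data: unknown): WhitelistEntry[] {
  if (Array.isArray(data)) return data as WhitelistEntry[];
  if (data && typeof data === "object" && Array.isArray((data as any).items)) {
    return (data as any).items as WhitelistEntry[];
  }
  return []; // unknown shape: render an empty list rather than throw
}
```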
marco370
66e110ce4c Generate comprehensive ISO 27001 compliance documentation in Word format
Add python-docx dependency and a new script to generate a Word document detailing the IDS features and mapping them to ISO 27001 Annex A controls.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 2744be16-afcd-406e-ae68-fcf62f19bcc3
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/RyXWGQA
2026-02-14 10:53:03 +00:00
Marco Lanzara
aa589ab64d 🚀 Release v1.0.108
- Type: patch
- Database schema: database-schema/schema.sql (structure only)
- Date: 2026-02-14 10:47:54
2026-02-14 10:47:54 +00:00
marco370
dd0e44f78f Add pagination and server-side search to the whitelist page
Implement server-side pagination and search functionality for the whitelist page, including API route updates, storage layer modifications, and frontend enhancements in `Whitelist.tsx`.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 1971ce5e-1b30-49d5-90b7-63e075ccb563
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/RyXWGQA
2026-02-14 10:47:35 +00:00
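Server-side pagination as added above typically reduces to translating page parameters into `LIMIT`/`OFFSET` values with clamping. A sketch; the parameter names and the 100-row cap are assumptions, not the exact API of `routes.ts`:

```typescript
// Pagination sketch: clamp the requested page and page size, then
// compute LIMIT/OFFSET for the whitelist query.
function paginate(page: number, pageSize: number, totalRows: number) {
  const size = Math.min(Math.max(pageSize, 1), 100); // cap page size at 100
  const pages = Math.max(Math.ceil(totalRows / size), 1);
  const current = Math.min(Math.max(page, 1), pages); // clamp to valid range
  return { limit: size, offset: (current - 1) * size, pages, current };
}
```

Clamping out-of-range pages server-side means a stale frontend link (e.g. page 99 after rows were deleted) degrades to the last page instead of an empty result.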
marco370
1f9ee3919f Improve bulk IP blocking by capturing router connection errors
Improve the bulk IP blocking endpoint to catch and report errors encountered when connecting to MikroTik routers, providing better diagnostics for failed blocks.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: d822b292-cd84-4c15-8a0f-daabbc03a243
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/18EyBWl
2026-02-14 10:17:43 +00:00
Marco Lanzara
89a531eec4 🚀 Release v1.0.107
- Type: patch
- Database schema: database-schema/schema.sql (structure only)
- Date: 2026-02-14 10:15:20
2026-02-14 10:15:20 +00:00
marco370
255444d2dd Add ability to mass block critical IP addresses from the system
Add a new endpoint to the ML backend to block all critical IPs and an API route to trigger it from the server.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 685ee18c-cd9c-4b9a-8cb1-70c4b25db835
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/18EyBWl
2026-02-14 10:15:07 +00:00
Marco Lanzara
897c9a3d5c 🚀 Release v1.0.106
- Type: patch
- Database schema: database-schema/schema.sql (structure only)
- Date: 2026-02-14 09:48:51
2026-02-14 09:48:51 +00:00
marco370
170c807264 Add lists for Microsoft Azure and Meta/Facebook IP addresses
Update schema_version to version 9 in the database migration for adding Microsoft Azure and Meta/Facebook IP lists.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 128391b2-7bbf-4c17-9beb-e969f52acb42
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/vkzPhDc
2026-02-14 09:47:52 +00:00
Marco Lanzara
f9fb378edc 🚀 Release v1.0.105
- Type: patch
- Database schema: database-schema/schema.sql (structure only)
- Date: 2026-02-14 09:47:11
2026-02-14 09:47:11 +00:00
marco370
1bb9a65d0f Add Microsoft and Meta IP lists to the database
Modify the database migration script to insert Microsoft Azure and Meta/Facebook IP address lists, ensuring idempotency by checking for existing entries before insertion.

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 7a657272-55ba-4a79-9a2e-f1ed9bc7a528
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 6356f864-1db9-4ace-8af2-69d49012f49b
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/449cf7c4-c97a-45ae-8234-e5c5b8d6a84f/7a657272-55ba-4a79-9a2e-f1ed9bc7a528/vkzPhDc
2026-02-14 09:46:56 +00:00
41 changed files with 3612 additions and 849 deletions

Binary file not shown.


@@ -0,0 +1,2 @@
curl -X POST http://localhost:8000/block-all-critical -H "Content-Type: application/json" -d '{"min_score": 80}'
{"message":"Blocco massivo completato: 0 IP bloccati, 260 falliti","blocked":0,"failed":260,"total_critical":260,"details":[{"ip":"157.240.231.60","score":99.99,"status":"failed"},{"ip":"5.134.122.207","score":99.91,"status":"failed"},{"ip":"79.6.115.203","score":99.75,"status":"failed"},{"ip":"37.13.179.85","score":99.15,"status":"failed"},{"ip":"129.69.0.42","score":99.03,"status":"failed"},{"ip":"104.18.36.146","score":97.96,"status":"failed"},{"ip":"88.39.149.52","score":97.14,"status":"failed"},{"ip":"216.58.204.150","score":96.31,"status":"failed"},{"ip":"83.230.138.36","score":96.05,"status":"failed"},{"ip":"80.211.249.119","score":95.86,"status":"failed"},{"ip":"104.18.38.59","score":95.85,"status":"failed"},{"ip":"216.58.204.142","score":95.83,"status":"failed"},{"ip":"185.30.182.159","score":95.56,"status":"failed"},{"ip":"142.251.140.100","score":94.94,"status":"failed"},{"ip":"146.247.137.195","score":94.86,"status":"failed"},{"ip":"172.64.146.98","score":93.8,"status":"failed"},{"ip":"34.252.43.174","score":93.6,"status":"failed"},{"ip":"199.232.194.27","score":93.53,"status":"failed"},{"ip":"151.58.164.255","score":93.49,"status":"failed"},{"ip":"146.247.137.121","score":93.25,"status":"failed"},{"ip":"192.178.202.119","score":92.71,"status":"failed"},{"ip":"195.32.16.194","score":92.39,"status":"failed"},{"ip":"5.90.193.36","score":92.27,"status":"failed"},{"ip":"51.161.172.246","score":92.12,"status":"failed"},{"ip":"83.217.187.128","score":92.05,"status":"failed"},{"ip":"23.216.150.188","score":92.04,"status":"failed"},{"ip":"34.160.109.235","score":91.95,"status":"failed"},{"ip":"13.107.246.43","score":91.93,"status":"failed"},{"ip":"151.101.66.27","score":91.93,"status":"failed"},{"ip":"10.0.237.18","score":91.75,"status":"failed"},{"ip":"5.63.17.10","score":91.66,"status":"failed"},{"ip":"74.125.45.108","score":91.66,"status":"failed"},{"ip":"109.54.106.99","score":91.63,"status":"failed"},{"ip":"103.169.126.50","score":91.49,"status":"failed"},{
"ip":"103.169.126.52","score":91.49,"status":"failed"},{"ip":"93.39.92.133","score":91.26,"status":"failed"},{"ip":"103.169.126.16","score":91.18,"status":"failed"},{"ip":"52.123.255.227","score":91.18,"status":"failed"},{"ip":"77.89.41.174","score":91.13,"status":"failed"},{"ip":"93.148.252.209","score":91.12,"status":"failed"},{"ip":"94.101.54.84","score":90.95,"status":"failed"},{"ip":"23.239.11.118","score":90.86,"status":"failed"},{"ip":"52.123.129.14","score":90.66,"status":"failed"},{"ip":"151.78.177.243","score":90.6,"status":"failed"},{"ip":"151.19.103.232","score":90.53,"status":"failed"},{"ip":"35.219.227.195","score":90.29,"status":"failed"},{"ip":"103.169.126.48","score":90.28,"status":"failed"},{"ip":"103.169.126.197","score":90.26,"status":"failed"},{"ip":"151.5.26.99","score":90.24,"status":"failed"},{"ip":"103.169.126.203","score":90.2,"status":"failed"}]}[root@ids ids]#


@@ -0,0 +1,136 @@
./deployment/update_from_git.sh --db
╔═══════════════════════════════════════════════╗
║  IDS SYSTEM UPDATE FROM GIT ║
╚═══════════════════════════════════════════════╝
 Checking git configuration...
 Backing up local configuration...
✅ .env saved to .env.backup
 Checking for local changes...
⚠ There are uncommitted local changes
Run 'git status' to see details
Proceed anyway? (y/n) y
Stashing local changes temporarily...
No local changes to save
 Downloading updates from git.alfacom.it...
remote: Enumerating objects: 27, done.
remote: Counting objects: 100% (27/27), done.
remote: Compressing objects: 100% (16/16), done.
remote: Total 16 (delta 13), reused 0 (delta 0), pack-reused 0 (from 0)
Unpacking objects: 100% (16/16), 4.03 KiB | 295.00 KiB/s, done.
From https://git.alfacom.it/marco/ids.alfacom.it
40f8f05..14645c5 main -> origin/main
* [new tag] v1.0.113 -> v1.0.113
From https://git.alfacom.it/marco/ids.alfacom.it
* branch main -> FETCH_HEAD
Updating 40f8f05..14645c5
Fast-forward
client/src/pages/Training.tsx | 65 +++++++++++++++++++++++++++++++++++++++++++++++++----------------
database-schema/schema.sql | 4 ++--
replit.md | 2 +-
server/routes.ts | 138 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-------------------------------------------------
version.json | 16 ++++++++--------
5 files changed, 149 insertions(+), 76 deletions(-)
✅ Updates downloaded successfully
 Restoring local configuration...
✅ .env restored
 Updating Node.js dependencies...
up to date, audited 492 packages in 5s
65 packages are looking for funding
run `npm fund` for details
13 vulnerabilities (3 low, 6 moderate, 4 high)
To address issues that do not require attention, run:
npm audit fix
To address all issues (including breaking changes), run:
npm audit fix --force
Run `npm audit` for details.
✅ Node.js dependencies updated
 Updating Python dependencies...
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: fastapi==0.104.1 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 1)) (0.104.1)
Requirement already satisfied: uvicorn==0.24.0 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 2)) (0.24.0)
Requirement already satisfied: pandas==2.1.3 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 3)) (2.1.3)
Requirement already satisfied: numpy==1.26.2 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 4)) (1.26.2)
Requirement already satisfied: scikit-learn==1.3.2 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 5)) (1.3.2)
Requirement already satisfied: psycopg2-binary==2.9.9 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 6)) (2.9.9)
Requirement already satisfied: python-dotenv==1.0.0 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 7)) (1.0.0)
Requirement already satisfied: pydantic==2.5.0 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 8)) (2.5.0)
Requirement already satisfied: httpx==0.25.1 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 9)) (0.25.1)
Requirement already satisfied: xgboost==2.0.3 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 10)) (2.0.3)
Requirement already satisfied: joblib==1.3.2 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 11)) (1.3.2)
Requirement already satisfied: anyio<4.0.0,>=3.7.1 in /home/ids/.local/lib/python3.11/site-packages (from fastapi==0.104.1->-r requirements.txt (line 1)) (3.7.1)
Requirement already satisfied: starlette<0.28.0,>=0.27.0 in /home/ids/.local/lib/python3.11/site-packages (from fastapi==0.104.1->-r requirements.txt (line 1)) (0.27.0)
Requirement already satisfied: typing-extensions>=4.8.0 in /home/ids/.local/lib/python3.11/site-packages (from fastapi==0.104.1->-r requirements.txt (line 1)) (4.15.0)
Requirement already satisfied: click>=7.0 in /home/ids/.local/lib/python3.11/site-packages (from uvicorn==0.24.0->-r requirements.txt (line 2)) (8.3.1)
Requirement already satisfied: h11>=0.8 in /home/ids/.local/lib/python3.11/site-packages (from uvicorn==0.24.0->-r requirements.txt (line 2)) (0.16.0)
Requirement already satisfied: python-dateutil>=2.8.2 in /home/ids/.local/lib/python3.11/site-packages (from pandas==2.1.3->-r requirements.txt (line 3)) (2.9.0.post0)
Requirement already satisfied: pytz>=2020.1 in /home/ids/.local/lib/python3.11/site-packages (from pandas==2.1.3->-r requirements.txt (line 3)) (2025.2)
Requirement already satisfied: tzdata>=2022.1 in /home/ids/.local/lib/python3.11/site-packages (from pandas==2.1.3->-r requirements.txt (line 3)) (2025.2)
Requirement already satisfied: scipy>=1.5.0 in /home/ids/.local/lib/python3.11/site-packages (from scikit-learn==1.3.2->-r requirements.txt (line 5)) (1.16.3)
Requirement already satisfied: threadpoolctl>=2.0.0 in /home/ids/.local/lib/python3.11/site-packages (from scikit-learn==1.3.2->-r requirements.txt (line 5)) (3.6.0)
Requirement already satisfied: annotated-types>=0.4.0 in /home/ids/.local/lib/python3.11/site-packages (from pydantic==2.5.0->-r requirements.txt (line 8)) (0.7.0)
Requirement already satisfied: pydantic-core==2.14.1 in /home/ids/.local/lib/python3.11/site-packages (from pydantic==2.5.0->-r requirements.txt (line 8)) (2.14.1)
Requirement already satisfied: certifi in /home/ids/.local/lib/python3.11/site-packages (from httpx==0.25.1->-r requirements.txt (line 9)) (2025.11.12)
Requirement already satisfied: httpcore in /home/ids/.local/lib/python3.11/site-packages (from httpx==0.25.1->-r requirements.txt (line 9)) (1.0.9)
Requirement already satisfied: idna in /home/ids/.local/lib/python3.11/site-packages (from httpx==0.25.1->-r requirements.txt (line 9)) (3.11)
Requirement already satisfied: sniffio in /home/ids/.local/lib/python3.11/site-packages (from httpx==0.25.1->-r requirements.txt (line 9)) (1.3.1)
Requirement already satisfied: six>=1.5 in /home/ids/.local/lib/python3.11/site-packages (from python-dateutil>=2.8.2->pandas==2.1.3->-r requirements.txt (line 3)) (1.17.0)
✅ Python dependencies updated
🗄 Updating database schema...
Applying SQL migrations...
🗄 Database Migration System (Versioned)
📋 Checking versioning system...
psql:/opt/ids/database-schema/migrations/000_init_schema_version.sql:14: NOTICE: relation "schema_version" already exists, skipping
✅ Versioning system active
📊 Current database version: 9
✅ Database already up to date (no migrations to apply)
✅ SQL migrations applied
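The versioned flow above (a `schema_version` table records the current version; only migration files with a higher numeric prefix are applied) can be sketched as follows. `pending_migrations` and the `NNN_name.sql` filename convention are illustrative assumptions, not the project's actual migration script:

```python
import re

def pending_migrations(filenames, current_version):
    """Return migration files whose numeric prefix exceeds current_version, in order."""
    pending = []
    for name in filenames:
        m = re.match(r"^(\d+)_", name)  # files assumed to be named like 009_indexes.sql
        if m and int(m.group(1)) > current_version:
            pending.append((int(m.group(1)), name))
    return [name for _, name in sorted(pending)]

files = ["000_init_schema_version.sql", "008_add_timers.sql",
         "009_indexes.sql", "010_new_table.sql"]
print(pending_migrations(files, 9))  # → ['010_new_table.sql']
```

With the database already at version 9, only files numbered 010 and above would be applied, matching the "no migrations to apply" outcome in the log.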
Syncing Drizzle schema...
> rest-express@1.0.0 db:push
> drizzle-kit push
No config path provided, using default 'drizzle.config.ts'
Reading config file '/opt/ids/drizzle.config.ts'
Using 'pg' driver for database querying
[✓] Pulling schema from database...
[✓] Changes applied
✅ Database schema fully synchronized
📡 Configuring RSyslog (MikroTik logs)...
✅ RSyslog already configured
📋 Checking list-fetcher service...
✅ ids-list-fetcher service already installed
🔄 Restarting services...
✅ Services restarted
╔═══════════════════════════════════════════════╗
║ ✅ UPDATE COMPLETE ║
╚═══════════════════════════════════════════════╝
📋 SYSTEM CHECK:
• Log backend: tail -f /var/log/ids/backend.log
• Log frontend: tail -f /var/log/ids/frontend.log
• API backend: curl http://localhost:8000/health
• Frontend: curl http://localhost:5000
📊 SERVICE STATUS:
ids 1034 1.1 2.3 2939568 381944 ? Ssl 12:18 0:09 /opt/ids/python_ml/venv/bin/python3 main.py
ids 1069 18.3 0.1 52452 26240 ? Ss 12:18 2:40 /opt/ids/python_ml/venv/bin/python3 syslog_parser.py
root 2447 0.0 0.2 731344 32324 pts/0 Rl+ 12:33 0:00 /usr/bin/node /usr/bin/npm run dev

echo "=== PORT 5000 TEST ===" && curl -s -o /dev/null -w "HTTP %{http_code}\n" http://localhost:5000/api/health && echo "=== MANUAL AUTO-BLOCK TEST ===" && sudo -u ids /opt/ids/python_ml/venv/bin/python3 /opt/ids/python_ml/auto_block.py 2>&1 && echo "=== ALL SERVICES STATUS ===" && systemctl status ids-backend ids-ml-backend ids-syslog-parser ids-auto-block.timer --no-pager -l
=== PORT 5000 TEST ===
HTTP 200
=== MANUAL AUTO-BLOCK TEST ===
[2026-02-17 08:51:22] Starting auto-block cycle...
[2026-02-17 08:51:22] Step 1: ML detection...
[2026-02-17 08:51:22] Detection complete: 0 anomalies detected
[2026-02-17 08:51:22] Step 2: Blocking critical IPs on routers...
[2026-02-17 08:51:22] 24 IPs blocked on routers, 0 failed, 0 already blocked
=== ALL SERVICES STATUS ===
● ids-backend.service - IDS Node.js Backend (Express API + Frontend)
Loaded: loaded (/etc/systemd/system/ids-backend.service; enabled; preset: disabled)
Active: active (running) since Tue 2026-02-17 08:51:09 CET; 57s ago
Process: 31307 ExecStartPre=/bin/bash -c test -f /opt/ids/dist/index.js || (echo "ERROR: dist/index.js not found - run npm run build" && exit 1) (code=exited, status=0/SUCCESS)
Main PID: 31308 (node)
Tasks: 11 (limit: 100409)
Memory: 59.1M (max: 1.0G available: 964.8M)
CPU: 1.669s
CGroup: /system.slice/ids-backend.service
└─31308 node dist/index.js
Feb 17 08:51:09 ids.alfacom.it systemd[1]: Starting IDS Node.js Backend (Express API + Frontend)...
Feb 17 08:51:09 ids.alfacom.it systemd[1]: Started IDS Node.js Backend (Express API + Frontend).
● ids-ml-backend.service - IDS ML Backend (FastAPI)
Loaded: loaded (/etc/systemd/system/ids-ml-backend.service; enabled; preset: disabled)
Active: active (running) since Tue 2026-02-17 08:50:14 CET; 1min 51s ago
Main PID: 31127 (python3)
Tasks: 26 (limit: 100409)
Memory: 256.8M (max: 2.0G available: 1.7G)
CPU: 4.073s
CGroup: /system.slice/ids-ml-backend.service
└─31127 /opt/ids/python_ml/venv/bin/python3 main.py
Feb 17 08:50:14 ids.alfacom.it systemd[1]: Started IDS ML Backend (FastAPI).
● ids-syslog-parser.service - IDS Syslog Parser (Network Logs Processor)
Loaded: loaded (/etc/systemd/system/ids-syslog-parser.service; enabled; preset: disabled)
Active: active (running) since Mon 2026-02-16 12:18:52 CET; 20h ago
Main PID: 1069 (python3)
Tasks: 1 (limit: 100409)
Memory: 9.7M (max: 1.0G available: 1014.2M)
CPU: 1h 59min 34.854s
CGroup: /system.slice/ids-syslog-parser.service
└─1069 /opt/ids/python_ml/venv/bin/python3 syslog_parser.py
Feb 16 12:18:52 ids.alfacom.it systemd[1]: Started IDS Syslog Parser (Network Logs Processor).
● ids-auto-block.timer - IDS Auto-Blocking Timer - Run every 5 minutes
Loaded: loaded (/etc/systemd/system/ids-auto-block.timer; enabled; preset: disabled)
Active: active (running) since Mon 2026-02-16 19:24:04 CET; 13h ago
Until: Mon 2026-02-16 19:24:04 CET; 13h ago
Trigger: n/a
Triggers: ● ids-auto-block.service
Docs: https://github.com/yourusername/ids
Feb 16 19:24:04 ids.alfacom.it systemd[1]: Started IDS Auto-Blocking Timer - Run every 5 minutes.
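The `ids-auto-block.timer` status above ("Run every 5 minutes", triggering `ids-auto-block.service`) is consistent with a timer unit of roughly the following shape. This is a plausible reconstruction from the status output, not the project's actual unit file; the `[Timer]` options are assumptions:

```ini
# /etc/systemd/system/ids-auto-block.timer (assumed layout)
[Unit]
Description=IDS Auto-Blocking Timer - Run every 5 minutes
Documentation=https://github.com/yourusername/ids

[Timer]
OnBootSec=5min
OnUnitActiveSec=5min
Unit=ids-auto-block.service

[Install]
WantedBy=timers.target
```

`OnUnitActiveSec=5min` reschedules the service 5 minutes after each activation, which matches the periodic `POST /api/ml/block-all-critical` entries in the backend log.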

tail -f /var/log/ids/backend.log
9:21:00 AM [express] GET /api/detections 304 in 19ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7c…
9:21:01 AM [express] GET /api/services/status 200 in 29ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:21:02 AM [express] GET /api/routers 200 in 3ms :: [{"id":"c6904896-59dc-4060-952f-0e55df7dee4a","n…
9:21:12 AM [express] PUT /api/routers/c6904896-59dc-4060-952f-0e55df7dee4a 200 in 8ms :: {"id":"c690…
9:21:12 AM [express] GET /api/routers 304 in 2ms :: [{"id":"c6904896-59dc-4060-952f-0e55df7dee4a","n…
9:22:16 AM [express] POST /api/ml/block-all-critical 200 in 99ms :: {"message":"Nessun IP critico da…
9:24:17 AM [express] POST /api/ml/block-all-critical 200 in 75ms :: {"message":"Nessun IP critico da…
 Using standard PostgreSQL database
9:26:18 AM [express] serving on port 5000
✅ Database connection successful
9:28:19 AM [express] POST /api/ml/block-all-critical 200 in 93ms :: {"message":"Nessun IP critico da…
9:30:19 AM [express] POST /api/ml/block-all-critical 200 in 82ms :: {"message":"Nessun IP critico da…
9:30:25 AM [express] GET /api/training-history 200 in 6ms :: [{"id":"7570df54-8169-4fb2-abdc-e9c1bfa…
9:30:29 AM [express] GET /api/detections 200 in 24ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7c…
9:30:29 AM [express] GET /api/whitelist 200 in 259ms :: {"items":[{"id":"49b5b9a9-4683-452c-a784-fc5…
9:30:37 AM [express] GET /api/services/status 200 in 22ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:30:39 AM [express] POST /api/services/ids-syslog-parser/start 500 in 22ms :: {"error":"Service con…
9:30:42 AM [express] GET /api/services/status 200 in 21ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:30:47 AM [express] GET /api/services/status 200 in 18ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:30:52 AM [express] GET /api/services/status 200 in 17ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:30:57 AM [express] GET /api/services/status 200 in 17ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:31:02 AM [express] GET /api/services/status 200 in 17ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:31:07 AM [express] GET /api/services/status 200 in 23ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:31:12 AM [express] GET /api/services/status 200 in 18ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:31:16 AM [express] GET /api/services/status 200 in 18ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:31:21 AM [express] GET /api/services/status 200 in 17ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:31:26 AM [express] GET /api/services/status 200 in 17ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:31:27 AM [express] GET /api/detections 304 in 11ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7c…
9:31:27 AM [express] GET /api/whitelist 200 in 229ms :: {"items":[{"id":"91108765-7ef6-4e49-87d8-319…
9:31:30 AM [express] GET /api/public-lists 200 in 5ms :: [{"id":"2a835798-02a7-4129-a528-d39a026ad15…
9:31:32 AM [express] GET /api/services/status 200 in 19ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:31:37 AM [express] GET /api/services/status 200 in 20ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:31:42 AM [express] GET /api/services/status 200 in 19ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:31:47 AM [express] GET /api/services/status 200 in 18ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:31:52 AM [express] GET /api/services/status 200 in 17ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:31:57 AM [express] GET /api/services/status 200 in 21ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:32:02 AM [express] GET /api/services/status 200 in 18ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:32:07 AM [express] GET /api/services/status 200 in 24ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:32:12 AM [express] GET /api/services/status 200 in 24ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:32:17 AM [express] GET /api/services/status 200 in 23ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:32:25 AM [express] GET /api/services/status 200 in 2875ms :: {"services":{"mlBackend":{"name":"ML …
[BLOCK-ALL] Starting bulk block: 11/11 IPs with score >= 80 on 2 routers
[BULK-BLOCK] Starting: 11 IPs on 2 routers (10.20.30.100, 185.203.24.2)
[BULK-BLOCK] Router 185.203.24.2: 124 IPs already in list (50ms)
9:32:30 AM [express] GET /api/services/status 200 in 31ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:32:35 AM [express] GET /api/services/status 200 in 31ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:32:40 AM [express] GET /api/services/status 200 in 46ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:32:46 AM [express] GET /api/services/status 200 in 75ms :: {"services":{"mlBackend":{"name":"ML Ba…
[MIKROTIK] Failed to get address-list from 10.20.30.100: This operation was aborted
[BULK-BLOCK] Router 10.20.30.100: 0 IPs already in list (20004ms)
[BULK-BLOCK] 0 already blocked, 11 new to block
9:32:51 AM [express] GET /api/services/status 200 in 40ms :: {"services":{"mlBackend":{"name":"ML Ba…
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 87.17.182.32: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 185.98.164.31: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 109.115.163.146: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 2.118.225.238: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 74.125.99.38: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 192.178.112.96: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 74.125.111.102: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 212.14.142.214: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8000ms for IP 89.96.215.146: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 172.217.39.230: This operation was aborted
9:32:56 AM [express] GET /api/services/status 200 in 44ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:33:01 AM [express] GET /api/services/status 200 in 46ms :: {"services":{"mlBackend":{"name":"ML Ba…
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 104.19.148.8: This operation was aborted
[BULK-BLOCK] Progress: 11/11
[BULK-BLOCK] Router 10.20.30.100: 0 blocked, 11 failed, 0 skipped
[BULK-BLOCK] Router 185.203.24.2: 11 blocked, 0 failed, 0 skipped
[BULK-BLOCK] Done: 11 blocked, 0 failed, 0 skipped
[BLOCK-ALL] Database updated: 11 IPs marked as blocked
9:33:03 AM [express] POST /api/ml/block-all-critical 200 in 36137ms :: {"message":"Blocco massivo co…
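The `[BULK-BLOCK]` accounting above (each IP ends up blocked, failed, or skipped; an ~8 s abort counts as failed) can be sketched as a simple tally. `bulk_block` and `block_fn` are hypothetical names for illustration, not the backend's actual code:

```python
def bulk_block(block_fn, ips):
    """Tally per-IP results into the blocked/failed/skipped counters seen in [BULK-BLOCK] logs."""
    counts = {"blocked": 0, "failed": 0, "skipped": 0}
    for ip in ips:
        try:
            already_listed = block_fn(ip)  # True if the IP was already in the address-list
            counts["skipped" if already_listed else "blocked"] += 1
        except Exception:
            # a TimeoutError here corresponds to the "This operation was aborted" lines
            counts["failed"] += 1
    return counts

def fake_block(ip):
    """Stand-in for the router call: one timeout, one duplicate, one success."""
    if ip == "10.0.0.1":
        raise TimeoutError("router unreachable")
    return ip == "10.0.0.2"

print(bulk_block(fake_block, ["10.0.0.1", "10.0.0.2", "10.0.0.3"]))
# → {'blocked': 1, 'failed': 1, 'skipped': 1}
```

In the run above, the unreachable router 10.20.30.100 contributed 11 failures at roughly 8 s each, which is why the request took 36 s end to end.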
9:33:07 AM [express] GET /api/services/status 200 in 1125ms :: {"services":{"mlBackend":{"name":"ML …
9:33:12 AM [express] GET /api/services/status 200 in 40ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:33:17 AM [express] GET /api/services/status 200 in 56ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:33:22 AM [express] GET /api/services/status 200 in 58ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:33:27 AM [express] GET /api/services/status 200 in 57ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:33:32 AM [express] GET /api/services/status 200 in 57ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:33:37 AM [express] GET /api/services/status 200 in 52ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:33:42 AM [express] GET /api/services/status 200 in 77ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:33:48 AM [express] GET /api/services/status 200 in 50ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:33:53 AM [express] GET /api/services/status 200 in 72ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:33:58 AM [express] GET /api/services/status 200 in 69ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:34:03 AM [express] GET /api/services/status 200 in 64ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:34:08 AM [express] GET /api/services/status 200 in 79ms :: {"services":{"mlBackend":{"name":"ML Ba…

tail -f /var/log/ids/backend.log
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8000ms for IP 188.8.66.34: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 151.101.193.229: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 109.205.211.40: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 151.59.33.110: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 151.73.209.26: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 79.8.248.29: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 217.202.28.18: This operation was aborted
 Using standard PostgreSQL database
9:13:09 AM [express] serving on port 5000
✅ Database connection successful
[BLOCK-ALL] Starting bulk block: 100/100 IPs with score >= 80 on 2 routers
[BULK-BLOCK] Starting: 100 IPs on 2 routers (10.20.30.100, 185.203.24.2)
[BULK-BLOCK] Router 185.203.24.2: 74 IPs already in list (38ms)
9:14:15 AM [express] GET /api/detections 200 in 17ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7c…
9:14:15 AM [express] GET /api/services/status 200 in 42ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:14:20 AM [express] GET /api/detections 200 in 19ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7c…
9:14:20 AM [express] GET /api/services/status 200 in 20ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:14:25 AM [express] GET /api/detections 200 in 7ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7cf…
9:14:25 AM [express] GET /api/whitelist 200 in 222ms :: {"items":[{"id":"4b7a60b4-b47c-431e-8efa-864…
[MIKROTIK] Failed to get address-list from 10.20.30.100: This operation was aborted
[BULK-BLOCK] Router 10.20.30.100: 0 IPs already in list (20004ms)
[BULK-BLOCK] 0 already blocked, 100 new to block
9:14:35 AM [express] GET /api/detections 200 in 8ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7cf…
9:14:39 AM [express] GET /api/training-history 200 in 5ms :: [{"id":"4306f5a1-b30b-4241-97d9-6b6a1e2…
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8003ms for IP 151.73.24.49: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 104.16.248.249: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 77.39.130.185: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 104.16.249.249: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 104.18.36.146: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8003ms for IP 193.205.185.20: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8003ms for IP 50.93.53.165: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8004ms for IP 91.208.175.82: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8004ms for IP 74.0.42.209: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8005ms for IP 79.6.115.203: This operation was aborted
9:14:50 AM [express] GET /api/detections 200 in 31ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7c…
9:14:50 AM [express] GET /api/whitelist 200 in 246ms :: {"items":[{"id":"49b5b9a9-4683-452c-a784-fc5…
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 142.250.181.168: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 195.32.127.72: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 93.150.41.42: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 212.183.171.12: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8003ms for IP 78.134.17.204: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 62.94.77.251: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8003ms for IP 151.73.24.107: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8003ms for IP 74.125.45.108: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8003ms for IP 2.42.47.211: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8004ms for IP 89.97.31.234: This operation was aborted
9:14:52 AM [express] GET /api/dashboard/live 200 in 12ms :: {"totalPackets":75520962,"attackPackets"…
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 31.197.212.130: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 44.197.141.31: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 146.75.61.140: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 93.150.200.81: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 101.56.92.15: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8003ms for IP 2.33.192.82: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8003ms for IP 151.41.104.237: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8003ms for IP 77.39.220.111: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8003ms for IP 185.96.131.245: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8004ms for IP 142.250.181.164: This operation was aborted
9:15:00 AM [express] GET /api/routers 200 in 16ms :: [{"id":"c6904896-59dc-4060-952f-0e55df7dee4a","…
9:15:00 AM [express] GET /api/detections 200 in 24ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7c…
9:15:00 AM [express] GET /api/services/status 200 in 28ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:15:05 AM [express] GET /api/detections 200 in 11ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7c…
9:15:05 AM [express] GET /api/services/status 200 in 18ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:15:06 AM [express] GET /api/stats 200 in 5385ms :: {"routers":{"total":2,"enabled":2},"detections"…
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 45.146.216.240: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 62.112.11.234: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 51.159.85.76: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 188.8.66.34: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 151.101.193.229: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 109.205.211.40: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8003ms for IP 151.59.33.110: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8003ms for IP 151.73.209.26: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8003ms for IP 79.8.248.29: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8003ms for IP 217.202.28.18: This operation was aborted
[ANALYTICS QUERY] {
startDate: '2026-01-18T08:15:10.899Z',
endDate: '2026-02-17T08:15:10.899Z',
hourly: true,
hourCondition: 'NOT NULL'
}
[ANALYTICS RESULTS] 41 records found
[ANALYTICS SAMPLE] {
id: 'b3216c05-7f95-4ba9-a5fb-b37461441401',
date: 2026-02-16T00:00:00.000Z,
hour: 23,
totalPackets: 10463767,
totalBytes: 13102201398,
uniqueIps: 25472,
normalPackets: 10463767,
normalBytes: 13102201398,
normalUniqueIps: 25472,
topNormalIps: '[{"ip": "88.39.149.52", "packets": 3744014, "bytes": 5747025096, "country": null}, {"ip": "79.11.175.156", "packets": 1279527, "bytes": 1892212998, "country": null}, {"ip": "95.229.133.69", "packets": 1188856, "bytes": 1758976078, "country": null}, {"ip": "188.12.75.242", "packets": 1168922, "bytes": 1729133469, "country": null}, {"ip": "95.229.85.134", "packets": 1151191, "bytes": 1703382127, "country": null}, {"ip": "74.125.45.108", "packets": 108382, "bytes": 13501983, "country": "United States"}, {"ip": "185.243.5.22", "packets": 87376, "bytes": 38384352, "country": null}, {"ip": "104.16.248.249", "packets": 72117, "bytes": 40864635, "country": "Canada"}, {"ip": "8.8.8.8", "packets": 66005, "bytes": 18862021, "country": "United States"}, {"ip": "95.110.183.67", "packets": 51456, "bytes": 2281666, "country": "Italy"}]',
attackPackets: 0,
attackBytes: 0,
attackUniqueIps: 0,
attacksByCountry: '{}',
attacksByType: '{}',
topAttackers: '[]',
trafficByCountry: '{"Canada": {"normal": 81941, "attacks": 0}, "The Netherlands": {"normal": 49409, "attacks": 0}, "Singapore": {"normal": 61, "attacks": 0}, "Germany": {"normal": 339, "attacks": 0}, "United States":
{"normal": 385230, "attacks": 0}, "Italy": {"normal": 73801, "attacks": 0}, "Netherlands": {"normal": 19928, "attacks": 0}}',
createdAt: 2026-02-17T00:05:00.185Z
}
9:15:10 AM [express] GET /api/analytics/recent 200 in 16ms :: [{"id":"b3216c05-7f95-4ba9-a5fb-b37461…
9:15:12 AM [express] GET /api/training-history 200 in 3ms :: [{"id":"4306f5a1-b30b-4241-97d9-6b6a1e2…
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 7968ms for IP 151.73.139.162: HTTP 401: {"error":401,"message":"Unauthorized"}
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 7969ms for IP 217.28.70.122: HTTP 401: {"error":401,"message":"Unauthorized"}
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 7971ms for IP 79.9.120.141: HTTP 401: {"error":401,"message":"Unauthorized"}
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 7971ms for IP 52.123.129.14: HTTP 401: {"error":401,"message":"Unauthorized"}
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 7970ms for IP 157.240.231.35: HTTP 401: {"error":401,"message":"Unauthorized"}
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 7975ms for IP 46.229.84.162: HTTP 401: {"error":401,"message":"Unauthorized"}
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 7973ms for IP 178.248.182.171: HTTP 401: {"error":401,"message":"Unauthorized"}
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 7974ms for IP 93.43.107.86: HTTP 401: {"error":401,"message":"Unauthorized"}
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 7974ms for IP 178.248.51.104: HTTP 401: {"error":401,"message":"Unauthorized"}
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 7977ms for IP 93.51.52.251: HTTP 401: {"error":401,"message":"Unauthorized"}
[BULK-BLOCK] Progress: 50/100
9:15:19 AM [express] GET /api/ml/stats 200 in 6466ms :: {"logs":{"total":126216346,"last_hour":0},"d…
9:15:22 AM [express] POST /api/ml/train 200 in 10ms :: {"message":"Training avviato in background","…
9:15:22 AM [express] GET /api/training-history 304 in 3ms :: [{"id":"4306f5a1-b30b-4241-97d9-6b6a1e2…
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 85.44.118.45: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 2.228.8.122: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8000ms for IP 188.217.110.96: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 151.84.198.239: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 93.54.65.87: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 213.82.166.186: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 23.216.150.137: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 185.110.20.185: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 95.100.171.8: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 79.11.222.236: This operation was aborted
9:15:28 AM [express] GET /api/ml/stats 304 in 6490ms :: {"logs":{"total":126216346,"last_hour":0},"d…
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 85.35.59.252: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 79.10.32.9: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 79.13.197.84: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 94.85.21.211: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 213.215.214.82: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 185.82.114.4: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 5.158.71.206: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 94.34.87.184: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 93.67.251.46: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 31.7.147.21: This operation was aborted
9:15:32 AM [express] GET /api/training-history 304 in 15ms :: [{"id":"4306f5a1-b30b-4241-97d9-6b6a1e…
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 91.187.197.104: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 172.217.38.150: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 217.141.0.110: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 172.217.38.157: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 140.82.121.3: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 95.255.202.79: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 151.58.132.84: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 82.55.154.58: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 62.98.165.6: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 95.110.183.67: This operation was aborted
9:15:42 AM [express] GET /api/training-history 304 in 16ms :: [{"id":"4306f5a1-b30b-4241-97d9-6b6a1e…
9:15:45 AM [express] GET /api/ml/stats 304 in 7198ms :: {"logs":{"total":126216346,"last_hour":0},"d…
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 93.40.225.146: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 216.128.11.53: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 95.230.242.4: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 2.118.179.69: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 188.8.204.7: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 185.168.176.197: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 93.64.198.214: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 217.202.56.187: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8002ms for IP 5.63.174.25: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 151.64.151.157: This operation was aborted
9:15:52 AM [express] GET /api/training-history 304 in 17ms :: [{"id":"4306f5a1-b30b-4241-97d9-6b6a1e…
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 79.2.176.5: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 82.192.139.100: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 93.40.226.33: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 93.146.168.160: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 94.101.59.182: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 185.104.127.31: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 31.197.212.236: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8000ms for IP 79.3.132.252: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 172.217.38.148: This operation was aborted
[BULK-BLOCK] SLOW: Router 10.20.30.100 took 8001ms for IP 172.217.38.151: This operation was aborted
[BULK-BLOCK] Progress: 100/100
[BULK-BLOCK] Router 10.20.30.100: 0 blocked, 100 failed, 0 skipped
[BULK-BLOCK] Router 185.203.24.2: 50 blocked, 0 failed, 50 skipped
[BULK-BLOCK] Done: 100 blocked, 0 failed, 0 skipped
[BLOCK-ALL] Database updated: 100 IPs marked as blocked
9:15:55 AM [express] POST /api/ml/block-all-critical 200 in 100172ms :: {"message":"Blocco massivo c…
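This 100-IP run paid an ~8 s timeout for every single IP on the unreachable router 10.20.30.100, stretching the request to 100 s. One common mitigation (an assumption here, not the project's actual logic) is a per-router circuit breaker that stops attempting a router after a few consecutive failures:

```python
class RouterBreaker:
    """Skip a router after max_failures consecutive errors instead of
    paying a full timeout for every remaining IP."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = {}  # router address -> consecutive failure count

    def available(self, router):
        return self.failures.get(router, 0) < self.max_failures

    def record(self, router, ok):
        # any success resets the streak; a failure extends it
        self.failures[router] = 0 if ok else self.failures.get(router, 0) + 1
```

With `max_failures=3`, the run above would have abandoned 10.20.30.100 after roughly 24 s and skipped the remaining ~97 attempts against it.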
9:16:02 AM [express] GET /api/training-history 304 in 2ms :: [{"id":"4306f5a1-b30b-4241-97d9-6b6a1e2…
9:16:02 AM [express] GET /api/ml/stats 200 in 6527ms :: {"logs":{"total":126216346,"last_hour":0},"d…
9:16:12 AM [express] GET /api/training-history 304 in 13ms :: [{"id":"4306f5a1-b30b-4241-97d9-6b6a1e…
9:16:19 AM [express] GET /api/ml/stats 304 in 6558ms :: {"logs":{"total":126216346,"last_hour":0},"d…
9:16:19 AM [express] POST /api/ml/block-all-critical 200 in 76ms :: {"message":"Nessun IP critico da…
9:16:22 AM [express] GET /api/training-history 304 in 3ms :: [{"id":"4306f5a1-b30b-4241-97d9-6b6a1e2…
9:16:29 AM [express] GET /api/training-history 304 in 3ms :: [{"id":"4306f5a1-b30b-4241-97d9-6b6a1e2…
9:16:35 AM [express] GET /api/ml/stats 304 in 6623ms :: {"logs":{"total":126216346,"last_hour":0},"d…
9:16:39 AM [express] GET /api/training-history 304 in 13ms :: [{"id":"4306f5a1-b30b-4241-97d9-6b6a1e…
9:16:49 AM [express] GET /api/training-history 304 in 14ms :: [{"id":"4306f5a1-b30b-4241-97d9-6b6a1e…
9:16:59 AM [express] GET /api/training-history 304 in 15ms :: [{"id":"4306f5a1-b30b-4241-97d9-6b6a1e…
[ML Stats] Fallback to database - ML Backend error: This operation was aborted
9:17:06 AM [express] GET /api/ml/stats 200 in 20345ms :: {"source":"database_fallback","ml_backend_s…
9:17:08 AM [express] GET /api/detections 200 in 21ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7c…
9:17:08 AM [express] GET /api/routers 200 in 21ms :: [{"id":"c6904896-59dc-4060-952f-0e55df7dee4a","…
9:17:13 AM [express] GET /api/services/status 200 in 5007ms :: {"services":{"mlBackend":{"name":"ML …
9:17:13 AM [express] GET /api/detections 304 in 8ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7cf…
9:17:14 AM [express] GET /api/stats 200 in 5393ms :: {"routers":{"total":2,"enabled":2},"detections"…
9:17:18 AM [express] GET /api/detections 304 in 7ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7cf…
9:17:23 AM [express] GET /api/services/status 304 in 5006ms :: {"services":{"mlBackend":{"name":"ML …
9:17:23 AM [express] GET /api/detections 304 in 7ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7cf…
9:17:28 AM [express] GET /api/services/status 200 in 85ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:17:28 AM [express] GET /api/detections 304 in 8ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7cf…
9:17:29 AM [express] GET /api/stats 304 in 5314ms :: {"routers":{"total":2,"enabled":2},"detections"…
9:17:33 AM [express] GET /api/services/status 200 in 29ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:17:33 AM [express] GET /api/detections 304 in 7ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7cf…
9:17:38 AM [express] GET /api/services/status 200 in 53ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:17:39 AM [express] GET /api/detections 304 in 7ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7cf…
9:17:43 AM [express] GET /api/services/status 200 in 70ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:17:44 AM [express] GET /api/detections 304 in 8ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7cf…
9:17:44 AM [express] GET /api/stats 304 in 5395ms :: {"routers":{"total":2,"enabled":2},"detections"…
9:17:49 AM [express] GET /api/services/status 200 in 47ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:17:49 AM [express] GET /api/detections 304 in 7ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7cf…
9:17:54 AM [express] GET /api/detections 304 in 12ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7c…
9:17:54 AM [express] GET /api/services/status 200 in 61ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:17:59 AM [express] GET /api/detections 304 in 11ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7c…
9:17:59 AM [express] GET /api/services/status 200 in 30ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:18:00 AM [express] GET /api/stats 304 in 5405ms :: {"routers":{"total":2,"enabled":2},"detections"…
9:18:04 AM [express] GET /api/detections 304 in 7ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7cf…
9:18:04 AM [express] GET /api/services/status 200 in 64ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:18:09 AM [express] GET /api/detections 304 in 8ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7cf…
9:18:09 AM [express] GET /api/services/status 200 in 21ms :: {"services":{"mlBackend":{"name":"ML Ba…
9:18:14 AM [express] GET /api/detections 304 in 21ms :: {"detections":[{"id":"fcde52d3-0ae4-4904-a7c…
9:18:15 AM [express] GET /api/stats 304 in 5344ms :: {"routers":{"total":2,"enabled":2},"detections"…


@ -0,0 +1,41 @@
# Find the watchdog that restarts the backend
grep -r "Backend Python NON attivo" /opt/ids/ --include="*.sh"
grep -r "Backend Python NON attivo" /etc/cron* /var/spool/cron/
# Check active cron jobs
crontab -l
crontab -l -u ids
# Check systemd timers
systemctl list-timers --all | grep ids
/opt/ids/deployment/check_backend.sh: echo "[$(date)] Backend Python NON attivo, riavvio..." >> "$LOG_FILE"
# ============================================
# IDS SYSTEM - AUTOMATIC CONFIGURATION
# ============================================
# ML training every 12 hours (at 00:00 and 12:00)
0 */12 * * * /opt/ids/deployment/cron_train.sh
# Automatic detection every 3 minutes
*/3 * * * * /opt/ids/deployment/cron_detect.sh
# Check the Python backend process every 5 minutes (restart if down)
*/5 * * * * /opt/ids/deployment/check_backend.sh >> /var/log/ids/cron.log 2>&1
# Check the frontend process every 5 minutes (restart if down)
*/5 * * * * /opt/ids/deployment/check_frontend.sh >> /var/log/ids/cron.log 2>&1
# Weekly log cleanup (every Sunday at 02:00)
0 2 * * 0 find /var/log/ids -name "*.log" -size +100M -exec truncate -s 50M {} \; >> /var/log/ids/cron.log 2>&1
# Full system restart every week (Sunday at 03:00)
0 3 * * 0 /opt/ids/deployment/restart_all.sh >> /var/log/ids/cron.log 2>&1
# Daily database backup (at 04:00)
0 4 * * * /opt/ids/deployment/backup_db.sh >> /var/log/ids/cron.log 2>&1
# Daily database cleanup (at 03:00)
0 3 * * * /opt/ids/deployment/cleanup_database.sh >> /var/log/ids/cleanup.log 2>&1
Mon 2026-02-16 13:05:00 CET 4min 9s left Mon 2026-02-16 12:05:00 CET 55min ago ids-analytics-aggregator.timer ids-analytics-aggregator.service
Mon 2026-02-16 13:14:33 CET 13min left Mon 2026-02-16 12:13:57 CET 46min ago ids-cleanup.timer ids-cleanup.service
Mon 2026-02-23 03:00:00 CET 6 days left Mon 2026-02-16 03:00:00 CET 10h ago ids-ml-training.timer ids-ml-training.service
- - Mon 2026-02-16 12:48:47 CET 12min ago ids-auto-block.timer ids-auto-block.service
- - Mon 2026-02-16 13:00:01 CET 48s ago ids-list-fetcher.timer ids-list-fetcher.service
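The crontab above still runs check_backend.sh every 5 minutes even though ids-ml-backend.service is also managed by systemd, so the two supervisors can race. A minimal sketch of commenting out only the cron watchdog entries, assuming the crontab has first been exported to a file (paths copied from the listing above):

```shell
# Export the live crontab to a temp file; in production:  crontab -l > "$crontab_file"
crontab_file=$(mktemp)
cat > "$crontab_file" <<'EOF'
*/5 * * * * /opt/ids/deployment/check_backend.sh >> /var/log/ids/cron.log 2>&1
*/5 * * * * /opt/ids/deployment/check_frontend.sh >> /var/log/ids/cron.log 2>&1
0 4 * * * /opt/ids/deployment/backup_db.sh >> /var/log/ids/cron.log 2>&1
EOF
# Comment out only the watchdog entries; every other schedule stays live.
sed -i -e '/check_backend\.sh/s|^|#|' -e '/check_frontend\.sh/s|^|#|' "$crontab_file"
# Re-install with:  crontab "$crontab_file"
```

The heredoc stands in for the real exported crontab; the sed pattern keys on the script names, so unrelated lines (backups, training, cleanup) are untouched.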


@ -0,0 +1,158 @@
echo "=== 1. STATO SERVIZI ===" && systemctl status ids-backend ids-ml-backend ids-syslog-parser ids-analytics ids-auto-block.timer ids-auto-block.service --no-pager -l 2>&1 | tail -80 &&
echo "=== 2. LOG NODE.JS ===" && journalctl -u ids-backend --no-pager -n 50 &&
echo "=== 3. LOG ML ===" && journalctl -u ids-ml-backend --no-pager -n 50 &&
echo "=== 4. LOG AUTO-BLOCK ===" && journalctl -u ids-auto-block --no-pager -n 50 &&
echo "=== 5. LOG SYSLOG ===" && journalctl -u ids-syslog-parser --no-pager -n 30 &&
echo "=== 6. PORTE ===" && ss -tlnp | grep -E '3001|5001|514' &&
echo "=== 7. PROCESSI ===" && ps aux | grep -E 'node|python|uvicorn' | grep -v grep &&
echo "=== 8. DISCO/MEMORIA ===" && df -h / && free -h &&
echo "=== 9. TEST CONNESSIONE ===" && curl -s -o /dev/null -w "%{http_code} - Node.js backend\n" http://localhost:3001/api/health && curl -s -o /dev/null -w "%{http_code} - ML backend\n" http://localhost:5001/health &&
echo "=== 10. LOG DB ===" && sudo -u ids psql -d ids_db -c "SELECT COUNT(*) as logs_last_30min FROM network_logs WHERE timestamp > NOW() - INTERVAL '30 minutes';"
=== 1. STATO SERVIZI ===
Unit ids-backend.service could not be found.
Unit ids-analytics.service could not be found.
● ids-ml-backend.service - IDS ML Backend (FastAPI)
Loaded: loaded (/etc/systemd/system/ids-ml-backend.service; enabled; preset: disabled)
Active: active (running) since Mon 2026-02-16 19:29:06 CET; 13h ago
Main PID: 17629 (python3)
Tasks: 26 (limit: 100409)
Memory: 75.8M (max: 2.0G available: 1.9G)
CPU: 40.396s
CGroup: /system.slice/ids-ml-backend.service
└─17629 /opt/ids/python_ml/venv/bin/python3 main.py
Feb 16 19:29:06 ids.alfacom.it systemd[1]: Started IDS ML Backend (FastAPI).
● ids-syslog-parser.service - IDS Syslog Parser (Network Logs Processor)
Loaded: loaded (/etc/systemd/system/ids-syslog-parser.service; enabled; preset: disabled)
Active: active (running) since Mon 2026-02-16 12:18:52 CET; 20h ago
Main PID: 1069 (python3)
Tasks: 1 (limit: 100409)
Memory: 9.7M (max: 1.0G available: 1014.2M)
CPU: 1h 59min 34.173s
CGroup: /system.slice/ids-syslog-parser.service
└─1069 /opt/ids/python_ml/venv/bin/python3 syslog_parser.py
Feb 16 12:18:52 ids.alfacom.it systemd[1]: Started IDS Syslog Parser (Network Logs Processor).
● ids-auto-block.timer - IDS Auto-Blocking Timer - Run every 5 minutes
Loaded: loaded (/etc/systemd/system/ids-auto-block.timer; enabled; preset: disabled)
Active: active (running) since Mon 2026-02-16 19:24:04 CET; 13h ago
Until: Mon 2026-02-16 19:24:04 CET; 13h ago
Trigger: n/a
Triggers: ● ids-auto-block.service
Docs: https://github.com/yourusername/ids
Feb 16 19:24:04 ids.alfacom.it systemd[1]: Started IDS Auto-Blocking Timer - Run every 5 minutes.
● ids-auto-block.service - IDS Auto-Blocking Service - Detect and Block Malicious IPs
Loaded: loaded (/etc/systemd/system/ids-auto-block.service; disabled; preset: disabled)
Active: activating (start) since Tue 2026-02-17 08:33:33 CET; 3min 14s ago
TriggeredBy: ● ids-auto-block.timer
Main PID: 30644 (python3)
Tasks: 1 (limit: 100409)
Memory: 14.7M
CPU: 148ms
CGroup: /system.slice/ids-auto-block.service
└─30644 /opt/ids/python_ml/venv/bin/python3 /opt/ids/python_ml/auto_block.py
Feb 17 08:33:33 ids.alfacom.it systemd[1]: Starting IDS Auto-Blocking Service - Detect and Block Malicious IPs...
=== 2. LOG NODE.JS ===
-- No entries --
=== 3. LOG ML ===
Feb 16 15:51:21 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 12676 (n/a) with signal SIGKILL.
Feb 16 15:51:21 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 12677 (n/a) with signal SIGKILL.
Feb 16 15:51:21 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 12681 (n/a) with signal SIGKILL.
Feb 16 15:51:21 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 12682 (n/a) with signal SIGKILL.
Feb 16 15:51:21 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 12684 (python3) with signal SIGKILL.
Feb 16 15:51:21 ids.alfacom.it systemd[1]: ids-ml-backend.service: Main process exited, code=killed, status=9/KILL
Feb 16 15:51:21 ids.alfacom.it systemd[1]: ids-ml-backend.service: Failed with result 'timeout'.
Feb 16 15:51:21 ids.alfacom.it systemd[1]: Stopped IDS ML Backend (FastAPI).
Feb 16 15:51:21 ids.alfacom.it systemd[1]: ids-ml-backend.service: Consumed 9.526s CPU time.
Feb 16 15:51:26 ids.alfacom.it systemd[1]: Started IDS ML Backend (FastAPI).
Feb 16 16:50:11 ids.alfacom.it systemd[1]: Stopping IDS ML Backend (FastAPI)...
Feb 16 16:51:41 ids.alfacom.it systemd[1]: ids-ml-backend.service: State 'stop-sigterm' timed out. Killing.
Feb 16 16:51:41 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 13099 (python3) with signal SIGKILL.
Feb 16 16:51:41 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 13102 (python3) with signal SIGKILL.
Feb 16 16:51:41 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 13103 (n/a) with signal SIGKILL.
Feb 16 16:51:41 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 13110 (n/a) with signal SIGKILL.
Feb 16 16:51:41 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 13112 (n/a) with signal SIGKILL.
Feb 16 16:51:41 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 13116 (n/a) with signal SIGKILL.
Feb 16 16:51:41 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 13117 (python3) with signal SIGKILL.
Feb 16 16:51:41 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 13122 (python3) with signal SIGKILL.
Feb 16 16:51:41 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 13125 (n/a) with signal SIGKILL.
Feb 16 16:51:41 ids.alfacom.it systemd[1]: ids-ml-backend.service: Main process exited, code=killed, status=9/KILL
Feb 16 16:51:41 ids.alfacom.it systemd[1]: ids-ml-backend.service: Failed with result 'timeout'.
Feb 16 16:51:41 ids.alfacom.it systemd[1]: Stopped IDS ML Backend (FastAPI).
Feb 16 16:51:41 ids.alfacom.it systemd[1]: ids-ml-backend.service: Consumed 15.919s CPU time.
Feb 16 16:51:46 ids.alfacom.it systemd[1]: Started IDS ML Backend (FastAPI).
Feb 16 19:27:20 ids.alfacom.it systemd[1]: Stopping IDS ML Backend (FastAPI)...
Feb 16 19:28:50 ids.alfacom.it systemd[1]: ids-ml-backend.service: State 'stop-sigterm' timed out. Killing.
Feb 16 19:28:50 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 14614 (python3) with signal SIGKILL.
Feb 16 19:28:50 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 14619 (python3) with signal SIGKILL.
Feb 16 19:28:50 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 14626 (python3) with signal SIGKILL.
Feb 16 19:28:50 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 14675 (n/a) with signal SIGKILL.
Feb 16 19:28:50 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 14676 (python3) with signal SIGKILL.
Feb 16 19:28:50 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 14677 (python3) with signal SIGKILL.
Feb 16 19:28:50 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 14678 (python3) with signal SIGKILL.
Feb 16 19:28:50 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 14679 (python3) with signal SIGKILL.
Feb 16 19:28:50 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 14680 (python3) with signal SIGKILL.
Feb 16 19:28:50 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 14681 (python3) with signal SIGKILL.
Feb 16 19:28:50 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 14682 (python3) with signal SIGKILL.
Feb 16 19:28:50 ids.alfacom.it systemd[1]: ids-ml-backend.service: Killing process 14683 (python3) with signal SIGKILL.
Feb 16 19:28:50 ids.alfacom.it systemd[1]: ids-ml-backend.service: Main process exited, code=killed, status=9/KILL
Feb 16 19:28:50 ids.alfacom.it systemd[1]: ids-ml-backend.service: Failed with result 'timeout'.
Feb 16 19:28:50 ids.alfacom.it systemd[1]: Stopped IDS ML Backend (FastAPI).
Feb 16 19:28:50 ids.alfacom.it systemd[1]: ids-ml-backend.service: Consumed 15.247s CPU time.
Feb 16 19:28:50 ids.alfacom.it systemd[1]: Started IDS ML Backend (FastAPI).
Feb 16 19:29:00 ids.alfacom.it systemd[1]: Stopping IDS ML Backend (FastAPI)...
Feb 16 19:29:01 ids.alfacom.it systemd[1]: ids-ml-backend.service: Deactivated successfully.
Feb 16 19:29:01 ids.alfacom.it systemd[1]: Stopped IDS ML Backend (FastAPI).
Feb 16 19:29:01 ids.alfacom.it systemd[1]: ids-ml-backend.service: Consumed 4.113s CPU time.
Feb 16 19:29:06 ids.alfacom.it systemd[1]: Started IDS ML Backend (FastAPI).
=== 4. LOG AUTO-BLOCK ===
Feb 17 07:45:29 ids.alfacom.it systemd[1]: Failed to start IDS Auto-Blocking Service - Detect and Block Malicious IPs.
Feb 17 07:45:29 ids.alfacom.it systemd[1]: Starting IDS Auto-Blocking Service - Detect and Block Malicious IPs...
Feb 17 07:49:30 ids.alfacom.it systemd[1]: ids-auto-block.service: Main process exited, code=exited, status=1/FAILURE
Feb 17 07:49:30 ids.alfacom.it systemd[1]: ids-auto-block.service: Failed with result 'exit-code'.
Feb 17 07:49:30 ids.alfacom.it systemd[1]: Failed to start IDS Auto-Blocking Service - Detect and Block Malicious IPs.
Feb 17 07:49:30 ids.alfacom.it systemd[1]: Starting IDS Auto-Blocking Service - Detect and Block Malicious IPs...
Feb 17 07:53:30 ids.alfacom.it systemd[1]: ids-auto-block.service: Main process exited, code=exited, status=1/FAILURE
Feb 17 07:53:30 ids.alfacom.it systemd[1]: ids-auto-block.service: Failed with result 'exit-code'.
Feb 17 07:53:30 ids.alfacom.it systemd[1]: Failed to start IDS Auto-Blocking Service - Detect and Block Malicious IPs.
Feb 17 07:53:30 ids.alfacom.it systemd[1]: Starting IDS Auto-Blocking Service - Detect and Block Malicious IPs...
Feb 17 07:57:30 ids.alfacom.it systemd[1]: ids-auto-block.service: Main process exited, code=exited, status=1/FAILURE
Feb 17 07:57:30 ids.alfacom.it systemd[1]: ids-auto-block.service: Failed with result 'exit-code'.
Feb 17 07:57:30 ids.alfacom.it systemd[1]: Failed to start IDS Auto-Blocking Service - Detect and Block Malicious IPs.
Feb 17 07:57:30 ids.alfacom.it systemd[1]: Starting IDS Auto-Blocking Service - Detect and Block Malicious IPs...
Feb 17 08:01:31 ids.alfacom.it systemd[1]: ids-auto-block.service: Main process exited, code=exited, status=1/FAILURE
Feb 17 08:01:31 ids.alfacom.it systemd[1]: ids-auto-block.service: Failed with result 'exit-code'.
Feb 17 08:01:31 ids.alfacom.it systemd[1]: Failed to start IDS Auto-Blocking Service - Detect and Block Malicious IPs.
Feb 17 08:01:31 ids.alfacom.it systemd[1]: Starting IDS Auto-Blocking Service - Detect and Block Malicious IPs...
Feb 17 08:05:31 ids.alfacom.it systemd[1]: ids-auto-block.service: Main process exited, code=exited, status=1/FAILURE
Feb 17 08:05:31 ids.alfacom.it systemd[1]: ids-auto-block.service: Failed with result 'exit-code'.
Feb 17 08:05:31 ids.alfacom.it systemd[1]: Failed to start IDS Auto-Blocking Service - Detect and Block Malicious IPs.
Feb 17 08:05:31 ids.alfacom.it systemd[1]: Starting IDS Auto-Blocking Service - Detect and Block Malicious IPs...
Feb 17 08:09:31 ids.alfacom.it systemd[1]: ids-auto-block.service: Main process exited, code=exited, status=1/FAILURE
Feb 17 08:09:31 ids.alfacom.it systemd[1]: ids-auto-block.service: Failed with result 'exit-code'.
Feb 17 08:09:31 ids.alfacom.it systemd[1]: Failed to start IDS Auto-Blocking Service - Detect and Block Malicious IPs.
Feb 17 08:09:31 ids.alfacom.it systemd[1]: Starting IDS Auto-Blocking Service - Detect and Block Malicious IPs...
Feb 17 08:13:32 ids.alfacom.it systemd[1]: ids-auto-block.service: Main process exited, code=exited, status=1/FAILURE
Feb 17 08:13:32 ids.alfacom.it systemd[1]: ids-auto-block.service: Failed with result 'exit-code'.
Feb 17 08:13:32 ids.alfacom.it systemd[1]: Failed to start IDS Auto-Blocking Service - Detect and Block Malicious IPs.
Feb 17 08:13:32 ids.alfacom.it systemd[1]: Starting IDS Auto-Blocking Service - Detect and Block Malicious IPs...
Feb 17 08:17:32 ids.alfacom.it systemd[1]: ids-auto-block.service: Main process exited, code=exited, status=1/FAILURE
Feb 17 08:17:32 ids.alfacom.it systemd[1]: ids-auto-block.service: Failed with result 'exit-code'.
Feb 17 08:17:32 ids.alfacom.it systemd[1]: Failed to start IDS Auto-Blocking Service - Detect and Block Malicious IPs.
Feb 17 08:17:32 ids.alfacom.it systemd[1]: Starting IDS Auto-Blocking Service - Detect and Block Malicious IPs...
Feb 17 08:21:32 ids.alfacom.it systemd[1]: ids-auto-block.service: Main process exited, code=exited, status=1/FAILURE
Feb 17 08:21:32 ids.alfacom.it systemd[1]: ids-auto-block.service: Failed with result 'exit-code'.
Feb 17 08:21:32 ids.alfacom.it systemd[1]: Failed to start IDS Auto-Blocking Service - Detect and Block Malicious IPs.
Feb 17 08:21:32 ids.alfacom.it systemd[1]: Starting IDS Auto-Blocking Service - Detect and Block Malicious IPs...
Feb 17 08:25:33 ids.alfacom.it systemd[1]: ids-auto-block.service: Main process exited, code=exited, status=1/FAILURE
Feb 17 08:25:33 ids.alfacom.it systemd[1]: ids-auto-block.service: Failed with result 'exit-code'.
Feb 17 08:25:33 ids.alfacom.it systemd[1]: Failed to start IDS Auto-Blocking Service - Detect and Block Malicious IPs.
Feb 17 08:25:33 ids.alfacom.it systemd[1]: Starting IDS Auto-Blocking Service - Detect and Block Malicious IPs...
Feb 17 08:29:33 ids.alfacom.it systemd[1]: ids-auto-block.service: Main process exited, code=exited, status=1/FAILURE
Feb 17 08:29:33 ids.alfacom.it systemd[1]: ids-auto-block.service: Failed with result 'exit-code'.
Feb 17 08:29:33 ids.alfacom.it systemd[1]: Failed to start IDS Auto-Blocking Service - Detect and Block Malicious IPs.
Feb 17 08:29:33 ids.alfacom.it systemd[1]: Starting IDS Auto-Blocking Service - Detect and Block Malicious IPs...
Feb 17 08:33:33 ids.alfacom.it systemd[1]: ids-auto-block.service: Main process exited, code=exited, status=1/FAILURE
Feb 17 08:33:33 ids.alfacom.it systemd[1]: ids-auto-block.service: Failed with result 'exit-code'.
Feb 17 08:33:33 ids.alfacom.it systemd[1]: Failed to start IDS Auto-Blocking Service - Detect and Block Malicious IPs.
Feb 17 08:33:33 ids.alfacom.it systemd[1]: Starting IDS Auto-Blocking Service - Detect and Block Malicious IPs...
=== 5. LOG SYSLOG ===
Feb 16 12:18:52 ids.alfacom.it systemd[1]: Started IDS Syslog Parser (Network Logs Processor).
=== 6. PORTE ===


@ -0,0 +1,110 @@
echo "=== VERIFICA BACKEND NODE.JS ===" && ls -la /etc/systemd/system/ids-*.service /etc/systemd/system/ids-*.timer &&
echo "=== FILE SERVICE DISPONIBILI ===" && cat /etc/systemd/system/ids-backend.service 2>&1 || echo "FILE NON TROVATO" &&
echo "=== NGINX/REVERSE PROXY ===" && ss -tlnp | grep -E '80|443|3001|5001' &&
echo "=== TEST PORTA 3001 ===" && curl -v --connect-timeout 5 http://localhost:3001/api/health 2>&1 &&
echo "=== COME VIENE AVVIATO NODE.JS ===" && ps aux | grep -i node | grep -v grep &&
echo "=== PM2 STATUS ===" && pm2 list 2>&1 || echo "PM2 non installato" &&
echo "=== CONTENUTO /opt/ids/ ===" && ls -la /opt/ids/ &&
echo "=== PACKAGE.JSON ===" && cat /opt/ids/package.json 2>&1 | head -30 &&
echo "=== AUTO_BLOCK OUTPUT DETTAGLIATO ===" && sudo -u ids /opt/ids/python_ml/venv/bin/python3 /opt/ids/python_ml/auto_block.py 2>&1
=== VERIFICA BACKEND NODE.JS ===
-rw-r--r--. 1 root root 473 Feb 16 15:52 /etc/systemd/system/ids-analytics-aggregator.service
-rw-r--r--. 1 root root 339 Feb 16 15:52 /etc/systemd/system/ids-analytics-aggregator.timer
-rw-r--r--. 1 root root 674 Feb 16 19:23 /etc/systemd/system/ids-auto-block.service
-rw-r--r--. 1 root root 457 Feb 14 11:42 /etc/systemd/system/ids-auto-block.timer
-rw-r--r--. 1 root root 550 Nov 25 11:47 /etc/systemd/system/ids-cleanup.service
-rw-r--r--. 1 root root 440 Nov 25 11:47 /etc/systemd/system/ids-cleanup.timer
-rw-r--r--. 1 root root 623 Nov 27 19:29 /etc/systemd/system/ids-list-fetcher.service
-rw-r--r--. 1 root root 246 Nov 27 19:29 /etc/systemd/system/ids-list-fetcher.timer
-rw-r--r--. 1 root root 675 Nov 24 12:12 /etc/systemd/system/ids-ml-backend.service
-rw-r--r--. 1 root root 620 Nov 24 19:19 /etc/systemd/system/ids-ml-training.service
-rw-r--r--. 1 root root 398 Nov 24 19:19 /etc/systemd/system/ids-ml-training.timer
-rw-r--r--. 1 root root 727 Nov 24 12:12 /etc/systemd/system/ids-syslog-parser.service
=== FILE SERVICE DISPONIBILI ===
cat: /etc/systemd/system/ids-backend.service: No such file or directory
FILE NON TROVATO
=== NGINX/REVERSE PROXY ===
LISTEN 1107 2048 0.0.0.0:8000 0.0.0.0:* users:(("python3",pid=17629,fd=12))
=== TEST PORTA 3001 ===
* Trying ::1:3001...
* connect to ::1 port 3001 failed: Connection refused
* Trying 127.0.0.1:3001...
* connect to 127.0.0.1 port 3001 failed: Connection refused
* Failed to connect to localhost port 3001: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 3001: Connection refused
PM2 non installato
=== CONTENUTO /opt/ids/ ===
total 608
drwxr-xr-x. 14 ids ids 4096 Feb 16 19:28 .
drwxr-xr-x. 3 root root 43 Nov 17 18:20 ..
-rw-------. 1 ids ids 508 Feb 16 19:28 .env
-rw-r-----. 1 root root 508 Feb 16 19:28 .env.backup
-rw-r--r--. 1 ids ids 446 Nov 17 18:23 .env.example
drwxr-xr-x. 8 ids ids 4096 Feb 16 19:28 .git
-rw-r--r--. 1 ids ids 686 Nov 17 18:23 .gitignore
-rw-r--r--. 1 ids ids 801 Jan 2 12:50 .replit
-rw-r--r--. 1 ids ids 6264 Nov 17 17:08 GUIDA_INSTALLAZIONE.md
-rw-r--r--. 1 ids ids 44765 Feb 16 08:50 IDS_Conformita_ISO27001.docx
-rw-r--r--. 1 ids ids 7595 Nov 25 19:14 MIKROTIK_API_FIX.md
-rw-r--r--. 1 ids ids 8452 Nov 17 16:40 README.md
-rw-r--r--. 1 ids ids 9092 Nov 17 16:40 RISPOSTA_DEPLOYMENT.md
drwxr-xr-x. 2 ids ids 12288 Feb 16 16:49 attached_assets
drwxr-xr-x. 2 ids ids 4096 Feb 17 04:00 backups
drwxr-xr-x. 4 ids ids 49 Nov 17 16:40 client
-rw-r--r--. 1 ids ids 459 Nov 17 16:40 components.json
drwxr-xr-x. 3 ids ids 4096 Feb 16 19:28 database-schema
-rwxr-xr-x. 1 ids ids 10264 Nov 17 18:23 deploy-to-gitlab.sh
drwxr-xr-x. 7 ids ids 4096 Feb 16 19:28 deployment
-rw-r--r--. 1 ids ids 3165 Nov 17 16:40 design_guidelines.md
drwxr-xr-x. 3 root root 36 Nov 24 11:05 dist
-rw-r--r--. 1 ids ids 325 Nov 17 16:40 drizzle.config.ts
drwxr-xr-x. 4 ids ids 4096 Nov 17 16:40 extracted_idf
-rw-r--r--. 1 ids ids 28609 Feb 16 08:50 generate_iso27001_doc.py
-rw-r--r--. 1 ids ids 1033 Nov 17 17:08 git.env.example
-rw-r--r--. 1 ids ids 96 Nov 26 11:14 main.py
drwxr-xr-x. 328 ids ids 12288 Feb 16 19:28 node_modules
-rw-r--r--. 1 ids ids 299523 Feb 16 19:28 package-lock.json
-rw-r--r--. 1 ids ids 3696 Nov 17 16:40 package.json
-rw-r--r--. 1 ids ids 80 Nov 17 16:40 postcss.config.js
-rwxr-xr-x. 1 ids ids 2496 Nov 17 16:40 push-gitlab.sh
-rw-r--r--. 1 ids ids 191 Feb 16 08:50 pyproject.toml
drwxr-xr-x. 7 ids ids 4096 Feb 16 16:49 python_ml
-rw-r--r--. 1 ids ids 5796 Feb 16 12:33 replit.md
drwxr-xr-x. 2 ids ids 104 Feb 16 12:55 server
drwxr-xr-x. 2 ids ids 23 Jan 2 15:50 shared
-rw-r--r--. 1 ids ids 4050 Nov 17 16:40 tailwind.config.ts
-rw-r--r--. 1 ids ids 657 Nov 17 16:40 tsconfig.json
-rw-r--r--. 1 ids ids 37505 Feb 16 08:50 uv.lock
-rw-r--r--. 1 ids ids 7329 Feb 16 19:28 version.json
-rw-r--r--. 1 ids ids 1080 Nov 17 16:40 vite.config.ts
=== PACKAGE.JSON ===
{
"name": "rest-express",
"version": "1.0.0",
"type": "module",
"license": "MIT",
"scripts": {
"dev": "NODE_ENV=development tsx server/index.ts",
"build": "vite build && esbuild server/index.ts --platform=node --packages=external --bundle --format=esm --outdir=dist",
"start": "NODE_ENV=production node dist/index.js",
"check": "tsc",
"db:push": "drizzle-kit push"
},
"dependencies": {
"@hookform/resolvers": "^3.10.0",
"@jridgewell/trace-mapping": "^0.3.25",
"@neondatabase/serverless": "^0.10.4",
"@radix-ui/react-accordion": "^1.2.4",
"@radix-ui/react-alert-dialog": "^1.1.7",
"@radix-ui/react-aspect-ratio": "^1.1.3",
"@radix-ui/react-avatar": "^1.1.4",
"@radix-ui/react-checkbox": "^1.1.5",
"@radix-ui/react-collapsible": "^1.1.4",
"@radix-ui/react-context-menu": "^2.2.7",
"@radix-ui/react-dialog": "^1.1.7",
"@radix-ui/react-dropdown-menu": "^2.1.7",
"@radix-ui/react-hover-card": "^1.1.7",
"@radix-ui/react-label": "^2.1.3",
"@radix-ui/react-menubar": "^1.1.7",
"@radix-ui/react-navigation-menu": "^1.2.6",
"@radix-ui/react-popover": "^1.1.7",
=== AUTO_BLOCK OUTPUT DETTAGLIATO ===
[2026-02-17 08:38:05] Starting auto-block cycle...
[2026-02-17 08:38:05] Step 1: Detection ML...
[2026-02-17 08:38:05] ML Detection timeout, skip (blocco IP esistenti continua)
[2026-02-17 08:38:05] Step 2: Blocco IP critici sui router...
[2026-02-17 08:38:05] ERRORE: Timeout blocco IP (120s)
[root@ids ~]#
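auto_block.py exits with status 1 after its own 120 s block timeout, so the underlying fix is router reachability; but where systemd itself kills the unit (the earlier `State 'stop-sigterm' timed out` and `status=15/TERM` entries), a drop-in raising the oneshot unit's start timeout is one hedged mitigation. The directory name mirrors the standard `/etc/systemd/system/<unit>.d/` convention, and 300 s is an assumption, not a measured requirement:

```shell
# Hypothetical drop-in for ids-auto-block; production path would be
# /etc/systemd/system/ids-auto-block.service.d/override.conf
dropin_dir=$(mktemp -d)/ids-auto-block.service.d
mkdir -p "$dropin_dir"
cat > "$dropin_dir/override.conf" <<'EOF'
[Service]
TimeoutStartSec=300
EOF
# Then: systemctl daemon-reload && systemctl restart ids-auto-block.timer
```

For a oneshot service, `TimeoutStartSec` bounds the whole run, which is why the unit sits in `activating (start)` until it completes or is killed.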


@ -0,0 +1,77 @@
journalctl -u ids-analytics-aggregator.timer -f
Feb 16 12:18:50 ids.alfacom.it systemd[1]: Started IDS Analytics Aggregation Timer - Runs every hour.
Feb 16 12:40:08 ids.alfacom.it systemd[1]: ids-analytics-aggregator.timer: Deactivated successfully.
Feb 16 12:40:08 ids.alfacom.it systemd[1]: Stopped IDS Analytics Aggregation Timer - Runs every hour.
Feb 16 12:40:08 ids.alfacom.it systemd[1]: Stopping IDS Analytics Aggregation Timer - Runs every hour...
Feb 16 12:40:08 ids.alfacom.it systemd[1]: Started IDS Analytics Aggregation Timer - Runs every hour.
^C
[root@ids ids]# systemctl status ids-ml-backend
● ids-ml-backend.service - IDS ML Backend (FastAPI)
Loaded: loaded (/etc/systemd/system/ids-ml-backend.service; enabled; preset: disabled)
Active: active (running) since Mon 2026-02-16 15:51:26 CET; 9min ago
Main PID: 13099 (python3)
Tasks: 26 (limit: 100409)
Memory: 402.9M (max: 2.0G available: 1.6G)
CPU: 15.905s
CGroup: /system.slice/ids-ml-backend.service
└─13099 /opt/ids/python_ml/venv/bin/python3 main.py
Feb 16 15:51:26 ids.alfacom.it systemd[1]: Started IDS ML Backend (FastAPI).
[root@ids ids]# cat /var/log/ids/backend.log | tail -20
[Mon Feb 16 15:40:04 CET 2026] Backend riavviato con PID: 12165
INFO: Started server process [12165]
INFO: Waiting for application startup.
INFO: Application startup complete.
ERROR: [Errno 98] error while attempting to bind on address ('0.0.0.0', 8000): address already in use
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
[WARNING] Extended Isolation Forest not available, using standard IF
[ML] Using Hybrid ML Detector (Extended Isolation Forest + Feature Selection)
[HYBRID] Ensemble classifier loaded
[HYBRID] Models loaded (version: latest)
[HYBRID] Selected features: 18/25
[HYBRID] Mode: Hybrid (IF + Ensemble)
[ML] ✓ Hybrid detector models loaded and ready
 Starting IDS API on http://0.0.0.0:8000
 Docs available at http://0.0.0.0:8000/docs
[Mon Feb 16 15:45:01 CET 2026] Backend Python NON attivo, riavvio via systemctl...
[Mon Feb 16 15:45:04 CET 2026] ERRORE: Backend non si è avviato. Controlla: journalctl -u ids-ml-backend
[Mon Feb 16 15:50:01 CET 2026] Backend Python NON attivo, riavvio via systemctl...
[Mon Feb 16 15:50:04 CET 2026] ERRORE: Backend non si è avviato. Controlla: journalctl -u ids-ml-backend
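backend.log above shows the cron watchdog spawning a second uvicorn against port 8000 (`address already in use`) while the systemd-managed instance still held the socket. A minimal sketch of the guard check_backend.sh could apply instead of restarting blindly; the helper function and the commented systemctl binding are illustrative, not the script's actual contents:

```shell
# Run the restart command only when the status probe fails;
# never start a parallel process next to a healthy one.
restart_if_down() {
    probe=$1 restart=$2
    if $probe; then
        echo "active"
    else
        $restart && echo "restarted"
    fi
}
# Intended use inside check_backend.sh (hypothetical):
#   restart_if_down "systemctl is-active --quiet ids-ml-backend" \
#                   "systemctl restart ids-ml-backend"
```

Delegating the probe to systemd also keeps a single supervisor as the source of truth, instead of cron and systemd each deciding independently whether the backend is "down".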
[root@ids ids]# systemctl status ids-auto-block
journalctl -u ids-auto-block --no-pager | tail -20
× ids-auto-block.service - IDS Auto-Blocking Service - Detect and Block Malicious IPs
Loaded: loaded (/etc/systemd/system/ids-auto-block.service; disabled; preset: disabled)
Active: failed (Result: signal) since Mon 2026-02-16 12:47:58 CET; 3h 13min ago
TriggeredBy: ○ ids-auto-block.timer
Docs: https://github.com/yourusername/ids
Main PID: 2896 (code=killed, signal=TERM)
CPU: 155ms
Feb 16 12:46:47 ids.alfacom.it systemd[1]: Starting IDS Auto-Blocking Service - Detect and Block Malicious IPs...
Feb 16 12:47:58 ids.alfacom.it systemd[1]: ids-auto-block.service: Main process exited, code=killed, status=15/TERM
Feb 16 12:47:58 ids.alfacom.it systemd[1]: ids-auto-block.service: Failed with result 'signal'.
Feb 16 12:47:58 ids.alfacom.it systemd[1]: Stopped IDS Auto-Blocking Service - Detect and Block Malicious IPs.
Feb 16 12:38:46 ids.alfacom.it systemd[1]: Starting IDS Auto-Blocking Service - Detect and Block Malicious IPs...
Feb 16 12:40:46 ids.alfacom.it systemd[1]: ids-auto-block.service: Main process exited, code=exited, status=1/FAILURE
Feb 16 12:40:46 ids.alfacom.it systemd[1]: ids-auto-block.service: Failed with result 'exit-code'.
Feb 16 12:40:46 ids.alfacom.it systemd[1]: Failed to start IDS Auto-Blocking Service - Detect and Block Malicious IPs.
Feb 16 12:40:46 ids.alfacom.it systemd[1]: Starting IDS Auto-Blocking Service - Detect and Block Malicious IPs...
Feb 16 12:42:46 ids.alfacom.it systemd[1]: ids-auto-block.service: Main process exited, code=exited, status=1/FAILURE
Feb 16 12:42:46 ids.alfacom.it systemd[1]: ids-auto-block.service: Failed with result 'exit-code'.
Feb 16 12:42:46 ids.alfacom.it systemd[1]: Failed to start IDS Auto-Blocking Service - Detect and Block Malicious IPs.
Feb 16 12:42:46 ids.alfacom.it systemd[1]: Starting IDS Auto-Blocking Service - Detect and Block Malicious IPs...
Feb 16 12:44:47 ids.alfacom.it systemd[1]: ids-auto-block.service: Main process exited, code=exited, status=1/FAILURE
Feb 16 12:44:47 ids.alfacom.it systemd[1]: ids-auto-block.service: Failed with result 'exit-code'.
Feb 16 12:44:47 ids.alfacom.it systemd[1]: Failed to start IDS Auto-Blocking Service - Detect and Block Malicious IPs.
Feb 16 12:44:47 ids.alfacom.it systemd[1]: Starting IDS Auto-Blocking Service - Detect and Block Malicious IPs...
Feb 16 12:46:47 ids.alfacom.it systemd[1]: ids-auto-block.service: Main process exited, code=exited, status=1/FAILURE
Feb 16 12:46:47 ids.alfacom.it systemd[1]: ids-auto-block.service: Failed with result 'exit-code'.
Feb 16 12:46:47 ids.alfacom.it systemd[1]: Failed to start IDS Auto-Blocking Service - Detect and Block Malicious IPs.
Feb 16 12:46:47 ids.alfacom.it systemd[1]: Starting IDS Auto-Blocking Service - Detect and Block Malicious IPs...
Feb 16 12:47:58 ids.alfacom.it systemd[1]: ids-auto-block.service: Main process exited, code=killed, status=15/TERM
Feb 16 12:47:58 ids.alfacom.it systemd[1]: ids-auto-block.service: Failed with result 'signal'.
Feb 16 12:47:58 ids.alfacom.it systemd[1]: Stopped IDS Auto-Blocking Service - Detect and Block Malicious IPs.
[root@ids ids]# curl -X POST http://localhost:5000/api/ml/block-all-critical \
-H "Content-Type: application/json" \
-d '{"min_score": 80, "limit": 200}'


@ -0,0 +1,57 @@
sudo /opt/ids/deployment/setup_analytics_timer.sh
╔═══════════════════════════════════════════════╗
║ IDS Analytics Timer Setup ║
╚═══════════════════════════════════════════════╝
Copia file systemd...
Reload systemd daemon...
⚙ Enable e start timer...
Stato timer:
● ids-analytics-aggregator.timer - IDS Analytics Aggregation Timer - Runs every hour
Loaded: loaded (/etc/systemd/system/ids-analytics-aggregator.timer; enabled; preset: disabled)
Active: active (waiting) since Mon 2026-02-16 12:40:08 CET; 3h 12min ago
Until: Mon 2026-02-16 12:40:08 CET; 3h 12min ago
Trigger: Mon 2026-02-16 16:05:00 CET; 12min left
Triggers: ● ids-analytics-aggregator.service
Feb 16 12:40:08 ids.alfacom.it systemd[1]: Stopped IDS Analytics Aggregation Timer - Runs every hour.
Feb 16 12:40:08 ids.alfacom.it systemd[1]: Stopping IDS Analytics Aggregation Timer - Runs every hour...
Feb 16 12:40:08 ids.alfacom.it systemd[1]: Started IDS Analytics Aggregation Timer - Runs every hour.
 Prossime esecuzioni:
NEXT LEFT LAST PASSED UNIT ACTIVATES
Mon 2026-02-16 16:05:00 CET 12min left Mon 2026-02-16 15:05:00 CET 47min ago ids-analytics-aggregator.timer ids-analytics-aggregator.service
1 timers listed.
Pass --all to see loaded but inactive timers, too.
╔═══════════════════════════════════════════════╗
║ ✅ ANALYTICS TIMER CONFIGURATO ║
╚═══════════════════════════════════════════════╝
📝 Comandi utili:
Stato timer: sudo systemctl status ids-analytics-aggregator.timer
Prossime run: sudo systemctl list-timers
Log aggregazione: sudo journalctl -u ids-analytics-aggregator -f
Test manuale: sudo systemctl start ids-analytics-aggregator
[root@ids ids]# systemctl status ids-analytics-aggregator.timer
● ids-analytics-aggregator.timer - IDS Analytics Aggregation Timer - Runs every hour
Loaded: loaded (/etc/systemd/system/ids-analytics-aggregator.timer; enabled; preset: disabled)
Active: active (waiting) since Mon 2026-02-16 12:40:08 CET; 3h 12min ago
Until: Mon 2026-02-16 12:40:08 CET; 3h 12min ago
Trigger: Mon 2026-02-16 16:05:00 CET; 11min left
Triggers: ● ids-analytics-aggregator.service
Feb 16 12:40:08 ids.alfacom.it systemd[1]: Stopped IDS Analytics Aggregation Timer - Runs every hour.
Feb 16 12:40:08 ids.alfacom.it systemd[1]: Stopping IDS Analytics Aggregation Timer - Runs every hour...
Feb 16 12:40:08 ids.alfacom.it systemd[1]: Started IDS Analytics Aggregation Timer - Runs every hour.
[root@ids ids]# cd /opt/ids && ./deployment/run_analytics.sh
Usage: ./deployment/run_analytics.sh {hourly|daily}
[root@ids ids]# cd /opt/ids && ./deployment/run_analytics.sh {1}
Errore: modo deve essere 'hourly' o 'daily'
[root@ids ids]# cd /opt/ids && ./deployment/run_analytics.sh {hourly}
Errore: modo deve essere 'hourly' o 'daily'
[root@ids ids]# cd /opt/ids && ./deployment/run_analytics.sh {hourly=1}
Errore: modo deve essere 'hourly' o 'daily'
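The three failed invocations above pass the mode wrapped in braces (`{hourly}`); bash does not brace-expand a single word, so the script receives the literal string `{hourly}` and rejects it. The correct call is `./deployment/run_analytics.sh hourly`. A minimal sketch of the validation the script appears to perform (the real script body is not shown here):

```shell
# Hypothetical sketch of run_analytics.sh's argument check.
mode="hourly"          # correct: a bare word, e.g. ./deployment/run_analytics.sh hourly
case "$mode" in
  hourly|daily) echo "mode accepted: $mode" ;;
  *)            echo "Errore: modo deve essere 'hourly' o 'daily'" >&2; exit 1 ;;
esac
```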


@ -0,0 +1,59 @@
systemctl stop ids-ml-backend
[root@ids ~]# systemctl start ids-ml-backend
[root@ids ~]# systemctl status ids-ml-backend
● ids-ml-backend.service - IDS ML Backend (FastAPI)
Loaded: loaded (/etc/systemd/system/ids-ml-backend.service; enabled; preset: disabled)
Active: active (running) since Mon 2026-02-16 12:59:19 CET; 4s ago
Main PID: 3600 (python3)
Tasks: 26 (limit: 100409)
Memory: 157.6M (max: 2.0G available: 1.8G)
CPU: 3.936s
CGroup: /system.slice/ids-ml-backend.service
└─3600 /opt/ids/python_ml/venv/bin/python3 main.py
Feb 16 12:59:19 ids.alfacom.it systemd[1]: Started IDS ML Backend (FastAPI).
[root@ids ~]# cat /etc/systemd/system/ids-ml-backend.service
[Unit]
Description=IDS ML Backend (FastAPI)
After=network.target postgresql-16.service
Wants=postgresql-16.service
[Service]
Type=simple
User=ids
Group=ids
WorkingDirectory=/opt/ids/python_ml
EnvironmentFile=/opt/ids/.env
# Comando esecuzione (usa virtual environment)
ExecStart=/opt/ids/python_ml/venv/bin/python3 main.py
# Restart automatico sempre (non solo on-failure)
Restart=always
RestartSec=10
StartLimitInterval=300
StartLimitBurst=5
# Limiti risorse
LimitNOFILE=65536
MemoryMax=2G
# Logging
StandardOutput=append:/var/log/ids/ml_backend.log
StandardError=append:/var/log/ids/ml_backend.log
SyslogIdentifier=ids-ml-backend
[Install]
WantedBy=multi-user.target
[root@ids ~]# tail -f /var/log/ids/backend.log
🚀 Starting IDS API on http://0.0.0.0:8000
📚 Docs available at http://0.0.0.0:8000/docs
[Mon Feb 16 12:56:12 CET 2026] Backend Python NON attivo, riavvio...
[Mon Feb 16 12:56:14 CET 2026] Backend riavviato con PID: 3453
Traceback (most recent call last):
File "/opt/ids/python_ml/main.py", line 21, in <module>
from ml_hybrid_detector import MLHybridDetector
File "/opt/ids/python_ml/ml_hybrid_detector.py", line 13, in <module>
from xgboost import XGBClassifier
ModuleNotFoundError: No module named 'xgboost'
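The traceback shows the service's virtualenv is missing `xgboost`, even though `ExecStart=` points at the venv interpreter. A hedged fix sketch — the venv path is taken from the unit file above and should be verified on the server before running:

```shell
VENV=/opt/ids/python_ml/venv   # path assumed from the ExecStart= line in the unit file
if [ -x "$VENV/bin/python3" ]; then
  # Install xgboost into the venv only if the import actually fails.
  "$VENV/bin/python3" -c 'import xgboost' 2>/dev/null \
    || "$VENV/bin/pip" install xgboost
else
  echo "venv not found at $VENV"
fi
```

After a successful install, restart the service with `sudo systemctl restart ids-ml-backend` and re-check the log.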


@ -37,10 +37,12 @@ export default function Dashboard() {
refetchInterval: 10000, // Refresh every 10s
});
const { data: recentDetections } = useQuery<Detection[]>({
queryKey: ["/api/detections?limit=100"],
refetchInterval: 5000, // Refresh every 5s
const { data: recentDetectionsData } = useQuery<{ detections: Detection[]; total: number }>({
queryKey: ["/api/detections", { limit: 100 }],
queryFn: () => fetch("/api/detections?limit=100").then(r => r.json()),
refetchInterval: 5000,
});
const recentDetections = recentDetectionsData?.detections;
const { data: routers } = useQuery<Router[]>({
queryKey: ["/api/routers"],
@ -204,7 +206,7 @@ export default function Dashboard() {
{stats?.logs.recent || 0}
</div>
<p className="text-xs text-muted-foreground">
Ultimi 1000 log analizzati
Ultime 24 ore
</p>
</CardContent>
</Card>


@ -16,6 +16,7 @@ interface DashboardStats {
attacksByCountry: Record<string, number>;
attacksByType: Record<string, number>;
recentDetections: Detection[];
blockedCount: number;
}
export default function DashboardLive() {
@ -32,7 +33,7 @@ export default function DashboardLive() {
const attackPercentage = totalTraffic > 0 ? ((totalAttacks / totalTraffic) * 100).toFixed(2) : "0";
const detections = stats?.recentDetections || [];
const blockedAttacks = detections.filter(d => d.blocked).length;
const blockedAttacks = stats?.blockedCount || 0;
// Usa dati aggregati già calcolati dal backend
const attacksByCountry = stats?.attacksByCountry || {};


@ -73,13 +73,12 @@ export default function Detections() {
const totalCount = data?.total || 0;
const totalPages = Math.ceil(totalCount / ITEMS_PER_PAGE);
// Fetch whitelist to check if IP is already whitelisted
const { data: whitelistData } = useQuery<Whitelist[]>({
queryKey: ["/api/whitelist"],
const { data: whitelistData } = useQuery<{ items: Whitelist[]; total: number }>({
queryKey: ["/api/whitelist", "all"],
queryFn: () => fetch("/api/whitelist?limit=10000").then(r => r.json()),
});
// Create a Set of whitelisted IPs for fast lookup
const whitelistedIps = new Set(whitelistData?.map(w => w.ipAddress) || []);
const whitelistedIps = new Set(whitelistData?.items?.map(w => w.ipAddress) || []);
// Mutation per aggiungere a whitelist
const addToWhitelistMutation = useMutation({


@ -35,7 +35,7 @@ export default function PublicLists() {
const [isAddDialogOpen, setIsAddDialogOpen] = useState(false);
const [editingList, setEditingList] = useState<any>(null);
const { data: lists, isLoading } = useQuery({
const { data: lists, isLoading } = useQuery<any[]>({
queryKey: ["/api/public-lists"],
});


@ -47,7 +47,7 @@ export default function Routers() {
defaultValues: {
name: "",
ipAddress: "",
apiPort: 8729,
apiPort: 80,
username: "",
password: "",
enabled: true,
@ -167,7 +167,7 @@ export default function Routers() {
<DialogHeader>
<DialogTitle>Aggiungi Router MikroTik</DialogTitle>
<DialogDescription>
Configura un nuovo router MikroTik per il sistema IDS. Assicurati che l'API RouterOS (porta 8729/8728) sia abilitata.
Configura un nuovo router MikroTik per il sistema IDS. Usa la REST API (porta 80 HTTP o 443 HTTPS).
</DialogDescription>
</DialogHeader>
@ -216,14 +216,14 @@ export default function Routers() {
<FormControl>
<Input
type="number"
placeholder="8729"
placeholder="80"
{...field}
onChange={(e) => field.onChange(parseInt(e.target.value))}
data-testid="input-port"
/>
</FormControl>
<FormDescription>
Porta RouterOS API MikroTik (8729 per API-SSL, 8728 per API)
Porta REST API MikroTik (80 per HTTP, 443 per HTTPS)
</FormDescription>
<FormMessage />
</FormItem>
@ -445,14 +445,14 @@ export default function Routers() {
<FormControl>
<Input
type="number"
placeholder="8729"
placeholder="80"
{...field}
onChange={(e) => field.onChange(parseInt(e.target.value))}
data-testid="input-edit-port"
/>
</FormControl>
<FormDescription>
Porta RouterOS API MikroTik (8729 per API-SSL, 8728 per API)
Porta REST API MikroTik (80 per HTTP, 443 per HTTPS)
</FormDescription>
<FormMessage />
</FormItem>


@ -2,25 +2,21 @@ import { useQuery, useMutation } from "@tanstack/react-query";
import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card";
import { Badge } from "@/components/ui/badge";
import { Button } from "@/components/ui/button";
import { Activity, Brain, Database, FileText, Terminal, RefreshCw, AlertCircle, Play, Square, RotateCw } from "lucide-react";
import { Alert, AlertDescription, AlertTitle } from "@/components/ui/alert";
import { Activity, Brain, Database, FileText, Terminal, RefreshCw, Play, Square, RotateCw, Shield, Trash2, ListChecks, GraduationCap, Server, Clock, Timer } from "lucide-react";
import { useToast } from "@/hooks/use-toast";
import { queryClient, apiRequest } from "@/lib/queryClient";
interface ServiceStatus {
name: string;
status: "running" | "idle" | "offline" | "error" | "unknown";
status: string;
healthy: boolean;
details: any;
systemdUnit: string;
type: string;
}
interface ServicesStatusResponse {
services: {
mlBackend: ServiceStatus;
database: ServiceStatus;
syslogParser: ServiceStatus;
analyticsAggregator: ServiceStatus;
};
services: Record<string, ServiceStatus>;
}
export default function ServicesPage() {
@ -28,10 +24,9 @@ export default function ServicesPage() {
const { data: servicesStatus, isLoading, refetch } = useQuery<ServicesStatusResponse>({
queryKey: ["/api/services/status"],
refetchInterval: 5000, // Refresh every 5s
refetchInterval: 5000,
});
// Mutation for service control
const serviceControlMutation = useMutation({
mutationFn: async ({ service, action }: { service: string; action: string }) => {
return apiRequest("POST", `/api/services/${service}/${action}`);
@ -39,9 +34,8 @@ export default function ServicesPage() {
onSuccess: (data, variables) => {
toast({
title: "Operazione completata",
description: `Servizio ${variables.service}: ${variables.action} eseguito con successo`,
description: `Servizio ${variables.service}: ${variables.action} eseguito`,
});
// Refresh status after 2 seconds
setTimeout(() => {
queryClient.invalidateQueries({ queryKey: ["/api/services/status"] });
}, 2000);
@ -59,39 +53,260 @@ export default function ServicesPage() {
serviceControlMutation.mutate({ service, action });
};
const getStatusBadge = (service: ServiceStatus) => {
const getStatusBadge = (service: ServiceStatus, key: string) => {
if (service.healthy) {
return <Badge variant="default" className="bg-green-600" data-testid={`badge-status-healthy`}>Online</Badge>;
return <Badge variant="default" className="bg-green-600" data-testid={`badge-status-${key}-healthy`}>Online</Badge>;
}
if (service.status === 'idle') {
return <Badge variant="secondary" data-testid={`badge-status-idle`}>In Attesa</Badge>;
return <Badge variant="secondary" data-testid={`badge-status-${key}-idle`}>In Attesa</Badge>;
}
if (service.status === 'offline') {
return <Badge variant="destructive" data-testid={`badge-status-offline`}>Offline</Badge>;
return <Badge variant="destructive" data-testid={`badge-status-${key}-offline`}>Offline</Badge>;
}
if (service.status === 'error') {
return <Badge variant="destructive" data-testid={`badge-status-error`}>Errore</Badge>;
return <Badge variant="destructive" data-testid={`badge-status-${key}-error`}>Errore</Badge>;
}
return <Badge variant="outline" data-testid={`badge-status-unknown`}>Sconosciuto</Badge>;
return <Badge variant="outline" data-testid={`badge-status-${key}-unknown`}>Sconosciuto</Badge>;
};
const getStatusIndicator = (service: ServiceStatus) => {
if (service.healthy) {
return <div className="h-3 w-3 rounded-full bg-green-500" />;
return <div className="h-3 w-3 rounded-full bg-green-500 shrink-0" />;
}
if (service.status === 'idle') {
return <div className="h-3 w-3 rounded-full bg-yellow-500" />;
return <div className="h-3 w-3 rounded-full bg-yellow-500 shrink-0" />;
}
return <div className="h-3 w-3 rounded-full bg-red-500" />;
return <div className="h-3 w-3 rounded-full bg-red-500 shrink-0" />;
};
const getServiceIcon = (key: string) => {
const icons: Record<string, any> = {
nodeBackend: Server,
mlBackend: Brain,
database: Database,
syslogParser: FileText,
analyticsAggregator: Activity,
autoBlock: Shield,
cleanup: Trash2,
listFetcher: ListChecks,
mlTraining: GraduationCap,
};
const Icon = icons[key] || Activity;
return <Icon className="h-5 w-5" />;
};
const controllableServices = [
"ids-ml-backend", "ids-syslog-parser", "ids-backend",
"ids-analytics-aggregator", "ids-auto-block", "ids-cleanup",
"ids-list-fetcher", "ids-ml-training"
];
const getLogCommand = (key: string): string | null => {
const logs: Record<string, string> = {
nodeBackend: "tail -f /var/log/ids/backend.log",
mlBackend: "journalctl -u ids-ml-backend -f",
database: "sudo journalctl -u postgresql-16 -f",
syslogParser: "tail -f /var/log/ids/syslog_parser.log",
analyticsAggregator: "journalctl -u ids-analytics-aggregator -f",
autoBlock: "journalctl -u ids-auto-block -f",
cleanup: "journalctl -u ids-cleanup -f",
listFetcher: "journalctl -u ids-list-fetcher -f",
mlTraining: "journalctl -u ids-ml-training -f",
};
return logs[key] || null;
};
const renderDetailRow = (label: string, value: any, variant?: "default" | "destructive" | "secondary" | "outline") => {
if (value === undefined || value === null) return null;
return (
<div className="flex items-center justify-between gap-2 flex-wrap">
<span className="text-sm text-muted-foreground">{label}:</span>
{variant ? (
<Badge variant={variant} className="text-xs">{String(value)}</Badge>
) : (
<span className="text-sm font-mono">{String(value)}</span>
)}
</div>
);
};
const renderServiceDetails = (key: string, service: ServiceStatus) => {
const d = service.details;
if (!d) return null;
switch (key) {
case "nodeBackend":
return (
<>
{renderDetailRow("Porta", d.port)}
{renderDetailRow("Uptime", d.uptime)}
</>
);
case "mlBackend":
return (
<>
{d.modelLoaded !== undefined && renderDetailRow("Modello ML", d.modelLoaded ? "Caricato" : "Non Caricato", d.modelLoaded ? "default" : "secondary")}
{d.error && renderDetailRow("Errore", d.error, "destructive")}
</>
);
case "database":
return (
<>
{d.connected && renderDetailRow("Connessione", "Attiva", "default")}
{d.error && renderDetailRow("Errore", d.error, "destructive")}
</>
);
case "syslogParser":
return (
<>
{d.recentLogs30min !== undefined && renderDetailRow("Log ultimi 30min", d.recentLogs30min.toLocaleString())}
{d.lastLog && renderDetailRow("Ultimo log", typeof d.lastLog === 'string' ? d.lastLog : new Date(d.lastLog).toLocaleString('it-IT'))}
{d.warning && renderDetailRow("Avviso", d.warning, "destructive")}
</>
);
case "analyticsAggregator":
return (
<>
{d.lastRun && renderDetailRow("Ultima esecuzione", new Date(d.lastRun).toLocaleString('it-IT'))}
{d.hoursSinceLastRun && renderDetailRow("Ore dall'ultimo run", d.hoursSinceLastRun + "h", parseFloat(d.hoursSinceLastRun) < 2 ? "default" : "destructive")}
{d.warning && renderDetailRow("Avviso", d.warning, "destructive")}
</>
);
case "autoBlock":
return (
<>
{renderDetailRow("Blocchi ultimi 10min", d.recentBlocks10min)}
{renderDetailRow("Totale bloccati", d.totalBlocked)}
{d.lastBlock && renderDetailRow("Ultimo blocco", typeof d.lastBlock === 'string' && d.lastBlock !== 'Mai' ? new Date(d.lastBlock).toLocaleString('it-IT') : d.lastBlock)}
{renderDetailRow("Intervallo", d.interval)}
</>
);
case "cleanup":
return (
<>
{renderDetailRow("Detection vecchie (>48h)", d.oldDetections48h, d.oldDetections48h > 0 ? "destructive" : "default")}
{renderDetailRow("Detection totali", d.totalDetections)}
{renderDetailRow("Intervallo", d.interval)}
{d.warning && renderDetailRow("Avviso", d.warning, "destructive")}
</>
);
case "listFetcher":
return (
<>
{renderDetailRow("Liste totali", d.totalLists)}
{renderDetailRow("Liste attive", d.enabledLists)}
{d.lastFetched && renderDetailRow("Ultimo fetch", typeof d.lastFetched === 'string' && d.lastFetched !== 'Mai' ? new Date(d.lastFetched).toLocaleString('it-IT') : d.lastFetched)}
{d.hoursSinceLastFetch && renderDetailRow("Ore dall'ultimo fetch", d.hoursSinceLastFetch + "h", parseFloat(d.hoursSinceLastFetch) < 1 ? "default" : "destructive")}
{renderDetailRow("Intervallo", d.interval)}
</>
);
case "mlTraining":
return (
<>
{d.lastTraining && renderDetailRow("Ultimo training", typeof d.lastTraining === 'string' && d.lastTraining !== 'Mai' ? new Date(d.lastTraining).toLocaleString('it-IT') : d.lastTraining)}
{d.daysSinceLastTraining && renderDetailRow("Giorni dall'ultimo", d.daysSinceLastTraining, parseFloat(d.daysSinceLastTraining) < 8 ? "default" : "destructive")}
{d.lastStatus && renderDetailRow("Stato ultimo training", d.lastStatus, d.lastStatus === 'completed' ? "default" : "destructive")}
{d.lastModel && renderDetailRow("Modello", d.lastModel)}
{d.recordsProcessed && renderDetailRow("Record processati", d.recordsProcessed.toLocaleString())}
{renderDetailRow("Intervallo", d.interval)}
</>
);
default:
return d.error ? renderDetailRow("Errore", d.error, "destructive") : null;
}
};
const coreServices = ["nodeBackend", "mlBackend", "database", "syslogParser"];
const timerServices = ["autoBlock", "analyticsAggregator", "cleanup", "listFetcher", "mlTraining"];
const renderServiceCard = (key: string, service: ServiceStatus) => {
const isControllable = controllableServices.includes(service.systemdUnit);
const isTimer = service.type === "timer";
const logCmd = getLogCommand(key);
return (
<Card key={key} data-testid={`card-service-${key}`}>
<CardHeader className="flex flex-row items-center justify-between gap-2 space-y-0 pb-2">
<CardTitle className="flex items-center gap-2 text-base">
{getServiceIcon(key)}
<span className="truncate">{service.name}</span>
</CardTitle>
<div className="flex items-center gap-2 shrink-0">
{isTimer && <Timer className="h-4 w-4 text-muted-foreground" />}
{getStatusIndicator(service)}
</div>
</CardHeader>
<CardContent className="space-y-3">
<div className="flex items-center justify-between gap-2">
<span className="text-sm text-muted-foreground">Stato:</span>
{getStatusBadge(service, key)}
</div>
<div className="flex items-center justify-between gap-2">
<span className="text-sm text-muted-foreground">Systemd:</span>
<Badge variant="outline" className="text-xs font-mono">
{service.systemdUnit}{isTimer ? '.timer' : '.service'}
</Badge>
</div>
{renderServiceDetails(key, service)}
{isControllable && (
<div className="space-y-2 pt-2 border-t">
<p className="text-xs font-medium text-muted-foreground">Controlli:</p>
<div className="flex gap-2 flex-wrap">
<Button
size="sm"
variant="outline"
onClick={() => handleServiceAction(service.systemdUnit, "restart")}
disabled={serviceControlMutation.isPending}
data-testid={`button-restart-${key}`}
>
<RotateCw className="h-3 w-3 mr-1" />
Restart
</Button>
<Button
size="sm"
variant="outline"
onClick={() => handleServiceAction(service.systemdUnit, "start")}
disabled={serviceControlMutation.isPending || service.status === 'running'}
data-testid={`button-start-${key}`}
>
<Play className="h-3 w-3 mr-1" />
Start
</Button>
<Button
size="sm"
variant="outline"
onClick={() => handleServiceAction(service.systemdUnit, "stop")}
disabled={serviceControlMutation.isPending || service.status === 'offline'}
data-testid={`button-stop-${key}`}
>
<Square className="h-3 w-3 mr-1" />
Stop
</Button>
</div>
</div>
)}
{logCmd && (
<div className="p-2 bg-muted rounded-md">
<p className="text-xs font-medium text-muted-foreground mb-1">Log:</p>
<code className="text-xs font-mono break-all" data-testid={`code-log-${key}`}>{logCmd}</code>
</div>
)}
</CardContent>
</Card>
);
};
return (
<div className="flex flex-col gap-6 p-6" data-testid="page-services">
<div className="flex items-center justify-between">
<div className="flex items-center justify-between gap-4 flex-wrap">
<div>
<h1 className="text-3xl font-semibold" data-testid="text-services-title">Gestione Servizi</h1>
<p className="text-muted-foreground" data-testid="text-services-subtitle">
Monitoraggio e controllo dei servizi IDS
Monitoraggio e controllo di tutti i servizi IDS
</p>
</div>
<Button onClick={() => refetch()} variant="outline" data-testid="button-refresh">
@ -100,303 +315,40 @@ export default function ServicesPage() {
</Button>
</div>
<Alert data-testid="alert-server-instructions">
<AlertCircle className="h-4 w-4" />
<AlertTitle>Gestione Servizi Systemd</AlertTitle>
<AlertDescription>
I servizi IDS sono gestiti da systemd sul server AlmaLinux.
Usa i pulsanti qui sotto per controllarli oppure esegui i comandi systemctl direttamente sul server.
</AlertDescription>
</Alert>
{isLoading && (
<div className="text-center py-8 text-muted-foreground">Caricamento stato servizi...</div>
)}
{/* Services Grid */}
<div className="grid grid-cols-1 lg:grid-cols-3 gap-6">
{/* ML Backend Service */}
<Card data-testid="card-ml-backend-service">
<CardHeader>
<CardTitle className="flex items-center gap-2 text-lg">
<Brain className="h-5 w-5" />
ML Backend Python
{servicesStatus && getStatusIndicator(servicesStatus.services.mlBackend)}
</CardTitle>
</CardHeader>
<CardContent className="space-y-4">
<div className="flex items-center justify-between">
<span className="text-sm text-muted-foreground">Stato:</span>
{servicesStatus && getStatusBadge(servicesStatus.services.mlBackend)}
{servicesStatus && (
<>
<div>
<h2 className="text-lg font-semibold mb-3 flex items-center gap-2">
<Server className="h-5 w-5" />
Servizi Core
</h2>
<div className="grid grid-cols-1 md:grid-cols-2 xl:grid-cols-4 gap-4">
{coreServices.map((key) => {
const service = (servicesStatus.services as any)[key];
return service ? renderServiceCard(key, service) : null;
})}
</div>
</div>
{servicesStatus?.services.mlBackend.details?.modelLoaded !== undefined && (
<div className="flex items-center justify-between">
<span className="text-sm text-muted-foreground">Modello ML:</span>
<Badge variant={servicesStatus.services.mlBackend.details.modelLoaded ? "default" : "secondary"}>
{servicesStatus.services.mlBackend.details.modelLoaded ? "Caricato" : "Non Caricato"}
</Badge>
</div>
)}
{/* Service Controls */}
<div className="mt-4 space-y-2">
<p className="text-xs font-medium mb-2">Controlli Servizio:</p>
<div className="flex gap-2 flex-wrap">
<Button
size="sm"
variant="outline"
onClick={() => handleServiceAction("ids-ml-backend", "start")}
disabled={serviceControlMutation.isPending || servicesStatus?.services.mlBackend.status === 'running'}
data-testid="button-start-ml"
>
<Play className="h-3 w-3 mr-1" />
Start
</Button>
<Button
size="sm"
variant="outline"
onClick={() => handleServiceAction("ids-ml-backend", "stop")}
disabled={serviceControlMutation.isPending || servicesStatus?.services.mlBackend.status === 'offline'}
data-testid="button-stop-ml"
>
<Square className="h-3 w-3 mr-1" />
Stop
</Button>
<Button
size="sm"
variant="outline"
onClick={() => handleServiceAction("ids-ml-backend", "restart")}
disabled={serviceControlMutation.isPending}
data-testid="button-restart-ml"
>
<RotateCw className="h-3 w-3 mr-1" />
Restart
</Button>
</div>
<div>
<h2 className="text-lg font-semibold mb-3 flex items-center gap-2">
<Clock className="h-5 w-5" />
Timer Systemd (Attivita Periodiche)
</h2>
<div className="grid grid-cols-1 md:grid-cols-2 xl:grid-cols-3 gap-4">
{timerServices.map((key) => {
const service = (servicesStatus.services as any)[key];
return service ? renderServiceCard(key, service) : null;
})}
</div>
</div>
</>
)}
{/* Manual Commands (fallback) */}
<div className="mt-4 p-3 bg-muted rounded-lg">
<p className="text-xs font-medium mb-2">Comando systemctl (sul server):</p>
<code className="text-xs bg-background p-2 rounded block font-mono" data-testid="code-systemctl-ml">
sudo systemctl {servicesStatus?.services.mlBackend.status === 'offline' ? 'start' : 'restart'} ids-ml-backend
</code>
</div>
<div className="mt-4 p-3 bg-muted rounded-lg">
<p className="text-xs font-medium mb-2">Log:</p>
<code className="text-xs bg-background p-2 rounded block font-mono" data-testid="code-log-ml">
tail -f /var/log/ids/backend.log
</code>
</div>
</CardContent>
</Card>
{/* Database Service */}
<Card data-testid="card-database-service">
<CardHeader>
<CardTitle className="flex items-center gap-2 text-lg">
<Database className="h-5 w-5" />
PostgreSQL Database
{servicesStatus && getStatusIndicator(servicesStatus.services.database)}
</CardTitle>
</CardHeader>
<CardContent className="space-y-4">
<div className="flex items-center justify-between">
<span className="text-sm text-muted-foreground">Stato:</span>
{servicesStatus && getStatusBadge(servicesStatus.services.database)}
</div>
{servicesStatus?.services.database.status === 'running' && (
<div className="flex items-center justify-between">
<span className="text-sm text-muted-foreground">Connessione:</span>
<Badge variant="default" className="bg-green-600">Connesso</Badge>
</div>
)}
<div className="mt-4 p-3 bg-muted rounded-lg">
<p className="text-xs font-medium mb-2">Verifica status:</p>
<code className="text-xs bg-background p-2 rounded block font-mono" data-testid="code-status-db">
systemctl status postgresql-16
</code>
</div>
{servicesStatus?.services.database.status === 'error' && (
<div className="mt-4 p-3 bg-muted rounded-lg">
<p className="text-xs font-medium mb-2">Riavvia database:</p>
<code className="text-xs bg-background p-2 rounded block font-mono" data-testid="code-restart-db">
sudo systemctl restart postgresql-16
</code>
</div>
)}
<div className="mt-4 p-3 bg-muted rounded-lg">
<p className="text-xs font-medium mb-2">Log:</p>
<code className="text-xs bg-background p-2 rounded block font-mono" data-testid="code-log-db">
sudo journalctl -u postgresql-16 -f
</code>
</div>
</CardContent>
</Card>
{/* Syslog Parser Service */}
<Card data-testid="card-syslog-parser-service">
<CardHeader>
<CardTitle className="flex items-center gap-2 text-lg">
<FileText className="h-5 w-5" />
Syslog Parser
{servicesStatus && getStatusIndicator(servicesStatus.services.syslogParser)}
</CardTitle>
</CardHeader>
<CardContent className="space-y-4">
<div className="flex items-center justify-between">
<span className="text-sm text-muted-foreground">Stato:</span>
{servicesStatus && getStatusBadge(servicesStatus.services.syslogParser)}
</div>
{servicesStatus?.services.syslogParser.details?.pid && (
<div className="flex items-center justify-between">
<span className="text-sm text-muted-foreground">PID Processo:</span>
<Badge variant="outline" className="font-mono">
{servicesStatus.services.syslogParser.details.pid}
</Badge>
</div>
)}
{servicesStatus?.services.syslogParser.details?.systemd_unit && (
<div className="flex items-center justify-between">
<span className="text-sm text-muted-foreground">Systemd Unit:</span>
<Badge variant="outline" className="font-mono text-xs">
{servicesStatus.services.syslogParser.details.systemd_unit}
</Badge>
</div>
)}
{/* Service Controls */}
<div className="mt-4 space-y-2">
<p className="text-xs font-medium mb-2">Controlli Servizio:</p>
<div className="flex gap-2 flex-wrap">
<Button
size="sm"
variant="outline"
onClick={() => handleServiceAction("ids-syslog-parser", "start")}
disabled={serviceControlMutation.isPending || servicesStatus?.services.syslogParser.status === 'running'}
data-testid="button-start-parser"
>
<Play className="h-3 w-3 mr-1" />
Start
</Button>
<Button
size="sm"
variant="outline"
onClick={() => handleServiceAction("ids-syslog-parser", "stop")}
disabled={serviceControlMutation.isPending || servicesStatus?.services.syslogParser.status === 'offline'}
data-testid="button-stop-parser"
>
<Square className="h-3 w-3 mr-1" />
Stop
</Button>
<Button
size="sm"
variant="outline"
onClick={() => handleServiceAction("ids-syslog-parser", "restart")}
disabled={serviceControlMutation.isPending}
data-testid="button-restart-parser"
>
<RotateCw className="h-3 w-3 mr-1" />
Restart
</Button>
</div>
</div>
{/* Manual Commands (fallback) */}
<div className="mt-4 p-3 bg-muted rounded-lg">
<p className="text-xs font-medium mb-2">Comando systemctl (sul server):</p>
<code className="text-xs bg-background p-2 rounded block font-mono" data-testid="code-systemctl-parser">
sudo systemctl {servicesStatus?.services.syslogParser.status === 'offline' ? 'start' : 'restart'} ids-syslog-parser
</code>
</div>
<div className="mt-4 p-3 bg-muted rounded-lg">
<p className="text-xs font-medium mb-2">Log:</p>
<code className="text-xs bg-background p-2 rounded block font-mono" data-testid="code-log-parser">
tail -f /var/log/ids/syslog_parser.log
</code>
</div>
</CardContent>
</Card>
{/* Analytics Aggregator Service */}
<Card data-testid="card-analytics-aggregator-service">
<CardHeader>
<CardTitle className="flex items-center gap-2 text-lg">
<Activity className="h-5 w-5" />
Analytics Aggregator
{servicesStatus && getStatusIndicator(servicesStatus.services.analyticsAggregator)}
</CardTitle>
</CardHeader>
<CardContent className="space-y-4">
<div className="flex items-center justify-between">
<span className="text-sm text-muted-foreground">Stato:</span>
{servicesStatus && getStatusBadge(servicesStatus.services.analyticsAggregator)}
</div>
{servicesStatus?.services.analyticsAggregator.details?.lastRun && (
<div className="flex items-center justify-between">
<span className="text-sm text-muted-foreground">Ultima Aggregazione:</span>
<Badge variant="outline" className="text-xs">
{new Date(servicesStatus.services.analyticsAggregator.details.lastRun).toLocaleString('it-IT')}
</Badge>
</div>
)}
{servicesStatus?.services.analyticsAggregator.details?.hoursSinceLastRun && (
<div className="flex items-center justify-between">
<span className="text-sm text-muted-foreground">Ore dall'ultimo run:</span>
<Badge variant={parseFloat(servicesStatus.services.analyticsAggregator.details.hoursSinceLastRun) < 2 ? "default" : "destructive"}>
{servicesStatus.services.analyticsAggregator.details.hoursSinceLastRun}h
</Badge>
</div>
)}
{/* CRITICAL ALERT: Aggregator idle for too long */}
{servicesStatus?.services.analyticsAggregator.details?.hoursSinceLastRun &&
parseFloat(servicesStatus.services.analyticsAggregator.details.hoursSinceLastRun) > 2 && (
<Alert variant="destructive" className="mt-2" data-testid="alert-aggregator-idle">
<AlertCircle className="h-4 w-4" />
<AlertTitle className="text-sm font-semibold"> Timer Systemd Non Attivo</AlertTitle>
<AlertDescription className="text-xs mt-1">
<p className="mb-2">L'aggregatore non esegue da {servicesStatus.services.analyticsAggregator.details.hoursSinceLastRun}h! Dashboard e Analytics bloccati.</p>
<p className="font-semibold">Soluzione Immediata (sul server):</p>
<code className="block bg-destructive-foreground/10 p-2 rounded mt-1 font-mono text-xs">
sudo /opt/ids/deployment/setup_analytics_timer.sh
</code>
</AlertDescription>
</Alert>
)}
<div className="mt-4 p-3 bg-muted rounded-lg">
<p className="text-xs font-medium mb-2">Verifica timer:</p>
<code className="text-xs bg-background p-2 rounded block font-mono" data-testid="code-status-aggregator">
systemctl status ids-analytics-aggregator.timer
</code>
</div>
<div className="mt-4 p-3 bg-muted rounded-lg">
<p className="text-xs font-medium mb-2">Avvia aggregazione manualmente:</p>
<code className="text-xs bg-background p-2 rounded block font-mono" data-testid="code-run-aggregator">
cd /opt/ids && ./deployment/run_analytics.sh
</code>
</div>
<div className="mt-4 p-3 bg-muted rounded-lg">
<p className="text-xs font-medium mb-2">Log:</p>
<code className="text-xs bg-background p-2 rounded block font-mono" data-testid="code-log-aggregator">
journalctl -u ids-analytics-aggregator.timer -f
</code>
</div>
</CardContent>
</Card>
</div>
{/* Additional Commands */}
<Card data-testid="card-additional-commands">
<CardHeader>
<CardTitle className="flex items-center gap-2">
@ -406,30 +358,27 @@ export default function ServicesPage() {
</CardHeader>
<CardContent className="space-y-4">
<div>
<p className="text-sm font-medium mb-2">Verifica tutti i processi IDS attivi:</p>
<code className="text-xs bg-muted p-2 rounded block font-mono" data-testid="code-check-processes">
ps aux | grep -E "python.*(main|syslog_parser)" | grep -v grep
<p className="text-sm font-medium mb-2">Stato di tutti i servizi IDS:</p>
<code className="text-xs bg-muted p-2 rounded-md block font-mono" data-testid="code-all-services">
systemctl list-units 'ids-*' --all
</code>
</div>
<div>
<p className="text-sm font-medium mb-2">Stato di tutti i timer IDS:</p>
<code className="text-xs bg-muted p-2 rounded-md block font-mono" data-testid="code-all-timers">
systemctl list-timers 'ids-*' --all
</code>
</div>
<div>
<p className="text-sm font-medium mb-2">Verifica log RSyslog (ricezione log MikroTik):</p>
<code className="text-xs bg-muted p-2 rounded block font-mono" data-testid="code-check-rsyslog">
<code className="text-xs bg-muted p-2 rounded-md block font-mono" data-testid="code-check-rsyslog">
tail -f /var/log/mikrotik/raw.log
</code>
</div>
<div>
<p className="text-sm font-medium mb-2">Esegui training manuale ML:</p>
<code className="text-xs bg-muted p-2 rounded block font-mono" data-testid="code-manual-training">
curl -X POST http://localhost:8000/train -H "Content-Type: application/json" -d '{"max_records": 10000, "hours_back": 24}'
</code>
</div>
<div>
<p className="text-sm font-medium mb-2">Verifica storico training nel database:</p>
<code className="text-xs bg-muted p-2 rounded block font-mono" data-testid="code-check-training">
psql $DATABASE_URL -c "SELECT * FROM training_history ORDER BY trained_at DESC LIMIT 5;"
<p className="text-sm font-medium mb-2">Verifica processi IDS attivi:</p>
<code className="text-xs bg-muted p-2 rounded-md block font-mono" data-testid="code-check-processes">
ps aux | grep -E "python.*(main|syslog_parser)" | grep -v grep
</code>
</div>
</CardContent>

View File

@ -33,10 +33,13 @@ import { Input } from "@/components/ui/input";
import { Checkbox } from "@/components/ui/checkbox";
interface MLStatsResponse {
source?: string;
ml_backend_status?: string;
logs?: { total: number; last_hour: number };
detections?: { total: number; blocked: number };
detections?: { total: number; blocked: number; critical?: number; unique_ips?: number };
routers?: { active: number };
latest_training?: any;
logs_24h?: number;
}
const trainFormSchema = z.object({
@ -147,21 +150,43 @@ export default function TrainingPage() {
</p>
</div>
{/* ML Backend Status Warning */}
{mlStats?.ml_backend_status === "offline" && (
<Card className="border-orange-300 dark:border-orange-700" data-testid="card-ml-offline-warning">
<CardContent className="flex items-center gap-3 py-3">
<XCircle className="h-5 w-5 text-orange-500 shrink-0" />
<div>
<p className="font-medium text-sm">ML Backend Python offline</p>
<p className="text-xs text-muted-foreground">
Le statistiche mostrate provengono dal database. Training e detection manuali non sono disponibili fino al riavvio del servizio.
</p>
</div>
</CardContent>
</Card>
)}
{/* ML Stats */}
{mlStats && (
<div className="grid grid-cols-1 md:grid-cols-3 gap-4">
<Card data-testid="card-ml-logs">
<CardHeader className="flex flex-row items-center justify-between gap-2 space-y-0 pb-2">
<CardTitle className="text-sm font-medium">Log Totali</CardTitle>
<CardTitle className="text-sm font-medium">
{mlStats.source === "database_fallback" ? "Log Ultime 24h" : "Log Totali"}
</CardTitle>
<Brain className="h-4 w-4 text-muted-foreground" />
</CardHeader>
<CardContent>
<div className="text-2xl font-semibold" data-testid="text-ml-logs-total">
{mlStats.logs?.total?.toLocaleString() || 0}
{(mlStats.source === "database_fallback"
? mlStats.logs_24h
: mlStats.logs?.total
)?.toLocaleString() || 0}
</div>
<p className="text-xs text-muted-foreground mt-1">
Ultima ora: {mlStats.logs?.last_hour?.toLocaleString() || 0}
</p>
{mlStats.source !== "database_fallback" && (
<p className="text-xs text-muted-foreground mt-1">
Ultima ora: {mlStats.logs?.last_hour?.toLocaleString() || 0}
</p>
)}
</CardContent>
</Card>
@ -172,22 +197,30 @@ export default function TrainingPage() {
</CardHeader>
<CardContent>
<div className="text-2xl font-semibold" data-testid="text-ml-detections-total">
{mlStats.detections?.total || 0}
{mlStats.detections?.total?.toLocaleString() || 0}
</div>
<p className="text-xs text-muted-foreground mt-1">
Bloccati: {mlStats.detections?.blocked || 0}
Bloccati: {mlStats.detections?.blocked?.toLocaleString() || 0}
{mlStats.detections?.critical !== undefined && (
<span> | Critici: {mlStats.detections.critical.toLocaleString()}</span>
)}
</p>
</CardContent>
</Card>
<Card data-testid="card-ml-routers">
<CardHeader className="flex flex-row items-center justify-between gap-2 space-y-0 pb-2">
<CardTitle className="text-sm font-medium">Router Attivi</CardTitle>
<CardTitle className="text-sm font-medium">
{mlStats.source === "database_fallback" ? "IP Unici" : "Router Attivi"}
</CardTitle>
<TrendingUp className="h-4 w-4 text-muted-foreground" />
</CardHeader>
<CardContent>
<div className="text-2xl font-semibold" data-testid="text-ml-routers-active">
{mlStats.routers?.active || 0}
{(mlStats.source === "database_fallback"
? mlStats.detections?.unique_ips
: mlStats.routers?.active
)?.toLocaleString() || 0}
</div>
</CardContent>
</Card>
@ -214,9 +247,9 @@ export default function TrainingPage() {
</p>
<Dialog open={isTrainDialogOpen} onOpenChange={setIsTrainDialogOpen}>
<DialogTrigger asChild>
<Button className="w-full" data-testid="button-start-training">
<Button className="w-full" disabled={mlStats?.ml_backend_status === "offline"} data-testid="button-start-training">
<Play className="h-4 w-4 mr-2" />
Avvia Training
{mlStats?.ml_backend_status === "offline" ? "ML Backend Offline" : "Avvia Training"}
</Button>
</DialogTrigger>
<DialogContent data-testid="dialog-training">
@ -265,7 +298,7 @@ export default function TrainingPage() {
>
Annulla
</Button>
<Button type="submit" disabled={trainMutation.isPending} data-testid="button-confirm-training">
<Button type="submit" disabled={trainMutation.isPending || mlStats?.ml_backend_status === "offline"} data-testid="button-confirm-training">
{trainMutation.isPending ? "Avvio..." : "Avvia Training"}
</Button>
</DialogFooter>
@ -294,9 +327,9 @@ export default function TrainingPage() {
</p>
<Dialog open={isDetectDialogOpen} onOpenChange={setIsDetectDialogOpen}>
<DialogTrigger asChild>
<Button variant="secondary" className="w-full" data-testid="button-start-detection">
<Button variant="secondary" className="w-full" disabled={mlStats?.ml_backend_status === "offline"} data-testid="button-start-detection">
<Search className="h-4 w-4 mr-2" />
Avvia Detection
{mlStats?.ml_backend_status === "offline" ? "ML Backend Offline" : "Avvia Detection"}
</Button>
</DialogTrigger>
<DialogContent data-testid="dialog-detection">
@ -377,7 +410,7 @@ export default function TrainingPage() {
>
Annulla
</Button>
<Button type="submit" disabled={detectMutation.isPending} data-testid="button-confirm-detection">
<Button type="submit" disabled={detectMutation.isPending || mlStats?.ml_backend_status === "offline"} data-testid="button-confirm-detection">
{detectMutation.isPending ? "Avvio..." : "Avvia Detection"}
</Button>
</DialogFooter>

View File

@ -2,9 +2,9 @@ import { useQuery, useMutation } from "@tanstack/react-query";
import { queryClient, apiRequest } from "@/lib/queryClient";
import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card";
import { Button } from "@/components/ui/button";
import { Shield, Plus, Trash2, CheckCircle2, XCircle, Search } from "lucide-react";
import { Shield, Plus, Trash2, CheckCircle2, XCircle, Search, ChevronLeft, ChevronRight } from "lucide-react";
import { format } from "date-fns";
import { useState } from "react";
import { useState, useEffect, useMemo } from "react";
import { useForm } from "react-hook-form";
import { zodResolver } from "@hookform/resolvers/zod";
import { z } from "zod";
@ -31,6 +31,8 @@ import {
import { Input } from "@/components/ui/input";
import { Textarea } from "@/components/ui/textarea";
const ITEMS_PER_PAGE = 50;
const whitelistFormSchema = insertWhitelistSchema.extend({
ipAddress: z.string()
.min(7, "Inserisci un IP valido")
@ -41,10 +43,17 @@ const whitelistFormSchema = insertWhitelistSchema.extend({
}, "Ogni ottetto deve essere tra 0 e 255"),
});
interface WhitelistResponse {
items: Whitelist[];
total: number;
}
export default function WhitelistPage() {
const { toast } = useToast();
const [isAddDialogOpen, setIsAddDialogOpen] = useState(false);
const [searchQuery, setSearchQuery] = useState("");
const [searchInput, setSearchInput] = useState("");
const [debouncedSearch, setDebouncedSearch] = useState("");
const [currentPage, setCurrentPage] = useState(1);
const form = useForm<z.infer<typeof whitelistFormSchema>>({
resolver: zodResolver(whitelistFormSchema),
@ -56,16 +65,33 @@ export default function WhitelistPage() {
},
});
const { data: whitelist, isLoading } = useQuery<Whitelist[]>({
queryKey: ["/api/whitelist"],
useEffect(() => {
const timer = setTimeout(() => {
setDebouncedSearch(searchInput);
setCurrentPage(1);
}, 300);
return () => clearTimeout(timer);
}, [searchInput]);
const queryParams = useMemo(() => {
const params = new URLSearchParams();
params.set("limit", ITEMS_PER_PAGE.toString());
params.set("offset", ((currentPage - 1) * ITEMS_PER_PAGE).toString());
if (debouncedSearch.trim()) {
params.set("search", debouncedSearch.trim());
}
return params.toString();
}, [currentPage, debouncedSearch]);
const { data, isLoading } = useQuery<WhitelistResponse>({
queryKey: ["/api/whitelist", currentPage, debouncedSearch],
queryFn: () => fetch(`/api/whitelist?${queryParams}`).then(r => r.json()),
refetchInterval: 10000,
});
// Filter whitelist based on search query
const filteredWhitelist = whitelist?.filter((item) =>
item.ipAddress.toLowerCase().includes(searchQuery.toLowerCase()) ||
item.reason?.toLowerCase().includes(searchQuery.toLowerCase()) ||
item.comment?.toLowerCase().includes(searchQuery.toLowerCase())
);
const whitelistItems = data?.items || [];
const totalCount = data?.total || 0;
const totalPages = Math.ceil(totalCount / ITEMS_PER_PAGE);
const addMutation = useMutation({
mutationFn: async (data: z.infer<typeof whitelistFormSchema>) => {
@ -203,9 +229,9 @@ export default function WhitelistPage() {
<div className="relative">
<Search className="absolute left-3 top-1/2 -translate-y-1/2 h-4 w-4 text-muted-foreground" />
<Input
placeholder="Cerca per IP, motivo o note..."
value={searchQuery}
onChange={(e) => setSearchQuery(e.target.value)}
placeholder="Cerca per IP, motivo, note o sorgente..."
value={searchInput}
onChange={(e) => setSearchInput(e.target.value)}
className="pl-9"
data-testid="input-search-whitelist"
/>
@ -215,9 +241,36 @@ export default function WhitelistPage() {
<Card data-testid="card-whitelist">
<CardHeader>
<CardTitle className="flex items-center gap-2">
<Shield className="h-5 w-5" />
IP Protetti ({filteredWhitelist?.length || 0}{searchQuery && whitelist ? ` di ${whitelist.length}` : ''})
<CardTitle className="flex items-center justify-between gap-2 flex-wrap">
<div className="flex items-center gap-2">
<Shield className="h-5 w-5" />
IP Protetti ({totalCount})
</div>
{totalPages > 1 && (
<div className="flex items-center gap-2 text-sm font-normal">
<Button
variant="outline"
size="icon"
onClick={() => setCurrentPage(p => Math.max(1, p - 1))}
disabled={currentPage === 1}
data-testid="button-prev-page"
>
<ChevronLeft className="h-4 w-4" />
</Button>
<span data-testid="text-pagination">
Pagina {currentPage} di {totalPages}
</span>
<Button
variant="outline"
size="icon"
onClick={() => setCurrentPage(p => Math.min(totalPages, p + 1))}
disabled={currentPage === totalPages}
data-testid="button-next-page"
>
<ChevronRight className="h-4 w-4" />
</Button>
</div>
)}
</CardTitle>
</CardHeader>
<CardContent>
@ -225,9 +278,9 @@ export default function WhitelistPage() {
<div className="text-center py-8 text-muted-foreground" data-testid="text-loading">
Caricamento...
</div>
) : filteredWhitelist && filteredWhitelist.length > 0 ? (
) : whitelistItems.length > 0 ? (
<div className="space-y-3">
{filteredWhitelist.map((item) => (
{whitelistItems.map((item) => (
<div
key={item.id}
className="p-4 rounded-lg border hover-elevate"
@ -272,12 +325,45 @@ export default function WhitelistPage() {
</div>
</div>
))}
{/* Bottom pagination */}
{totalPages > 1 && (
<div className="flex items-center justify-center gap-4 mt-6 pt-4 border-t">
<Button
variant="outline"
size="sm"
onClick={() => setCurrentPage(p => Math.max(1, p - 1))}
disabled={currentPage === 1}
data-testid="button-prev-page-bottom"
>
<ChevronLeft className="h-4 w-4 mr-1" />
Precedente
</Button>
<span className="text-sm text-muted-foreground" data-testid="text-pagination-bottom">
Pagina {currentPage} di {totalPages} ({totalCount} totali)
</span>
<Button
variant="outline"
size="sm"
onClick={() => setCurrentPage(p => Math.min(totalPages, p + 1))}
disabled={currentPage === totalPages}
data-testid="button-next-page-bottom"
>
Successiva
<ChevronRight className="h-4 w-4 ml-1" />
</Button>
</div>
)}
</div>
) : (
<div className="text-center py-12 text-muted-foreground" data-testid="text-empty">
<Shield className="h-12 w-12 mx-auto mb-4 opacity-50" />
<p className="font-medium">Nessun IP in whitelist</p>
<p className="text-sm mt-2">Aggiungi indirizzi IP fidati per proteggerli dal blocco automatico</p>
{debouncedSearch ? (
<p className="text-sm mt-2">Prova con un altro termine di ricerca</p>
) : (
<p className="text-sm mt-2">Aggiungi indirizzi IP fidati per proteggerli dal blocco automatico</p>
)}
</div>
)}
</CardContent>

View File

@ -2,7 +2,7 @@
-- PostgreSQL database dump
--
\restrict Pm7Se0tW1nWw6qc6EX1uRmohgf9e12QrVE3tMjJCKfmvUUPM4d23uZQYNGwgWyr
\restrict QQPZgpukcxzRMKOdS5xNsXDiphiHLW5uAuhQxN7luRJ2u8BkVkDOz1h9Un2BrJ0
-- Dumped from database version 16.11 (df20cf9)
-- Dumped by pg_dump version 16.10
@ -387,5 +387,5 @@ ALTER TABLE ONLY public.public_blacklist_ips
-- PostgreSQL database dump complete
--
\unrestrict Pm7Se0tW1nWw6qc6EX1uRmohgf9e12QrVE3tMjJCKfmvUUPM4d23uZQYNGwgWyr
\unrestrict QQPZgpukcxzRMKOdS5xNsXDiphiHLW5uAuhQxN7luRJ2u8BkVkDOz1h9Un2BrJ0

View File

@ -1,34 +1,39 @@
#!/bin/bash
# =========================================================
# CHECK BACKEND - Verifica e riavvia backend Python se necessario
# Usa systemctl per gestire il servizio (con virtual environment)
# Nota: questo script può girare come root o come user ids
# =========================================================
PROCESS_NAME="python3.11 python_ml/main.py"
PID_FILE="/var/log/ids/backend.pid"
LOG_FILE="/var/log/ids/backend.log"
WORK_DIR="/opt/ids"
mkdir -p /var/log/ids
# Check if backend is running
if pgrep -f "$PROCESS_NAME" > /dev/null; then
# Backend running, update PID
pgrep -f "$PROCESS_NAME" > "$PID_FILE"
# Check if systemd service is active
if systemctl is-active --quiet ids-ml-backend; then
exit 0
else
echo "[$(date)] Backend Python NON attivo, riavvio..." >> "$LOG_FILE"
# Kill any orphaned Python processes
pkill -f "python_ml/main.py" 2>/dev/null
# Wait a moment
sleep 2
# Start backend
cd "$WORK_DIR/python_ml"
nohup /usr/bin/python3.11 main.py >> "$LOG_FILE" 2>&1 &
NEW_PID=$!
echo $NEW_PID > "$PID_FILE"
echo "[$(date)] Backend riavviato con PID: $NEW_PID" >> "$LOG_FILE"
fi
# Verifica anche se il processo Python è attivo (fallback)
if pgrep -f "python.*main.py" > /dev/null; then
exit 0
fi
echo "[$(date)] Backend Python NON attivo, riavvio..." >> "$LOG_FILE"
# Prova prima con systemctl (funziona se eseguito come root o con sudo configurato)
if [ "$(id -u)" -eq 0 ]; then
systemctl restart ids-ml-backend
else
# Se non siamo root, prova con sudo (richiede sudoers configurato)
sudo systemctl restart ids-ml-backend 2>/dev/null
fi
# Wait for startup
sleep 5
if systemctl is-active --quiet ids-ml-backend || pgrep -f "python.*main.py" > /dev/null; then
echo "[$(date)] Backend riavviato con successo" >> "$LOG_FILE"
else
echo "[$(date)] ERRORE: Backend non si è avviato. Controlla: journalctl -u ids-ml-backend" >> "$LOG_FILE"
fi

View File

@ -1,41 +1,23 @@
#!/bin/bash
# =========================================================
# CHECK FRONTEND - Verifica e riavvia frontend Node.js se necessario
# CHECK FRONTEND - Verifica se backend Node.js e' attivo
# =========================================================
PROCESS_NAME="npm run dev"
PID_FILE="/var/log/ids/frontend.pid"
LOG_FILE="/var/log/ids/frontend.log"
WORK_DIR="/opt/ids"
LOG_FILE="/var/log/ids/backend.log"
mkdir -p /var/log/ids
# Check if frontend is running
if pgrep -f "vite" > /dev/null; then
# Frontend running, update PID
pgrep -f "vite" > "$PID_FILE"
if systemctl is-active --quiet ids-backend.service 2>/dev/null; then
exit 0
else
echo "[$(date)] Frontend Node NON attivo, riavvio..." >> "$LOG_FILE"
echo "[$(date)] Backend Node.js NON attivo" >> "$LOG_FILE"
systemctl start ids-backend.service 2>> "$LOG_FILE" || true
# Kill any orphaned Node processes
pkill -f "vite" 2>/dev/null
pkill -f "npm run dev" 2>/dev/null
sleep 3
# Wait a moment
sleep 2
# Start frontend with environment variables from .env
cd "$WORK_DIR"
if [ -f "$WORK_DIR/.env" ]; then
# Load .env and start npm with those variables
nohup env $(cat "$WORK_DIR/.env" | grep -v '^#' | xargs) npm run dev >> "$LOG_FILE" 2>&1 &
if systemctl is-active --quiet ids-backend.service 2>/dev/null; then
echo "[$(date)] Backend riavviato con successo via systemd" >> "$LOG_FILE"
else
# Fallback: start without .env (will use system env vars)
nohup npm run dev >> "$LOG_FILE" 2>&1 &
echo "[$(date)] ERRORE: Backend non si e' avviato - verificare con: journalctl -u ids-backend -n 20" >> "$LOG_FILE"
fi
NEW_PID=$!
echo $NEW_PID > "$PID_FILE"
echo "[$(date)] Frontend riavviato con PID: $NEW_PID" >> "$LOG_FILE"
fi

View File

@ -18,43 +18,49 @@ SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
echo ""
echo "📋 Installing systemd service files..."
echo "Installing systemd service files..."
# Copy service files
cp "$PROJECT_ROOT/deployment/systemd/ids-backend.service" /etc/systemd/system/
cp "$PROJECT_ROOT/deployment/systemd/ids-ml-backend.service" /etc/systemd/system/
cp "$PROJECT_ROOT/deployment/systemd/ids-syslog-parser.service" /etc/systemd/system/
cp "$PROJECT_ROOT/deployment/systemd/ids-auto-block.service" /etc/systemd/system/
# Ensure correct permissions
chmod 644 /etc/systemd/system/ids-backend.service
chmod 644 /etc/systemd/system/ids-ml-backend.service
chmod 644 /etc/systemd/system/ids-syslog-parser.service
chmod 644 /etc/systemd/system/ids-auto-block.service
echo "Service files copied to /etc/systemd/system/"
echo ""
echo "🔄 Reloading systemd daemon..."
echo "Reloading systemd daemon..."
systemctl daemon-reload
echo ""
echo "🔧 Enabling services to start on boot..."
echo "Enabling services to start on boot..."
systemctl enable ids-backend.service
systemctl enable ids-ml-backend.service
systemctl enable ids-syslog-parser.service
echo ""
echo "========================================="
echo "Installation Complete!"
echo "========================================="
echo ""
echo "Next steps:"
echo ""
echo "1. Start the services:"
echo " sudo systemctl start ids-backend"
echo " sudo systemctl start ids-ml-backend"
echo " sudo systemctl start ids-syslog-parser"
echo ""
echo "2. Check status:"
echo " sudo systemctl status ids-ml-backend"
echo " sudo systemctl status ids-syslog-parser"
echo " sudo systemctl status ids-backend ids-ml-backend ids-syslog-parser"
echo ""
echo "3. View logs:"
echo " tail -f /var/log/ids/backend.log"
echo " tail -f /var/log/ids/ml_backend.log"
echo " tail -f /var/log/ids/syslog_parser.log"
echo ""

View File

@ -3,30 +3,20 @@
-- Microsoft Azure IP ranges (whitelist - cloud provider)
INSERT INTO public_lists (name, url, type, enabled, fetch_interval_minutes)
VALUES (
'Microsoft Azure',
'https://raw.githubusercontent.com/femueller/cloud-ip-ranges/master/microsoft-azure-ip-ranges.json',
'whitelist',
true,
60
) ON CONFLICT (name) DO UPDATE SET
url = EXCLUDED.url,
enabled = EXCLUDED.enabled;
SELECT 'Microsoft Azure',
'https://raw.githubusercontent.com/femueller/cloud-ip-ranges/master/microsoft-azure-ip-ranges.json',
'whitelist', true, 60
WHERE NOT EXISTS (SELECT 1 FROM public_lists WHERE name = 'Microsoft Azure');
-- Meta/Facebook IP ranges (whitelist - major service provider)
INSERT INTO public_lists (name, url, type, enabled, fetch_interval_minutes)
VALUES (
'Meta (Facebook)',
'https://raw.githubusercontent.com/parseword/util-misc/master/block-facebook/facebook-ip-ranges.txt',
'whitelist',
true,
60
) ON CONFLICT (name) DO UPDATE SET
url = EXCLUDED.url,
enabled = EXCLUDED.enabled;
SELECT 'Meta (Facebook)',
'https://raw.githubusercontent.com/parseword/util-misc/master/block-facebook/facebook-ip-ranges.txt',
'whitelist', true, 60
WHERE NOT EXISTS (SELECT 1 FROM public_lists WHERE name = 'Meta (Facebook)');
-- Update schema version
UPDATE schema_version SET version = 9, updated_at = NOW();
UPDATE schema_version SET version = 9;
-- Verify insertion
SELECT id, name, type, enabled, url FROM public_lists WHERE name IN ('Microsoft Azure', 'Meta (Facebook)');

View File

@ -1,17 +1,20 @@
#!/bin/bash
# =========================================================
# RESTART ALL - Riavvio completo sistema IDS
# Usa systemctl per ML Backend, processo diretto per frontend
# =========================================================
LOG_FILE="/var/log/ids/cron.log"
echo "$(date): === RESTART SETTIMANALE SISTEMA IDS ===" >> "$LOG_FILE"
# Stop all services
# Stop ML Backend via systemctl
echo "$(date): Arresto servizi..." >> "$LOG_FILE"
pkill -f "python_ml/main.py"
pkill -f "vite"
pkill -f "npm run dev"
systemctl stop ids-ml-backend 2>/dev/null
# Stop frontend processes
pkill -f "vite" 2>/dev/null
pkill -f "npm run dev" 2>/dev/null
sleep 5
@ -20,10 +23,26 @@ echo "$(date): Pulizia file temporanei..." >> "$LOG_FILE"
rm -f /var/log/ids/*.pid
find /tmp -name "ids_*" -mtime +7 -delete 2>/dev/null
# Restart services
# Restart ML Backend via systemctl
echo "$(date): Riavvio servizi..." >> "$LOG_FILE"
/opt/ids/deployment/check_backend.sh >> "$LOG_FILE" 2>&1
systemctl start ids-ml-backend
sleep 3
# Restart frontend via check script
/opt/ids/deployment/check_frontend.sh >> "$LOG_FILE" 2>&1
# Verify ML Backend
if systemctl is-active --quiet ids-ml-backend; then
echo "$(date): ML Backend avviato con successo" >> "$LOG_FILE"
else
echo "$(date): ERRORE: ML Backend non si è avviato" >> "$LOG_FILE"
fi
# Verify Frontend
if pgrep -f "vite" > /dev/null; then
echo "$(date): Frontend avviato con successo" >> "$LOG_FILE"
else
echo "$(date): ERRORE: Frontend non si è avviato" >> "$LOG_FILE"
fi
echo "$(date): Restart completato!" >> "$LOG_FILE"

View File

@ -1,58 +1,56 @@
#!/bin/bash
#
# Restart IDS Frontend (Node.js/Express/Vite)
# Utility per restart manuale del server frontend
# Restart IDS Frontend (Node.js/Express)
# Utility per restart manuale del server frontend via systemd
#
set -e
echo "🔄 Restart Frontend Node.js..."
echo "Restart Backend Node.js via systemd..."
# Kill AGGRESSIVO di tutti i processi Node/Vite
echo "⏸️ Stopping all Node/Vite processes..."
pkill -9 -f "node.*tsx" 2>/dev/null || true
pkill -9 -f "vite" 2>/dev/null || true
pkill -9 -f "npm run dev" 2>/dev/null || true
# Stop servizio
echo "Stopping ids-backend..."
sudo systemctl stop ids-backend.service 2>/dev/null || true
sleep 2
# Kill processo sulla porta 5000 (se esiste)
echo "🔍 Liberando porta 5000..."
# Kill eventuali processi orfani sulla porta 5000
echo "Liberando porta 5000..."
lsof -ti:5000 | xargs kill -9 2>/dev/null || true
sleep 1
# Verifica porta LIBERA
# Verifica porta libera
if lsof -Pi :5000 -sTCP:LISTEN -t >/dev/null 2>&1; then
echo "ERRORE: Porta 5000 ancora occupata!"
echo "Processi sulla porta:"
lsof -i:5000
exit 1
fi
echo "Porta 5000 libera"
# Restart usando check_frontend.sh
echo "🚀 Starting frontend..."
/opt/ids/deployment/check_frontend.sh
# Start servizio
echo "Starting ids-backend..."
sudo systemctl start ids-backend.service
# Attendi avvio completo
sleep 5
# Verifica avvio
if pgrep -f "vite" > /dev/null; then
PID=$(pgrep -f "vite")
echo "✅ Frontend avviato con PID: $PID"
echo "📡 Server disponibile su: http://localhost:5000"
if systemctl is-active --quiet ids-backend.service; then
echo "Backend avviato con successo"
echo "Server disponibile su: http://localhost:5000"
# Test rapido
sleep 2
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:5000/ 2>/dev/null || echo "000")
if [ "$HTTP_CODE" = "200" ]; then
echo "HTTP test OK (200)"
else
echo "⚠️ HTTP test: $HTTP_CODE"
echo "HTTP test: $HTTP_CODE (potrebbe essere in fase di avvio)"
fi
else
echo "❌ Errore: Frontend non avviato!"
echo "📋 Controlla log: tail -f /var/log/ids/frontend.log"
echo "ERRORE: Backend non avviato!"
echo "Controlla log: journalctl -u ids-backend -n 20"
sudo journalctl -u ids-backend -n 20 --no-pager
exit 1
fi

View File

@ -1,8 +1,7 @@
[Unit]
Description=IDS Auto-Blocking Service - Detect and Block Malicious IPs
Documentation=https://github.com/yourusername/ids
After=network.target ids-ml-backend.service postgresql-16.service
Requires=ids-ml-backend.service
After=network.target postgresql-16.service
Wants=ids-ml-backend.service
[Service]
Type=oneshot
@ -23,8 +22,8 @@ SyslogIdentifier=ids-auto-block
NoNewPrivileges=true
PrivateTmp=true
# Timeout: max 3 minuti per detection+blocking
TimeoutStartSec=180
# Timeout: max 8 minuti per detection+blocking
TimeoutStartSec=480
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,32 @@
[Unit]
Description=IDS Node.js Backend (Express API + Frontend)
After=network.target postgresql-16.service
Wants=postgresql-16.service
[Service]
Type=simple
User=ids
Group=ids
WorkingDirectory=/opt/ids
EnvironmentFile=/opt/ids/.env
Environment=NODE_ENV=production
Environment=PORT=5000
Environment=PATH=/usr/local/bin:/usr/bin:/bin
ExecStartPre=/bin/bash -c 'test -f /opt/ids/dist/index.js || (echo "ERRORE: dist/index.js non trovato - eseguire npm run build" && exit 1)'
ExecStart=/usr/bin/env node dist/index.js
Restart=always
RestartSec=5
StartLimitInterval=300
StartLimitBurst=10
LimitNOFILE=65536
MemoryMax=1G
StandardOutput=append:/var/log/ids/backend.log
StandardError=append:/var/log/ids/backend.log
SyslogIdentifier=ids-backend
[Install]
WantedBy=multi-user.target

622
generate_iso27001_doc.py Normal file
View File

@ -0,0 +1,622 @@
#!/usr/bin/env python3
"""Genera documento Word IDS - Conformità ISO 27001"""
from docx import Document
from docx.shared import Inches, Pt, Cm, RGBColor
from docx.enum.text import WD_ALIGN_PARAGRAPH
from docx.enum.table import WD_TABLE_ALIGNMENT
from datetime import datetime
doc = Document()
style = doc.styles['Normal']
font = style.font
font.name = 'Calibri'
font.size = Pt(11)
sections = doc.sections
for section in sections:
section.top_margin = Cm(2.5)
section.bottom_margin = Cm(2.5)
section.left_margin = Cm(2.5)
section.right_margin = Cm(2.5)
# --- COVER PAGE ---
for _ in range(6):
doc.add_paragraph("")
title = doc.add_paragraph()
title.alignment = WD_ALIGN_PARAGRAPH.CENTER
run = title.add_run("INTRUSION DETECTION SYSTEM (IDS)")
run.bold = True
run.font.size = Pt(28)
run.font.color.rgb = RGBColor(0, 51, 102)
subtitle = doc.add_paragraph()
subtitle.alignment = WD_ALIGN_PARAGRAPH.CENTER
run = subtitle.add_run("Documentazione Funzionale e Conformità ISO/IEC 27001:2022")
run.font.size = Pt(16)
run.font.color.rgb = RGBColor(51, 51, 51)
doc.add_paragraph("")
info = doc.add_paragraph()
info.alignment = WD_ALIGN_PARAGRAPH.CENTER
run = info.add_run("Sistema di Rilevamento Intrusioni per Router MikroTik\ncon Machine Learning e Blocco Automatico")
run.font.size = Pt(12)
run.font.color.rgb = RGBColor(100, 100, 100)
doc.add_paragraph("")
doc.add_paragraph("")
meta = doc.add_paragraph()
meta.alignment = WD_ALIGN_PARAGRAPH.CENTER
run = meta.add_run(f"Versione: 2.0.0\nData: {datetime.now().strftime('%d/%m/%Y')}\nClassificazione: Riservato")
run.font.size = Pt(11)
run.font.color.rgb = RGBColor(100, 100, 100)
doc.add_page_break()
# --- TABLE OF CONTENTS ---
doc.add_heading('Indice', level=1)
toc_items = [
"1. Introduzione e Scopo del Documento",
"2. Panoramica del Sistema IDS",
"3. Architettura del Sistema",
"4. Funzionalità del Sistema",
" 4.1 Raccolta e Analisi Log (Syslog)",
" 4.2 Rilevamento Anomalie con Machine Learning",
" 4.3 Blocco Automatico degli IP Malevoli",
" 4.4 Gestione Whitelist e Blacklist",
" 4.5 Liste Pubbliche e Threat Intelligence",
" 4.6 Dashboard e Monitoraggio in Tempo Reale",
" 4.7 Geolocalizzazione IP",
" 4.8 Gestione Router MikroTik",
" 4.9 Pulizia Automatica dei Dati",
" 4.10 Monitoraggio Servizi",
"5. Mappatura Controlli ISO/IEC 27001:2022",
"6. Controlli Annex A Coperti",
"7. Politiche di Sicurezza Implementate",
"8. Gestione degli Incidenti",
"9. Continuità Operativa e Disponibilità",
"10. Audit e Tracciabilità",
"11. Conclusioni",
]
for item in toc_items:
p = doc.add_paragraph(item)
p.paragraph_format.space_after = Pt(2)
doc.add_page_break()
# --- 1. INTRODUZIONE ---
doc.add_heading('1. Introduzione e Scopo del Documento', level=1)
doc.add_paragraph(
"Il presente documento descrive le funzionalità, l'architettura e le misure di sicurezza "
"implementate nel Sistema di Rilevamento Intrusioni (IDS) progettato per ambienti di rete "
"basati su router MikroTik. Lo scopo principale è fornire evidenza documentale della conformità "
"del sistema ai requisiti dello standard ISO/IEC 27001:2022, con particolare riferimento ai "
"controlli dell'Annex A relativi alla sicurezza delle reti, al monitoraggio, alla gestione "
"degli incidenti e alla protezione delle informazioni."
)
doc.add_paragraph(
"Questo documento è destinato a responsabili della sicurezza informatica, auditor interni "
"ed esterni, e al management aziendale coinvolto nel Sistema di Gestione della Sicurezza "
"delle Informazioni (SGSI/ISMS)."
)
# --- 2. PANORAMICA ---
doc.add_heading('2. Panoramica del Sistema IDS', level=1)
doc.add_paragraph(
"L'IDS è un sistema completo di sicurezza di rete che integra raccolta log in tempo reale, "
"analisi basata su Machine Learning, blocco automatico degli IP malevoli e monitoraggio "
"continuo dello stato della rete. Il sistema è progettato per operare in ambienti con "
"10+ router MikroTik e gestire volumi elevati di traffico (186M+ record di log di rete)."
)
doc.add_heading('Caratteristiche Principali', level=2)
features_main = [
"Rilevamento anomalie in tempo reale tramite Isolation Forest e classificatore ensemble",
"Blocco automatico degli IP con risk score >= 80 sui router MikroTik via REST API",
"Integrazione con feed di threat intelligence pubblici (Spamhaus, Talos, AWS, GCP, Microsoft Azure, Meta/Facebook, Cloudflare)",
"Dashboard web interattiva con visualizzazioni in tempo reale",
"Gestione whitelist/blacklist con supporto completo CIDR",
"Geolocalizzazione automatica degli IP rilevati",
"Sistema di pulizia automatica dei dati obsoleti",
"Monitoraggio continuo dei servizi di sistema",
]
for f in features_main:
doc.add_paragraph(f, style='List Bullet')
# --- 3. ARCHITETTURA ---
doc.add_heading('3. Architettura del Sistema', level=1)
doc.add_paragraph(
"Il sistema adotta un'architettura a microservizi composta da tre componenti principali:"
)
arch_table = doc.add_table(rows=5, cols=3)
arch_table.style = 'Light Grid Accent 1'
arch_table.alignment = WD_TABLE_ALIGNMENT.CENTER
headers = ['Componente', 'Tecnologia', 'Funzione']
for i, h in enumerate(headers):
arch_table.rows[0].cells[i].text = h
for paragraph in arch_table.rows[0].cells[i].paragraphs:
for run in paragraph.runs:
run.bold = True
arch_data = [
['Frontend Web', 'React, ShadCN UI, TanStack Query', 'Dashboard di monitoraggio, gestione whitelist/blacklist, visualizzazione rilevamenti'],
['Backend API', 'Node.js, Express, Drizzle ORM', 'API REST, gestione database PostgreSQL, coordinamento servizi'],
['Backend ML', 'Python, FastAPI, scikit-learn, XGBoost', 'Analisi anomalie con Isolation Forest e classificatore ensemble, blocco automatico IP'],
['Database', 'PostgreSQL', 'Persistenza dati: log di rete, rilevamenti, whitelist, blacklist, configurazione router'],
]
for i, row_data in enumerate(arch_data):
for j, cell_text in enumerate(row_data):
arch_table.rows[i+1].cells[j].text = cell_text
doc.add_paragraph("")
doc.add_paragraph(
"La comunicazione tra i componenti avviene tramite API REST protette da autenticazione "
"mediante API Key (header X-API-Key). Il database PostgreSQL è accessibile solo dai "
"backend tramite connessione autenticata."
)
# --- 4. FUNZIONALITÀ ---
doc.add_heading('4. Funzionalità del Sistema', level=1)
# 4.1
doc.add_heading('4.1 Raccolta e Analisi Log (Syslog)', level=2)
doc.add_paragraph(
"Il componente syslog_parser.py riceve i log syslog via protocollo UDP sulla porta 514 "
"da tutti i router MikroTik configurati. I log vengono analizzati, normalizzati e "
"memorizzati nel database PostgreSQL nella tabella network_logs."
)
doc.add_heading('Caratteristiche:', level=3)
syslog_features = [
"Ricezione syslog UDP sulla porta 514 (standard RFC 5424)",
"Parsing automatico dei messaggi syslog con estrazione di IP sorgente, destinazione, porte, protocollo",
"Auto-reconnect e recovery in caso di errori di connessione al database",
"Politica di retention dei dati: 3 giorni di conservazione dei log grezzi",
"Gestione di volumi elevati: oltre 186 milioni di record processati",
]
for f in syslog_features:
doc.add_paragraph(f, style='List Bullet')
# 4.2
doc.add_heading('4.2 Rilevamento Anomalie con Machine Learning', level=2)
doc.add_paragraph(
"Il cuore del sistema è il motore di Machine Learning che utilizza un approccio ibrido "
"per il rilevamento delle anomalie, combinando due algoritmi complementari per ridurre "
"i falsi positivi e migliorare l'accuratezza del rilevamento."
)
doc.add_heading('Algoritmi Utilizzati:', level=3)
ml_features = [
"Extended Isolation Forest (EIF): algoritmo non supervisionato per il rilevamento di anomalie basato sull'isolamento dei punti dati anomali. Analizza 25 feature di rete estratte dai log.",
"Classificatore Ensemble con Voting Pesato: combina XGBoost e altri classificatori per una classificazione più precisa delle anomalie rilevate.",
"Risk Score (0-100): ogni IP riceve un punteggio di rischio su scala 0-100 distribuito su 5 livelli (Normale <40, Basso 40-59, Medio 60-69, Alto 70-84, Critico >=85).",
"Retraining automatico settimanale del modello ML per adattarsi ai nuovi pattern di traffico.",
"Analisi ogni 2 minuti dei log dell'ultima ora per rilevamento near-real-time.",
]
for f in ml_features:
doc.add_paragraph(f, style='List Bullet')
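The five risk levels listed above map directly onto score thresholds. A minimal pure-Python sketch of that bucketing (a hypothetical helper for illustration — the actual classification happens inside the ML backend):

```python
def risk_level(score: float) -> str:
    """Map a 0-100 risk score to the five levels described above:
    Normale <40, Basso 40-59, Medio 60-69, Alto 70-84, Critico >=85."""
    if score >= 85:
        return "Critico"
    if score >= 70:
        return "Alto"
    if score >= 60:
        return "Medio"
    if score >= 40:
        return "Basso"
    return "Normale"
```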
doc.add_heading('Feature di Rete Analizzate (25 feature):', level=3)
doc.add_paragraph(
"Il modello analizza feature quali: frequenza delle connessioni per IP, distribuzione delle porte "
"di destinazione, rapporto pacchetti in/out, diversità dei protocolli, pattern temporali, "
"entropia delle connessioni, velocità di scansione porte, distribuzione geografica delle "
"connessioni, e altre metriche statistiche derivate dal traffico di rete."
)
# 4.3
doc.add_heading('4.3 Blocco Automatico degli IP Malevoli', level=2)
doc.add_paragraph(
"Gli IP identificati come critici (risk score >= 80) vengono automaticamente bloccati "
"su tutti i router MikroTik configurati tramite la REST API di RouterOS."
)
block_features = [
"Blocco automatico ogni 2 minuti per IP con risk score >= 80",
"Comunicazione con router MikroTik tramite REST API (HTTP/HTTPS)",
"Blocco parallelo su tutti i router abilitati contemporaneamente",
"Verifica whitelist prima del blocco (gli IP in whitelist non vengono mai bloccati)",
"Blocco massivo retroattivo: endpoint /block-all-critical per bloccare tutti gli IP critici storici non ancora bloccati",
"Sblocco manuale disponibile dalla dashboard web (pulsante 'Sblocca Router')",
"Auto-sblocco quando un IP viene aggiunto alla whitelist",
"Timeout configurabile per le regole di blocco sul router",
"Tracciamento dello stato di blocco nel database (campo blocked e blocked_at)",
]
for f in block_features:
doc.add_paragraph(f, style='List Bullet')
# 4.4
doc.add_heading('4.4 Gestione Whitelist e Blacklist', level=2)
doc.add_paragraph(
"Il sistema gestisce whitelist e blacklist con supporto completo per singoli IP e "
"range CIDR, utilizzando i tipi nativi INET/CIDR di PostgreSQL per un matching efficiente."
)
wl_features = [
"Whitelist manuale: aggiunta/rimozione IP dalla dashboard web con motivo e note",
"Blacklist automatica: alimentata da feed di threat intelligence pubblici",
"Supporto completo CIDR: matching di range di rete (es. 192.168.0.0/16) con operatori PostgreSQL <<=",
"Logica di priorità: Whitelist manuale > Whitelist pubblica > Blacklist",
"Auto-sblocco dai router quando un IP viene aggiunto alla whitelist",
"Paginazione server-side (50 record/pagina) e ricerca con debounce per performance",
"Campo source per tracciare l'origine di ogni entry (manuale, Spamhaus, AWS, ecc.)",
]
for f in wl_features:
doc.add_paragraph(f, style='List Bullet')
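The CIDR containment matching mentioned above (PostgreSQL's `<<=` operator against INET/CIDR columns) can be sketched in pure Python with the stdlib `ipaddress` module — illustrative only, since the real matching runs server-side in SQL:

```python
import ipaddress

def matches_any(ip: str, cidrs: list[str]) -> bool:
    """True if `ip` falls inside any of the given CIDR ranges
    (pure-Python equivalent of PostgreSQL's `ip <<= cidr`)."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(c, strict=False) for c in cidrs)
```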
# 4.5
doc.add_heading('4.5 Liste Pubbliche e Threat Intelligence', level=2)
doc.add_paragraph(
"Il sistema integra automaticamente feed di threat intelligence da fonti pubbliche riconosciute, "
"sincronizzandoli ogni 10 minuti per mantenere aggiornate le liste di IP noti come malevoli "
"o appartenenti a provider cloud legittimi."
)
lists_table = doc.add_table(rows=8, cols=3)
lists_table.style = 'Light Grid Accent 1'
lists_table.alignment = WD_TABLE_ALIGNMENT.CENTER
list_headers = ['Feed', 'Tipo', 'Descrizione']
for i, h in enumerate(list_headers):
lists_table.rows[0].cells[i].text = h
for paragraph in lists_table.rows[0].cells[i].paragraphs:
for run in paragraph.runs:
run.bold = True
list_data = [
['Spamhaus DROP', 'Blacklist', "Lista di IP/CIDR noti per attività malevole (spam, botnet, C&C)"],
['Talos Intelligence', 'Blacklist', 'Feed di threat intelligence di Cisco Talos'],
['Amazon AWS', 'Whitelist', 'Range IP ufficiali dei servizi cloud AWS'],
['Google Cloud/GCP', 'Whitelist', 'Range IP ufficiali dei servizi Google Cloud'],
['Microsoft Azure', 'Whitelist', 'Range IP ufficiali dei servizi cloud Microsoft Azure'],
['Meta (Facebook)', 'Whitelist', 'Range IP di Meta (Facebook, Instagram, WhatsApp)'],
['Cloudflare', 'Whitelist', 'Range IP della CDN Cloudflare'],
]
for i, row_data in enumerate(list_data):
for j, cell_text in enumerate(row_data):
lists_table.rows[i+1].cells[j].text = cell_text
doc.add_paragraph("")
doc.add_paragraph(
"La merge logic applica una priorità basata sul tipo: gli IP in whitelist manuale hanno "
"sempre la precedenza, seguiti dalla whitelist pubblica e infine dalla blacklist. Questo "
"previene il blocco accidentale di servizi cloud legittimi."
)
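The merge priority described above (manual whitelist > public whitelist > blacklist) can be sketched as a small resolver; the source-label names here are hypothetical, chosen only for the example:

```python
def effective_action(sources: set[str]) -> str:
    """Resolve the action for an IP appearing in several lists,
    applying the priority: manual whitelist > public whitelist > blacklist.
    Source labels ("whitelist_manual", ...) are illustrative names."""
    if "whitelist_manual" in sources or "whitelist_public" in sources:
        return "allow"
    if "blacklist" in sources:
        return "block"
    return "none"
```

An IP published in both a cloud-provider whitelist feed and a blacklist feed is therefore allowed, which is exactly what prevents accidental blocking of legitimate cloud services.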
# 4.6
doc.add_heading('4.6 Dashboard e Monitoraggio in Tempo Reale', level=2)
doc.add_paragraph(
"La dashboard web fornisce una visione completa e in tempo reale dello stato della "
"sicurezza di rete, con aggiornamento automatico ogni 10 secondi."
)
dash_features = [
"Panoramica generale: contatori di router attivi, rilevamenti totali, IP bloccati, IP critici",
"Pagina Rilevamenti: elenco paginato (50/pagina) con ricerca server-side su IP, paese, organizzazione",
"Pagina Whitelist: gestione paginata con ricerca e operazioni CRUD",
"Pagina Analytics: grafici e visualizzazioni del traffico normale vs attacco",
"Filtri avanzati: per tipo di anomalia (DDoS, Port Scan, Brute Force, Botnet), range di risk score",
"Indicatori visivi: badge colorati per livelli di rischio, stato di blocco, flag paese",
"Monitoraggio servizi: stato in tempo reale di ML Backend, Database, Syslog Parser",
"Operazioni dirette: pulsanti per aggiungere a whitelist, sbloccare router, avviare training ML",
]
for f in dash_features:
doc.add_paragraph(f, style='List Bullet')
# 4.7
doc.add_heading('4.7 Geolocalizzazione IP', level=2)
doc.add_paragraph(
"Integrazione con il servizio ip-api.com per arricchire ogni rilevamento con informazioni "
"geografiche e di rete, inclusi paese, città, ISP, numero AS e organizzazione. "
"Il sistema implementa caching intelligente per ridurre le chiamate API e rispettare i "
"rate limit del servizio."
)
# 4.8
doc.add_heading('4.8 Gestione Router MikroTik', level=2)
doc.add_paragraph(
"Il sistema comunica con i router MikroTik tramite la REST API di RouterOS, "
"supportando operazioni parallele su multipli router contemporaneamente."
)
router_features = [
"Configurazione router: IP, porta API, credenziali, abilitazione/disabilitazione",
"Test di connettività automatico",
"Supporto HTTP (porta 80) e HTTPS (porta 443) con gestione certificati SSL/TLS",
"Operazioni parallele su tutti i router (blocco/sblocco simultaneo)",
"Gestione address-list di firewall (aggiunta, rimozione, lettura)",
"Compatibilità con RouterOS 7.x e versioni successive",
]
for f in router_features:
doc.add_paragraph(f, style='List Bullet')
# 4.9
doc.add_heading('4.9 Pulizia Automatica dei Dati', level=2)
doc.add_paragraph(
"Un timer systemd orario esegue lo script cleanup_detections.py che:"
)
cleanup_features = [
"Rimuove rilevamenti più vecchi di 48 ore",
"Sblocca automaticamente gli IP bloccati da più di 2 ore",
"Mantiene la retention dei log di rete a 3 giorni",
"Registra statistiche di pulizia nei log di sistema",
]
for f in cleanup_features:
doc.add_paragraph(f, style='List Bullet')
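The retention windows above (48 h for detections, 2 h auto-unblock, 3 days for raw logs) amount to three time cutoffs; a sketch of how the hourly cleanup might compute them (illustrative — the actual script applies them as SQL DELETE/UPDATE statements):

```python
from datetime import datetime, timedelta

def retention_cutoffs(now: datetime) -> dict[str, datetime]:
    """Cutoffs used by the hourly cleanup: rows older than each
    cutoff are deleted (or, for blocks, unblocked)."""
    return {
        "detections_before": now - timedelta(hours=48),
        "unblock_before": now - timedelta(hours=2),
        "logs_before": now - timedelta(days=3),
    }
```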
# 4.10
doc.add_heading('4.10 Monitoraggio Servizi', level=2)
doc.add_paragraph(
"La dashboard fornisce monitoraggio in tempo reale dello stato dei servizi critici "
"con possibilità di riavvio tramite API protette."
)
service_features = [
"Monitoraggio stato: ML Backend, Database PostgreSQL, Syslog Parser",
"API per gestione servizi (start/stop/restart) protette da API Key",
"Integrazione con systemd per il controllo dei servizi Python",
"Health check endpoint (/health) per verifica rapida dello stato del sistema",
]
for f in service_features:
doc.add_paragraph(f, style='List Bullet')
# --- 5. MAPPATURA ISO 27001 ---
doc.add_heading('5. Mappatura Controlli ISO/IEC 27001:2022', level=1)
doc.add_paragraph(
"La seguente tabella mappa le funzionalità dell'IDS ai controlli dell'Annex A dello "
"standard ISO/IEC 27001:2022, evidenziando come il sistema contribuisce alla conformità."
)
iso_table = doc.add_table(rows=1, cols=4)
iso_table.style = 'Light Grid Accent 1'
iso_table.alignment = WD_TABLE_ALIGNMENT.CENTER
iso_headers = ['Controllo', 'Titolo', 'Funzionalità IDS', 'Copertura']
for i, h in enumerate(iso_headers):
iso_table.rows[0].cells[i].text = h
for paragraph in iso_table.rows[0].cells[i].paragraphs:
for run in paragraph.runs:
run.bold = True
iso_mappings = [
['A.5.1', 'Politiche per la sicurezza delle informazioni',
'Politiche di blocco automatico, soglie di rischio configurabili, logica di priorità whitelist/blacklist',
'Parziale'],
['A.5.7', 'Threat intelligence',
'Integrazione automatica con 7+ feed di threat intelligence (Spamhaus, Talos, AWS, GCP, Azure, Meta, Cloudflare)',
'Completa'],
['A.5.24', 'Pianificazione e preparazione della gestione degli incidenti',
'Rilevamento automatico anomalie, classificazione per livello di rischio, workflow di risposta automatizzato',
'Completa'],
['A.5.25', 'Valutazione e decisione sugli eventi di sicurezza',
'Risk scoring ML (0-100), classificazione automatica in 5 livelli, soglie configurabili',
'Completa'],
['A.5.26', 'Risposta agli incidenti di sicurezza',
'Blocco automatico IP critici sui router, sblocco manuale, gestione whitelist',
'Completa'],
['A.5.28', 'Raccolta delle evidenze',
'Log completi di tutte le operazioni, timestamp di rilevamento e blocco, geolocalizzazione',
'Completa'],
['A.8.1', 'Dispositivi endpoint utente',
'Protezione della rete tramite blocco IP malevoli a livello router (perimetrale)',
'Parziale'],
['A.8.9', 'Gestione della configurazione',
'Gestione centralizzata della configurazione dei router, migrazioni database versionate',
'Parziale'],
['A.8.15', 'Logging',
'Raccolta syslog centralizzata da tutti i router, retention 3 giorni, 186M+ record',
'Completa'],
['A.8.16', 'Attività di monitoraggio',
'Dashboard real-time, auto-refresh 10s, monitoraggio servizi, alert visivi per livelli di rischio',
'Completa'],
['A.8.20', 'Sicurezza delle reti',
'Firewall automatico via address-list MikroTik, blocco parallelo su tutti i router',
'Completa'],
['A.8.21', 'Sicurezza dei servizi di rete',
'Protezione contro DDoS, port scanning, brute force, botnet tramite rilevamento ML',
'Completa'],
['A.8.22', 'Segregazione delle reti',
'Supporto multi-router con configurazioni indipendenti, blocco selettivo per router',
'Parziale'],
['A.8.23', 'Filtraggio web',
'Blocco IP malevoli a livello di rete, integrazione blacklist/whitelist',
'Parziale'],
]
for mapping in iso_mappings:
row = iso_table.add_row()
for i, cell_text in enumerate(mapping):
row.cells[i].text = cell_text
# --- 6. CONTROLLI ANNEX A ---
doc.add_heading('6. Controlli Annex A Coperti - Dettaglio', level=1)
doc.add_heading('A.5.7 - Threat Intelligence', level=2)
doc.add_paragraph(
"Il sistema implementa un processo automatizzato di raccolta e integrazione di threat intelligence "
"da fonti pubbliche riconosciute a livello internazionale. La sincronizzazione avviene ogni 10 minuti "
"tramite il servizio ids-list-fetcher, che scarica, analizza e aggiorna le liste nel database. "
"La merge logic applica una gerarchia di priorità per evitare conflitti tra liste diverse."
)
doc.add_heading('A.5.24/25/26 - Gestione Incidenti', level=2)
doc.add_paragraph(
"Il ciclo di vita degli incidenti di sicurezza è gestito automaticamente:\n"
"1. RILEVAMENTO: Il modello ML analizza i log ogni 2 minuti e identifica anomalie\n"
"2. CLASSIFICAZIONE: Ogni anomalia riceve un risk score e viene categorizzata (DDoS, Port Scan, Brute Force, Botnet)\n"
"3. RISPOSTA: Gli IP con score >= 80 vengono bloccati automaticamente sui router\n"
"4. DOCUMENTAZIONE: Ogni evento viene registrato con timestamp, geolocalizzazione, score e stato di blocco\n"
"5. RISOLUZIONE: Pulizia automatica dopo 48 ore, sblocco dopo 2 ore, possibilità di whitelist manuale"
)
doc.add_heading('A.8.15/16 - Logging e Monitoraggio', level=2)
doc.add_paragraph(
"Il sistema fornisce capacità complete di logging e monitoraggio:\n"
"- Raccolta centralizzata dei log syslog da tutti i router via UDP:514\n"
"- Conservazione dei log per 3 giorni con retention policy automatica\n"
"- Dashboard di monitoraggio con aggiornamento ogni 10 secondi\n"
"- Monitoraggio dello stato dei servizi (ML Backend, Database, Syslog Parser)\n"
"- Visualizzazioni analitiche del traffico normale vs attacco\n"
"- Paginazione e ricerca server-side per gestire grandi volumi di dati"
)
# --- 7. POLITICHE DI SICUREZZA ---
doc.add_heading('7. Politiche di Sicurezza Implementate', level=1)
doc.add_heading('7.1 Autenticazione e Controllo Accessi', level=2)
sec_features = [
"API Key authentication (header X-API-Key) per tutti gli endpoint del backend ML",
"Credenziali router cifrate nel database",
"Connessioni database autenticate con credenziali separate",
"Supporto HTTPS/TLS per la comunicazione con i router MikroTik",
]
for f in sec_features:
doc.add_paragraph(f, style='List Bullet')
doc.add_heading('7.2 Protezione dei Dati', level=2)
data_features = [
"Database PostgreSQL con accesso autenticato",
"Retention policy automatica: 3 giorni per log, 48 ore per rilevamenti",
"Nessun dato sensibile esposto tramite API pubblica",
"Validazione input con schema Zod per prevenire injection",
"Query parametrizzate per prevenire SQL injection",
]
for f in data_features:
doc.add_paragraph(f, style='List Bullet')
doc.add_heading('7.3 Disponibilità e Resilienza', level=2)
avail_features = [
"Auto-reconnect del parser syslog in caso di disconnessione dal database",
"Timer systemd con restart automatico dei servizi in caso di failure",
"Health check endpoint per monitoraggio esterno",
"Architettura a microservizi: il failure di un componente non blocca gli altri",
]
for f in avail_features:
doc.add_paragraph(f, style='List Bullet')
# --- 8. GESTIONE INCIDENTI ---
doc.add_heading('8. Gestione degli Incidenti', level=1)
doc.add_heading('8.1 Workflow di Risposta Automatica', level=2)
doc.add_paragraph(
"Il sistema implementa un workflow di risposta automatica agli incidenti di sicurezza:"
)
incident_table = doc.add_table(rows=6, cols=3)
incident_table.style = 'Light Grid Accent 1'
incident_table.alignment = WD_TABLE_ALIGNMENT.CENTER
inc_headers = ['Fase', 'Azione', 'Tempistica']
for i, h in enumerate(inc_headers):
incident_table.rows[0].cells[i].text = h
for paragraph in incident_table.rows[0].cells[i].paragraphs:
for run in paragraph.runs:
run.bold = True
inc_data = [
['1. Raccolta', 'Ricezione log syslog da router MikroTik', 'Tempo reale (UDP)'],
['2. Analisi', 'Analisi ML con Isolation Forest + Ensemble', 'Ogni 2 minuti'],
['3. Classificazione', 'Assegnazione risk score e tipo anomalia', 'Automatica'],
['4. Risposta', 'Blocco automatico IP su tutti i router (score >= 80)', 'Immediato dopo analisi'],
['5. Documentazione', 'Registrazione con geolocalizzazione e timestamp', 'Automatica'],
]
for i, row_data in enumerate(inc_data):
for j, cell_text in enumerate(row_data):
incident_table.rows[i+1].cells[j].text = cell_text
doc.add_paragraph("")
doc.add_heading('8.2 Tipi di Anomalie Rilevate', level=2)
anomaly_table = doc.add_table(rows=6, cols=2)
anomaly_table.style = 'Light Grid Accent 1'
anomaly_table.alignment = WD_TABLE_ALIGNMENT.CENTER
anom_headers = ['Tipo', 'Descrizione']
for i, h in enumerate(anom_headers):
anomaly_table.rows[0].cells[i].text = h
for paragraph in anomaly_table.rows[0].cells[i].paragraphs:
for run in paragraph.runs:
run.bold = True
anom_data = [
['DDoS Attack', 'Attacco Distributed Denial of Service - elevato volume di traffico da singola sorgente'],
['Port Scanning', "Scansione sistematica delle porte per identificare servizi vulnerabili"],
['Brute Force', 'Tentativi ripetuti di autenticazione con credenziali diverse'],
['Botnet Activity', 'Traffico riconducibile a reti di dispositivi compromessi (C&C)'],
['Suspicious Activity', 'Comportamento anomalo non classificabile nelle categorie precedenti'],
]
for i, row_data in enumerate(anom_data):
for j, cell_text in enumerate(row_data):
anomaly_table.rows[i+1].cells[j].text = cell_text
# --- 9. CONTINUITÀ ---
doc.add_heading('9. Continuità Operativa e Disponibilità', level=1)
doc.add_paragraph(
"Il sistema è progettato per operare in modo continuo e autonomo, minimizzando "
"l'intervento manuale e garantendo la disponibilità del servizio di monitoraggio."
)
continuity_features = [
"Servizi gestiti da systemd con restart automatico in caso di failure (Restart=always)",
"Timer systemd per operazioni periodiche (analisi ML ogni 2 min, pulizia oraria, sync liste ogni 10 min)",
"Auto-reconnect del parser syslog in caso di perdita connessione al database",
"Architettura a microservizi: il failure del backend ML non blocca la raccolta log",
"Health check endpoint per integrazione con sistemi di monitoraggio esterni (Nagios, Zabbix, ecc.)",
"Database PostgreSQL con supporto a backup e recovery point-in-time",
"Migrazioni database versionate per aggiornamenti sicuri dello schema",
]
for f in continuity_features:
doc.add_paragraph(f, style='List Bullet')
# --- 10. AUDIT ---
doc.add_heading('10. Audit e Tracciabilità', level=1)
doc.add_paragraph(
"Il sistema mantiene traccia completa di tutte le operazioni per supportare attività "
"di audit e conformità:"
)
audit_features = [
"Log di tutti i rilevamenti con timestamp, risk score, tipo anomalia, geolocalizzazione",
"Registrazione delle operazioni di blocco/sblocco con timestamp (blocked_at)",
"Storico delle entry di whitelist con data di creazione e motivo",
"Log delle sincronizzazioni delle liste pubbliche con conteggio IP aggiunti/rimossi",
"Storico dei training del modello ML con parametri e risultati",
"Log di sistema via journald per tutti i servizi (accessibili con journalctl)",
"Database delle migrazioni con versioning per tracciare le modifiche allo schema",
]
for f in audit_features:
doc.add_paragraph(f, style='List Bullet')
# --- 11. CONCLUSIONI ---
doc.add_heading('11. Conclusioni', level=1)
doc.add_paragraph(
"Il Sistema di Rilevamento Intrusioni (IDS) implementa un insieme completo di controlli "
"di sicurezza che contribuiscono significativamente alla conformità con lo standard "
"ISO/IEC 27001:2022. In particolare, il sistema copre in modo completo i controlli "
"relativi a:"
)
conclusions = [
"Threat intelligence (A.5.7): integrazione automatica con 7+ feed pubblici",
"Gestione degli incidenti (A.5.24-A.5.28): workflow completo dalla rilevazione alla risoluzione",
"Logging e monitoraggio (A.8.15-A.8.16): raccolta centralizzata e dashboard real-time",
"Sicurezza delle reti (A.8.20-A.8.21): protezione attiva tramite blocco automatico",
]
for c in conclusions:
doc.add_paragraph(c, style='List Bullet')
doc.add_paragraph("")
doc.add_paragraph(
"Il sistema richiede integrazione con ulteriori controlli organizzativi, procedurali e "
"tecnici per una conformità completa allo standard ISO 27001, tra cui: politiche formali "
"documentate, formazione del personale, gestione degli asset, controllo degli accessi "
"fisici e logici, e processi di audit interno periodico."
)
doc.add_paragraph("")
p = doc.add_paragraph()
p.alignment = WD_ALIGN_PARAGRAPH.CENTER
run = p.add_run("--- Fine del Documento ---")
run.font.color.rgb = RGBColor(128, 128, 128)
run.font.size = Pt(10)
# Save
output_path = "IDS_Conformita_ISO27001.docx"
doc.save(output_path)
print(f"Documento generato: {output_path}")

View File

@@ -5,4 +5,5 @@ description = "Add your description here"
requires-python = ">=3.11"
dependencies = [
"httpx>=0.28.1",
"python-docx>=1.2.0",
]

View File

@@ -3,59 +3,92 @@
IDS Auto-Blocking Script
Rileva e blocca automaticamente IP con risk_score >= 80
Eseguito periodicamente da systemd timer (ogni 5 minuti)
Flusso:
1. Chiama Node.js /api/ml/detect per eseguire detection ML
2. Chiama Node.js /api/ml/block-all-critical per bloccare IP critici sui router
"""
import requests
import sys
from datetime import datetime
NODE_API_URL = "http://localhost:5000"
ML_API_URL = "http://localhost:8000"
def auto_block():
"""Esegue detection e blocking automatico degli IP critici"""
timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
print(f"[{timestamp}] 🔍 Starting auto-block detection...")
print(f"[{timestamp}] Starting auto-block cycle...")
# Step 1: Esegui detection via ML Backend (se disponibile)
try:
# Chiama endpoint ML /detect con auto_block=true
print(f"[{timestamp}] Step 1: Detection ML...")
response = requests.post(
f"{ML_API_URL}/detect",
json={
"max_records": 5000, # Analizza ultimi 5000 log
"hours_back": 1.0, # Ultima ora
"risk_threshold": 80.0, # Solo IP critici (score >= 80)
"auto_block": True # BLOCCA AUTOMATICAMENTE
"max_records": 50000,
"hours_back": 1.0,
"risk_threshold": 75.0,
"auto_block": False
},
timeout=120 # 2 minuti timeout
timeout=120
)
if response.status_code == 200:
data = response.json()
detections = len(data.get("detections", []))
print(f"[{timestamp}] Detection completata: {detections} anomalie rilevate")
else:
print(f"[{timestamp}] Detection API error: HTTP {response.status_code}")
except requests.exceptions.ConnectionError:
print(f"[{timestamp}] ML Backend non raggiungibile, skip detection (blocco IP esistenti continua)")
except requests.exceptions.Timeout:
print(f"[{timestamp}] ML Detection timeout, skip (blocco IP esistenti continua)")
except Exception as e:
print(f"[{timestamp}] Detection error: {e}")
# Step 2: Blocca IP critici (score >= 80) via Node.js
try:
print(f"[{timestamp}] Step 2: Blocco IP critici sui router...")
response = requests.post(
f"{NODE_API_URL}/api/ml/block-all-critical",
json={
"min_score": 80,
"limit": 200,
"list_name": "ddos_blocked"
},
timeout=300
)
if response.status_code == 200:
data = response.json()
blocked = data.get("blocked", 0)
failed = data.get("failed", 0)
skipped = data.get("skipped", 0)
remaining = data.get("remaining", 0)
if blocked > 0:
print(f"✓ Detection completata: {detections} anomalie rilevate, {blocked} IP bloccati")
print(f"[{timestamp}] {blocked} IP bloccati sui router, {failed} falliti, {skipped} gia' bloccati")
else:
print(f"✓ Detection completata: {detections} anomalie rilevate, nessun nuovo IP da bloccare")
print(f"[{timestamp}] Nessun nuovo IP da bloccare ({skipped} gia' bloccati)")
if remaining > 0:
print(f"[{timestamp}] Rimangono {remaining} IP critici da bloccare")
return 0
else:
print(f"✗ API error: HTTP {response.status_code}")
print(f" Response: {response.text}")
print(f"[{timestamp}] Block API error: HTTP {response.status_code} - {response.text[:200]}")
return 1
except requests.exceptions.ConnectionError:
print("✗ ERRORE: ML Backend non raggiungibile su http://localhost:8000")
print(" Verifica che ids-ml-backend.service sia attivo:")
print(" sudo systemctl status ids-ml-backend")
print(f"[{timestamp}] ERRORE: Node.js backend non raggiungibile su {NODE_API_URL}")
return 1
except requests.exceptions.Timeout:
print("✗ ERRORE: Timeout dopo 120 secondi. Detection troppo lenta?")
print(f"[{timestamp}] ERRORE: Timeout blocco IP (300s)")
return 1
except Exception as e:
print(f"✗ ERRORE imprevisto: {type(e).__name__}: {e}")
import traceback
traceback.print_exc()
print(f"[{timestamp}] ERRORE imprevisto: {type(e).__name__}: {e}")
return 1
if __name__ == "__main__":

View File

@@ -111,6 +111,10 @@ class UnblockIPRequest(BaseModel):
ip_address: str
list_name: str = "ddos_blocked"
class BlockAllCriticalRequest(BaseModel):
min_score: float = 80.0
list_name: str = "ddos_blocked"
# API Endpoints
@@ -499,6 +503,110 @@ async def block_ip(request: BlockIPRequest):
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
@app.post("/block-all-critical")
async def block_all_critical(request: BlockAllCriticalRequest):
"""Blocca tutti gli IP critici non ancora bloccati sui router - versione ottimizzata con bulk blocking"""
try:
conn = get_db_connection()
cursor = conn.cursor(cursor_factory=RealDictCursor)
cursor.execute("SELECT * FROM routers WHERE enabled = true")
routers = cursor.fetchall()
if not routers:
cursor.close()
conn.close()
raise HTTPException(status_code=400, detail="Nessun router configurato")
cursor.execute("""
SELECT DISTINCT source_ip, MAX(CAST(risk_score AS FLOAT)) as max_score,
MAX(anomaly_type) as anomaly_type
FROM detections
WHERE CAST(risk_score AS FLOAT) >= %s
AND blocked = false
AND source_ip NOT IN (
SELECT ip_address FROM whitelist WHERE active = true
)
GROUP BY source_ip
ORDER BY max_score DESC
""", (request.min_score,))
unblocked_ips = cursor.fetchall()
if not unblocked_ips:
cursor.close()
conn.close()
return {
"message": "Nessun IP critico da bloccare",
"blocked": 0,
"total_critical": 0,
"skipped_whitelisted": 0
}
ip_data = {row['source_ip']: row for row in unblocked_ips}
ip_list = [row['source_ip'] for row in unblocked_ips]
print(f"[BLOCK-ALL] Avvio blocco massivo: {len(ip_list)} IP con score >= {request.min_score} su {len(routers)} router")
bulk_results = await mikrotik_manager.bulk_block_ips_on_all_routers(
routers=routers,
ip_list=ip_list,
list_name=request.list_name,
comment_prefix=f"IDS bulk-block (score>={request.min_score})",
timeout_duration="1h",
concurrency=10
)
blocked_count = 0
failed_count = 0
results_detail = []
blocked_source_ips = []
for ip in ip_list:
router_results = bulk_results.get(ip, {})
score = ip_data[ip]['max_score']
anomaly = ip_data[ip]['anomaly_type']
if any(router_results.values()):
blocked_count += 1
blocked_source_ips.append(ip)
results_detail.append({"ip": ip, "score": float(score), "status": "blocked"})
else:
failed_count += 1
results_detail.append({"ip": ip, "score": float(score), "status": "failed"})
if blocked_source_ips:
batch_size = 100
for i in range(0, len(blocked_source_ips), batch_size):
batch = blocked_source_ips[i:i+batch_size]
placeholders = ','.join(['%s'] * len(batch))
cursor.execute(f"""
UPDATE detections
SET blocked = true, blocked_at = NOW()
WHERE source_ip IN ({placeholders}) AND blocked = false
""", batch)
conn.commit()
print(f"[BLOCK-ALL] Database aggiornato: {len(blocked_source_ips)} IP marcati come bloccati")
cursor.close()
conn.close()
print(f"[BLOCK-ALL] Completato: {blocked_count} bloccati, {failed_count} falliti su {len(ip_list)} totali")
return {
"message": f"Blocco massivo completato: {blocked_count} IP bloccati, {failed_count} falliti",
"blocked": blocked_count,
"failed": failed_count,
"total_critical": len(unblocked_ips),
"details": results_detail[:100]
}
except HTTPException:
raise
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
@app.post("/unblock-ip")
async def unblock_ip(request: UnblockIPRequest):
"""Sblocca un IP da tutti i router"""

View File

@@ -1,14 +1,14 @@
"""
MikroTik Manager - Gestione router tramite API REST
Più veloce e affidabile di SSH per 10+ router
Porte REST API: 80 (HTTP) o 443 (HTTPS)
"""
import httpx
import asyncio
import ssl
from typing import List, Dict, Optional
from typing import List, Dict, Optional, Set
from datetime import datetime
import hashlib
import base64
@@ -16,39 +16,33 @@ class MikroTikManager:
"""
Gestisce comunicazione con router MikroTik tramite API REST
Supporta operazioni parallele su multipli router
Porte default: 80 (HTTP REST) o 443 (HTTPS REST)
"""
def __init__(self, timeout: int = 10):
def __init__(self, timeout: int = 15):
self.timeout = timeout
self.clients = {} # Cache di client HTTP per router
self.clients = {}
def _get_client(self, router_ip: str, username: str, password: str, port: int = 8728, use_ssl: bool = False) -> httpx.AsyncClient:
def _get_client(self, router_ip: str, username: str, password: str, port: int = 80, use_ssl: bool = False) -> httpx.AsyncClient:
"""Ottiene o crea client HTTP per un router"""
key = f"{router_ip}:{port}:{use_ssl}"
if key not in self.clients:
# API REST MikroTik:
# - Porta 8728: HTTP (default)
# - Porta 8729: HTTPS (SSL)
protocol = "https" if use_ssl or port == 8729 else "http"
protocol = "https" if use_ssl or port == 443 else "http"
auth = base64.b64encode(f"{username}:{password}".encode()).decode()
headers = {
"Authorization": f"Basic {auth}",
"Content-Type": "application/json"
}
# SSL context per MikroTik (supporta protocolli TLS legacy)
ssl_context = None
if protocol == "https":
ssl_context = ssl.create_default_context()
ssl_context.check_hostname = False
ssl_context.verify_mode = ssl.CERT_NONE
# Abilita protocolli TLS legacy per MikroTik (TLS 1.0+)
try:
ssl_context.minimum_version = ssl.TLSVersion.TLSv1
except AttributeError:
# Python < 3.7 fallback
pass
# Abilita cipher suite legacy per compatibilità
ssl_context.set_ciphers('DEFAULT@SECLEVEL=1')
self.clients[key] = httpx.AsyncClient(
@@ -59,20 +53,41 @@
)
return self.clients[key]
async def test_connection(self, router_ip: str, username: str, password: str, port: int = 8728, use_ssl: bool = False) -> bool:
async def test_connection(self, router_ip: str, username: str, password: str, port: int = 80, use_ssl: bool = False) -> bool:
"""Testa connessione a un router"""
try:
# Auto-detect SSL: porta 8729 = SSL
if port == 8729:
if port == 443:
use_ssl = True
client = self._get_client(router_ip, username, password, port, use_ssl)
# Prova a leggere system identity
response = await client.get("/rest/system/identity")
return response.status_code == 200
except Exception as e:
print(f"[ERROR] Connessione a {router_ip}:{port} fallita: {e}")
return False
async def _get_existing_ips_set(
self,
router_ip: str,
username: str,
password: str,
list_name: str,
port: int = 80,
use_ssl: bool = False
) -> Set[str]:
"""Scarica la address-list UNA VOLTA e ritorna un set di IP già presenti"""
try:
if port == 443:
use_ssl = True
client = self._get_client(router_ip, username, password, port, use_ssl)
response = await client.get(f"/rest/ip/firewall/address-list", params={"list": list_name})
if response.status_code == 200:
entries = response.json()
return {entry.get('address', '') for entry in entries if entry.get('list') == list_name}
return set()
except Exception as e:
print(f"[ERROR] Lettura address-list da {router_ip}: {e}")
return set()
async def add_address_list(
self,
router_ip: str,
@ -82,29 +97,32 @@ class MikroTikManager:
list_name: str = "ddos_blocked",
comment: str = "",
timeout_duration: str = "1h",
port: int = 80,
use_ssl: bool = False,
skip_check: bool = False,
existing_ips: Optional[Set[str]] = None
) -> bool:
"""
Add an IP to the router's address-list
timeout_duration: e.g. "1h", "30m", "1d"
skip_check: if True, do not check whether the IP already exists (for bulk operations)
existing_ips: set of IPs already in the list (avoids a GET per IP)
"""
try:
# Auto-detect SSL: port 443 = SSL
if port == 443:
use_ssl = True
client = self._get_client(router_ip, username, password, port, use_ssl)
if not skip_check:
if existing_ips is not None:
if ip_address in existing_ips:
return True
else:
response = await client.get("/rest/ip/firewall/address-list")
if response.status_code == 200:
for entry in response.json():
if entry.get('address') == ip_address and entry.get('list') == list_name:
return True
# Add the new IP
data = {
"list": list_name,
"address": ip_address,
@ -114,11 +132,25 @@ class MikroTikManager:
response = await client.post("/rest/ip/firewall/address-list/add", json=data)
if response.status_code in (200, 201):
print(f"[SUCCESS] IP {ip_address} added to {list_name} on {router_ip}")
return True
elif response.status_code in (400, 409):
resp_text = response.text.lower()
if "already" in resp_text or "exists" in resp_text or "duplicate" in resp_text or "failure: already" in resp_text:
return True
try:
verify_resp = await client.get("/rest/ip/firewall/address-list", params={"address": ip_address})
if verify_resp.status_code == 200:
for entry in verify_resp.json():
if entry.get('address') == ip_address and entry.get('list') == list_name:
return True
except Exception:
pass
print(f"[ERROR] IP {ip_address} su {router_ip}: {response.status_code} - {response.text}")
return False
else:
print(f"[ERROR] Adding IP {ip_address} on {router_ip}: {response.status_code} - {response.text}")
return False
except Exception as e:
@ -132,17 +164,15 @@ class MikroTikManager:
password: str,
ip_address: str,
list_name: str = "ddos_blocked",
port: int = 80,
use_ssl: bool = False
) -> bool:
"""Rimuove IP dalla address-list del router"""
try:
# Auto-detect SSL: port 443 = SSL
if port == 443:
use_ssl = True
client = self._get_client(router_ip, username, password, port, use_ssl)
# Find the entry ID
response = await client.get("/rest/ip/firewall/address-list")
if response.status_code != 200:
return False
@ -151,7 +181,6 @@ class MikroTikManager:
for entry in entries:
if entry.get('address') == ip_address and entry.get('list') == list_name:
entry_id = entry.get('.id')
# Remove the entry
response = await client.delete(f"/rest/ip/firewall/address-list/{entry_id}")
if response.status_code == 200:
print(f"[SUCCESS] IP {ip_address} rimosso da {list_name} su {router_ip}")
@ -170,13 +199,12 @@ class MikroTikManager:
username: str,
password: str,
list_name: Optional[str] = None,
port: int = 80,
use_ssl: bool = False
) -> List[Dict]:
"""Ottiene address-list da router"""
try:
# Auto-detect SSL: port 443 = SSL
if port == 443:
use_ssl = True
client = self._get_client(router_ip, username, password, port, use_ssl)
response = await client.get("/rest/ip/firewall/address-list")
@ -203,7 +231,6 @@ class MikroTikManager:
) -> Dict[str, bool]:
"""
Block an IP on all routers in parallel
routers: list of dicts with {ip_address, username, password, api_port}
Returns: dict of {router_ip: success_bool}
"""
tasks = []
@ -221,20 +248,125 @@ class MikroTikManager:
list_name=list_name,
comment=comment,
timeout_duration=timeout_duration,
port=router.get('api_port', 80)
)
tasks.append(task)
router_ips.append(router['ip_address'])
# Run in parallel
results = await asyncio.gather(*tasks, return_exceptions=True)
# Combine results
return {
router_ip: result if not isinstance(result, Exception) else False
for router_ip, result in zip(router_ips, results)
}
async def bulk_block_ips_on_all_routers(
self,
routers: List[Dict],
ip_list: List[str],
list_name: str = "ddos_blocked",
comment_prefix: str = "IDS bulk-block",
timeout_duration: str = "1h",
concurrency: int = 10,
progress_callback=None
) -> Dict[str, Dict[str, bool]]:
"""
Optimized bulk block: download the address-list ONCE per router,
then add only IPs not already present, with bounded concurrency.
Returns: {ip: {router_ip: success_bool}}
"""
enabled_routers = [r for r in routers if r.get('enabled', True)]
if not enabled_routers:
return {}
print(f"[BULK] Inizio blocco massivo: {len(ip_list)} IP su {len(enabled_routers)} router")
existing_cache = {}
for router in enabled_routers:
router_ip = router['ip_address']
port = router.get('api_port', 80)
use_ssl = port == 443
existing_ips = await self._get_existing_ips_set(
router_ip, router['username'], router['password'],
list_name, port, use_ssl
)
existing_cache[router_ip] = existing_ips
print(f"[BULK] Router {router_ip}: {len(existing_ips)} IP già in lista")
new_ips = []
for ip in ip_list:
is_new_on_any = False
for router in enabled_routers:
if ip not in existing_cache.get(router['ip_address'], set()):
is_new_on_any = True
break
if is_new_on_any:
new_ips.append(ip)
already_blocked = len(ip_list) - len(new_ips)
print(f"[BULK] {already_blocked} IP già bloccati, {len(new_ips)} nuovi da bloccare")
results = {}
semaphore = asyncio.Semaphore(concurrency)
blocked_count = 0
async def block_single_ip(ip: str) -> Dict[str, bool]:
nonlocal blocked_count
async with semaphore:
router_results = {}
tasks = []
r_ips = []
for router in enabled_routers:
r_ip = router['ip_address']
if ip in existing_cache.get(r_ip, set()):
router_results[r_ip] = True
continue
task = self.add_address_list(
router_ip=r_ip,
username=router['username'],
password=router['password'],
ip_address=ip,
list_name=list_name,
comment=f"{comment_prefix} {ip}",
timeout_duration=timeout_duration,
port=router.get('api_port', 80),
skip_check=True
)
tasks.append(task)
r_ips.append(r_ip)
if tasks:
task_results = await asyncio.gather(*tasks, return_exceptions=True)
for r_ip, result in zip(r_ips, task_results):
router_results[r_ip] = result if not isinstance(result, Exception) else False
blocked_count += 1
if progress_callback and blocked_count % 50 == 0:
await progress_callback(blocked_count, len(new_ips))
return router_results
batch_tasks = [block_single_ip(ip) for ip in new_ips]
batch_results = await asyncio.gather(*batch_tasks, return_exceptions=True)
for ip, result in zip(new_ips, batch_results):
if isinstance(result, Exception):
results[ip] = {r['ip_address']: False for r in enabled_routers}
else:
results[ip] = result
for ip in ip_list:
if ip not in results:
results[ip] = {r['ip_address']: True for r in enabled_routers}
total_success = sum(1 for ip_results in results.values() if any(ip_results.values()))
print(f"[BULK] Completato: {total_success}/{len(ip_list)} IP bloccati con successo")
return results
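The pre-filter above (an IP is re-sent only when it is missing from at least one router's cached list) can be sketched standalone; `select_new_ips` is an illustrative name, not part of the module:

```python
def select_new_ips(ip_list, existing_cache):
    """Keep only IPs missing from at least one router's cached address-list."""
    return [ip for ip in ip_list
            if any(ip not in cached for cached in existing_cache.values())]

cache = {
    "10.0.0.1": {"1.1.1.1", "3.3.3.3"},
    "10.0.0.2": {"1.1.1.1"},
}
# 1.1.1.1 is on every router -> skipped; 3.3.3.3 is missing on 10.0.0.2 -> kept
print(select_new_ips(["1.1.1.1", "3.3.3.3"], cache))  # ['3.3.3.3']
```

Note the edge case: with an empty cache dict, `any()` over no routers is False and every IP would be skipped, which is why the real method returns early when no routers are enabled.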
async def unblock_ip_on_all_routers(
self,
routers: List[Dict],
@ -255,7 +387,7 @@ class MikroTikManager:
password=router['password'],
ip_address=ip_address,
list_name=list_name,
port=router.get('api_port', 80)
)
tasks.append(task)
router_ips.append(router['ip_address'])
@ -274,7 +406,6 @@ class MikroTikManager:
self.clients.clear()
# SSH fallback for routers that do not support the REST API
class MikroTikSSHManager:
"""Fallback usando SSH se API REST non disponibile"""
@ -288,29 +419,26 @@ class MikroTikSSHManager:
if __name__ == "__main__":
# Test MikroTik Manager
async def test():
manager = MikroTikManager()
# Demo router (replace with real data)
test_router = {
'ip_address': '192.168.1.1',
'username': 'admin',
'password': 'password',
'api_port': 80,
'enabled': True
}
# Test connection
print("Testing connection...")
connected = await manager.test_connection(
test_router['ip_address'],
test_router['username'],
test_router['password'],
port=test_router['api_port']
)
print(f"Connected: {connected}")
# Test IP block
if connected:
print("\nTesting IP block...")
result = await manager.add_address_list(
@ -320,22 +448,22 @@ if __name__ == "__main__":
ip_address='10.0.0.100',
list_name='ddos_test',
comment='Test IDS',
timeout_duration='10m',
port=test_router['api_port']
)
print(f"Block result: {result}")
# Read the list
print("\nReading address list...")
entries = await manager.get_address_list(
test_router['ip_address'],
test_router['username'],
test_router['password'],
list_name='ddos_test',
port=test_router['api_port']
)
print(f"Entries: {entries}")
await manager.close_all()
# Run the test
print("=== TEST MIKROTIK MANAGER ===\n")
asyncio.run(test())


@ -28,7 +28,7 @@ The IDS employs a React-based frontend for real-time monitoring, detection visua
- **Automated Blocking**: Critical IPs (score >= 80) are automatically blocked in parallel across configured MikroTik routers via their REST API. **Auto-unblock on whitelist**: When an IP is added to the whitelist, it is automatically removed from all router blocklists. Manual unblock button available in Detections page.
- **Public Lists Integration (v2.0.0 - CIDR Complete)**: Automatic fetcher syncs blacklist/whitelist feeds every 10 minutes (Spamhaus, Talos, AWS, GCP, Cloudflare, IANA, NTP Pool). **Full CIDR support** using PostgreSQL INET/CIDR types with `<<=` containment operators for network range matching. Priority-based merge logic: Manual whitelist > Public whitelist > Blacklist (CIDR-aware). Detections created for blacklisted IPs/ranges (excluding whitelisted ranges). CRUD API + UI for list management. See `deployment/docs/PUBLIC_LISTS_V2_CIDR.md` for implementation details.
- **Automatic Cleanup**: An hourly systemd timer (`cleanup_detections.py`) removes old detections (48h) and auto-unblocks IPs (2h).
- **Service Monitoring & Management**: A dashboard provides real-time status (ML Backend, Database, Syslog Parser, Analytics Aggregator). **Syslog Parser check is database-based** (counts logs in last 30 minutes) and independent of ML Backend availability. ML Stats endpoint has database fallback when Python backend is offline. Training UI shows offline warning and disables actions when ML Backend is unavailable. API endpoints, secured with API key authentication and Systemd integration, allow for service management (start/stop/restart) of Python services.
- **IP Geolocation**: Integration with `ip-api.com` enriches detection data with geographical and AS information, utilizing intelligent caching.
- **Database Management**: PostgreSQL is used for all persistent data. An intelligent database versioning system ensures efficient SQL migrations (v8 with forced INET/CIDR column types for network range matching). Migration 008 unconditionally recreates INET columns to fix type mismatches. Dual-mode database drivers (`@neondatabase/serverless` for Replit, `pg` for AlmaLinux) ensure environment compatibility.
- **Microservices**: Clear separation of concerns between the Python ML backend and the Node.js API backend.
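The `<<=` containment matching described above can be mirrored client-side with the standard library; the SQL shape shown in the comment is an assumption, not a quote from the migration:

```python
import ipaddress

# Client-side equivalent of PostgreSQL's inet containment operator:
#   WHERE some_ip::inet <<= cidr_range   -- "is contained within or equals"
def is_contained(ip: str, cidr: str) -> bool:
    return ipaddress.ip_address(ip) in ipaddress.ip_network(cidr, strict=False)

print(is_contained("192.0.2.10", "192.0.2.0/24"))    # True
print(is_contained("198.51.100.1", "192.0.2.0/24"))  # False
```

`strict=False` accepts host bits set in the CIDR (e.g. `192.0.2.10/24`), matching the leniency typically needed when ingesting third-party feeds.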

server/mikrotik.ts Normal file

@ -0,0 +1,413 @@
const VERBOSE = process.env.MIKROTIK_DEBUG === '1' || process.env.MIKROTIK_DEBUG === 'true';
interface RouterConfig {
id: string;
ipAddress: string;
apiPort: number;
username: string;
password: string;
enabled: boolean;
}
interface BlockResult {
routerIp: string;
routerName?: string;
success: boolean;
alreadyExists?: boolean;
error?: string;
}
async function mikrotikRequest(
router: RouterConfig,
method: string,
path: string,
body?: any,
timeoutMs: number = 8000
): Promise<{ status: number; data: any }> {
const useHttps = router.apiPort === 443;
const protocol = useHttps ? "https" : "http";
const url = `${protocol}://${router.ipAddress}:${router.apiPort}${path}`;
const auth = Buffer.from(`${router.username}:${router.password}`).toString("base64");
const startTime = Date.now();
const origTlsReject = process.env.NODE_TLS_REJECT_UNAUTHORIZED;
if (useHttps) {
process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";
}
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), timeoutMs);
try {
const fetchOptions: RequestInit = {
method,
headers: {
"Authorization": `Basic ${auth}`,
"Content-Type": "application/json",
},
signal: controller.signal,
};
if (body) {
fetchOptions.body = JSON.stringify(body);
}
const response = await fetch(url, fetchOptions);
clearTimeout(timeout);
let data: any;
const text = await response.text();
try {
data = JSON.parse(text);
} catch {
data = text;
}
const elapsed = Date.now() - startTime;
if (VERBOSE) {
const bodyStr = body ? ` body=${JSON.stringify(body)}` : '';
const dataPreview = typeof data === 'string' ? data.substring(0, 200) : JSON.stringify(data).substring(0, 200);
console.log(`[MIKROTIK] ${method} ${url} => HTTP ${response.status} (${elapsed}ms)${bodyStr} response=${dataPreview}`);
} else if (response.status >= 400) {
const dataPreview = typeof data === 'string' ? data.substring(0, 100) : JSON.stringify(data).substring(0, 100);
console.warn(`[MIKROTIK] ${method} ${router.ipAddress}${path} => HTTP ${response.status} (${elapsed}ms) err=${dataPreview}`);
} else if (elapsed > 5000) {
console.warn(`[MIKROTIK] SLOW: ${method} ${router.ipAddress}${path} => HTTP ${response.status} (${elapsed}ms)`);
}
return { status: response.status, data };
} catch (error: any) {
clearTimeout(timeout);
const elapsed = Date.now() - startTime;
const errMsg = error.name === 'AbortError' ? `TIMEOUT after ${timeoutMs}ms` : error.message;
console.error(`[MIKROTIK] ${method} ${url} => ERROR: ${errMsg} (${elapsed}ms)`);
if (useHttps && origTlsReject !== undefined) {
process.env.NODE_TLS_REJECT_UNAUTHORIZED = origTlsReject;
} else if (useHttps) {
delete process.env.NODE_TLS_REJECT_UNAUTHORIZED;
}
throw error;
} finally {
if (useHttps) {
if (origTlsReject !== undefined) {
process.env.NODE_TLS_REJECT_UNAUTHORIZED = origTlsReject;
} else {
delete process.env.NODE_TLS_REJECT_UNAUTHORIZED;
}
}
}
}
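The save/restore of `NODE_TLS_REJECT_UNAUTHORIZED` above is the classic temporary-env-override pattern. A Python sketch of the same idea (the variable name below is only an example, and the helper is not part of either codebase):

```python
import os
from contextlib import contextmanager

@contextmanager
def env_override(name: str, value: str):
    """Set an environment variable for the duration of a block, then restore it."""
    orig = os.environ.get(name)
    os.environ[name] = value
    try:
        yield
    finally:
        if orig is None:
            os.environ.pop(name, None)  # was unset before: remove it again
        else:
            os.environ[name] = orig     # restore the previous value

with env_override("NODE_TLS_REJECT_UNAUTHORIZED", "0"):
    print(os.environ["NODE_TLS_REJECT_UNAUTHORIZED"])  # 0
```

As in the TypeScript version, restoring in `finally` keeps the override from leaking; note the env-var approach is process-global, so concurrent requests briefly share the relaxed setting.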
export async function testRouterConnection(router: RouterConfig): Promise<boolean> {
try {
const { status } = await mikrotikRequest(router, "GET", "/rest/system/identity");
return status === 200;
} catch {
return false;
}
}
export async function getExistingBlockedIps(
router: RouterConfig,
listName: string = "ddos_blocked"
): Promise<Set<string>> {
try {
if (VERBOSE) console.log(`[MIKROTIK] Fetching address-list from router ${router.ipAddress} (list=${listName}, timeout=20s)...`);
const { status, data } = await mikrotikRequest(router, "GET", "/rest/ip/firewall/address-list", undefined, 20000);
if (status === 200 && Array.isArray(data)) {
const ips = new Set<string>();
const allLists = new Map<string, number>();
for (const entry of data) {
const count = allLists.get(entry.list) || 0;
allLists.set(entry.list, count + 1);
if (entry.list === listName) {
ips.add(entry.address);
}
}
const listsInfo = Array.from(allLists.entries()).map(([name, count]) => `${name}:${count}`).join(', ');
console.log(`[MIKROTIK] Router ${router.ipAddress}: ${data.length} total entries (${listsInfo}), ${ips.size} in list "${listName}"`);
return ips;
}
console.warn(`[MIKROTIK] Router ${router.ipAddress}: unexpected response status=${status}, data is not an array`);
return new Set();
} catch (e: any) {
console.error(`[MIKROTIK] Router ${router.ipAddress}: ERROR fetching address-list: ${e.message}`);
return new Set();
}
}
export async function addToAddressList(
router: RouterConfig,
ipAddress: string,
listName: string = "ddos_blocked",
comment: string = "",
timeoutDuration: string = "1h"
): Promise<BlockResult> {
try {
const { status, data } = await mikrotikRequest(router, "POST", "/rest/ip/firewall/address-list/add", {
list: listName,
address: ipAddress,
comment: comment || `IDS block ${new Date().toISOString()}`,
timeout: timeoutDuration,
});
if (status === 200 || status === 201) {
if (VERBOSE) console.log(`[BLOCK] OK: ${ipAddress} added on router ${router.ipAddress} (HTTP ${status})`);
return { routerIp: router.ipAddress, success: true };
}
if (status === 400 || status === 409) {
const text = typeof data === "string" ? data.toLowerCase() : JSON.stringify(data).toLowerCase();
if (text.includes("already") || text.includes("exists") || text.includes("duplicate") || text.includes("failure: already")) {
if (VERBOSE) console.log(`[BLOCK] SKIP: ${ipAddress} already present on router ${router.ipAddress} (HTTP ${status})`);
return { routerIp: router.ipAddress, success: true, alreadyExists: true };
}
console.warn(`[BLOCK] VERIFY: ${ipAddress} on router ${router.ipAddress} HTTP ${status} response="${text.substring(0, 150)}", checking list...`);
try {
const verifyResult = await mikrotikRequest(router, "GET", "/rest/ip/firewall/address-list");
if (verifyResult.status === 200 && Array.isArray(verifyResult.data)) {
for (const entry of verifyResult.data) {
if (entry.address === ipAddress && entry.list === listName) {
console.log(`[BLOCK] CONFIRMED: ${ipAddress} found in the list on router ${router.ipAddress} after verification`);
return { routerIp: router.ipAddress, success: true, alreadyExists: true };
}
}
}
} catch (verifyErr: any) {
console.error(`[BLOCK] ERROR verifying ${ipAddress} on router ${router.ipAddress}: ${verifyErr.message}`);
}
const errMsg = `HTTP ${status}: ${typeof data === "string" ? data : JSON.stringify(data)}`;
console.error(`[BLOCK] FAILED: ${ipAddress} on router ${router.ipAddress}: ${errMsg}`);
return {
routerIp: router.ipAddress,
success: false,
error: errMsg,
};
}
const errMsg = `HTTP ${status}: ${typeof data === "string" ? data : JSON.stringify(data)}`;
console.error(`[BLOCK] FAILED: ${ipAddress} on router ${router.ipAddress}: ${errMsg}`);
return {
routerIp: router.ipAddress,
success: false,
error: errMsg,
};
} catch (error: any) {
const errMsg = error.name === 'AbortError' ? `TIMEOUT (8s)` : (error.message || "Connection failed");
console.error(`[BLOCK] ERROR: ${ipAddress} on router ${router.ipAddress}: ${errMsg}`);
return {
routerIp: router.ipAddress,
success: false,
error: errMsg,
};
}
}
export async function removeFromAddressList(
router: RouterConfig,
ipAddress: string,
listName: string = "ddos_blocked"
): Promise<BlockResult> {
try {
if (VERBOSE) console.log(`[UNBLOCK] Removing ${ipAddress} from router ${router.ipAddress} (list=${listName})...`);
const { status, data } = await mikrotikRequest(router, "GET", "/rest/ip/firewall/address-list");
if (status !== 200 || !Array.isArray(data)) {
console.error(`[UNBLOCK] ERROR: cannot read address-list from router ${router.ipAddress}: HTTP ${status}`);
return { routerIp: router.ipAddress, success: false, error: "Failed to read address list" };
}
for (const entry of data) {
if (entry.address === ipAddress && entry.list === listName) {
const entryId = entry[".id"];
const delResult = await mikrotikRequest(router, "DELETE", `/rest/ip/firewall/address-list/${entryId}`);
if (delResult.status === 200 || delResult.status === 204) {
console.log(`[UNBLOCK] OK: ${ipAddress} removed from router ${router.ipAddress}`);
return { routerIp: router.ipAddress, success: true };
}
console.error(`[UNBLOCK] FAILED: deleting ${ipAddress} from router ${router.ipAddress}: HTTP ${delResult.status}`);
return { routerIp: router.ipAddress, success: false, error: `Delete failed: ${delResult.status}` };
}
}
if (VERBOSE) console.log(`[UNBLOCK] ${ipAddress} not found on router ${router.ipAddress} (already absent)`);
return { routerIp: router.ipAddress, success: true };
} catch (error: any) {
console.error(`[UNBLOCK] ERROR: ${ipAddress} on router ${router.ipAddress}: ${error.message}`);
return { routerIp: router.ipAddress, success: false, error: error.message };
}
}
export async function blockIpOnAllRouters(
routers: RouterConfig[],
ipAddress: string,
listName: string = "ddos_blocked",
comment: string = "",
timeoutDuration: string = "1h"
): Promise<BlockResult[]> {
const enabled = routers.filter((r) => r.enabled);
const results = await Promise.allSettled(
enabled.map((r) => addToAddressList(r, ipAddress, listName, comment, timeoutDuration))
);
return results.map((r, i) =>
r.status === "fulfilled" ? r.value : { routerIp: enabled[i].ipAddress, success: false, error: String(r.reason) }
);
}
export async function unblockIpOnAllRouters(
routers: RouterConfig[],
ipAddress: string,
listName: string = "ddos_blocked"
): Promise<BlockResult[]> {
const enabled = routers.filter((r) => r.enabled);
const results = await Promise.allSettled(
enabled.map((r) => removeFromAddressList(r, ipAddress, listName))
);
return results.map((r, i) =>
r.status === "fulfilled" ? r.value : { routerIp: enabled[i].ipAddress, success: false, error: String(r.reason) }
);
}
export async function bulkBlockIps(
routers: RouterConfig[],
ipList: string[],
listName: string = "ddos_blocked",
commentPrefix: string = "IDS bulk-block",
timeoutDuration: string = "1h",
concurrency: number = 10
): Promise<{ blocked: number; failed: number; skipped: number; details: Array<{ ip: string; status: string }> }> {
const enabled = routers.filter((r) => r.enabled);
if (enabled.length === 0) {
return { blocked: 0, failed: 0, skipped: 0, details: [] };
}
console.log(`[BULK-BLOCK] Starting: ${ipList.length} IPs on ${enabled.length} routers (${enabled.map(r => r.ipAddress).join(', ')})`);
const routerStatus = new Map<string, { ok: number; fail: number; skip: number }>();
for (const r of enabled) {
routerStatus.set(r.ipAddress, { ok: 0, fail: 0, skip: 0 });
}
const existingCache = new Map<string, Set<string>>();
await Promise.allSettled(
enabled.map(async (router) => {
const start = Date.now();
const existing = await getExistingBlockedIps(router, listName);
const elapsed = Date.now() - start;
existingCache.set(router.ipAddress, existing);
console.log(`[BULK-BLOCK] Router ${router.ipAddress}: ${existing.size} IPs already in list (${elapsed}ms)`);
})
);
const newIps: string[] = [];
const skippedIps: string[] = [];
for (const ip of ipList) {
let alreadyOnAll = true;
for (const router of enabled) {
const existing = existingCache.get(router.ipAddress) || new Set();
if (!existing.has(ip)) {
alreadyOnAll = false;
break;
}
}
if (alreadyOnAll) {
skippedIps.push(ip);
} else {
newIps.push(ip);
}
}
console.log(`[BULK-BLOCK] ${skippedIps.length} already blocked, ${newIps.length} new to block`);
let blocked = 0;
let failed = 0;
const details: Array<{ ip: string; status: string }> = [];
const partialIps: string[] = [];
const failedIps: string[] = [];
async function processIp(ip: string) {
const routerResults = await Promise.allSettled(
enabled.map(async (router) => {
const existing = existingCache.get(router.ipAddress) || new Set();
if (existing.has(ip)) {
const st = routerStatus.get(router.ipAddress);
if (st) st.skip++;
return { success: true, skipped: true, routerIp: router.ipAddress };
}
const start = Date.now();
const result = await addToAddressList(router, ip, listName, `${commentPrefix} ${ip}`, timeoutDuration);
const elapsed = Date.now() - start;
const st = routerStatus.get(router.ipAddress);
if (result.success) {
if (st) st.ok++;
} else {
if (st) st.fail++;
}
return { success: result.success, skipped: false, routerIp: router.ipAddress, elapsed, error: result.error };
})
);
const perRouterDetail = routerResults.map((r) => {
if (r.status === 'fulfilled') {
const v = r.value;
if (v.skipped) return `${v.routerIp}:SKIP`;
if (v.success) return `${v.routerIp}:OK(${v.elapsed}ms)`;
return `${v.routerIp}:FAIL(${v.elapsed}ms,${v.error})`;
}
return 'REJECTED';
}).join(' | ');
const anySuccess = routerResults.some(
(r) => r.status === "fulfilled" && r.value.success
);
const allSuccess = routerResults.every(
(r) => r.status === "fulfilled" && r.value.success
);
if (anySuccess) {
blocked++;
details.push({ ip, status: "blocked" });
if (!allSuccess) {
partialIps.push(ip);
if (VERBOSE) console.warn(`[BULK-BLOCK] PARTIAL: IP ${ip}: ${perRouterDetail}`);
}
} else {
failed++;
failedIps.push(ip);
details.push({ ip, status: "failed" });
if (VERBOSE) console.error(`[BULK-BLOCK] FAILED: IP ${ip}: ${perRouterDetail}`);
}
}
const bulkStart = Date.now();
for (let i = 0; i < newIps.length; i += concurrency) {
const batch = newIps.slice(i, i + concurrency);
await Promise.allSettled(batch.map((ip) => processIp(ip)));
const progress = Math.min(i + concurrency, newIps.length);
if (progress === newIps.length || progress % 50 === 0) {
const elapsed = ((Date.now() - bulkStart) / 1000).toFixed(1);
console.log(`[BULK-BLOCK] Progress: ${progress}/${newIps.length} (${elapsed}s, ${blocked} ok, ${failed} fail)`);
}
}
for (const ip of skippedIps) {
details.push({ ip, status: "already_blocked" });
}
const totalElapsed = ((Date.now() - bulkStart) / 1000).toFixed(1);
routerStatus.forEach((st, routerIp) => {
console.log(`[BULK-BLOCK] Router ${routerIp}: ${st.ok} blocked, ${st.fail} failed, ${st.skip} skipped`);
});
console.log(`[BULK-BLOCK] Completed in ${totalElapsed}s: ${blocked} blocked, ${failed} failed, ${skippedIps.length} already_blocked, ${partialIps.length} partial`);
if (failedIps.length > 0) {
console.error(`[BULK-BLOCK] IPs not blocked on any router (${failedIps.length}): ${failedIps.slice(0, 20).join(', ')}${failedIps.length > 20 ? '...' : ''}`);
}
if (partialIps.length > 0) {
console.warn(`[BULK-BLOCK] IPs only partially blocked (${partialIps.length}): ${partialIps.slice(0, 20).join(', ')}${partialIps.length > 20 ? '...' : ''}`);
}
return { blocked, failed, skipped: skippedIps.length, details };
}
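The batch loop above (slice `concurrency` IPs, await the whole batch, then continue) can be sketched in a few lines; `process_in_batches` and `square` are illustrative names, and unlike `Promise.allSettled`, `asyncio.gather` here propagates exceptions by default:

```python
import asyncio

async def process_in_batches(items, worker, concurrency=10):
    """Run `worker` over `items`, at most `concurrency` at a time,
    waiting for each batch to finish before starting the next."""
    results = []
    for i in range(0, len(items), concurrency):
        batch = items[i:i + concurrency]
        results += await asyncio.gather(*(worker(x) for x in batch))
    return results

async def square(x):
    return x * x

print(asyncio.run(process_in_batches([1, 2, 3, 4, 5], square, concurrency=2)))
# [1, 4, 9, 16, 25]
```

Fixed-size batches are simpler than a semaphore (used by the Python manager above) but can idle workers while the slowest request in a batch finishes.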


@ -1,9 +1,10 @@
import type { Express } from "express";
import { createServer, type Server } from "http";
import { storage } from "./storage";
import { insertRouterSchema, insertDetectionSchema, insertWhitelistSchema, insertPublicListSchema, networkAnalytics, routers } from "@shared/schema";
import { insertRouterSchema, insertDetectionSchema, insertWhitelistSchema, insertPublicListSchema, networkAnalytics, routers, detections, networkLogs, trainingHistory } from "@shared/schema";
import { db } from "./db";
import { desc, eq } from "drizzle-orm";
import { desc, eq, gte, sql } from "drizzle-orm";
import { blockIpOnAllRouters, unblockIpOnAllRouters, bulkBlockIps, testRouterConnection } from "./mikrotik";
export async function registerRoutes(app: Express): Promise<Server> {
// Routers
@ -122,8 +123,12 @@ export async function registerRoutes(app: Express): Promise<Server> {
// Whitelist
app.get("/api/whitelist", async (req, res) => {
try {
const limit = parseInt(req.query.limit as string) || 50;
const offset = parseInt(req.query.offset as string) || 0;
const search = req.query.search as string || undefined;
const result = await storage.getAllWhitelist({ limit, offset, search });
res.json(result);
} catch (error) {
console.error('[DB ERROR] Failed to fetch whitelist:', error);
res.status(500).json({ error: "Failed to fetch whitelist" });
@ -135,28 +140,16 @@ export async function registerRoutes(app: Express): Promise<Server> {
const validatedData = insertWhitelistSchema.parse(req.body);
const item = await storage.createWhitelist(validatedData);
// Auto-unblock from routers when adding to whitelist
try {
const allRouters = await storage.getAllRouters();
const enabledRouters = allRouters.filter(r => r.enabled);
if (enabledRouters.length > 0) {
const results = await unblockIpOnAllRouters(enabledRouters as any, validatedData.ipAddress);
const unblocked = results.filter(r => r.success).length;
console.log(`[WHITELIST] Auto-unblocked ${validatedData.ipAddress} from ${unblocked}/${enabledRouters.length} routers`);
}
} catch (unblockError) {
console.warn(`[WHITELIST] Auto-unblock failed for ${validatedData.ipAddress}:`, unblockError);
}
res.json(item);
@ -165,7 +158,6 @@ export async function registerRoutes(app: Express): Promise<Server> {
}
});
// Unblock IP from all routers (direct MikroTik REST calls)
app.post("/api/unblock-ip", async (req, res) => {
try {
const { ipAddress, listName = "ddos_blocked" } = req.body;
@ -174,31 +166,31 @@ export async function registerRoutes(app: Express): Promise<Server> {
return res.status(400).json({ error: "IP address is required" });
}
const allRouters = await storage.getAllRouters();
const enabledRouters = allRouters.filter(r => r.enabled);
if (enabledRouters.length === 0) {
return res.status(400).json({ error: "Nessun router abilitato" });
}
const results = await unblockIpOnAllRouters(enabledRouters as any, ipAddress, listName);
const successCount = results.filter(r => r.success).length;
await db.update(detections)
.set({ blocked: false })
.where(eq(detections.sourceIp, ipAddress));
console.log(`[UNBLOCK] ${ipAddress} removed from ${successCount}/${enabledRouters.length} routers`);
res.json({
message: `IP ${ipAddress} unblocked from ${successCount} routers`,
unblocked_from: successCount,
total_routers: enabledRouters.length,
results: results.map(r => ({ router: r.routerIp, success: r.success, error: r.error }))
});
} catch (error: any) {
console.error('[UNBLOCK] Error:', error);
res.status(500).json({ error: error.message || "Failed to unblock IP from routers" });
res.status(500).json({ error: error.message || "Errore sblocco IP" });
}
});
@@ -306,7 +298,7 @@ export async function registerRoutes(app: Express): Promise<Server> {
const text = await response.text();
// Parse IPs based on content type
let ips: Array<{ip: string, cidr?: string}> = [];
let ips: Array<{ip: string, cidr: string | null}> = [];
if (contentType.includes('json') || list.url.endsWith('.json')) {
// JSON format (Spamhaus DROP v4 JSON)
@@ -318,7 +310,7 @@ export async function registerRoutes(app: Express): Promise<Server> {
const [ip] = entry.cidr.split('/');
ips.push({ ip, cidr: entry.cidr });
} else if (entry.ip) {
ips.push({ ip: entry.ip, cidr: null as any });
ips.push({ ip: entry.ip, cidr: null });
}
}
}
@@ -338,7 +330,7 @@ export async function registerRoutes(app: Express): Promise<Server> {
if (match) {
const ip = match[1];
const cidr = match[2] ? `${match[1]}${match[2]}` : null;
ips.push({ ip, cidr: cidr as any });
ips.push({ ip, cidr });
}
}
}
@@ -477,33 +469,44 @@ export async function registerRoutes(app: Express): Promise<Server> {
// Stats
app.get("/api/stats", async (req, res) => {
try {
const routers = await storage.getAllRouters();
const detectionsResult = await storage.getAllDetections({ limit: 1000 });
const recentLogs = await storage.getRecentLogs(1000);
const whitelist = await storage.getAllWhitelist();
const routersList = await storage.getAllRouters();
const whitelistResult = await storage.getAllWhitelist({ limit: 1 });
const latestTraining = await storage.getLatestTraining();
const detectionsList = detectionsResult.detections;
const blockedCount = detectionsList.filter(d => d.blocked).length;
const criticalCount = detectionsList.filter(d => parseFloat(d.riskScore) >= 85).length;
const highCount = detectionsList.filter(d => parseFloat(d.riskScore) >= 70 && parseFloat(d.riskScore) < 85).length;
const detectionStats = await db.select({
total: sql<number>`count(*)::int`,
blocked: sql<number>`count(*) filter (where blocked = true)::int`,
critical: sql<number>`count(*) filter (where ${detections.riskScore}::numeric >= 85)::int`,
high: sql<number>`count(*) filter (where ${detections.riskScore}::numeric >= 70 and ${detections.riskScore}::numeric < 85)::int`,
}).from(detections);
let logCount = 0;
try {
const logStats = await db.execute(
sql`SELECT count(*)::int as count FROM network_logs WHERE timestamp >= NOW() - INTERVAL '24 hours'`
);
logCount = (logStats as any).rows?.[0]?.count ?? (logStats as any)[0]?.count ?? 0;
} catch (logError) {
console.error('[DB WARN] Log count query failed:', logError);
logCount = 0;
}
res.json({
routers: {
total: routers.length,
enabled: routers.filter(r => r.enabled).length
total: routersList.length,
enabled: routersList.filter(r => r.enabled).length
},
detections: {
total: detectionsResult.total,
blocked: blockedCount,
critical: criticalCount,
high: highCount
total: detectionStats[0]?.total || 0,
blocked: detectionStats[0]?.blocked || 0,
critical: detectionStats[0]?.critical || 0,
high: detectionStats[0]?.high || 0
},
logs: {
recent: recentLogs.length
recent: logCount
},
whitelist: {
total: whitelist.length
total: whitelistResult.total
},
latestTraining: latestTraining
});
@@ -622,10 +625,102 @@ export async function registerRoutes(app: Express): Promise<Server> {
}
});
app.post("/api/ml/block-all-critical", async (req, res) => {
try {
const { min_score = 80, list_name = "ddos_blocked", limit = 100 } = req.body;
const maxIps = Math.min(Number(limit) || 100, 500);
const allRouters = await storage.getAllRouters();
const enabledRouters = allRouters.filter(r => r.enabled);
if (enabledRouters.length === 0) {
return res.status(400).json({ error: "Nessun router abilitato" });
}
const unblockedDetections = await db.execute(
sql`SELECT DISTINCT source_ip, MAX(CAST(risk_score AS FLOAT)) as max_score, MAX(anomaly_type) as anomaly_type
FROM detections
WHERE CAST(risk_score AS FLOAT) >= ${min_score}
AND blocked = false
AND source_ip NOT IN (SELECT ip_address FROM whitelist WHERE active = true)
GROUP BY source_ip
ORDER BY max_score DESC
LIMIT ${maxIps}`
);
const rows = (unblockedDetections as any).rows || unblockedDetections;
const totalUnblockedResult = await db.execute(
sql`SELECT COUNT(DISTINCT source_ip) as count
FROM detections
WHERE CAST(risk_score AS FLOAT) >= ${min_score}
AND blocked = false
AND source_ip NOT IN (SELECT ip_address FROM whitelist WHERE active = true)`
);
const totalUnblockedRows = (totalUnblockedResult as any).rows || totalUnblockedResult;
const totalUnblocked = parseInt(totalUnblockedRows[0]?.count || "0");
if (!rows || rows.length === 0) {
return res.json({
message: "Nessun IP critico da bloccare",
blocked: 0,
failed: 0,
total_critical: 0,
remaining: 0,
skipped: 0
});
}
const ipList = rows.map((r: any) => r.source_ip);
const routerInfo = enabledRouters.map((r: any) => `${r.name || r.ipAddress}(${r.ipAddress}:${r.apiPort})`).join(', ');
console.log(`[BLOCK-ALL] Avvio blocco massivo: ${ipList.length}/${totalUnblocked} IP con score >= ${min_score} su ${enabledRouters.length} router: ${routerInfo}`);
const result = await bulkBlockIps(
enabledRouters as any,
ipList,
list_name,
`IDS bulk-block (score>=${min_score})`,
"1h",
10
);
if (result.blocked > 0) {
const blockedIps = result.details
.filter(d => d.status === "blocked")
.map(d => d.ip);
const batchSize = 200;
for (let i = 0; i < blockedIps.length; i += batchSize) {
const batch = blockedIps.slice(i, i + batchSize);
const ipValues = batch.map(ip => `'${ip.replace(/'/g, "''")}'`).join(',');
await db.execute(
sql`UPDATE detections SET blocked = true, blocked_at = NOW() WHERE source_ip IN (${sql.raw(ipValues)}) AND blocked = false`
);
}
console.log(`[BLOCK-ALL] Database aggiornato: ${blockedIps.length} IP marcati come bloccati`);
}
const remaining = totalUnblocked - ipList.length;
res.json({
message: `Blocco massivo completato: ${result.blocked} IP bloccati, ${result.failed} falliti, ${result.skipped} già bloccati` +
(remaining > 0 ? `. Rimangono ${remaining} IP da bloccare.` : ''),
blocked: result.blocked,
failed: result.failed,
skipped: result.skipped,
total_critical: ipList.length,
remaining,
details: result.details.slice(0, 100)
});
} catch (error: any) {
console.error('[BLOCK-ALL] Error:', error);
res.status(500).json({ error: error.message || "Errore blocco massivo" });
}
});
app.get("/api/ml/stats", async (req, res) => {
try {
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 10000); // 10s timeout for stats
const timeout = setTimeout(() => controller.abort(), 15000);
const response = await fetch(`${ML_BACKEND_URL}/stats`, {
headers: getMLBackendHeaders(),
@@ -635,62 +730,93 @@ export async function registerRoutes(app: Express): Promise<Server> {
clearTimeout(timeout);
if (!response.ok) {
const errorData = await response.json().catch(() => ({}));
return res.status(response.status).json({
error: errorData.detail || "Failed to fetch ML stats",
status: response.status,
});
throw new Error(`HTTP ${response.status}`);
}
const data = await response.json();
res.json(data);
} catch (error: any) {
if (error.name === 'AbortError') {
return res.status(504).json({ error: "Stats timeout" });
console.warn(`[ML Stats] Fallback to database - ML Backend error: ${error.message || error.code || 'unknown'}`);
try {
const latestTraining = await db
.select()
.from(trainingHistory)
.orderBy(desc(trainingHistory.trainedAt))
.limit(1);
const detectionStats = await db.execute(
sql`SELECT
COUNT(*) as total_detections,
COUNT(*) FILTER (WHERE blocked = true) as blocked_count,
COUNT(*) FILTER (WHERE CAST(risk_score AS FLOAT) >= 80) as critical_count,
COUNT(DISTINCT source_ip) as unique_ips
FROM detections`
);
const statsRows = (detectionStats as any).rows || detectionStats;
const logCount = await db.execute(
sql`SELECT COUNT(*) as count FROM network_logs WHERE timestamp > NOW() - INTERVAL '24 hours'`
);
const logRows = (logCount as any).rows || logCount;
res.json({
source: "database_fallback",
ml_backend_status: "offline",
latest_training: latestTraining[0] || null,
detections: {
total: parseInt(statsRows[0]?.total_detections || "0"),
blocked: parseInt(statsRows[0]?.blocked_count || "0"),
critical: parseInt(statsRows[0]?.critical_count || "0"),
unique_ips: parseInt(statsRows[0]?.unique_ips || "0"),
},
logs_24h: parseInt(logRows[0]?.count || "0"),
});
} catch (dbError: any) {
res.status(503).json({ error: "ML backend offline and database fallback failed" });
}
if (error.code === 'ECONNREFUSED') {
return res.status(503).json({ error: "ML backend not available" });
}
res.status(500).json({ error: error.message || "Failed to fetch ML stats" });
}
});
// Services monitoring
app.get("/api/services/status", async (req, res) => {
try {
const mkService = (name: string) => ({ name, status: "unknown" as string, healthy: false, details: null as any, systemdUnit: "" as string, type: "service" as string });
const services = {
mlBackend: { name: "ML Backend Python", status: "unknown", healthy: false, details: null as any },
database: { name: "PostgreSQL Database", status: "unknown", healthy: false, details: null as any },
syslogParser: { name: "Syslog Parser", status: "unknown", healthy: false, details: null as any },
analyticsAggregator: { name: "Analytics Aggregator Timer", status: "unknown", healthy: false, details: null as any },
nodeBackend: { ...mkService("Node.js Backend"), systemdUnit: "ids-backend", type: "service" },
mlBackend: { ...mkService("ML Backend Python"), systemdUnit: "ids-ml-backend", type: "service" },
database: { ...mkService("PostgreSQL Database"), systemdUnit: "postgresql-16", type: "service" },
syslogParser: { ...mkService("Syslog Parser"), systemdUnit: "ids-syslog-parser", type: "service" },
analyticsAggregator: { ...mkService("Analytics Aggregator"), systemdUnit: "ids-analytics-aggregator", type: "timer" },
autoBlock: { ...mkService("Auto Block"), systemdUnit: "ids-auto-block", type: "timer" },
cleanup: { ...mkService("Cleanup Detections"), systemdUnit: "ids-cleanup", type: "timer" },
listFetcher: { ...mkService("Public Lists Fetcher"), systemdUnit: "ids-list-fetcher", type: "timer" },
mlTraining: { ...mkService("ML Training Settimanale"), systemdUnit: "ids-ml-training", type: "timer" },
};
// Node.js Backend - always running if this endpoint responds
services.nodeBackend.status = "running";
services.nodeBackend.healthy = true;
services.nodeBackend.details = { port: 5000, uptime: process.uptime().toFixed(0) + "s" };
// Check ML Backend Python
try {
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 5000);
const response = await fetch(`${ML_BACKEND_URL}/health`, {
signal: controller.signal,
});
const response = await fetch(`${ML_BACKEND_URL}/health`, { signal: controller.signal });
clearTimeout(timeout);
if (response.ok) {
const data = await response.json();
services.mlBackend.status = "running";
services.mlBackend.healthy = true;
services.mlBackend.details = {
modelLoaded: data.ml_model === "loaded",
timestamp: data.timestamp,
};
services.mlBackend.details = { modelLoaded: data.ml_model === "loaded", timestamp: data.timestamp };
} else {
services.mlBackend.status = "error";
services.mlBackend.details = { error: `HTTP ${response.status}` };
}
} catch (error: any) {
services.mlBackend.status = "offline";
services.mlBackend.details = { error: error.code === 'ECONNREFUSED' ? "Connection refused" : error.message };
services.mlBackend.details = { error: error.code === 'ECONNREFUSED' ? "Connessione rifiutata" : error.message };
}
// Check Database
@@ -706,89 +832,156 @@ export async function registerRoutes(app: Express): Promise<Server> {
services.database.details = { error: error.message };
}
// Check Python Services via authenticated endpoint
// Check Syslog Parser via database
try {
const controller2 = new AbortController();
const timeout2 = setTimeout(() => controller2.abort(), 5000);
const servicesResponse = await fetch(`${ML_BACKEND_URL}/services/status`, {
headers: getMLBackendHeaders(),
signal: controller2.signal,
});
clearTimeout(timeout2);
if (servicesResponse.ok) {
const servicesData = await servicesResponse.json();
// Update syslog parser status
const parserInfo = servicesData.services?.syslog_parser;
if (parserInfo) {
services.syslogParser.status = parserInfo.running ? "running" : "offline";
services.syslogParser.healthy = parserInfo.running;
services.syslogParser.details = {
systemd_unit: parserInfo.systemd_unit,
pid: parserInfo.details?.pid,
error: parserInfo.error,
};
}
} else if (servicesResponse.status === 403) {
services.syslogParser.status = "error";
services.syslogParser.healthy = false;
services.syslogParser.details = { error: "Authentication failed" };
const recentLogsResult = await db.execute(
sql`SELECT COUNT(*) as count, MAX(timestamp) as last_log FROM network_logs WHERE timestamp > NOW() - INTERVAL '30 minutes'`
);
const logRows = (recentLogsResult as any).rows || recentLogsResult;
const recentLogCount = parseInt(logRows[0]?.count || "0");
const lastLogTime = logRows[0]?.last_log;
if (recentLogCount > 0) {
services.syslogParser.status = "running";
services.syslogParser.healthy = true;
services.syslogParser.details = { recentLogs30min: recentLogCount, lastLog: lastLogTime };
} else {
throw new Error(`HTTP ${servicesResponse.status}`);
const lastLogEverResult = await db.execute(sql`SELECT MAX(timestamp) as last_log FROM network_logs`);
const lastLogEverRows = (lastLogEverResult as any).rows || lastLogEverResult;
services.syslogParser.status = "offline";
services.syslogParser.healthy = false;
services.syslogParser.details = { recentLogs30min: 0, lastLog: lastLogEverRows[0]?.last_log || "Mai", warning: "Nessun log negli ultimi 30 minuti" };
}
} catch (error: any) {
services.syslogParser.status = "error";
services.syslogParser.healthy = false;
services.syslogParser.details = {
error: error.code === 'ECONNREFUSED' ? "ML Backend offline" : error.message
};
services.syslogParser.details = { error: error.message };
}
// Check Analytics Aggregator (via last record timestamp)
try {
const latestAnalytics = await db
.select()
.from(networkAnalytics)
.orderBy(desc(networkAnalytics.date), desc(networkAnalytics.hour))
.limit(1);
const latestAnalytics = await db.select().from(networkAnalytics).orderBy(desc(networkAnalytics.date), desc(networkAnalytics.hour)).limit(1);
if (latestAnalytics.length > 0) {
const lastRun = new Date(latestAnalytics[0].date);
const lastTimestamp = lastRun.toISOString();
const hoursSinceLastRun = (Date.now() - lastRun.getTime()) / (1000 * 60 * 60);
if (hoursSinceLastRun < 2) {
const hoursSince = (Date.now() - lastRun.getTime()) / (1000 * 60 * 60);
if (hoursSince < 2) {
services.analyticsAggregator.status = "running";
services.analyticsAggregator.healthy = true;
services.analyticsAggregator.details = {
lastRun: latestAnalytics[0].date,
lastTimestamp,
hoursSinceLastRun: hoursSinceLastRun.toFixed(1),
};
services.analyticsAggregator.details = { lastRun: latestAnalytics[0].date, hoursSinceLastRun: hoursSince.toFixed(1) };
} else {
services.analyticsAggregator.status = "idle";
services.analyticsAggregator.healthy = false;
services.analyticsAggregator.details = {
lastRun: latestAnalytics[0].date,
lastTimestamp,
hoursSinceLastRun: hoursSinceLastRun.toFixed(1),
warning: "No aggregation in last 2 hours",
};
services.analyticsAggregator.details = { lastRun: latestAnalytics[0].date, hoursSinceLastRun: hoursSince.toFixed(1), warning: "Nessuna aggregazione nelle ultime 2 ore" };
}
} else {
services.analyticsAggregator.status = "idle";
services.analyticsAggregator.healthy = false;
services.analyticsAggregator.details = { error: "No analytics data found" };
services.analyticsAggregator.details = { error: "Nessun dato analytics trovato" };
}
} catch (error: any) {
services.analyticsAggregator.status = "error";
services.analyticsAggregator.healthy = false;
services.analyticsAggregator.details = { error: error.message };
}
// Check Auto Block (via recent blocked detections)
try {
const recentBlockResult = await db.execute(
sql`SELECT COUNT(*) as count, MAX(detected_at) as last_block FROM detections WHERE blocked = true AND detected_at > NOW() - INTERVAL '10 minutes'`
);
const blockRows = (recentBlockResult as any).rows || recentBlockResult;
const recentBlocks = parseInt(blockRows[0]?.count || "0");
const lastBlock = blockRows[0]?.last_block;
const totalBlockedResult = await db.execute(sql`SELECT COUNT(*) as count FROM detections WHERE blocked = true`);
const totalBlockedRows = (totalBlockedResult as any).rows || totalBlockedResult;
const totalBlocked = parseInt(totalBlockedRows[0]?.count || "0");
services.autoBlock.status = recentBlocks > 0 ? "running" : "idle";
services.autoBlock.healthy = true;
services.autoBlock.details = {
recentBlocks10min: recentBlocks,
totalBlocked,
lastBlock: lastBlock || "Mai",
interval: "ogni 5 minuti"
};
} catch (error: any) {
services.autoBlock.status = "error";
services.autoBlock.details = { error: error.message };
}
// Check Cleanup (via absence of old detections)
try {
const oldDetResult = await db.execute(
sql`SELECT COUNT(*) as count FROM detections WHERE detected_at < NOW() - INTERVAL '48 hours'`
);
const oldRows = (oldDetResult as any).rows || oldDetResult;
const oldDetections = parseInt(oldRows[0]?.count || "0");
const totalDetResult = await db.execute(sql`SELECT COUNT(*) as count FROM detections`);
const totalRows = (totalDetResult as any).rows || totalDetResult;
const totalDetections = parseInt(totalRows[0]?.count || "0");
services.cleanup.status = oldDetections === 0 ? "running" : "idle";
services.cleanup.healthy = oldDetections === 0;
services.cleanup.details = {
oldDetections48h: oldDetections,
totalDetections,
interval: "ogni ora",
warning: oldDetections > 0 ? `${oldDetections} detection vecchie non ancora pulite` : undefined
};
} catch (error: any) {
services.cleanup.status = "error";
services.cleanup.details = { error: error.message };
}
// Check List Fetcher (via public lists last_updated)
try {
const listsResult = await db.execute(
sql`SELECT COUNT(*) as total,
COUNT(*) FILTER (WHERE enabled = true) as enabled,
MAX(last_fetch) as last_fetch
FROM public_lists`
);
const listRows = (listsResult as any).rows || listsResult;
const totalLists = parseInt(listRows[0]?.total || "0");
const enabledLists = parseInt(listRows[0]?.enabled || "0");
const lastFetched = listRows[0]?.last_fetch;
if (lastFetched) {
const hoursSince = (Date.now() - new Date(lastFetched).getTime()) / (1000 * 60 * 60);
services.listFetcher.status = hoursSince < 1 ? "running" : "idle";
services.listFetcher.healthy = hoursSince < 1;
services.listFetcher.details = { totalLists, enabledLists, lastFetched, hoursSinceLastFetch: hoursSince.toFixed(1), interval: "ogni 10 minuti" };
} else {
services.listFetcher.status = "idle";
services.listFetcher.details = { totalLists, enabledLists, lastFetched: "Mai", interval: "ogni 10 minuti" };
}
} catch (error: any) {
services.listFetcher.status = "error";
services.listFetcher.details = { error: error.message };
}
// Check ML Training (via training history)
try {
const latestTraining = await db.select().from(trainingHistory).orderBy(desc(trainingHistory.trainedAt)).limit(1);
if (latestTraining.length > 0) {
const lastTrainDate = new Date(latestTraining[0].trainedAt);
const daysSince = (Date.now() - lastTrainDate.getTime()) / (1000 * 60 * 60 * 24);
services.mlTraining.status = daysSince < 8 ? "running" : "idle";
services.mlTraining.healthy = daysSince < 8;
services.mlTraining.details = {
lastTraining: latestTraining[0].trainedAt,
daysSinceLastTraining: daysSince.toFixed(1),
lastStatus: latestTraining[0].status,
lastModel: latestTraining[0].modelVersion,
recordsProcessed: latestTraining[0].recordsProcessed,
interval: "settimanale"
};
} else {
services.mlTraining.status = "idle";
services.mlTraining.details = { lastTraining: "Mai", interval: "settimanale" };
}
} catch (error: any) {
services.mlTraining.status = "error";
services.mlTraining.details = { error: error.message };
}
res.json({ services });
} catch (error: any) {
res.status(500).json({ error: "Failed to check services status" });
@@ -796,7 +989,11 @@ export async function registerRoutes(app: Express): Promise<Server> {
});
// Service Control Endpoints (Secured - only allow specific systemd operations)
const ALLOWED_SERVICES = ["ids-ml-backend", "ids-syslog-parser"];
const ALLOWED_SERVICES = [
"ids-ml-backend", "ids-syslog-parser", "ids-backend",
"ids-analytics-aggregator", "ids-auto-block", "ids-cleanup",
"ids-list-fetcher", "ids-ml-training"
];
const ALLOWED_ACTIONS = ["start", "stop", "restart", "status"];
app.post("/api/services/:service/:action", async (req, res) => {
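For context on the secured endpoint above: a minimal sketch of the allowlist guard such a route would apply before invoking systemctl. The `systemctlArgs` helper is hypothetical (the diff truncates before the handler body); the two constant lists mirror the diff.

```typescript
// Hypothetical helper illustrating the allowlist check; not part of the diff.
const ALLOWED_SERVICES = [
  "ids-ml-backend", "ids-syslog-parser", "ids-backend",
  "ids-analytics-aggregator", "ids-auto-block", "ids-cleanup",
  "ids-list-fetcher", "ids-ml-training",
];
const ALLOWED_ACTIONS = ["start", "stop", "restart", "status"];

// Returns the systemctl argv to execute, or null when the request is rejected.
// Passing argv as an array (never an interpolated string) avoids shell injection.
function systemctlArgs(service: string, action: string): string[] | null {
  if (!ALLOWED_SERVICES.includes(service) || !ALLOWED_ACTIONS.includes(action)) {
    return null;
  }
  return ["systemctl", action, service];
}
```

Rejecting anything outside the two fixed lists (rather than sanitizing user input) keeps the attack surface limited to the eight known units and four verbs.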


@@ -55,7 +55,7 @@ export interface IStorage {
getUnblockedDetections(): Promise<Detection[]>;
// Whitelist
getAllWhitelist(): Promise<Whitelist[]>;
getAllWhitelist(options?: { limit?: number; offset?: number; search?: string }): Promise<{ items: Whitelist[]; total: number }>;
getWhitelistByIp(ipAddress: string): Promise<Whitelist | undefined>;
createWhitelist(whitelist: InsertWhitelist): Promise<Whitelist>;
deleteWhitelist(id: string): Promise<boolean>;
@@ -80,13 +80,14 @@ export interface IStorage {
attacksByCountry: Record<string, number>;
attacksByType: Record<string, number>;
recentDetections: Detection[];
blockedCount: number;
}>;
// Public Lists
getAllPublicLists(): Promise<PublicList[]>;
getPublicListById(id: string): Promise<PublicList | undefined>;
createPublicList(list: InsertPublicList): Promise<PublicList>;
updatePublicList(id: string, list: Partial<InsertPublicList>): Promise<PublicList | undefined>;
updatePublicList(id: string, list: Partial<InsertPublicList> & { lastFetch?: Date | null; lastSuccess?: Date | null }): Promise<PublicList | undefined>;
deletePublicList(id: string): Promise<boolean>;
// Public Blacklist IPs
@@ -271,12 +272,40 @@ export class DatabaseStorage implements IStorage {
}
// Whitelist
async getAllWhitelist(): Promise<Whitelist[]> {
return await db
async getAllWhitelist(options?: { limit?: number; offset?: number; search?: string }): Promise<{ items: Whitelist[]; total: number }> {
const limit = options?.limit || 50;
const offset = options?.offset || 0;
const search = options?.search?.trim().toLowerCase();
const conditions: any[] = [eq(whitelist.active, true)];
if (search) {
conditions.push(
sql`(
LOWER(${whitelist.ipAddress}) LIKE ${'%' + search + '%'}
OR LOWER(COALESCE(${whitelist.reason}, '')) LIKE ${'%' + search + '%'}
OR LOWER(COALESCE(${whitelist.comment}, '')) LIKE ${'%' + search + '%'}
OR LOWER(COALESCE(${whitelist.source}, '')) LIKE ${'%' + search + '%'}
)`
);
}
const whereClause = and(...conditions);
const [countResult] = await db
.select({ count: sql<number>`cast(count(*) as integer)` })
.from(whitelist)
.where(whereClause);
const items = await db
.select()
.from(whitelist)
.where(eq(whitelist.active, true))
.orderBy(desc(whitelist.createdAt));
.where(whereClause)
.orderBy(desc(whitelist.createdAt))
.limit(limit)
.offset(offset);
return { items, total: countResult.count };
}
async getWhitelistByIp(ipAddress: string): Promise<Whitelist | undefined> {
@@ -425,6 +454,11 @@ export class DatabaseStorage implements IStorage {
.orderBy(desc(detections.detectedAt))
.limit(100);
const [blockedResult] = await db
.select({ count: sql<number>`count(*)::int` })
.from(detections)
.where(eq(detections.blocked, true));
return {
totalPackets,
attackPackets,
@@ -434,6 +468,7 @@ export class DatabaseStorage implements IStorage {
attacksByCountry,
attacksByType,
recentDetections,
blockedCount: blockedResult?.count || 0,
};
}
@@ -452,7 +487,7 @@ export class DatabaseStorage implements IStorage {
return list;
}
async updatePublicList(id: string, updateData: Partial<InsertPublicList>): Promise<PublicList | undefined> {
async updatePublicList(id: string, updateData: Partial<InsertPublicList> & { lastFetch?: Date | null; lastSuccess?: Date | null }): Promise<PublicList | undefined> {
const [list] = await db
.update(publicLists)
.set(updateData)
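The new paginated `getAllWhitelist(options)` signature above defaults and normalizes its inputs before building the query. A standalone sketch of that normalization step (the `normalizePage` name is hypothetical; the default values mirror the diff):

```typescript
// Hypothetical normalizer mirroring the input defaulting in getAllWhitelist.
interface PageOptions {
  limit?: number;
  offset?: number;
  search?: string;
}

function normalizePage(options?: PageOptions): { limit: number; offset: number; search: string | null } {
  const limit = options?.limit || 50;   // page size defaults to 50
  const offset = options?.offset || 0;  // first page by default
  // Search terms are trimmed and lowercased before use in LIKE patterns.
  const search = options?.search?.trim().toLowerCase() || null;
  return { limit, offset, search };
}
```

Returning `{ items, total }` instead of a bare array lets callers (like the `/api/stats` handler, which passes `{ limit: 1 }`) fetch the count without loading every row.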

uv.lock

@@ -71,16 +71,135 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/0e/61/66938bbb5fc52dbdf84594873d5b51fb1f7c7794e9c0f5bd885f30bc507b/idna-3.11-py3-none-any.whl", hash = "sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea", size = 71008, upload-time = "2025-10-12T14:55:18.883Z" },
]
[[package]]
name = "lxml"
version = "6.0.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/aa/88/262177de60548e5a2bfc46ad28232c9e9cbde697bd94132aeb80364675cb/lxml-6.0.2.tar.gz", hash = "sha256:cd79f3367bd74b317dda655dc8fcfa304d9eb6e4fb06b7168c5cf27f96e0cd62", size = 4073426, upload-time = "2025-09-22T04:04:59.287Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/77/d5/becbe1e2569b474a23f0c672ead8a29ac50b2dc1d5b9de184831bda8d14c/lxml-6.0.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:13e35cbc684aadf05d8711a5d1b5857c92e5e580efa9a0d2be197199c8def607", size = 8634365, upload-time = "2025-09-22T04:00:45.672Z" },
{ url = "https://files.pythonhosted.org/packages/28/66/1ced58f12e804644426b85d0bb8a4478ca77bc1761455da310505f1a3526/lxml-6.0.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3b1675e096e17c6fe9c0e8c81434f5736c0739ff9ac6123c87c2d452f48fc938", size = 4650793, upload-time = "2025-09-22T04:00:47.783Z" },
{ url = "https://files.pythonhosted.org/packages/11/84/549098ffea39dfd167e3f174b4ce983d0eed61f9d8d25b7bf2a57c3247fc/lxml-6.0.2-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:8ac6e5811ae2870953390452e3476694196f98d447573234592d30488147404d", size = 4944362, upload-time = "2025-09-22T04:00:49.845Z" },
{ url = "https://files.pythonhosted.org/packages/ac/bd/f207f16abf9749d2037453d56b643a7471d8fde855a231a12d1e095c4f01/lxml-6.0.2-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:5aa0fc67ae19d7a64c3fe725dc9a1bb11f80e01f78289d05c6f62545affec438", size = 5083152, upload-time = "2025-09-22T04:00:51.709Z" },
{ url = "https://files.pythonhosted.org/packages/15/ae/bd813e87d8941d52ad5b65071b1affb48da01c4ed3c9c99e40abb266fbff/lxml-6.0.2-cp311-cp311-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:de496365750cc472b4e7902a485d3f152ecf57bd3ba03ddd5578ed8ceb4c5964", size = 5023539, upload-time = "2025-09-22T04:00:53.593Z" },
{ url = "https://files.pythonhosted.org/packages/02/cd/9bfef16bd1d874fbe0cb51afb00329540f30a3283beb9f0780adbb7eec03/lxml-6.0.2-cp311-cp311-manylinux_2_26_i686.manylinux_2_28_i686.whl", hash = "sha256:200069a593c5e40b8f6fc0d84d86d970ba43138c3e68619ffa234bc9bb806a4d", size = 5344853, upload-time = "2025-09-22T04:00:55.524Z" },
{ url = "https://files.pythonhosted.org/packages/b8/89/ea8f91594bc5dbb879734d35a6f2b0ad50605d7fb419de2b63d4211765cc/lxml-6.0.2-cp311-cp311-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:7d2de809c2ee3b888b59f995625385f74629707c9355e0ff856445cdcae682b7", size = 5225133, upload-time = "2025-09-22T04:00:57.269Z" },
{ url = "https://files.pythonhosted.org/packages/b9/37/9c735274f5dbec726b2db99b98a43950395ba3d4a1043083dba2ad814170/lxml-6.0.2-cp311-cp311-manylinux_2_31_armv7l.whl", hash = "sha256:b2c3da8d93cf5db60e8858c17684c47d01fee6405e554fb55018dd85fc23b178", size = 4677944, upload-time = "2025-09-22T04:00:59.052Z" },
{ url = "https://files.pythonhosted.org/packages/20/28/7dfe1ba3475d8bfca3878365075abe002e05d40dfaaeb7ec01b4c587d533/lxml-6.0.2-cp311-cp311-manylinux_2_38_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:442de7530296ef5e188373a1ea5789a46ce90c4847e597856570439621d9c553", size = 5284535, upload-time = "2025-09-22T04:01:01.335Z" },
{ url = "https://files.pythonhosted.org/packages/e7/cf/5f14bc0de763498fc29510e3532bf2b4b3a1c1d5d0dff2e900c16ba021ef/lxml-6.0.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:2593c77efde7bfea7f6389f1ab249b15ed4aa5bc5cb5131faa3b843c429fbedb", size = 5067343, upload-time = "2025-09-22T04:01:03.13Z" },
{ url = "https://files.pythonhosted.org/packages/1c/b0/bb8275ab5472f32b28cfbbcc6db7c9d092482d3439ca279d8d6fa02f7025/lxml-6.0.2-cp311-cp311-musllinux_1_2_armv7l.whl", hash = "sha256:3e3cb08855967a20f553ff32d147e14329b3ae70ced6edc2f282b94afbc74b2a", size = 4725419, upload-time = "2025-09-22T04:01:05.013Z" },
{ url = "https://files.pythonhosted.org/packages/25/4c/7c222753bc72edca3b99dbadba1b064209bc8ed4ad448af990e60dcce462/lxml-6.0.2-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:2ed6c667fcbb8c19c6791bbf40b7268ef8ddf5a96940ba9404b9f9a304832f6c", size = 5275008, upload-time = "2025-09-22T04:01:07.327Z" },
{ url = "https://files.pythonhosted.org/packages/6c/8c/478a0dc6b6ed661451379447cdbec77c05741a75736d97e5b2b729687828/lxml-6.0.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:b8f18914faec94132e5b91e69d76a5c1d7b0c73e2489ea8929c4aaa10b76bbf7", size = 5248906, upload-time = "2025-09-22T04:01:09.452Z" },
{ url = "https://files.pythonhosted.org/packages/2d/d9/5be3a6ab2784cdf9accb0703b65e1b64fcdd9311c9f007630c7db0cfcce1/lxml-6.0.2-cp311-cp311-win32.whl", hash = "sha256:6605c604e6daa9e0d7f0a2137bdc47a2e93b59c60a65466353e37f8272f47c46", size = 3610357, upload-time = "2025-09-22T04:01:11.102Z" },
{ url = "https://files.pythonhosted.org/packages/e2/7d/ca6fb13349b473d5732fb0ee3eec8f6c80fc0688e76b7d79c1008481bf1f/lxml-6.0.2-cp311-cp311-win_amd64.whl", hash = "sha256:e5867f2651016a3afd8dd2c8238baa66f1e2802f44bc17e236f547ace6647078", size = 4036583, upload-time = "2025-09-22T04:01:12.766Z" },
{ url = "https://files.pythonhosted.org/packages/ab/a2/51363b5ecd3eab46563645f3a2c3836a2fc67d01a1b87c5017040f39f567/lxml-6.0.2-cp311-cp311-win_arm64.whl", hash = "sha256:4197fb2534ee05fd3e7afaab5d8bfd6c2e186f65ea7f9cd6a82809c887bd1285", size = 3680591, upload-time = "2025-09-22T04:01:14.874Z" },
{ url = "https://files.pythonhosted.org/packages/f3/c8/8ff2bc6b920c84355146cd1ab7d181bc543b89241cfb1ebee824a7c81457/lxml-6.0.2-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:a59f5448ba2ceccd06995c95ea59a7674a10de0810f2ce90c9006f3cbc044456", size = 8661887, upload-time = "2025-09-22T04:01:17.265Z" },
{ url = "https://files.pythonhosted.org/packages/37/6f/9aae1008083bb501ef63284220ce81638332f9ccbfa53765b2b7502203cf/lxml-6.0.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:e8113639f3296706fbac34a30813929e29247718e88173ad849f57ca59754924", size = 4667818, upload-time = "2025-09-22T04:01:19.688Z" },
{ url = "https://files.pythonhosted.org/packages/f1/ca/31fb37f99f37f1536c133476674c10b577e409c0a624384147653e38baf2/lxml-6.0.2-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:a8bef9b9825fa8bc816a6e641bb67219489229ebc648be422af695f6e7a4fa7f", size = 4950807, upload-time = "2025-09-22T04:01:21.487Z" },
{ url = "https://files.pythonhosted.org/packages/da/87/f6cb9442e4bada8aab5ae7e1046264f62fdbeaa6e3f6211b93f4c0dd97f1/lxml-6.0.2-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:65ea18d710fd14e0186c2f973dc60bb52039a275f82d3c44a0e42b43440ea534", size = 5109179, upload-time = "2025-09-22T04:01:23.32Z" },
{ url = "https://files.pythonhosted.org/packages/c8/20/a7760713e65888db79bbae4f6146a6ae5c04e4a204a3c48896c408cd6ed2/lxml-6.0.2-cp312-cp312-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c371aa98126a0d4c739ca93ceffa0fd7a5d732e3ac66a46e74339acd4d334564", size = 5023044, upload-time = "2025-09-22T04:01:25.118Z" },
{ url = "https://files.pythonhosted.org/packages/a2/b0/7e64e0460fcb36471899f75831509098f3fd7cd02a3833ac517433cb4f8f/lxml-6.0.2-cp312-cp312-manylinux_2_26_i686.manylinux_2_28_i686.whl", hash = "sha256:700efd30c0fa1a3581d80a748157397559396090a51d306ea59a70020223d16f", size = 5359685, upload-time = "2025-09-22T04:01:27.398Z" },
{ url = "https://files.pythonhosted.org/packages/b9/e1/e5df362e9ca4e2f48ed6411bd4b3a0ae737cc842e96877f5bf9428055ab4/lxml-6.0.2-cp312-cp312-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:c33e66d44fe60e72397b487ee92e01da0d09ba2d66df8eae42d77b6d06e5eba0", size = 5654127, upload-time = "2025-09-22T04:01:29.629Z" },
{ url = "https://files.pythonhosted.org/packages/c6/d1/232b3309a02d60f11e71857778bfcd4acbdb86c07db8260caf7d008b08f8/lxml-6.0.2-cp312-cp312-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:90a345bbeaf9d0587a3aaffb7006aa39ccb6ff0e96a57286c0cb2fd1520ea192", size = 5253958, upload-time = "2025-09-22T04:01:31.535Z" },
{ url = "https://files.pythonhosted.org/packages/35/35/d955a070994725c4f7d80583a96cab9c107c57a125b20bb5f708fe941011/lxml-6.0.2-cp312-cp312-manylinux_2_31_armv7l.whl", hash = "sha256:064fdadaf7a21af3ed1dcaa106b854077fbeada827c18f72aec9346847cd65d0", size = 4711541, upload-time = "2025-09-22T04:01:33.801Z" },
{ url = "https://files.pythonhosted.org/packages/1e/be/667d17363b38a78c4bd63cfd4b4632029fd68d2c2dc81f25ce9eb5224dd5/lxml-6.0.2-cp312-cp312-manylinux_2_38_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:fbc74f42c3525ac4ffa4b89cbdd00057b6196bcefe8bce794abd42d33a018092", size = 5267426, upload-time = "2025-09-22T04:01:35.639Z" },
{ url = "https://files.pythonhosted.org/packages/ea/47/62c70aa4a1c26569bc958c9ca86af2bb4e1f614e8c04fb2989833874f7ae/lxml-6.0.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:6ddff43f702905a4e32bc24f3f2e2edfe0f8fde3277d481bffb709a4cced7a1f", size = 5064917, upload-time = "2025-09-22T04:01:37.448Z" },
{ url = "https://files.pythonhosted.org/packages/bd/55/6ceddaca353ebd0f1908ef712c597f8570cc9c58130dbb89903198e441fd/lxml-6.0.2-cp312-cp312-musllinux_1_2_armv7l.whl", hash = "sha256:6da5185951d72e6f5352166e3da7b0dc27aa70bd1090b0eb3f7f7212b53f1bb8", size = 4788795, upload-time = "2025-09-22T04:01:39.165Z" },
{ url = "https://files.pythonhosted.org/packages/cf/e8/fd63e15da5e3fd4c2146f8bbb3c14e94ab850589beab88e547b2dbce22e1/lxml-6.0.2-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:57a86e1ebb4020a38d295c04fc79603c7899e0df71588043eb218722dabc087f", size = 5676759, upload-time = "2025-09-22T04:01:41.506Z" },
{ url = "https://files.pythonhosted.org/packages/76/47/b3ec58dc5c374697f5ba37412cd2728f427d056315d124dd4b61da381877/lxml-6.0.2-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:2047d8234fe735ab77802ce5f2297e410ff40f5238aec569ad7c8e163d7b19a6", size = 5255666, upload-time = "2025-09-22T04:01:43.363Z" },
{ url = "https://files.pythonhosted.org/packages/19/93/03ba725df4c3d72afd9596eef4a37a837ce8e4806010569bedfcd2cb68fd/lxml-6.0.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:6f91fd2b2ea15a6800c8e24418c0775a1694eefc011392da73bc6cef2623b322", size = 5277989, upload-time = "2025-09-22T04:01:45.215Z" },
{ url = "https://files.pythonhosted.org/packages/c6/80/c06de80bfce881d0ad738576f243911fccf992687ae09fd80b734712b39c/lxml-6.0.2-cp312-cp312-win32.whl", hash = "sha256:3ae2ce7d6fedfb3414a2b6c5e20b249c4c607f72cb8d2bb7cc9c6ec7c6f4e849", size = 3611456, upload-time = "2025-09-22T04:01:48.243Z" },
{ url = "https://files.pythonhosted.org/packages/f7/d7/0cdfb6c3e30893463fb3d1e52bc5f5f99684a03c29a0b6b605cfae879cd5/lxml-6.0.2-cp312-cp312-win_amd64.whl", hash = "sha256:72c87e5ee4e58a8354fb9c7c84cbf95a1c8236c127a5d1b7683f04bed8361e1f", size = 4011793, upload-time = "2025-09-22T04:01:50.042Z" },
{ url = "https://files.pythonhosted.org/packages/ea/7b/93c73c67db235931527301ed3785f849c78991e2e34f3fd9a6663ffda4c5/lxml-6.0.2-cp312-cp312-win_arm64.whl", hash = "sha256:61cb10eeb95570153e0c0e554f58df92ecf5109f75eacad4a95baa709e26c3d6", size = 3672836, upload-time = "2025-09-22T04:01:52.145Z" },
{ url = "https://files.pythonhosted.org/packages/53/fd/4e8f0540608977aea078bf6d79f128e0e2c2bba8af1acf775c30baa70460/lxml-6.0.2-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:9b33d21594afab46f37ae58dfadd06636f154923c4e8a4d754b0127554eb2e77", size = 8648494, upload-time = "2025-09-22T04:01:54.242Z" },
{ url = "https://files.pythonhosted.org/packages/5d/f4/2a94a3d3dfd6c6b433501b8d470a1960a20ecce93245cf2db1706adf6c19/lxml-6.0.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:6c8963287d7a4c5c9a432ff487c52e9c5618667179c18a204bdedb27310f022f", size = 4661146, upload-time = "2025-09-22T04:01:56.282Z" },
{ url = "https://files.pythonhosted.org/packages/25/2e/4efa677fa6b322013035d38016f6ae859d06cac67437ca7dc708a6af7028/lxml-6.0.2-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:1941354d92699fb5ffe6ed7b32f9649e43c2feb4b97205f75866f7d21aa91452", size = 4946932, upload-time = "2025-09-22T04:01:58.989Z" },
{ url = "https://files.pythonhosted.org/packages/ce/0f/526e78a6d38d109fdbaa5049c62e1d32fdd70c75fb61c4eadf3045d3d124/lxml-6.0.2-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:bb2f6ca0ae2d983ded09357b84af659c954722bbf04dea98030064996d156048", size = 5100060, upload-time = "2025-09-22T04:02:00.812Z" },
{ url = "https://files.pythonhosted.org/packages/81/76/99de58d81fa702cc0ea7edae4f4640416c2062813a00ff24bd70ac1d9c9b/lxml-6.0.2-cp313-cp313-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:eb2a12d704f180a902d7fa778c6d71f36ceb7b0d317f34cdc76a5d05aa1dd1df", size = 5019000, upload-time = "2025-09-22T04:02:02.671Z" },
{ url = "https://files.pythonhosted.org/packages/b5/35/9e57d25482bc9a9882cb0037fdb9cc18f4b79d85df94fa9d2a89562f1d25/lxml-6.0.2-cp313-cp313-manylinux_2_26_i686.manylinux_2_28_i686.whl", hash = "sha256:6ec0e3f745021bfed19c456647f0298d60a24c9ff86d9d051f52b509663feeb1", size = 5348496, upload-time = "2025-09-22T04:02:04.904Z" },
{ url = "https://files.pythonhosted.org/packages/a6/8e/cb99bd0b83ccc3e8f0f528e9aa1f7a9965dfec08c617070c5db8d63a87ce/lxml-6.0.2-cp313-cp313-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:846ae9a12d54e368933b9759052d6206a9e8b250291109c48e350c1f1f49d916", size = 5643779, upload-time = "2025-09-22T04:02:06.689Z" },
{ url = "https://files.pythonhosted.org/packages/d0/34/9e591954939276bb679b73773836c6684c22e56d05980e31d52a9a8deb18/lxml-6.0.2-cp313-cp313-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ef9266d2aa545d7374938fb5c484531ef5a2ec7f2d573e62f8ce722c735685fd", size = 5244072, upload-time = "2025-09-22T04:02:08.587Z" },
{ url = "https://files.pythonhosted.org/packages/8d/27/b29ff065f9aaca443ee377aff699714fcbffb371b4fce5ac4ca759e436d5/lxml-6.0.2-cp313-cp313-manylinux_2_31_armv7l.whl", hash = "sha256:4077b7c79f31755df33b795dc12119cb557a0106bfdab0d2c2d97bd3cf3dffa6", size = 4718675, upload-time = "2025-09-22T04:02:10.783Z" },
{ url = "https://files.pythonhosted.org/packages/2b/9f/f756f9c2cd27caa1a6ef8c32ae47aadea697f5c2c6d07b0dae133c244fbe/lxml-6.0.2-cp313-cp313-manylinux_2_38_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:a7c5d5e5f1081955358533be077166ee97ed2571d6a66bdba6ec2f609a715d1a", size = 5255171, upload-time = "2025-09-22T04:02:12.631Z" },
{ url = "https://files.pythonhosted.org/packages/61/46/bb85ea42d2cb1bd8395484fd72f38e3389611aa496ac7772da9205bbda0e/lxml-6.0.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:8f8d0cbd0674ee89863a523e6994ac25fd5be9c8486acfc3e5ccea679bad2679", size = 5057175, upload-time = "2025-09-22T04:02:14.718Z" },
{ url = "https://files.pythonhosted.org/packages/95/0c/443fc476dcc8e41577f0af70458c50fe299a97bb6b7505bb1ae09aa7f9ac/lxml-6.0.2-cp313-cp313-musllinux_1_2_armv7l.whl", hash = "sha256:2cbcbf6d6e924c28f04a43f3b6f6e272312a090f269eff68a2982e13e5d57659", size = 4785688, upload-time = "2025-09-22T04:02:16.957Z" },
{ url = "https://files.pythonhosted.org/packages/48/78/6ef0b359d45bb9697bc5a626e1992fa5d27aa3f8004b137b2314793b50a0/lxml-6.0.2-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:dfb874cfa53340009af6bdd7e54ebc0d21012a60a4e65d927c2e477112e63484", size = 5660655, upload-time = "2025-09-22T04:02:18.815Z" },
{ url = "https://files.pythonhosted.org/packages/ff/ea/e1d33808f386bc1339d08c0dcada6e4712d4ed8e93fcad5f057070b7988a/lxml-6.0.2-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:fb8dae0b6b8b7f9e96c26fdd8121522ce5de9bb5538010870bd538683d30e9a2", size = 5247695, upload-time = "2025-09-22T04:02:20.593Z" },
{ url = "https://files.pythonhosted.org/packages/4f/47/eba75dfd8183673725255247a603b4ad606f4ae657b60c6c145b381697da/lxml-6.0.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:358d9adae670b63e95bc59747c72f4dc97c9ec58881d4627fe0120da0f90d314", size = 5269841, upload-time = "2025-09-22T04:02:22.489Z" },
{ url = "https://files.pythonhosted.org/packages/76/04/5c5e2b8577bc936e219becb2e98cdb1aca14a4921a12995b9d0c523502ae/lxml-6.0.2-cp313-cp313-win32.whl", hash = "sha256:e8cd2415f372e7e5a789d743d133ae474290a90b9023197fd78f32e2dc6873e2", size = 3610700, upload-time = "2025-09-22T04:02:24.465Z" },
{ url = "https://files.pythonhosted.org/packages/fe/0a/4643ccc6bb8b143e9f9640aa54e38255f9d3b45feb2cbe7ae2ca47e8782e/lxml-6.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:b30d46379644fbfc3ab81f8f82ae4de55179414651f110a1514f0b1f8f6cb2d7", size = 4010347, upload-time = "2025-09-22T04:02:26.286Z" },
{ url = "https://files.pythonhosted.org/packages/31/ef/dcf1d29c3f530577f61e5fe2f1bd72929acf779953668a8a47a479ae6f26/lxml-6.0.2-cp313-cp313-win_arm64.whl", hash = "sha256:13dcecc9946dca97b11b7c40d29fba63b55ab4170d3c0cf8c0c164343b9bfdcf", size = 3671248, upload-time = "2025-09-22T04:02:27.918Z" },
{ url = "https://files.pythonhosted.org/packages/03/15/d4a377b385ab693ce97b472fe0c77c2b16ec79590e688b3ccc71fba19884/lxml-6.0.2-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:b0c732aa23de8f8aec23f4b580d1e52905ef468afb4abeafd3fec77042abb6fe", size = 8659801, upload-time = "2025-09-22T04:02:30.113Z" },
{ url = "https://files.pythonhosted.org/packages/c8/e8/c128e37589463668794d503afaeb003987373c5f94d667124ffd8078bbd9/lxml-6.0.2-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:4468e3b83e10e0317a89a33d28f7aeba1caa4d1a6fd457d115dd4ffe90c5931d", size = 4659403, upload-time = "2025-09-22T04:02:32.119Z" },
{ url = "https://files.pythonhosted.org/packages/00/ce/74903904339decdf7da7847bb5741fc98a5451b42fc419a86c0c13d26fe2/lxml-6.0.2-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:abd44571493973bad4598a3be7e1d807ed45aa2adaf7ab92ab7c62609569b17d", size = 4966974, upload-time = "2025-09-22T04:02:34.155Z" },
{ url = "https://files.pythonhosted.org/packages/1f/d3/131dec79ce61c5567fecf82515bd9bc36395df42501b50f7f7f3bd065df0/lxml-6.0.2-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:370cd78d5855cfbffd57c422851f7d3864e6ae72d0da615fca4dad8c45d375a5", size = 5102953, upload-time = "2025-09-22T04:02:36.054Z" },
{ url = "https://files.pythonhosted.org/packages/3a/ea/a43ba9bb750d4ffdd885f2cd333572f5bb900cd2408b67fdda07e85978a0/lxml-6.0.2-cp314-cp314-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:901e3b4219fa04ef766885fb40fa516a71662a4c61b80c94d25336b4934b71c0", size = 5055054, upload-time = "2025-09-22T04:02:38.154Z" },
{ url = "https://files.pythonhosted.org/packages/60/23/6885b451636ae286c34628f70a7ed1fcc759f8d9ad382d132e1c8d3d9bfd/lxml-6.0.2-cp314-cp314-manylinux_2_26_i686.manylinux_2_28_i686.whl", hash = "sha256:a4bf42d2e4cf52c28cc1812d62426b9503cdb0c87a6de81442626aa7d69707ba", size = 5352421, upload-time = "2025-09-22T04:02:40.413Z" },
{ url = "https://files.pythonhosted.org/packages/48/5b/fc2ddfc94ddbe3eebb8e9af6e3fd65e2feba4967f6a4e9683875c394c2d8/lxml-6.0.2-cp314-cp314-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:b2c7fdaa4d7c3d886a42534adec7cfac73860b89b4e5298752f60aa5984641a0", size = 5673684, upload-time = "2025-09-22T04:02:42.288Z" },
{ url = "https://files.pythonhosted.org/packages/29/9c/47293c58cc91769130fbf85531280e8cc7868f7fbb6d92f4670071b9cb3e/lxml-6.0.2-cp314-cp314-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:98a5e1660dc7de2200b00d53fa00bcd3c35a3608c305d45a7bbcaf29fa16e83d", size = 5252463, upload-time = "2025-09-22T04:02:44.165Z" },
{ url = "https://files.pythonhosted.org/packages/9b/da/ba6eceb830c762b48e711ded880d7e3e89fc6c7323e587c36540b6b23c6b/lxml-6.0.2-cp314-cp314-manylinux_2_31_armv7l.whl", hash = "sha256:dc051506c30b609238d79eda75ee9cab3e520570ec8219844a72a46020901e37", size = 4698437, upload-time = "2025-09-22T04:02:46.524Z" },
{ url = "https://files.pythonhosted.org/packages/a5/24/7be3f82cb7990b89118d944b619e53c656c97dc89c28cfb143fdb7cd6f4d/lxml-6.0.2-cp314-cp314-manylinux_2_38_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:8799481bbdd212470d17513a54d568f44416db01250f49449647b5ab5b5dccb9", size = 5269890, upload-time = "2025-09-22T04:02:48.812Z" },
{ url = "https://files.pythonhosted.org/packages/1b/bd/dcfb9ea1e16c665efd7538fc5d5c34071276ce9220e234217682e7d2c4a5/lxml-6.0.2-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:9261bb77c2dab42f3ecd9103951aeca2c40277701eb7e912c545c1b16e0e4917", size = 5097185, upload-time = "2025-09-22T04:02:50.746Z" },
{ url = "https://files.pythonhosted.org/packages/21/04/a60b0ff9314736316f28316b694bccbbabe100f8483ad83852d77fc7468e/lxml-6.0.2-cp314-cp314-musllinux_1_2_armv7l.whl", hash = "sha256:65ac4a01aba353cfa6d5725b95d7aed6356ddc0a3cd734de00124d285b04b64f", size = 4745895, upload-time = "2025-09-22T04:02:52.968Z" },
{ url = "https://files.pythonhosted.org/packages/d6/bd/7d54bd1846e5a310d9c715921c5faa71cf5c0853372adf78aee70c8d7aa2/lxml-6.0.2-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:b22a07cbb82fea98f8a2fd814f3d1811ff9ed76d0fc6abc84eb21527596e7cc8", size = 5695246, upload-time = "2025-09-22T04:02:54.798Z" },
{ url = "https://files.pythonhosted.org/packages/fd/32/5643d6ab947bc371da21323acb2a6e603cedbe71cb4c99c8254289ab6f4e/lxml-6.0.2-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:d759cdd7f3e055d6bc8d9bec3ad905227b2e4c785dc16c372eb5b5e83123f48a", size = 5260797, upload-time = "2025-09-22T04:02:57.058Z" },
{ url = "https://files.pythonhosted.org/packages/33/da/34c1ec4cff1eea7d0b4cd44af8411806ed943141804ac9c5d565302afb78/lxml-6.0.2-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:945da35a48d193d27c188037a05fec5492937f66fb1958c24fc761fb9d40d43c", size = 5277404, upload-time = "2025-09-22T04:02:58.966Z" },
{ url = "https://files.pythonhosted.org/packages/82/57/4eca3e31e54dc89e2c3507e1cd411074a17565fa5ffc437c4ae0a00d439e/lxml-6.0.2-cp314-cp314-win32.whl", hash = "sha256:be3aaa60da67e6153eb15715cc2e19091af5dc75faef8b8a585aea372507384b", size = 3670072, upload-time = "2025-09-22T04:03:38.05Z" },
{ url = "https://files.pythonhosted.org/packages/e3/e0/c96cf13eccd20c9421ba910304dae0f619724dcf1702864fd59dd386404d/lxml-6.0.2-cp314-cp314-win_amd64.whl", hash = "sha256:fa25afbadead523f7001caf0c2382afd272c315a033a7b06336da2637d92d6ed", size = 4080617, upload-time = "2025-09-22T04:03:39.835Z" },
{ url = "https://files.pythonhosted.org/packages/d5/5d/b3f03e22b3d38d6f188ef044900a9b29b2fe0aebb94625ce9fe244011d34/lxml-6.0.2-cp314-cp314-win_arm64.whl", hash = "sha256:063eccf89df5b24e361b123e257e437f9e9878f425ee9aae3144c77faf6da6d8", size = 3754930, upload-time = "2025-09-22T04:03:41.565Z" },
{ url = "https://files.pythonhosted.org/packages/5e/5c/42c2c4c03554580708fc738d13414801f340c04c3eff90d8d2d227145275/lxml-6.0.2-cp314-cp314t-macosx_10_13_universal2.whl", hash = "sha256:6162a86d86893d63084faaf4ff937b3daea233e3682fb4474db07395794fa80d", size = 8910380, upload-time = "2025-09-22T04:03:01.645Z" },
{ url = "https://files.pythonhosted.org/packages/bf/4f/12df843e3e10d18d468a7557058f8d3733e8b6e12401f30b1ef29360740f/lxml-6.0.2-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:414aaa94e974e23a3e92e7ca5b97d10c0cf37b6481f50911032c69eeb3991bba", size = 4775632, upload-time = "2025-09-22T04:03:03.814Z" },
{ url = "https://files.pythonhosted.org/packages/e4/0c/9dc31e6c2d0d418483cbcb469d1f5a582a1cd00a1f4081953d44051f3c50/lxml-6.0.2-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:48461bd21625458dd01e14e2c38dd0aea69addc3c4f960c30d9f59d7f93be601", size = 4975171, upload-time = "2025-09-22T04:03:05.651Z" },
{ url = "https://files.pythonhosted.org/packages/e7/2b/9b870c6ca24c841bdd887504808f0417aa9d8d564114689266f19ddf29c8/lxml-6.0.2-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:25fcc59afc57d527cfc78a58f40ab4c9b8fd096a9a3f964d2781ffb6eb33f4ed", size = 5110109, upload-time = "2025-09-22T04:03:07.452Z" },
{ url = "https://files.pythonhosted.org/packages/bf/0c/4f5f2a4dd319a178912751564471355d9019e220c20d7db3fb8307ed8582/lxml-6.0.2-cp314-cp314t-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5179c60288204e6ddde3f774a93350177e08876eaf3ab78aa3a3649d43eb7d37", size = 5041061, upload-time = "2025-09-22T04:03:09.297Z" },
{ url = "https://files.pythonhosted.org/packages/12/64/554eed290365267671fe001a20d72d14f468ae4e6acef1e179b039436967/lxml-6.0.2-cp314-cp314t-manylinux_2_26_i686.manylinux_2_28_i686.whl", hash = "sha256:967aab75434de148ec80597b75062d8123cadf2943fb4281f385141e18b21338", size = 5306233, upload-time = "2025-09-22T04:03:11.651Z" },
{ url = "https://files.pythonhosted.org/packages/7a/31/1d748aa275e71802ad9722df32a7a35034246b42c0ecdd8235412c3396ef/lxml-6.0.2-cp314-cp314t-manylinux_2_26_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:d100fcc8930d697c6561156c6810ab4a508fb264c8b6779e6e61e2ed5e7558f9", size = 5604739, upload-time = "2025-09-22T04:03:13.592Z" },
{ url = "https://files.pythonhosted.org/packages/8f/41/2c11916bcac09ed561adccacceaedd2bf0e0b25b297ea92aab99fd03d0fa/lxml-6.0.2-cp314-cp314t-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:2ca59e7e13e5981175b8b3e4ab84d7da57993eeff53c07764dcebda0d0e64ecd", size = 5225119, upload-time = "2025-09-22T04:03:15.408Z" },
{ url = "https://files.pythonhosted.org/packages/99/05/4e5c2873d8f17aa018e6afde417c80cc5d0c33be4854cce3ef5670c49367/lxml-6.0.2-cp314-cp314t-manylinux_2_31_armv7l.whl", hash = "sha256:957448ac63a42e2e49531b9d6c0fa449a1970dbc32467aaad46f11545be9af1d", size = 4633665, upload-time = "2025-09-22T04:03:17.262Z" },
{ url = "https://files.pythonhosted.org/packages/0f/c9/dcc2da1bebd6275cdc723b515f93edf548b82f36a5458cca3578bc899332/lxml-6.0.2-cp314-cp314t-manylinux_2_38_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:b7fc49c37f1786284b12af63152fe1d0990722497e2d5817acfe7a877522f9a9", size = 5234997, upload-time = "2025-09-22T04:03:19.14Z" },
{ url = "https://files.pythonhosted.org/packages/9c/e2/5172e4e7468afca64a37b81dba152fc5d90e30f9c83c7c3213d6a02a5ce4/lxml-6.0.2-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:e19e0643cc936a22e837f79d01a550678da8377d7d801a14487c10c34ee49c7e", size = 5090957, upload-time = "2025-09-22T04:03:21.436Z" },
{ url = "https://files.pythonhosted.org/packages/a5/b3/15461fd3e5cd4ddcb7938b87fc20b14ab113b92312fc97afe65cd7c85de1/lxml-6.0.2-cp314-cp314t-musllinux_1_2_armv7l.whl", hash = "sha256:1db01e5cf14345628e0cbe71067204db658e2fb8e51e7f33631f5f4735fefd8d", size = 4764372, upload-time = "2025-09-22T04:03:23.27Z" },
{ url = "https://files.pythonhosted.org/packages/05/33/f310b987c8bf9e61c4dd8e8035c416bd3230098f5e3cfa69fc4232de7059/lxml-6.0.2-cp314-cp314t-musllinux_1_2_ppc64le.whl", hash = "sha256:875c6b5ab39ad5291588aed6925fac99d0097af0dd62f33c7b43736043d4a2ec", size = 5634653, upload-time = "2025-09-22T04:03:25.767Z" },
{ url = "https://files.pythonhosted.org/packages/70/ff/51c80e75e0bc9382158133bdcf4e339b5886c6ee2418b5199b3f1a61ed6d/lxml-6.0.2-cp314-cp314t-musllinux_1_2_riscv64.whl", hash = "sha256:cdcbed9ad19da81c480dfd6dd161886db6096083c9938ead313d94b30aadf272", size = 5233795, upload-time = "2025-09-22T04:03:27.62Z" },
{ url = "https://files.pythonhosted.org/packages/56/4d/4856e897df0d588789dd844dbed9d91782c4ef0b327f96ce53c807e13128/lxml-6.0.2-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:80dadc234ebc532e09be1975ff538d154a7fa61ea5031c03d25178855544728f", size = 5257023, upload-time = "2025-09-22T04:03:30.056Z" },
{ url = "https://files.pythonhosted.org/packages/0f/85/86766dfebfa87bea0ab78e9ff7a4b4b45225df4b4d3b8cc3c03c5cd68464/lxml-6.0.2-cp314-cp314t-win32.whl", hash = "sha256:da08e7bb297b04e893d91087df19638dc7a6bb858a954b0cc2b9f5053c922312", size = 3911420, upload-time = "2025-09-22T04:03:32.198Z" },
{ url = "https://files.pythonhosted.org/packages/fe/1a/b248b355834c8e32614650b8008c69ffeb0ceb149c793961dd8c0b991bb3/lxml-6.0.2-cp314-cp314t-win_amd64.whl", hash = "sha256:252a22982dca42f6155125ac76d3432e548a7625d56f5a273ee78a5057216eca", size = 4406837, upload-time = "2025-09-22T04:03:34.027Z" },
{ url = "https://files.pythonhosted.org/packages/92/aa/df863bcc39c5e0946263454aba394de8a9084dbaff8ad143846b0d844739/lxml-6.0.2-cp314-cp314t-win_arm64.whl", hash = "sha256:bb4c1847b303835d89d785a18801a883436cdfd5dc3d62947f9c49e24f0f5a2c", size = 3822205, upload-time = "2025-09-22T04:03:36.249Z" },
{ url = "https://files.pythonhosted.org/packages/0b/11/29d08bc103a62c0eba8016e7ed5aeebbf1e4312e83b0b1648dd203b0e87d/lxml-6.0.2-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:1c06035eafa8404b5cf475bb37a9f6088b0aca288d4ccc9d69389750d5543700", size = 3949829, upload-time = "2025-09-22T04:04:45.608Z" },
{ url = "https://files.pythonhosted.org/packages/12/b3/52ab9a3b31e5ab8238da241baa19eec44d2ab426532441ee607165aebb52/lxml-6.0.2-pp311-pypy311_pp73-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:c7d13103045de1bdd6fe5d61802565f1a3537d70cd3abf596aa0af62761921ee", size = 4226277, upload-time = "2025-09-22T04:04:47.754Z" },
{ url = "https://files.pythonhosted.org/packages/a0/33/1eaf780c1baad88224611df13b1c2a9dfa460b526cacfe769103ff50d845/lxml-6.0.2-pp311-pypy311_pp73-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:0a3c150a95fbe5ac91de323aa756219ef9cf7fde5a3f00e2281e30f33fa5fa4f", size = 4330433, upload-time = "2025-09-22T04:04:49.907Z" },
{ url = "https://files.pythonhosted.org/packages/7a/c1/27428a2ff348e994ab4f8777d3a0ad510b6b92d37718e5887d2da99952a2/lxml-6.0.2-pp311-pypy311_pp73-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:60fa43be34f78bebb27812ed90f1925ec99560b0fa1decdb7d12b84d857d31e9", size = 4272119, upload-time = "2025-09-22T04:04:51.801Z" },
{ url = "https://files.pythonhosted.org/packages/f0/d0/3020fa12bcec4ab62f97aab026d57c2f0cfd480a558758d9ca233bb6a79d/lxml-6.0.2-pp311-pypy311_pp73-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:21c73b476d3cfe836be731225ec3421fa2f048d84f6df6a8e70433dff1376d5a", size = 4417314, upload-time = "2025-09-22T04:04:55.024Z" },
{ url = "https://files.pythonhosted.org/packages/6c/77/d7f491cbc05303ac6801651aabeb262d43f319288c1ea96c66b1d2692ff3/lxml-6.0.2-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:27220da5be049e936c3aca06f174e8827ca6445a4353a1995584311487fc4e3e", size = 3518768, upload-time = "2025-09-22T04:04:57.097Z" },
]
[[package]]
name = "python-docx"
version = "1.2.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "lxml" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a9/f7/eddfe33871520adab45aaa1a71f0402a2252050c14c7e3009446c8f4701c/python_docx-1.2.0.tar.gz", hash = "sha256:7bc9d7b7d8a69c9c02ca09216118c86552704edc23bac179283f2e38f86220ce", size = 5723256, upload-time = "2025-06-16T20:46:27.921Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d0/00/1e03a4989fa5795da308cd774f05b704ace555a70f9bf9d3be057b680bcf/python_docx-1.2.0-py3-none-any.whl", hash = "sha256:3fd478f3250fbbbfd3b94fe1e985955737c145627498896a8a6bf81f4baf66c7", size = 252987, upload-time = "2025-06-16T20:46:22.506Z" },
]
[[package]]
name = "repl-nix-workspace"
version = "0.1.0"
source = { virtual = "." }
dependencies = [
{ name = "httpx" },
{ name = "python-docx" },
]
[package.metadata]
requires-dist = [
{ name = "httpx", specifier = ">=0.28.1" },
{ name = "python-docx", specifier = ">=1.2.0" },
]
[[package]]
name = "sniffio"


@@ -1,7 +1,115 @@
{
"version": "1.0.104",
"lastUpdate": "2026-02-14T09:45:26.595Z",
"version": "1.0.122",
"lastUpdate": "2026-02-17T09:13:40.571Z",
"changelog": [
{
"version": "1.0.122",
"date": "2026-02-17",
"type": "patch",
"description": "Deployment automatico v1.0.122"
},
{
"version": "1.0.121",
"date": "2026-02-17",
"type": "patch",
"description": "Deployment automatico v1.0.121"
},
{
"version": "1.0.120",
"date": "2026-02-17",
"type": "patch",
"description": "Deployment automatico v1.0.120"
},
{
"version": "1.0.119",
"date": "2026-02-17",
"type": "patch",
"description": "Deployment automatico v1.0.119"
},
{
"version": "1.0.118",
"date": "2026-02-16",
"type": "patch",
"description": "Deployment automatico v1.0.118"
},
{
"version": "1.0.117",
"date": "2026-02-16",
"type": "patch",
"description": "Deployment automatico v1.0.117"
},
{
"version": "1.0.116",
"date": "2026-02-16",
"type": "patch",
"description": "Deployment automatico v1.0.116"
},
{
"version": "1.0.115",
"date": "2026-02-16",
"type": "patch",
"description": "Deployment automatico v1.0.115"
},
{
"version": "1.0.114",
"date": "2026-02-16",
"type": "patch",
"description": "Deployment automatico v1.0.114"
},
{
"version": "1.0.113",
"date": "2026-02-16",
"type": "patch",
"description": "Deployment automatico v1.0.113"
},
{
"version": "1.0.112",
"date": "2026-02-16",
"type": "patch",
"description": "Deployment automatico v1.0.112"
},
{
"version": "1.0.111",
"date": "2026-02-16",
"type": "patch",
"description": "Deployment automatico v1.0.111"
},
{
"version": "1.0.110",
"date": "2026-02-16",
"type": "patch",
"description": "Deployment automatico v1.0.110"
},
{
"version": "1.0.109",
"date": "2026-02-16",
"type": "patch",
"description": "Deployment automatico v1.0.109"
},
{
"version": "1.0.108",
"date": "2026-02-14",
"type": "patch",
"description": "Deployment automatico v1.0.108"
},
{
"version": "1.0.107",
"date": "2026-02-14",
"type": "patch",
"description": "Deployment automatico v1.0.107"
},
{
"version": "1.0.106",
"date": "2026-02-14",
"type": "patch",
"description": "Deployment automatico v1.0.106"
},
{
"version": "1.0.105",
"date": "2026-02-14",
"type": "patch",
"description": "Deployment automatico v1.0.105"
},
{
"version": "1.0.104",
"date": "2026-02-14",
@@ -193,114 +301,6 @@
"date": "2025-11-25",
"type": "patch",
"description": "Deployment automatico v1.0.73"
},
{
"version": "1.0.72",
"date": "2025-11-25",
"type": "patch",
"description": "Deployment automatico v1.0.72"
},
{
"version": "1.0.71",
"date": "2025-11-25",
"type": "patch",
"description": "Deployment automatico v1.0.71"
},
{
"version": "1.0.70",
"date": "2025-11-25",
"type": "patch",
"description": "Deployment automatico v1.0.70"
},
{
"version": "1.0.69",
"date": "2025-11-25",
"type": "patch",
"description": "Deployment automatico v1.0.69"
},
{
"version": "1.0.68",
"date": "2025-11-24",
"type": "patch",
"description": "Deployment automatico v1.0.68"
},
{
"version": "1.0.67",
"date": "2025-11-24",
"type": "patch",
"description": "Deployment automatico v1.0.67"
},
{
"version": "1.0.66",
"date": "2025-11-24",
"type": "patch",
"description": "Deployment automatico v1.0.66"
},
{
"version": "1.0.65",
"date": "2025-11-24",
"type": "patch",
"description": "Deployment automatico v1.0.65"
},
{
"version": "1.0.64",
"date": "2025-11-24",
"type": "patch",
"description": "Deployment automatico v1.0.64"
},
{
"version": "1.0.63",
"date": "2025-11-24",
"type": "patch",
"description": "Deployment automatico v1.0.63"
},
{
"version": "1.0.62",
"date": "2025-11-24",
"type": "patch",
"description": "Deployment automatico v1.0.62"
},
{
"version": "1.0.61",
"date": "2025-11-24",
"type": "patch",
"description": "Deployment automatico v1.0.61"
},
{
"version": "1.0.60",
"date": "2025-11-24",
"type": "patch",
"description": "Deployment automatico v1.0.60"
},
{
"version": "1.0.59",
"date": "2025-11-24",
"type": "patch",
"description": "Deployment automatico v1.0.59"
},
{
"version": "1.0.58",
"date": "2025-11-24",
"type": "patch",
"description": "Deployment automatico v1.0.58"
},
{
"version": "1.0.57",
"date": "2025-11-24",
"type": "patch",
"description": "Deployment automatico v1.0.57"
},
{
"version": "1.0.56",
"date": "2025-11-24",
"type": "patch",
"description": "Deployment automatico v1.0.56"
},
{
"version": "1.0.55",
"date": "2025-11-24",
"type": "patch",
"description": "Deployment automatico v1.0.55"
}
]
}