Compare commits
No commits in common. "main" and "v1.0.54" have entirely different histories.
.replit (+12 -0)
@@ -14,6 +14,18 @@ run = ["npm", "run", "start"]
 localPort = 5000
 externalPort = 80
 
+[[ports]]
+localPort = 41303
+externalPort = 3002
+
+[[ports]]
+localPort = 43471
+externalPort = 3003
+
+[[ports]]
+localPort = 43803
+externalPort = 3000
+
 [env]
 PORT = "5000"

@@ -1,311 +0,0 @@
# Fix MikroTik API Connection

## 🐛 PROBLEM SOLVED

**Error**: MikroTik API connection timeout - the router was not responding to HTTP requests.

**Root cause**: confusion between the **Binary API** (port 8728) and the **REST API** (port 80/443).

## 🔍 MikroTik API: Binary vs REST

MikroTik RouterOS has **TWO completely different kinds of API**:

| Type | Port | Protocol | RouterOS | Compatibility |
|------|------|----------|----------|---------------|
| **Binary API** | 8728 | Proprietary RouterOS | All | ❌ Not HTTP (`routeros-api` library) |
| **REST API** | 80/443 | Standard HTTP/HTTPS | **>= 7.1** | ✅ HTTP with `httpx` |

**The IDS uses the REST API** (httpx + HTTP), therefore:
- ✅ **Port 80** (HTTP) - **RECOMMENDED**
- ✅ **Port 443** (HTTPS) - if SSL is required
- ❌ **Port 8728** - Binary API, NOT REST (times out)
- ❌ **Port 8729** - Binary API over SSL, NOT REST (times out)

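For reference, a minimal Python sketch of such a REST call with `httpx` (illustrative only: the host, credentials and timeout are placeholders, and this is not the project's actual `mikrotik_manager.py` code):

```python
# Minimal sketch of a MikroTik REST API call over HTTP (RouterOS >= 7.1).
# The host, credentials and timeout below are placeholders.
import httpx

base_url = "http://185.203.24.2:80/rest"  # the REST API is served by the router's www service

with httpx.Client(auth=("admin", "password"), timeout=5.0) as client:
    # Same endpoint as the manual curl test further down
    resp = client.get(f"{base_url}/system/identity")
    resp.raise_for_status()
    print(resp.json())  # e.g. {"name": "AlfaBit"}
```

The only point of the sketch is that the REST API is plain HTTP with Basic auth; against port 8728 the same request would hang until the timeout, exactly as described above.
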
## ✅ SOLUTION

### 1️⃣ Check the RouterOS Version

```bash
# On the MikroTik router (via Winbox/SSH)
/system resource print
```

**If RouterOS >= 7.1** → use the **REST API** (port 80/443)
**If RouterOS < 7.1** → the REST API does not exist; use the Binary API

### 2️⃣ Configure the Correct Port

**For RouterOS 7.14.2 (Alfabit):**

```sql
-- Database: use port 80 (REST API over HTTP)
UPDATE routers SET api_port = 80 WHERE name = 'Alfabit';
```

**Available ports**:
- **80** → REST API HTTP (✅ RECOMMENDED)
- **443** → REST API HTTPS (if SSL is required)
- ~~8728~~ → Binary API (not compatible)
- ~~8729~~ → Binary API over SSL (not compatible)

### 3️⃣ Manual Test

```bash
# Test the connection on port 80
curl http://185.203.24.2:80/rest/system/identity \
  -u admin:password \
  --max-time 5

# Expected output:
# {"name":"AlfaBit"}
```

---

## 📋 CHECK THE ROUTER CONFIGURATION

### 1️⃣ Check the Database

```sql
-- On AlmaLinux
psql $DATABASE_URL -c "SELECT name, ip_address, api_port, username, enabled FROM routers WHERE enabled = true;"
```

**Expected output**:
```
name          | ip_address    | api_port | username | enabled
--------------+---------------+----------+----------+---------
Alfabit       | 185.203.24.2  | 80       | admin    | t
```

**Check that**:
- ✅ `api_port` = **80** (REST API HTTP)
- ✅ `enabled` = **true**
- ✅ `username` and `password` are correct

**If the port is wrong**:
```sql
-- Change the port from 8728 to 80
UPDATE routers SET api_port = 80 WHERE ip_address = '185.203.24.2';
```

### 2️⃣ Test the Connection from Python

```bash
# On AlmaLinux
cd /opt/ids/python_ml
source venv/bin/activate

# Automated connection test (uses the data from the database)
python3 test_mikrotik_connection.py
```

**Expected output**:
```
✅ Connessione OK!
✅ Trovati X IP in lista 'ddos_blocked'
✅ IP bloccato con successo!
✅ IP sbloccato con successo!
```

---

## 🚀 DEPLOYMENT ON ALMALINUX

### Complete Workflow

#### 1️⃣ **On Replit** (ALREADY DONE ✅)
- File `python_ml/mikrotik_manager.py` modified
- Fix already committed on Replit

#### 2️⃣ **Locally - Push to GitLab**
```bash
# From your local machine (NOT on Replit - it is blocked there)
./push-gitlab.sh
```

Required input:
```
Commit message: Fix MikroTik API - porta non usata in base_url
```

#### 3️⃣ **On AlmaLinux - Pull & Deploy**
```bash
# SSH into ids.alfacom.it
ssh root@ids.alfacom.it

# Pull the latest changes
cd /opt/ids
./update_from_git.sh

# Restart the ML backend to apply the fix
sudo systemctl restart ids-ml-backend

# Check that the service is active
systemctl status ids-ml-backend

# Check that the API responds
curl http://localhost:8000/health
```

#### 4️⃣ **Test IP Blocking**
```bash
# From the web dashboard: https://ids.alfacom.it/routers
# 1. Check the configured routers
# 2. Click "Test Connessione" on router 185.203.24.2
# 3. It should show ✅ "Connessione OK"

# From the detections dashboard:
# 1. Select a detection with score >= 80
# 2. Click "Blocca IP"
# 3. Verify the block on the router
```

---

## 🔧 TROUBLESHOOTING

### Connection Still Failing?

#### A. Check the WWW Service on the Router

**The REST API uses the `www` service (port 80) or `www-ssl` (port 443)**:

```bash
# On the MikroTik router (via Winbox/SSH)
/ip service print

# Check that www is enabled:
# 0  www      80   *   ← REST API HTTP
# 1  www-ssl  443  *   ← REST API HTTPS
```

**Fix on MikroTik**:
```bash
# Enable the www service for the REST API
/ip service enable www
/ip service set www port=80 address=0.0.0.0/0

# Or with SSL (port 443)
/ip service enable www-ssl
/ip service set www-ssl port=443
```

**NOTE**: `api` (port 8728) is the **Binary API**, NOT REST!

#### B. Check the AlmaLinux Firewall
```bash
# On AlmaLinux - allow outbound traffic to the router on the REST API port (80)
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" destination address="185.203.24.2" port protocol="tcp" port="80" accept'
sudo firewall-cmd --reload
```

#### C. Raw Connection Test
```bash
# Test the TCP connection on port 80
telnet 185.203.24.2 80

# Test the REST API with curl
curl -v http://185.203.24.2:80/rest/system/identity \
  -u admin:password \
  --max-time 5

# Expected output:
# {"name":"AlfaBit"}
```

**If it times out**: the `www` service is not enabled on the router.

#### D. Wrong Credentials?
```sql
-- Check the credentials in the database
psql $DATABASE_URL -c "SELECT name, ip_address, username FROM routers WHERE ip_address = '185.203.24.2';"

-- If the password is wrong, update it:
-- UPDATE routers SET password = 'nuova_password' WHERE ip_address = '185.203.24.2';
```

---

## ✅ FINAL CHECKS

After deployment, verify that:

1. **The ML backend is active**:
```bash
systemctl status ids-ml-backend  # must be "active (running)"
```

2. **The API responds**:
```bash
curl http://localhost:8000/health
# {"status":"healthy","database":"connected",...}
```

3. **Auto-blocking works**:
```bash
# Check the auto-blocking log
journalctl -u ids-auto-block.timer -n 50
```

4. **IPs are blocked on the router**:
- Dashboard: https://ids.alfacom.it/detections
- Filter: "Bloccati"
- Check that the green "Bloccato" badge is visible

---

## 📊 CORRECT CONFIGURATION

| Parameter | Value (RouterOS >= 7.1) | Notes |
|-----------|--------------------------|-------|
| **api_port** | **80** (HTTP) or **443** (HTTPS) | ✅ REST API |
| **Router service** | `www` (HTTP) or `www-ssl` (HTTPS) | Enable on MikroTik |
| **Endpoint** | `/rest/system/identity` | Connection test |
| **Endpoint** | `/rest/ip/firewall/address-list` | Block management |
| **Auth** | Basic (username:password, base64) | Authorization header |
| **Verify SSL** | False | Self-signed certs OK |

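As an illustration of the `/rest/ip/firewall/address-list` endpoint listed above, a hedged sketch of adding an IP to the `ddos_blocked` list (the blocked address and credentials are placeholders, and the exact request the IDS issues may differ):

```python
# Sketch: add an IP to the 'ddos_blocked' address list via the REST API.
# Host, credentials and the blocked address are placeholders.
import httpx

base_url = "http://185.203.24.2:80/rest"

with httpx.Client(auth=("admin", "password"), timeout=5.0) as client:
    resp = client.put(
        f"{base_url}/ip/firewall/address-list",
        json={"list": "ddos_blocked", "address": "203.0.113.10", "comment": "IDS auto-block"},
    )
    resp.raise_for_status()
    print(resp.json())  # the created address-list entry, including its ".id"
```
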
---

## 🎯 SUMMARY

### ❌ WRONG (Binary API - times out)
```bash
# Port 8728 speaks the BINARY protocol, not HTTP REST
curl http://185.203.24.2:8728/rest/...
# Timeout: incompatible protocol
```

### ✅ CORRECT (REST API - works)
```bash
# Port 80 speaks standard HTTP REST
curl http://185.203.24.2:80/rest/system/identity \
  -u admin:password

# Output: {"name":"AlfaBit"}
```

**Database configured**:
```sql
-- The Alfabit router is configured with port 80
SELECT name, ip_address, api_port FROM routers;
-- Alfabit | 185.203.24.2 | 80
```

---

## 📝 CHANGELOG

**25 November 2024**:
1. ✅ Identified the problem: port 8728 = Binary API (not HTTP)
2. ✅ Verified that RouterOS 7.14.2 supports the REST API
3. ✅ Configured the router with port 80 (REST API HTTP)
4. ✅ Manual curl test: `{"name":"AlfaBit"}` ✅
5. ✅ Router inserted into the database with port 80

**Required test**: `python3 test_mikrotik_connection.py`

**Version**: IDS 2.0.0 (Hybrid Detector)
**RouterOS**: 7.14.2 (stable)
**API type**: REST (HTTP, port 80)

@@ -1,43 +0,0 @@
📦 Aggiornamento dipendenze Python...
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: fastapi==0.104.1 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 1)) (0.104.1)
Requirement already satisfied: uvicorn==0.24.0 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 2)) (0.24.0)
Requirement already satisfied: pandas==2.1.3 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 3)) (2.1.3)
Requirement already satisfied: numpy==1.26.2 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 4)) (1.26.2)
Requirement already satisfied: scikit-learn==1.3.2 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 5)) (1.3.2)
Requirement already satisfied: psycopg2-binary==2.9.9 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 6)) (2.9.9)
Requirement already satisfied: python-dotenv==1.0.0 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 7)) (1.0.0)
Requirement already satisfied: pydantic==2.5.0 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 8)) (2.5.0)
Requirement already satisfied: httpx==0.25.1 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 9)) (0.25.1)
Collecting Cython==3.0.5
  Downloading Cython-3.0.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.6/3.6 MB 8.9 MB/s eta 0:00:00
Collecting xgboost==2.0.3
  Using cached xgboost-2.0.3-py3-none-manylinux2014_x86_64.whl (297.1 MB)
Collecting joblib==1.3.2
  Using cached joblib-1.3.2-py3-none-any.whl (302 kB)
Collecting eif==2.0.2
  Using cached eif-2.0.2.tar.gz (1.6 MB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [6 lines of output]
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "/tmp/pip-install-843eies2/eif_72b54a0861444b02867269ed1670c0ce/setup.py", line 4, in <module>
          from Cython.Distutils import build_ext
      ModuleNotFoundError: No module named 'Cython'
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

@@ -1,60 +0,0 @@
./deployment/install_ml_deps.sh
╔═══════════════════════════════════════════════╗
║     INSTALLAZIONE DIPENDENZE ML HYBRID        ║
╚═══════════════════════════════════════════════╝

Directory corrente: /opt/ids/python_ml

Attivazione virtual environment...
Python in uso: /opt/ids/python_ml/venv/bin/python

📦 Step 1/3: Installazione build dependencies (Cython + numpy)...
Collecting Cython==3.0.5
  Downloading Cython-3.0.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.2 kB)
  Downloading Cython-3.0.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.6/3.6 MB 59.8 MB/s 0:00:00
Installing collected packages: Cython
Successfully installed Cython-3.0.5
✅ Cython installato con successo

📦 Step 2/3: Verifica numpy disponibile...
✅ numpy 1.26.2 già installato

📦 Step 3/3: Installazione dipendenze ML (xgboost, joblib, eif)...
Collecting xgboost==2.0.3
  Downloading xgboost-2.0.3-py3-none-manylinux2014_x86_64.whl.metadata (2.0 kB)
Requirement already satisfied: joblib==1.3.2 in ./venv/lib64/python3.11/site-packages (1.3.2)
Collecting eif==2.0.2
  Downloading eif-2.0.2.tar.gz (1.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6/1.6 MB 6.7 MB/s 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error

  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [20 lines of output]
      Traceback (most recent call last):
        File "/opt/ids/python_ml/venv/lib64/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 389, in <module>
          main()
        File "/opt/ids/python_ml/venv/lib64/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 373, in main
          json_out["return_val"] = hook(**hook_input["kwargs"])
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/opt/ids/python_ml/venv/lib64/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 143, in get_requires_for_build_wheel
          return hook(config_settings)
                 ^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-9buits4u/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 331, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=[])
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-9buits4u/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 301, in _get_build_requires
          self.run_setup()
        File "/tmp/pip-build-env-9buits4u/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 512, in run_setup
          super().run_setup(setup_script=setup_script)
        File "/tmp/pip-build-env-9buits4u/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 317, in run_setup
          exec(code, locals())
        File "<string>", line 3, in <module>
      ModuleNotFoundError: No module named 'numpy'
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed to build 'eif' when getting requirements to build wheel

@@ -1,40 +0,0 @@
./deployment/install_ml_deps.sh
╔═══════════════════════════════════════════════╗
║     INSTALLAZIONE DIPENDENZE ML HYBRID        ║
╚═══════════════════════════════════════════════╝

📍 Directory corrente: /opt/ids/python_ml

📦 Step 1/2: Installazione Cython (richiesto per compilare eif)...
Collecting Cython==3.0.5
  Downloading Cython-3.0.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.6 MB)
     |████████████████████████████████| 3.6 MB 6.2 MB/s
Installing collected packages: Cython
Successfully installed Cython-3.0.5
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
✅ Cython installato con successo

📦 Step 2/2: Installazione dipendenze ML (xgboost, joblib, eif)...
Collecting xgboost==2.0.3
  Downloading xgboost-2.0.3-py3-none-manylinux2014_x86_64.whl (297.1 MB)
     |████████████████████████████████| 297.1 MB 13 kB/s
Collecting joblib==1.3.2
  Downloading joblib-1.3.2-py3-none-any.whl (302 kB)
     |████████████████████████████████| 302 kB 41.7 MB/s
Collecting eif==2.0.2
  Downloading eif-2.0.2.tar.gz (1.6 MB)
     |████████████████████████████████| 1.6 MB 59.4 MB/s
  Preparing metadata (setup.py) ... error
  ERROR: Command errored out with exit status 1:
   command: /usr/bin/python3 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-xpd6jc3z/eif_1c539132fe1d4772ada0979407304392/setup.py'"'"'; __file__='"'"'/tmp/pip-install-xpd6jc3z/eif_1c539132fe1d4772ada0979407304392/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-lg0m0ish
       cwd: /tmp/pip-install-xpd6jc3z/eif_1c539132fe1d4772ada0979407304392/
  Complete output (5 lines):
      Traceback (most recent call last):
        File "<string>", line 1, in <module>
        File "/tmp/pip-install-xpd6jc3z/eif_1c539132fe1d4772ada0979407304392/setup.py", line 3, in <module>
          import numpy
      ModuleNotFoundError: No module named 'numpy'
      ----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/83/b2/d87d869deeb192ab599c899b91a9ad1d3775d04f5b7adcaf7ff6daa54c24/eif-2.0.2.tar.gz#sha256=86e2c98caf530ae73d8bc7153c1bf6b9684c905c9dfc7bdab280846ada1e45ab (from https://pypi.org/simple/eif/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement eif==2.0.2 (from versions: 1.0.0, 1.0.1, 1.0.2, 2.0.2)
ERROR: No matching distribution found for eif==2.0.2

@@ -1,54 +0,0 @@
./deployment/train_hybrid_production.sh
=======================================================================
  TRAINING HYBRID ML DETECTOR - DATI REALI
=======================================================================

📂 Caricamento credenziali database da .env...
✅ Credenziali caricate:
   Host: localhost
   Port: 5432
   Database: ids_database
   User: ids_user
   Password: ****** (nascosta)

🎯 Parametri training:
   Periodo: ultimi 7 giorni
   Max records: 1000000

🐍 Python: /opt/ids/python_ml/venv/bin/python

📊 Verifica dati disponibili nel database...
      primo_log      |     ultimo_log      | periodo_totale | totale_records
---------------------+---------------------+----------------+----------------
 2025-11-22 10:03:21 | 2025-11-24 17:58:17 | 2 giorni       |    234,316,667
(1 row)

🚀 Avvio training...

=======================================================================
[WARNING] Extended Isolation Forest not available, using standard IF

======================================================================
IDS HYBRID ML TRAINING - UNSUPERVISED MODE
======================================================================
[TRAIN] Loading last 7 days of real traffic from database...

❌ Error: column "dest_ip" does not exist
LINE 5:     dest_ip,
            ^

Traceback (most recent call last):
  File "/opt/ids/python_ml/train_hybrid.py", line 365, in main
    train_unsupervised(args)
  File "/opt/ids/python_ml/train_hybrid.py", line 91, in train_unsupervised
    logs_df = train_on_real_traffic(db_config, days=args.days)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ids/python_ml/train_hybrid.py", line 50, in train_on_real_traffic
    cursor.execute(query, (days,))
  File "/opt/ids/python_ml/venv/lib64/python3.11/site-packages/psycopg2/extras.py", line 236, in execute
    return super().execute(query, vars)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
psycopg2.errors.UndefinedColumn: column "dest_ip" does not exist
LINE 5:     dest_ip,
            ^

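The training query in `train_hybrid.py` selects a `dest_ip` column that this schema does not have. A quick way to see what the column is actually called is to inspect `information_schema`; a minimal sketch (the table name `traffic_logs` is a placeholder, not taken from the repository - use whichever table the training query reads from):

```python
# Sketch: list the columns of the table used for training, to find the real
# name of the destination-IP column. 'traffic_logs' is a placeholder name.
import os
import psycopg2

conn = psycopg2.connect(os.environ["DATABASE_URL"])
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT column_name, data_type
        FROM information_schema.columns
        WHERE table_name = %s
        ORDER BY ordinal_position
        """,
        ("traffic_logs",),
    )
    for name, dtype in cur.fetchall():
        print(name, dtype)
conn.close()
```
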
@@ -1,101 +0,0 @@
./deployment/update_from_git.sh --db

╔═══════════════════════════════════════════════╗
║       AGGIORNAMENTO SISTEMA IDS DA GIT        ║
╚═══════════════════════════════════════════════╝

Verifica configurazione git...

Backup configurazione locale...
✅ .env salvato in .env.backup

Verifica modifiche locali...
⚠ Ci sono modifiche locali non committate
Esegui 'git status' per vedere i dettagli
Vuoi procedere comunque? (y/n) y
Salvo modifiche locali temporaneamente...
No local changes to save

Download aggiornamenti da git.alfacom.it...
remote: Enumerating objects: 21, done.
remote: Counting objects: 100% (21/21), done.
remote: Compressing objects: 100% (13/13), done.
remote: Total 13 (delta 9), reused 0 (delta 0), pack-reused 0 (from 0)
Unpacking objects: 100% (13/13), 3.37 KiB | 492.00 KiB/s, done.
From https://git.alfacom.it/marco/ids.alfacom.it
   3a945ec..152e226  main       -> origin/main
 * [new tag]         v1.0.56    -> v1.0.56
From https://git.alfacom.it/marco/ids.alfacom.it
 * branch            main       -> FETCH_HEAD
Updating 3a945ec..152e226
Fast-forward
 attached_assets/Pasted--deployment-update-from-git-sh-db-AGGIOR-1764001889941_1764001889941.txt | 90 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 database-schema/schema.sql | 4 ++--
 python_ml/requirements.txt | 2 +-
 replit.md | 5 +++--
 version.json | 16 ++++++++--------
 5 files changed, 104 insertions(+), 13 deletions(-)
 create mode 100644 attached_assets/Pasted--deployment-update-from-git-sh-db-AGGIOR-1764001889941_1764001889941.txt
✅ Aggiornamenti scaricati con successo

Ripristino configurazione locale...
✅ .env ripristinato

Aggiornamento dipendenze Node.js...

up to date, audited 492 packages in 2s

65 packages are looking for funding
  run `npm fund` for details

9 vulnerabilities (3 low, 5 moderate, 1 high)

To address issues that do not require attention, run:
  npm audit fix

To address all issues (including breaking changes), run:
  npm audit fix --force

Run `npm audit` for details.
✅ Dipendenze Node.js aggiornate

📦 Aggiornamento dipendenze Python...
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: fastapi==0.104.1 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 1)) (0.104.1)
Requirement already satisfied: uvicorn==0.24.0 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 2)) (0.24.0)
Requirement already satisfied: pandas==2.1.3 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 3)) (2.1.3)
Requirement already satisfied: numpy==1.26.2 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 4)) (1.26.2)
Requirement already satisfied: scikit-learn==1.3.2 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 5)) (1.3.2)
Requirement already satisfied: psycopg2-binary==2.9.9 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 6)) (2.9.9)
Requirement already satisfied: python-dotenv==1.0.0 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 7)) (1.0.0)
Requirement already satisfied: pydantic==2.5.0 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 8)) (2.5.0)
Requirement already satisfied: httpx==0.25.1 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 9)) (0.25.1)
Collecting xgboost==2.0.3
  Using cached xgboost-2.0.3-py3-none-manylinux2014_x86_64.whl (297.1 MB)
Collecting joblib==1.3.2
  Using cached joblib-1.3.2-py3-none-any.whl (302 kB)
Collecting eif==2.0.2
  Downloading eif-2.0.2.tar.gz (1.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6/1.6 MB 2.8 MB/s eta 0:00:00
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [6 lines of output]
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "/tmp/pip-install-7w_zhzdf/eif_d01f9f1e418b4512a5d7b4cf0e1128e2/setup.py", line 4, in <module>
          from Cython.Distutils import build_ext
      ModuleNotFoundError: No module named 'Cython'
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

@@ -1,90 +0,0 @@
./deployment/update_from_git.sh --db

╔═══════════════════════════════════════════════╗
║       AGGIORNAMENTO SISTEMA IDS DA GIT        ║
╚═══════════════════════════════════════════════╝

Verifica configurazione git...

Backup configurazione locale...
✅ .env salvato in .env.backup

Verifica modifiche locali...
⚠ Ci sono modifiche locali non committate
Esegui 'git status' per vedere i dettagli
Vuoi procedere comunque? (y/n) y
Salvo modifiche locali temporaneamente...
No local changes to save

Download aggiornamenti da git.alfacom.it...
remote: Enumerating objects: 51, done.
remote: Counting objects: 100% (51/51), done.
remote: Compressing objects: 100% (41/41), done.
remote: Total 41 (delta 32), reused 0 (delta 0), pack-reused 0 (from 0)
Unpacking objects: 100% (41/41), 31.17 KiB | 1.35 MiB/s, done.
From https://git.alfacom.it/marco/ids.alfacom.it
   0fa2f11..3a945ec  main       -> origin/main
 * [new tag]         v1.0.55    -> v1.0.55
From https://git.alfacom.it/marco/ids.alfacom.it
 * branch            main       -> FETCH_HEAD
Updating 0fa2f11..3a945ec
Fast-forward
 database-schema/schema.sql | 4 +-
 deployment/CHECKLIST_ML_HYBRID.md | 536 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 python_ml/dataset_loader.py | 384 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 python_ml/main.py | 120 ++++++++++++++++++++++++++++------
 python_ml/ml_hybrid_detector.py | 705 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 python_ml/requirements.txt | 3 +
 python_ml/train_hybrid.py | 378 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 python_ml/validation_metrics.py | 324 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 replit.md | 19 +++++-
 version.json | 16 ++---
 10 files changed, 2459 insertions(+), 30 deletions(-)
 create mode 100644 deployment/CHECKLIST_ML_HYBRID.md
 create mode 100644 python_ml/dataset_loader.py
 create mode 100644 python_ml/ml_hybrid_detector.py
 create mode 100644 python_ml/train_hybrid.py
 create mode 100644 python_ml/validation_metrics.py
✅ Aggiornamenti scaricati con successo

🔄 Ripristino configurazione locale...
✅ .env ripristinato

📦 Aggiornamento dipendenze Node.js...

up to date, audited 492 packages in 3s

65 packages are looking for funding
  run `npm fund` for details

9 vulnerabilities (3 low, 5 moderate, 1 high)

To address issues that do not require attention, run:
  npm audit fix

To address all issues (including breaking changes), run:
  npm audit fix --force

Run `npm audit` for details.
✅ Dipendenze Node.js aggiornate

📦 Aggiornamento dipendenze Python...
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: fastapi==0.104.1 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 1)) (0.104.1)
Requirement already satisfied: uvicorn==0.24.0 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 2)) (0.24.0)
Requirement already satisfied: pandas==2.1.3 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 3)) (2.1.3)
Requirement already satisfied: numpy==1.26.2 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 4)) (1.26.2)
Requirement already satisfied: scikit-learn==1.3.2 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 5)) (1.3.2)
Requirement already satisfied: psycopg2-binary==2.9.9 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 6)) (2.9.9)
Requirement already satisfied: python-dotenv==1.0.0 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 7)) (1.0.0)
Requirement already satisfied: pydantic==2.5.0 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 8)) (2.5.0)
Requirement already satisfied: httpx==0.25.1 in /home/ids/.local/lib/python3.11/site-packages (from -r requirements.txt (line 9)) (0.25.1)
Collecting xgboost==2.0.3
  Downloading xgboost-2.0.3-py3-none-manylinux2014_x86_64.whl (297.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 297.1/297.1 MB 8.4 MB/s eta 0:00:00
Collecting joblib==1.3.2
  Downloading joblib-1.3.2-py3-none-any.whl (302 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 302.2/302.2 kB 62.7 MB/s eta 0:00:00
ERROR: Ignored the following versions that require a different python version: 1.21.2 Requires-Python >=3.7,<3.11; 1.21.3 Requires-Python >=3.7,<3.11; 1.21.4 Requires-Python >=3.7,<3.11; 1.21.5 Requires-Python >=3.7,<3.11; 1.21.6 Requires-Python >=3.7,<3.11
ERROR: Could not find a version that satisfies the requirement eif==2.0.0 (from versions: 1.0.0, 1.0.1, 1.0.2, 2.0.2)
ERROR: No matching distribution found for eif==2.0.0

@@ -1,51 +0,0 @@
journalctl -u ids-list-fetcher -n 50 --no-pager
Jan 02 15:30:01 ids.alfacom.it ids-list-fetcher[9296]: Skipped (whitelisted): 0
Jan 02 15:30:01 ids.alfacom.it ids-list-fetcher[9296]: ============================================================
Jan 02 15:30:01 ids.alfacom.it systemd[1]: ids-list-fetcher.service: Deactivated successfully.
Jan 02 15:30:01 ids.alfacom.it systemd[1]: Finished IDS Public Lists Fetcher Service.
Jan 02 15:40:00 ids.alfacom.it systemd[1]: Starting IDS Public Lists Fetcher Service...
Jan 02 15:40:00 ids.alfacom.it ids-list-fetcher[9493]: ============================================================
Jan 02 15:40:00 ids.alfacom.it ids-list-fetcher[9493]: [2026-01-02 15:40:00] PUBLIC LISTS SYNC
Jan 02 15:40:00 ids.alfacom.it ids-list-fetcher[9493]: ============================================================
Jan 02 15:40:00 ids.alfacom.it ids-list-fetcher[9493]: Found 2 enabled lists
Jan 02 15:40:00 ids.alfacom.it ids-list-fetcher[9493]: [15:40:00] Downloading Spamhaus from https://www.spamhaus.org/drop/drop_v4.json...
Jan 02 15:40:00 ids.alfacom.it ids-list-fetcher[9493]: [15:40:00] Downloading AWS from https://ip-ranges.amazonaws.com/ip-ranges.json...
Jan 02 15:40:00 ids.alfacom.it ids-list-fetcher[9493]: [15:40:00] Parsing AWS...
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: [15:40:01] Found 9548 IPs, syncing to database...
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: [15:40:01] ✓ AWS: +0 -0 ~9511
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: [15:40:01] Parsing Spamhaus...
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: [15:40:01] Found 1468 IPs, syncing to database...
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: [15:40:01] ✓ Spamhaus: +0 -0 ~1464
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: ============================================================
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: SYNC SUMMARY
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: ============================================================
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: Success: 2/2
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: Errors: 0/2
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: Total IPs Added: 0
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: Total IPs Removed: 0
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: ============================================================
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: ============================================================
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: RUNNING MERGE LOGIC
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: ============================================================
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: ERROR:merge_logic:Failed to cleanup detections: operator does not exist: inet = text
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: LINE 9: d.source_ip::inet = wl.ip_inet
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: ^
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: ERROR:merge_logic:Failed to sync detections: operator does not exist: inet = text
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: LINE 29: bl.ip_inet = wl.ip_inet
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: ^
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: Traceback (most recent call last):
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: File "/opt/ids/python_ml/merge_logic.py", line 264, in sync_public_blacklist_detections
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: cur.execute("""
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: psycopg2.errors.UndefinedFunction: operator does not exist: inet = text
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: LINE 29: bl.ip_inet = wl.ip_inet
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: ^
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: Merge Logic Stats:
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: Created detections: 0
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: Cleaned invalid detections: 0
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: Skipped (whitelisted): 0
Jan 02 15:40:01 ids.alfacom.it ids-list-fetcher[9493]: ============================================================
Jan 02 15:40:01 ids.alfacom.it systemd[1]: ids-list-fetcher.service: Deactivated successfully.
Jan 02 15:40:01 ids.alfacom.it systemd[1]: Finished IDS Public Lists Fetcher Service.

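Both merge_logic failures above come from comparing an `inet` value with a `text` value; per the HINT, an explicit cast on the text side makes the comparison valid. A minimal sketch of the cast (the column and alias names come from the log fragments; that `wl.ip_inet` is the `text` side is an inference from "operator does not exist: inet = text"):

```python
# Sketch: reproduce the reported error shape and the explicit cast the HINT asks for.
# In merge_logic.py the text operand is a column (wl.ip_inet); literals stand in for it here.
import os
import psycopg2

conn = psycopg2.connect(os.environ["DATABASE_URL"])
with conn, conn.cursor() as cur:
    # Failing shape:  SELECT '10.0.0.1'::inet = '10.0.0.1'::text
    #                 ERROR: operator does not exist: inet = text
    # Casting the text side to inet (e.g. "... = wl.ip_inet::inet" in both queries) resolves it:
    cur.execute("SELECT '10.0.0.1'::inet = '10.0.0.1'::text::inet")
    print(cur.fetchone()[0])  # True
conn.close()
```
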
@@ -1,51 +0,0 @@
journalctl -u ids-list-fetcher -n 50 --no-pager
Jan 02 17:10:02 ids.alfacom.it ids-list-fetcher[2139]: ============================================================
Jan 02 17:10:02 ids.alfacom.it ids-list-fetcher[2139]: ============================================================
Jan 02 17:10:02 ids.alfacom.it ids-list-fetcher[2139]: RUNNING MERGE LOGIC
Jan 02 17:10:02 ids.alfacom.it ids-list-fetcher[2139]: ============================================================
Jan 02 17:10:12 ids.alfacom.it ids-list-fetcher[2139]: INFO:merge_logic:Bulk sync complete: {'created': 0, 'cleaned': 0, 'skipped_whitelisted': 0}
Jan 02 17:10:12 ids.alfacom.it ids-list-fetcher[2139]: Merge Logic Stats:
Jan 02 17:10:12 ids.alfacom.it ids-list-fetcher[2139]: Created detections: 0
Jan 02 17:10:12 ids.alfacom.it ids-list-fetcher[2139]: Cleaned invalid detections: 0
Jan 02 17:10:12 ids.alfacom.it ids-list-fetcher[2139]: Skipped (whitelisted): 0
Jan 02 17:10:12 ids.alfacom.it ids-list-fetcher[2139]: ============================================================
Jan 02 17:10:12 ids.alfacom.it systemd[1]: ids-list-fetcher.service: Deactivated successfully.
Jan 02 17:10:12 ids.alfacom.it systemd[1]: Finished IDS Public Lists Fetcher Service.
Jan 02 17:12:35 ids.alfacom.it systemd[1]: Starting IDS Public Lists Fetcher Service...
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: ============================================================
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: [2026-01-02 17:12:35] PUBLIC LISTS SYNC
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: ============================================================
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: Found 4 enabled lists
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: [17:12:35] Downloading Spamhaus from https://www.spamhaus.org/drop/drop_v4.json...
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: [17:12:35] Downloading AWS from https://ip-ranges.amazonaws.com/ip-ranges.json...
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: [17:12:35] Downloading Google Cloud from https://www.gstatic.com/ipranges/cloud.json...
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: [17:12:35] Downloading Google globali from https://www.gstatic.com/ipranges/goog.json...
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: [17:12:35] Parsing AWS...
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: [17:12:35] Found 9548 IPs, syncing to database...
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: [17:12:35] ✓ AWS: +0 -0 ~9548
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: [17:12:35] Parsing Google globali...
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: [17:12:35] ✗ Google globali: No valid IPs found in list
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: [17:12:35] Parsing Google Cloud...
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: [17:12:35] ✗ Google Cloud: No valid IPs found in list
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: [17:12:35] Parsing Spamhaus...
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: [17:12:35] Found 1468 IPs, syncing to database...
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: [17:12:35] ✓ Spamhaus: +0 -0 ~1468
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: ============================================================
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: SYNC SUMMARY
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: ============================================================
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: Success: 2/4
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: Errors: 2/4
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: Total IPs Added: 0
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: Total IPs Removed: 0
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: ============================================================
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: ============================================================
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: RUNNING MERGE LOGIC
Jan 02 17:12:35 ids.alfacom.it ids-list-fetcher[2279]: ============================================================
Jan 02 17:12:45 ids.alfacom.it ids-list-fetcher[2279]: INFO:merge_logic:Bulk sync complete: {'created': 0, 'cleaned': 0, 'skipped_whitelisted': 0}
Jan 02 17:12:45 ids.alfacom.it ids-list-fetcher[2279]: Merge Logic Stats:
Jan 02 17:12:45 ids.alfacom.it ids-list-fetcher[2279]: Created detections: 0
Jan 02 17:12:45 ids.alfacom.it ids-list-fetcher[2279]: Cleaned invalid detections: 0
Jan 02 17:12:45 ids.alfacom.it ids-list-fetcher[2279]: Skipped (whitelisted): 0
Jan 02 17:12:45 ids.alfacom.it ids-list-fetcher[2279]: ============================================================
Jan 02 17:12:45 ids.alfacom.it systemd[1]: ids-list-fetcher.service: Deactivated successfully.
Jan 02 17:12:45 ids.alfacom.it systemd[1]: Finished IDS Public Lists Fetcher Service.

@@ -1,55 +0,0 @@
python compare_models.py
[WARNING] Extended Isolation Forest not available, using standard IF

================================================================================
IDS MODEL COMPARISON - DB Current vs Hybrid Detector v2.0.0
================================================================================

[1] Caricamento detection esistenti dal database...
    Trovate 50 detection nel database

[2] Caricamento nuovo Hybrid Detector (v2.0.0)...
[HYBRID] Ensemble classifier loaded
[HYBRID] Models loaded (version: latest)
[HYBRID] Selected features: 18/25
[HYBRID] Mode: Hybrid (IF + Ensemble)
    ✅ Hybrid Detector caricato (18 feature selezionate)

[3] Rianalisi di 50 IP con nuovo modello Hybrid...
    (Questo può richiedere alcuni minuti...)

[1/50] Analisi IP: 185.203.25.138
    Current: score=100.0, type=ddos, blocked=False
Traceback (most recent call last):
  File "/opt/ids/python_ml/venv/lib64/python3.11/site-packages/pandas/core/indexes/base.py", line 3790, in get_loc
    return self._engine.get_loc(casted_key)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "index.pyx", line 152, in pandas._libs.index.IndexEngine.get_loc
  File "index.pyx", line 181, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 7080, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 7088, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'timestamp'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/ids/python_ml/compare_models.py", line 265, in <module>
    main()
  File "/opt/ids/python_ml/compare_models.py", line 184, in main
    comparison = reanalyze_with_hybrid(detector, ip, old_det)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ids/python_ml/compare_models.py", line 118, in reanalyze_with_hybrid
    result = detector.detect(ip_features)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ids/python_ml/ml_hybrid_detector.py", line 507, in detect
    features_df = self.extract_features(logs_df)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ids/python_ml/ml_hybrid_detector.py", line 98, in extract_features
    logs_df['timestamp'] = pd.to_datetime(logs_df['timestamp'])
                                          ~~~~~~~^^^^^^^^^^^^^
  File "/opt/ids/python_ml/venv/lib64/python3.11/site-packages/pandas/core/frame.py", line 3893, in __getitem__
    indexer = self.columns.get_loc(key)
              ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ids/python_ml/venv/lib64/python3.11/site-packages/pandas/core/indexes/base.py", line 3797, in get_loc
    raise KeyError(key) from err
KeyError: 'timestamp'

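The KeyError shows that the DataFrame handed to `detector.detect()` lacks the `timestamp` column that `extract_features()` expects. A hedged sketch of the kind of guard `compare_models.py` could apply before calling `detect()` (the source column name `created_at` is an assumption, not taken from the repository - use whatever column the query actually returns):

```python
# Sketch: make sure the logs DataFrame has the 'timestamp' column extract_features() needs.
# 'created_at' is an assumed source column name, not taken from the repository.
import pandas as pd

def ensure_timestamp(logs_df: pd.DataFrame) -> pd.DataFrame:
    if "timestamp" not in logs_df.columns:
        if "created_at" in logs_df.columns:
            logs_df = logs_df.rename(columns={"created_at": "timestamp"})
        else:
            raise KeyError("logs_df has no 'timestamp' (or recognised equivalent) column")
    logs_df["timestamp"] = pd.to_datetime(logs_df["timestamp"])
    return logs_df

# Usage in reanalyze_with_hybrid(): ip_features = ensure_timestamp(ip_features)
# before calling detector.detect(ip_features).
```
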
@@ -1,75 +0,0 @@
python train_hybrid.py --test
[WARNING] Extended Isolation Forest not available, using standard IF

======================================================================
IDS HYBRID ML TEST - SYNTHETIC DATA
======================================================================
INFO:dataset_loader:Creating sample dataset (10000 samples)...
INFO:dataset_loader:Sample dataset created: 10000 rows
INFO:dataset_loader:Attack distribution:
attack_type
normal         8981
brute_force     273
suspicious      258
ddos            257
port_scan       231
Name: count, dtype: int64

[TEST] Created synthetic dataset: 10000 samples
       Normal: 8,981 (89.8%)
       Attacks: 1,019 (10.2%)

[TEST] Training on 6,281 normal samples...
[HYBRID] Training hybrid model on 6281 logs...
[HYBRID] Extracted features for 100 unique IPs
[HYBRID] Pre-training Isolation Forest for feature selection...
[HYBRID] Generated 3 pseudo-anomalies from pre-training IF
[HYBRID] Feature selection: 25 → 18 features
[HYBRID] Selected features: total_packets, conn_count, time_span_seconds, conn_per_second, hour_of_day... (+13 more)
[HYBRID] Normalizing features...
[HYBRID] Training Extended Isolation Forest (contamination=0.03)...
/opt/ids/python_ml/venv/lib64/python3.11/site-packages/sklearn/ensemble/_iforest.py:307: UserWarning: max_samples (256) is greater than the total number of samples (100). max_samples will be set to n_samples for estimation.
  warn(
[HYBRID] Generating pseudo-labels from Isolation Forest...
[HYBRID] ⚠ IF found only 3 anomalies (need 10)
[HYBRID] Applying ADAPTIVE percentile fallback...
[HYBRID] Trying 5% percentile → 5 anomalies
[HYBRID] Trying 10% percentile → 10 anomalies
[HYBRID] ✅ Success with 10% percentile
[HYBRID] Pseudo-labels: 10 anomalies, 90 normal
[HYBRID] Training ensemble classifier (DT + RF + XGBoost)...
[HYBRID] Class distribution OK: [0 1] (counts: [90 10])
[HYBRID] Ensemble .fit() completed successfully
[HYBRID] ✅ Ensemble verified: produces 2 class probabilities
[HYBRID] Ensemble training completed and verified!
[HYBRID] Models saved to models
[HYBRID] Ensemble classifier included
[HYBRID] ✅ Training completed successfully! 10/100 IPs flagged as anomalies
[HYBRID] ✅ Ensemble classifier verified and ready for production
[DETECT] Ensemble classifier available - computing hybrid score...
[DETECT] IF scores: min=0.0, max=100.0, mean=57.6
[DETECT] Ensemble scores: min=86.9, max=97.2, mean=92.1
[DETECT] Combined scores: min=54.3, max=93.1, mean=78.3
[DETECT] ✅ Hybrid scoring active: 40% IF + 60% Ensemble

[TEST] Detection results:
       Total detections: 100
       High confidence: 0
       Medium confidence: 85
       Low confidence: 15

[TEST] Top 5 detections:
       1. 192.168.0.24: risk=93.1, type=suspicious, confidence=medium
       2. 192.168.0.27: risk=92.7, type=suspicious, confidence=medium
       3. 192.168.0.88: risk=92.5, type=suspicious, confidence=medium
       4. 192.168.0.70: risk=92.3, type=suspicious, confidence=medium
       5. 192.168.0.4: risk=91.4, type=suspicious, confidence=medium

❌ Error: index 7000 is out of bounds for axis 0 with size 3000
Traceback (most recent call last):
  File "/opt/ids/python_ml/train_hybrid.py", line 361, in main
    test_on_synthetic(args)
  File "/opt/ids/python_ml/train_hybrid.py", line 283, in test_on_synthetic
    y_pred[i] = 1
    ~~~~~~^^^
IndexError: index 7000 is out of bounds for axis 0 with size 3000

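The IndexError arises because `i` is a row label carried over from the full 10,000-row synthetic DataFrame, while `y_pred` only has one slot per row of the roughly 3,000-row evaluation subset. A hedged sketch of the usual fix, resetting the subset's index so labels and array positions line up (the tiny frames below are stand-ins, not the actual variables in `test_on_synthetic`):

```python
# Sketch: why y_pred[i] = 1 can overflow, and the reset_index fix.
# The small frames here are synthetic stand-ins for the real ones in test_on_synthetic().
import numpy as np
import pandas as pd

full = pd.DataFrame({"source_ip": [f"192.168.0.{n}" for n in range(10)]})
test_df = full.iloc[7:]                    # keeps the original row labels 7, 8, 9
flagged_ips = {"192.168.0.8"}              # IPs the detector marked as anomalous

test_df = test_df.reset_index(drop=True)   # labels become 0, 1, 2 - valid array positions
y_pred = np.zeros(len(test_df), dtype=int)
for i, row in test_df.iterrows():
    if row["source_ip"] in flagged_ips:
        y_pred[i] = 1                      # no IndexError: i is now a position, not the old label

print(y_pred)  # [0 1 0]
```
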
@ -1,66 +0,0 @@
tail -f /var/log/ids/ml_backend.log
[HYBRID] Mode: Hybrid (IF + Ensemble)
[ML] ✓ Hybrid detector models loaded and ready
Starting IDS API on http://0.0.0.0:8000
Docs available at http://0.0.0.0:8000/docs
INFO: 127.0.0.1:45342 - "GET /stats HTTP/1.1" 200 OK
INFO: 127.0.0.1:49754 - "GET /stats HTTP/1.1" 200 OK
INFO: 127.0.0.1:50634 - "GET /stats HTTP/1.1" 200 OK
INFO: 127.0.0.1:39232 - "GET /stats HTTP/1.1" 200 OK
INFO: 127.0.0.1:35736 - "GET /stats HTTP/1.1" 200 OK
INFO: 127.0.0.1:37462 - "GET /stats HTTP/1.1" 200 OK
INFO: 127.0.0.1:59676 - "GET /stats HTTP/1.1" 200 OK
INFO: 127.0.0.1:34256 - "GET /health HTTP/1.1" 200 OK
INFO: 127.0.0.1:34256 - "GET /services/status HTTP/1.1" 200 OK
INFO: 127.0.0.1:34256 - "GET /stats HTTP/1.1" 200 OK
INFO: 127.0.0.1:34264 - "POST /train HTTP/1.1" 200 OK
[TRAIN] Inizio training...
INFO: 127.0.0.1:34264 - "GET /stats HTTP/1.1" 200 OK
[TRAIN] Trovati 100000 log per training
[TRAIN] Addestramento modello...
[TRAIN] Using Hybrid ML Detector
[HYBRID] Training hybrid model on 100000 logs...
INFO: 127.0.0.1:41612 - "GET /stats HTTP/1.1" 200 OK
Traceback (most recent call last):
  File "/opt/ids/python_ml/main.py", line 201, in do_training
    result = ml_detector.train_unsupervised(df)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ids/python_ml/ml_hybrid_detector.py", line 467, in train_unsupervised
    self.save_models()
  File "/opt/ids/python_ml/ml_hybrid_detector.py", line 658, in save_models
    joblib.dump(self.ensemble_classifier, self.model_dir / "ensemble_classifier_latest.pkl")
  File "/opt/ids/python_ml/venv/lib64/python3.11/site-packages/joblib/numpy_pickle.py", line 552, in dump
    with open(filename, 'wb') as f:
         ^^^^^^^^^^^^^^^^^^^^
PermissionError: [Errno 13] Permission denied: 'models/ensemble_classifier_latest.pkl'
[HYBRID] Extracted features for 1430 unique IPs
[HYBRID] Pre-training Isolation Forest for feature selection...
[HYBRID] Generated 43 pseudo-anomalies from pre-training IF
[HYBRID] Feature selection: 25 → 18 features
[HYBRID] Selected features: total_packets, total_bytes, conn_count, avg_packet_size, bytes_per_second... (+13 more)
[HYBRID] Normalizing features...
[HYBRID] Training Extended Isolation Forest (contamination=0.03)...
[HYBRID] Generating pseudo-labels from Isolation Forest...
[HYBRID] Pseudo-labels: 43 anomalies, 1387 normal
[HYBRID] Training ensemble classifier (DT + RF + XGBoost)...
[HYBRID] Class distribution OK: [0 1] (counts: [1387 43])
[HYBRID] Ensemble .fit() completed successfully
[HYBRID] ✅ Ensemble verified: produces 2 class probabilities
[HYBRID] Ensemble training completed and verified!
[TRAIN ERROR] ❌ Errore durante training: [Errno 13] Permission denied: 'models/ensemble_classifier_latest.pkl'
INFO: 127.0.0.1:45694 - "GET /stats HTTP/1.1" 200 OK
^C
(venv) [root@ids python_ml]# ls models/
ensemble_classifier_20251124_185541.pkl feature_names.json feature_selector_latest.pkl isolation_forest_20251125_183830.pkl scaler_20251124_192122.pkl
ensemble_classifier_20251124_185920.pkl feature_selector_20251124_185541.pkl isolation_forest.joblib isolation_forest_latest.pkl scaler_20251125_090356.pkl
ensemble_classifier_20251124_192109.pkl feature_selector_20251124_185920.pkl isolation_forest_20251124_185541.pkl metadata_20251124_185541.json scaler_20251125_092703.pkl
ensemble_classifier_20251124_192122.pkl feature_selector_20251124_192109.pkl isolation_forest_20251124_185920.pkl metadata_20251124_185920.json scaler_20251125_120016.pkl
ensemble_classifier_20251125_090356.pkl feature_selector_20251124_192122.pkl isolation_forest_20251124_192109.pkl metadata_20251124_192109.json scaler_20251125_181945.pkl
ensemble_classifier_20251125_092703.pkl feature_selector_20251125_090356.pkl isolation_forest_20251124_192122.pkl metadata_20251124_192122.json scaler_20251125_182742.pkl
ensemble_classifier_20251125_120016.pkl feature_selector_20251125_092703.pkl isolation_forest_20251125_090356.pkl metadata_20251125_092703.json scaler_20251125_183049.pkl
ensemble_classifier_20251125_181945.pkl feature_selector_20251125_120016.pkl isolation_forest_20251125_092703.pkl metadata_latest.json scaler_20251125_183830.pkl
ensemble_classifier_20251125_182742.pkl feature_selector_20251125_181945.pkl isolation_forest_20251125_120016.pkl scaler.joblib scaler_latest.pkl
ensemble_classifier_20251125_183049.pkl feature_selector_20251125_182742.pkl isolation_forest_20251125_181945.pkl scaler_20251124_185541.pkl
ensemble_classifier_20251125_183830.pkl feature_selector_20251125_183049.pkl isolation_forest_20251125_182742.pkl scaler_20251124_185920.pkl
ensemble_classifier_latest.pkl feature_selector_20251125_183830.pkl isolation_forest_20251125_183049.pkl scaler_20251124_192109.pkl
(venv) [root@ids python_ml]#
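The training pass itself completes (all the [HYBRID] steps succeed); only `save_models()` fails, because `models/ensemble_classifier_latest.pkl` is owned by root while the backend service runs as the `ids` user, as the later `ls -la` confirms. A small pre-flight check before `joblib.dump()` surfaces the problem up front; this is an illustrative sketch, not the actual `save_models()` code:

```python
import os
from pathlib import Path

# Hypothetical pre-flight check; adapt the directory and error handling to the
# real save_models() in ml_hybrid_detector.py.
def ensure_writable(model_dir: str = "models") -> Path:
    path = Path(model_dir)
    path.mkdir(parents=True, exist_ok=True)
    probe = path / ".write_test"
    try:
        probe.touch()          # will raise if the directory/file owner differs
        probe.unlink()
    except PermissionError as exc:
        raise PermissionError(
            f"{path} is not writable by uid {os.getuid()}; "
            f"fix ownership (e.g. chown -R ids:ids {path}) before training"
        ) from exc
    return path
```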
@ -1,104 +0,0 @@
Marco, this output settles the question for good: **EIF cannot be installed on Python 3.11**, no matter which combination of flags you use.

The new error:

```
ModuleNotFoundError: No module named 'numpy'
```

is not really an error on your side: it is a symptom of the new **PEP 518 / build isolation** mechanism, which creates a temporary build environment where EIF looks for NumPy; the package is so old that it handles neither Python ≥ 3.11 nor the modern build process correctly.

And we had already seen this in the earlier logs:

### ❗ EIF uses C++ code that *cannot* be compiled against Python 3.11

---

# 🛑 Final technical conclusion

✔ **EIF works only up to Python 3.10.**
❌ **EIF does NOT work with Python 3.11.**
❌ Not even with:

* `PIP_NO_BUILD_ISOLATION=1`
* an updated `Cython`
* `wheel`, `setuptools`, `build`
* the cache disabled
* NumPy added by hand

💡 Why?
The `eif` source requires:

* distutils (removed in 3.11)
* old, removed Cython APIs
* C++ constructs that are no longer supported
* a build system that does not work with modern Python
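For reference, the `[WARNING] Extended Isolation Forest not available, using standard IF` line that appears throughout the backend logs corresponds to an import guard of this shape; a minimal sketch with assumed names, not the actual code in `ml_hybrid_detector.py`:

```python
# Minimal sketch of the optional-dependency guard behind the warning seen in the logs.
from sklearn.ensemble import IsolationForest

try:
    import eif  # C++ extension; its build fails on Python 3.11
    HAS_EIF = True
except ImportError:
    eif = None
    HAS_EIF = False
    print("[WARNING] Extended Isolation Forest not available, using standard IF")

def build_isolation_forest(contamination: float = 0.03) -> IsolationForest:
    """Fallback detector used when the eif package cannot be imported."""
    return IsolationForest(n_estimators=200, contamination=contamination, random_state=42)
```

Because the fallback is scikit-learn's `IsolationForest`, nothing needs to be compiled, which is why the backend keeps starting even though `eif` never installs.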
---

# ✅ The ONLY working solutions

## **SOLUTION A (recommended)**

Replace EIF with **PyOD**, using the equivalent model:

### Installation

```bash
pip install pyod
```

### Usage

```python
from pyod.models.eif import EIF

model = EIF(ntrees=100)
model.fit(data)
scores = model.decision_function(data)
```

Same algorithm, same logic, same output, zero problems.

---

## **SOLUTION B: use Python 3.9 or 3.10 ONLY for EIF**

If for some reason you absolutely want to use *that package*, do the following:

### 1. Install Python 3.10 on AlmaLinux

```bash
sudo dnf install python3.10 python3.10-devel gcc gcc-c++
```

### 2. Create a dedicated venv

```bash
python3.10 -m venv venv_eif
source venv_eif/bin/activate
```

### 3. Install EIF *inside the Python 3.10 venv*

```bash
pip install --upgrade pip setuptools wheel cython numpy
pip install eif==2.0.2
```

**It will work 100%.**

---

# 🔥 Marco, do you want to avoid Python 3.10?

I can rewrite your EIF-based IDS WITHIN 5 MINUTES, replacing it with PyOD and guaranteeing:

* the same behaviour
* the same accuracy or better
* full Python 3.11 support
* zero compilation
* cleaner, more modern code

Shall I go ahead?
@ -1,39 +0,0 @@
|
|||||||
Nov 25 08:47:55 ids.alfacom.it systemd[1]: Stopped IDS ML Backend (FastAPI).
|
|
||||||
Nov 25 08:47:55 ids.alfacom.it systemd[1]: ids-ml-backend.service: Consumed 6min 21.039s CPU time.
|
|
||||||
Nov 25 08:47:55 ids.alfacom.it systemd[1]: Started IDS ML Backend (FastAPI).
|
|
||||||
Nov 25 08:47:58 ids.alfacom.it systemd[1]: ids-ml-backend.service: Main process exited, code=exited, status=1/FAILURE
|
|
||||||
Nov 25 08:47:58 ids.alfacom.it systemd[1]: ids-ml-backend.service: Failed with result 'exit-code'.
|
|
||||||
Nov 25 08:47:58 ids.alfacom.it systemd[1]: ids-ml-backend.service: Consumed 4.156s CPU time.
|
|
||||||
Nov 25 08:48:08 ids.alfacom.it systemd[1]: ids-ml-backend.service: Scheduled restart job, restart counter is at 1.
|
|
||||||
Nov 25 08:48:08 ids.alfacom.it systemd[1]: Stopped IDS ML Backend (FastAPI).
|
|
||||||
Nov 25 08:48:08 ids.alfacom.it systemd[1]: ids-ml-backend.service: Consumed 4.156s CPU time.
|
|
||||||
Nov 25 08:48:08 ids.alfacom.it systemd[1]: Started IDS ML Backend (FastAPI).
|
|
||||||
Nov 25 08:48:11 ids.alfacom.it systemd[1]: ids-ml-backend.service: Main process exited, code=exited, status=1/FAILURE
|
|
||||||
Nov 25 08:48:11 ids.alfacom.it systemd[1]: ids-ml-backend.service: Failed with result 'exit-code'.
|
|
||||||
Nov 25 08:48:11 ids.alfacom.it systemd[1]: ids-ml-backend.service: Consumed 4.059s CPU time.
|
|
||||||
Nov 25 08:48:16 ids.alfacom.it systemd[1]: Stopped IDS ML Backend (FastAPI).
|
|
||||||
Nov 25 08:48:16 ids.alfacom.it systemd[1]: ids-ml-backend.service: Consumed 4.059s CPU time.
|
|
||||||
Nov 25 08:48:16 ids.alfacom.it systemd[1]: Started IDS ML Backend (FastAPI).
|
|
||||||
Nov 25 08:48:18 ids.alfacom.it systemd[1]: ids-ml-backend.service: Main process exited, code=exited, status=1/FAILURE
|
|
||||||
Nov 25 08:48:18 ids.alfacom.it systemd[1]: ids-ml-backend.service: Failed with result 'exit-code'.
|
|
||||||
Nov 25 08:48:18 ids.alfacom.it systemd[1]: ids-ml-backend.service: Consumed 3.908s CPU time.
|
|
||||||
Nov 25 08:48:28 ids.alfacom.it systemd[1]: ids-ml-backend.service: Scheduled restart job, restart counter is at 2.
|
|
||||||
Nov 25 08:48:28 ids.alfacom.it systemd[1]: Stopped IDS ML Backend (FastAPI).
|
|
||||||
Nov 25 08:48:28 ids.alfacom.it systemd[1]: ids-ml-backend.service: Consumed 3.908s CPU time.
|
|
||||||
Nov 25 08:48:28 ids.alfacom.it systemd[1]: Started IDS ML Backend (FastAPI).
|
|
||||||
Nov 25 08:48:31 ids.alfacom.it systemd[1]: ids-ml-backend.service: Main process exited, code=exited, status=1/FAILURE
|
|
||||||
Nov 25 08:48:31 ids.alfacom.it systemd[1]: ids-ml-backend.service: Failed with result 'exit-code'.
|
|
||||||
Nov 25 08:48:31 ids.alfacom.it systemd[1]: ids-ml-backend.service: Consumed 3.952s CPU time.
|
|
||||||
Nov 25 08:48:41 ids.alfacom.it systemd[1]: ids-ml-backend.service: Scheduled restart job, restart counter is at 3.
|
|
||||||
Nov 25 08:48:41 ids.alfacom.it systemd[1]: Stopped IDS ML Backend (FastAPI).
|
|
||||||
Nov 25 08:48:41 ids.alfacom.it systemd[1]: ids-ml-backend.service: Consumed 3.952s CPU time.
|
|
||||||
Nov 25 08:48:41 ids.alfacom.it systemd[1]: Started IDS ML Backend (FastAPI).
|
|
||||||
Nov 25 08:48:43 ids.alfacom.it systemd[1]: ids-ml-backend.service: Main process exited, code=exited, status=1/FAILURE
|
|
||||||
Nov 25 08:48:43 ids.alfacom.it systemd[1]: ids-ml-backend.service: Failed with result 'exit-code'.
|
|
||||||
Nov 25 08:48:43 ids.alfacom.it systemd[1]: ids-ml-backend.service: Consumed 4.019s CPU time.
|
|
||||||
Nov 25 08:48:53 ids.alfacom.it systemd[1]: ids-ml-backend.service: Scheduled restart job, restart counter is at 4.
|
|
||||||
Nov 25 08:48:53 ids.alfacom.it systemd[1]: Stopped IDS ML Backend (FastAPI).
|
|
||||||
Nov 25 08:48:53 ids.alfacom.it systemd[1]: ids-ml-backend.service: Consumed 4.019s CPU time.
|
|
||||||
Nov 25 08:48:53 ids.alfacom.it systemd[1]: ids-ml-backend.service: Start request repeated too quickly.
|
|
||||||
Nov 25 08:48:53 ids.alfacom.it systemd[1]: ids-ml-backend.service: Failed with result 'exit-code'.
|
|
||||||
Nov 25 08:48:53 ids.alfacom.it systemd[1]: Failed to start IDS ML Backend (FastAPI).
|
|
||||||
@ -1,125 +0,0 @@
|
|||||||
cd /opt/ids/python_ml && source venv/bin/activate && python3 main.py
|
|
||||||
[WARNING] Extended Isolation Forest not available, using standard IF
|
|
||||||
[ML] Using Hybrid ML Detector (Extended Isolation Forest + Feature Selection)
|
|
||||||
[HYBRID] Ensemble classifier loaded
|
|
||||||
[HYBRID] Models loaded (version: latest)
|
|
||||||
[HYBRID] Selected features: 18/25
|
|
||||||
[HYBRID] Mode: Hybrid (IF + Ensemble)
|
|
||||||
[ML] ✓ Hybrid detector models loaded and ready
|
|
||||||
Starting IDS API on http://0.0.0.0:8000
|
|
||||||
Docs available at http://0.0.0.0:8000/docs
|
|
||||||
INFO: Started server process [108626]
|
|
||||||
INFO: Waiting for application startup.
|
|
||||||
INFO: Application startup complete.
|
|
||||||
ERROR: [Errno 98] error while attempting to bind on address ('0.0.0.0', 8000): address already in use
|
|
||||||
INFO: Waiting for application shutdown.
|
|
||||||
INFO: Application shutdown complete.
|
|
||||||
(venv) [root@ids python_ml]# ls -la /opt/ids/python_ml/models/
|
|
||||||
total 22896
|
|
||||||
drwxr-xr-x. 2 ids ids 4096 Nov 25 18:30 .
|
|
||||||
drwxr-xr-x. 6 ids ids 4096 Nov 25 12:53 ..
|
|
||||||
-rw-r--r--. 1 root root 235398 Nov 24 18:55 ensemble_classifier_20251124_185541.pkl
|
|
||||||
-rw-r--r--. 1 root root 231504 Nov 24 18:59 ensemble_classifier_20251124_185920.pkl
|
|
||||||
-rw-r--r--. 1 root root 1008222 Nov 24 19:21 ensemble_classifier_20251124_192109.pkl
|
|
||||||
-rw-r--r--. 1 root root 925566 Nov 24 19:21 ensemble_classifier_20251124_192122.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 200159 Nov 25 09:03 ensemble_classifier_20251125_090356.pkl
|
|
||||||
-rw-r--r--. 1 root root 806006 Nov 25 09:27 ensemble_classifier_20251125_092703.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 286079 Nov 25 12:00 ensemble_classifier_20251125_120016.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 398464 Nov 25 18:19 ensemble_classifier_20251125_181945.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 426790 Nov 25 18:27 ensemble_classifier_20251125_182742.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 423651 Nov 25 18:30 ensemble_classifier_20251125_183049.pkl
|
|
||||||
-rw-r--r--. 1 root root 806006 Nov 25 09:27 ensemble_classifier_latest.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 461 Nov 25 00:00 feature_names.json
|
|
||||||
-rw-r--r--. 1 root root 1695 Nov 24 18:55 feature_selector_20251124_185541.pkl
|
|
||||||
-rw-r--r--. 1 root root 1695 Nov 24 18:59 feature_selector_20251124_185920.pkl
|
|
||||||
-rw-r--r--. 1 root root 1695 Nov 24 19:21 feature_selector_20251124_192109.pkl
|
|
||||||
-rw-r--r--. 1 root root 1695 Nov 24 19:21 feature_selector_20251124_192122.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 1695 Nov 25 09:03 feature_selector_20251125_090356.pkl
|
|
||||||
-rw-r--r--. 1 root root 1695 Nov 25 09:27 feature_selector_20251125_092703.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 1695 Nov 25 12:00 feature_selector_20251125_120016.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 1695 Nov 25 18:19 feature_selector_20251125_181945.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 1695 Nov 25 18:27 feature_selector_20251125_182742.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 1695 Nov 25 18:30 feature_selector_20251125_183049.pkl
|
|
||||||
-rw-r--r--. 1 root root 1695 Nov 25 09:27 feature_selector_latest.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 813592 Nov 25 00:00 isolation_forest.joblib
|
|
||||||
-rw-r--r--. 1 root root 1674808 Nov 24 18:55 isolation_forest_20251124_185541.pkl
|
|
||||||
-rw-r--r--. 1 root root 1642600 Nov 24 18:59 isolation_forest_20251124_185920.pkl
|
|
||||||
-rw-r--r--. 1 root root 1482984 Nov 24 19:21 isolation_forest_20251124_192109.pkl
|
|
||||||
-rw-r--r--. 1 root root 1465736 Nov 24 19:21 isolation_forest_20251124_192122.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 1139256 Nov 25 09:03 isolation_forest_20251125_090356.pkl
|
|
||||||
-rw-r--r--. 1 root root 1428424 Nov 25 09:27 isolation_forest_20251125_092703.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 1855240 Nov 25 12:00 isolation_forest_20251125_120016.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 1519784 Nov 25 18:19 isolation_forest_20251125_181945.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 1511688 Nov 25 18:27 isolation_forest_20251125_182742.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 1559208 Nov 25 18:30 isolation_forest_20251125_183049.pkl
|
|
||||||
-rw-r--r--. 1 root root 1428424 Nov 25 09:27 isolation_forest_latest.pkl
|
|
||||||
-rw-r--r--. 1 root root 1661 Nov 24 18:55 metadata_20251124_185541.json
|
|
||||||
-rw-r--r--. 1 root root 1661 Nov 24 18:59 metadata_20251124_185920.json
|
|
||||||
-rw-r--r--. 1 root root 1675 Nov 24 19:21 metadata_20251124_192109.json
|
|
||||||
-rw-r--r--. 1 root root 1675 Nov 24 19:21 metadata_20251124_192122.json
|
|
||||||
-rw-r--r--. 1 root root 1675 Nov 25 09:27 metadata_20251125_092703.json
|
|
||||||
-rw-r--r--. 1 root root 1675 Nov 25 09:27 metadata_latest.json
|
|
||||||
-rw-r--r--. 1 ids ids 2015 Nov 25 00:00 scaler.joblib
|
|
||||||
-rw-r--r--. 1 root root 1047 Nov 24 18:55 scaler_20251124_185541.pkl
|
|
||||||
-rw-r--r--. 1 root root 1047 Nov 24 18:59 scaler_20251124_185920.pkl
|
|
||||||
-rw-r--r--. 1 root root 1047 Nov 24 19:21 scaler_20251124_192109.pkl
|
|
||||||
-rw-r--r--. 1 root root 1047 Nov 24 19:21 scaler_20251124_192122.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 1047 Nov 25 09:03 scaler_20251125_090356.pkl
|
|
||||||
-rw-r--r--. 1 root root 1047 Nov 25 09:27 scaler_20251125_092703.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 1047 Nov 25 12:00 scaler_20251125_120016.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 1047 Nov 25 18:19 scaler_20251125_181945.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 1047 Nov 25 18:27 scaler_20251125_182742.pkl
|
|
||||||
-rw-r--r--. 1 ids ids 1047 Nov 25 18:30 scaler_20251125_183049.pkl
|
|
||||||
-rw-r--r--. 1 root root 1047 Nov 25 09:27 scaler_latest.pkl
|
|
||||||
(venv) [root@ids python_ml]# tail -n 50 /var/log/ids/ml_backend.log
|
|
||||||
[HYBRID] Selected features: 18/25
|
|
||||||
[HYBRID] Mode: Hybrid (IF + Ensemble)
|
|
||||||
[ML] ✓ Hybrid detector models loaded and ready
|
|
||||||
🚀 Starting IDS API on http://0.0.0.0:8000
|
|
||||||
📚 Docs available at http://0.0.0.0:8000/docs
|
|
||||||
INFO: Started server process [108413]
|
|
||||||
INFO: Waiting for application startup.
|
|
||||||
INFO: Application startup complete.
|
|
||||||
ERROR: [Errno 98] error while attempting to bind on address ('0.0.0.0', 8000): address already in use
|
|
||||||
INFO: Waiting for application shutdown.
|
|
||||||
INFO: Application shutdown complete.
|
|
||||||
[WARNING] Extended Isolation Forest not available, using standard IF
|
|
||||||
[ML] Using Hybrid ML Detector (Extended Isolation Forest + Feature Selection)
|
|
||||||
[HYBRID] Ensemble classifier loaded
|
|
||||||
[HYBRID] Models loaded (version: latest)
|
|
||||||
[HYBRID] Selected features: 18/25
|
|
||||||
[HYBRID] Mode: Hybrid (IF + Ensemble)
|
|
||||||
[ML] ✓ Hybrid detector models loaded and ready
|
|
||||||
🚀 Starting IDS API on http://0.0.0.0:8000
|
|
||||||
📚 Docs available at http://0.0.0.0:8000/docs
|
|
||||||
INFO: Started server process [108452]
|
|
||||||
INFO: Waiting for application startup.
|
|
||||||
INFO: Application startup complete.
|
|
||||||
ERROR: [Errno 98] error while attempting to bind on address ('0.0.0.0', 8000): address already in use
|
|
||||||
INFO: Waiting for application shutdown.
|
|
||||||
INFO: Application shutdown complete.
|
|
||||||
[WARNING] Extended Isolation Forest not available, using standard IF
|
|
||||||
[ML] Using Hybrid ML Detector (Extended Isolation Forest + Feature Selection)
|
|
||||||
[HYBRID] Ensemble classifier loaded
|
|
||||||
[HYBRID] Models loaded (version: latest)
|
|
||||||
[HYBRID] Selected features: 18/25
|
|
||||||
[HYBRID] Mode: Hybrid (IF + Ensemble)
|
|
||||||
[ML] ✓ Hybrid detector models loaded and ready
|
|
||||||
🚀 Starting IDS API on http://0.0.0.0:8000
|
|
||||||
📚 Docs available at http://0.0.0.0:8000/docs
|
|
||||||
INFO: Started server process [108530]
|
|
||||||
INFO: Waiting for application startup.
|
|
||||||
INFO: Application startup complete.
|
|
||||||
ERROR: [Errno 98] error while attempting to bind on address ('0.0.0.0', 8000): address already in use
|
|
||||||
INFO: Waiting for application shutdown.
|
|
||||||
INFO: Application shutdown complete.
|
|
||||||
[WARNING] Extended Isolation Forest not available, using standard IF
|
|
||||||
[ML] Using Hybrid ML Detector (Extended Isolation Forest + Feature Selection)
|
|
||||||
[HYBRID] Ensemble classifier loaded
|
|
||||||
[HYBRID] Models loaded (version: latest)
|
|
||||||
[HYBRID] Selected features: 18/25
|
|
||||||
[HYBRID] Mode: Hybrid (IF + Ensemble)
|
|
||||||
[ML] ✓ Hybrid detector models loaded and ready
|
|
||||||
🚀 Starting IDS API on http://0.0.0.0:8000
|
|
||||||
📚 Docs available at http://0.0.0.0:8000/docs
|
|
||||||
(venv) [root@ids python_ml]#
|
|
||||||
@ -1,4 +0,0 @@
curl -X POST http://localhost:8000/detect \
  -H "Content-Type: application/json" \
  -d '{"max_records": 5000, "hours_back": 1, "risk_threshold": 80, "auto_block": true}'
{"detections":[{"source_ip":"108.139.210.107","risk_score":98.55466848373413,"confidence_level":"high","action_recommendation":"auto_block","anomaly_type":"ddos","reason":"High connection rate: 403.7 conn/s","log_count":1211,"total_packets":1211,"total_bytes":2101702,"first_seen":"2026-01-02T16:41:51","last_seen":"2026-01-02T16:41:54","confidence":95.0},{"source_ip":"216.58.209.54","risk_score":95.52801848493884,"confidence_level":"high","action_recommendation":"auto_block","anomaly_type":"brute_force","reason":"High connection rate: 184.7 conn/s","log_count":554,"total_packets":554,"total_bytes":782397,"first_seen":"2026-01-02T16:41:51","last_seen":"2026-01-02T16:41:54","confidence":95.0},{"source_ip":"95.127.69.202","risk_score":93.58280514393482,"confidence_level":"medium","action_recommendation":"manual_review","anomaly_type":"brute_force","reason":"High connection rate: 93.7 conn/s","log_count":281,"total_packets":281,"total_bytes":369875,"first_seen":"2026-01-02T16:41:51","last_seen":"2026-01-02T16:41:54","confidence":75.0},{"source_ip":"95.127.72.207","risk_score":92.50694363471318,"confidence_level":"medium","action_recommendation":"manual_review","anomaly_type":"brute_force","reason":"High connection rate: 76.3 conn/s","log_count":229,"total_packets":229,"total_bytes":293439,"first_seen":"2026-01-02T16:41:51","last_seen":"2026-01-02T16:41:54","confidence":75.0},{"source_ip":"95.110.183.67","risk_score":86.42278405656512,"confidence_level":"medium","action_recommendation":"manual_review","anomaly_type":"brute_force","reason":"High connection rate: 153.0 conn/s","log_count":459,"total_packets":459,"total_bytes":20822,"first_seen":"2026-01-02T16:41:51","last_seen":"2026-01-02T16:41:54","confidence":75.0},{"source_ip":"54.75.71.86","risk_score":83.42037059381207,"confidence_level":"medium","action_recommendation":"manual_review","anomaly_type":"brute_force","reason":"High connection rate: 58.0 conn/s","log_count":174,"total_packets":174,"total_bytes":25857,"first_seen":"2026-01-02T16:41:51","last_seen":"2026-01-02T16:41:54","confidence":75.0},{"source_ip":"79.10.127.217","risk_score":82.32814469102843,"confidence_level":"medium","action_recommendation":"manual_review","anomaly_type":"brute_force","reason":"High connection rate: 70.0 conn/s","log_count":210,"total_packets":210,"total_bytes":18963,"first_seen":"2026-01-02T16:41:51","last_seen":"2026-01-02T16:41:54","confidence":75.0},{"source_ip":"142.251.140.100","risk_score":76.61422108557721,"confidence_level":"medium","action_recommendation":"manual_review","anomaly_type":"botnet","reason":"Anomalous pattern detected (botnet)","log_count":16,"total_packets":16,"total_bytes":20056,"first_seen":"2026-01-02T16:41:51","last_seen":"2026-01-02T16:41:53","confidence":75.0},{"source_ip":"142.250.181.161","risk_score":76.3802033958719,"confidence_level":"medium","action_recommendation":"manual_review","anomaly_type":"botnet","reason":"Anomalous pattern detected (botnet)","log_count":15,"total_packets":15,"total_bytes":5214,"first_seen":"2026-01-02T16:41:51","last_seen":"2026-01-02T16:41:51","confidence":75.0},{"source_ip":"142.250.180.131","risk_score":72.7723405111559,"confidence_level":"medium","action_recommendation":"manual_review","anomaly_type":"suspicious","reason":"Anomalous pattern detected 
(suspicious)","log_count":8,"total_packets":8,"total_bytes":5320,"first_seen":"2026-01-02T16:41:51","last_seen":"2026-01-02T16:41:53","confidence":75.0},{"source_ip":"157.240.231.60","risk_score":72.26853648050493,"confidence_level":"medium","action_recommendation":"manual_review","anomaly_type":"botnet","reason":"Anomalous pattern detected (botnet)","log_count":16,"total_packets":16,"total_bytes":4624,"first_seen":"2026-01-02T16:41:51","last_seen":"2026-01-02T16:41:54","confidence":75.0}],"total":11,"blocked":0,"message":"Trovate 11 anomalie"}[root@ids python_ml]#
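The same call from Python, useful when scripting against the backend; the field names are taken from the JSON response above, the rest is an illustrative sketch:

```python
import requests  # any HTTP client works; requests is assumed here

# Mirrors the curl call above.
payload = {"max_records": 5000, "hours_back": 1, "risk_threshold": 80, "auto_block": True}
resp = requests.post("http://localhost:8000/detect", json=payload, timeout=60)
resp.raise_for_status()
data = resp.json()

print(f"{data['total']} anomalies, {data['blocked']} blocked")
for det in data["detections"]:
    # Print only the ones the backend wants a human to look at.
    if det["action_recommendation"] == "manual_review":
        print(f"{det['source_ip']:>16}  risk={det['risk_score']:.1f}  {det['anomaly_type']}")
```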
@ -1,51 +0,0 @@
|
|||||||
journalctl -u ids-list-fetcher -n 50 --no-pager
|
|
||||||
Jan 02 12:50:02 ids.alfacom.it ids-list-fetcher[5900]: ============================================================
|
|
||||||
Jan 02 12:50:02 ids.alfacom.it systemd[1]: ids-list-fetcher.service: Deactivated successfully.
|
|
||||||
Jan 02 12:50:02 ids.alfacom.it systemd[1]: Finished IDS Public Lists Fetcher Service.
|
|
||||||
Jan 02 12:54:56 ids.alfacom.it systemd[1]: Starting IDS Public Lists Fetcher Service...
|
|
||||||
Jan 02 12:54:56 ids.alfacom.it ids-list-fetcher[6290]: ============================================================
|
|
||||||
Jan 02 12:54:56 ids.alfacom.it ids-list-fetcher[6290]: [2026-01-02 12:54:56] PUBLIC LISTS SYNC
|
|
||||||
Jan 02 12:54:56 ids.alfacom.it ids-list-fetcher[6290]: ============================================================
|
|
||||||
Jan 02 12:54:56 ids.alfacom.it ids-list-fetcher[6290]: Found 2 enabled lists
|
|
||||||
Jan 02 12:54:56 ids.alfacom.it ids-list-fetcher[6290]: [12:54:56] Downloading Spamhaus from https://www.spamhaus.org/drop/drop_v4.json...
|
|
||||||
Jan 02 12:54:56 ids.alfacom.it ids-list-fetcher[6290]: [12:54:56] Downloading AWS from https://ip-ranges.amazonaws.com/ip-ranges.json...
|
|
||||||
Jan 02 12:54:56 ids.alfacom.it ids-list-fetcher[6290]: [12:54:56] Parsing AWS...
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: [12:54:57] Found 9548 IPs, syncing to database...
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: [12:54:57] ✓ AWS: +0 -0 ~9511
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: [12:54:57] Parsing Spamhaus...
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: [12:54:57] Found 1468 IPs, syncing to database...
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: [12:54:57] ✗ Spamhaus: ON CONFLICT DO UPDATE command cannot affect row a second time
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: HINT: Ensure that no rows proposed for insertion within the same command have duplicate constrained values.
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: ============================================================
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: SYNC SUMMARY
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: ============================================================
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: Success: 1/2
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: Errors: 1/2
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: Total IPs Added: 0
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: Total IPs Removed: 0
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: ============================================================
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: ============================================================
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: RUNNING MERGE LOGIC
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: ============================================================
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: ERROR:merge_logic:Failed to cleanup detections: operator does not exist: inet = text
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: LINE 9: d.source_ip::inet = wl.ip_inet
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: ^
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: ERROR:merge_logic:Failed to sync detections: operator does not exist: text <<= text
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: LINE 30: OR bl.ip_inet <<= wl.ip_inet
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: ^
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: Traceback (most recent call last):
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: File "/opt/ids/python_ml/merge_logic.py", line 264, in sync_public_blacklist_detections
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: cur.execute("""
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: psycopg2.errors.UndefinedFunction: operator does not exist: text <<= text
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: LINE 30: OR bl.ip_inet <<= wl.ip_inet
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: ^
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: Merge Logic Stats:
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: Created detections: 0
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: Cleaned invalid detections: 0
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: Skipped (whitelisted): 0
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it ids-list-fetcher[6290]: ============================================================
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it systemd[1]: ids-list-fetcher.service: Deactivated successfully.
|
|
||||||
Jan 02 12:54:57 ids.alfacom.it systemd[1]: Finished IDS Public Lists Fetcher Service.
|
|
||||||
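Two separate problems show up in this run: the Spamhaus sync fails because the same constrained value appears twice inside one `INSERT ... ON CONFLICT DO UPDATE` batch, and the merge logic fails because text columns are compared with `inet` operators. A sketch of both fixes; apart from the column names quoted in the log (`source_ip`, `ip_inet`), the details are assumptions about merge_logic.py:

```python
# 1) Deduplicate the batch before INSERT ... ON CONFLICT DO UPDATE, which
#    PostgreSQL rejects when the same constrained value occurs twice.
def dedupe_rows(rows):
    seen, unique = set(), []
    for row in rows:              # row[0] is the IP/CIDR string
        if row[0] not in seen:
            seen.add(row[0])
            unique.append(row)
    return unique

# 2) Cast the text columns explicitly so the inet operators resolve,
#    matching the failing lines quoted in the traceback:
#      d.source_ip::inet = wl.ip_inet       ->  d.source_ip::inet = wl.ip_inet::inet
#      OR bl.ip_inet <<= wl.ip_inet         ->  OR bl.ip_inet::inet <<= wl.ip_inet::inet
```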
@ -1,51 +0,0 @@
|
|||||||
journalctl -u ids-list-fetcher -n 50 --no-pager
|
|
||||||
Jan 02 16:11:31 ids.alfacom.it ids-list-fetcher[10401]: HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
|
|
||||||
Jan 02 16:11:31 ids.alfacom.it ids-list-fetcher[10401]: Merge Logic Stats:
|
|
||||||
Jan 02 16:11:31 ids.alfacom.it ids-list-fetcher[10401]: Created detections: 0
|
|
||||||
Jan 02 16:11:31 ids.alfacom.it ids-list-fetcher[10401]: Cleaned invalid detections: 0
|
|
||||||
Jan 02 16:11:31 ids.alfacom.it ids-list-fetcher[10401]: Skipped (whitelisted): 0
|
|
||||||
Jan 02 16:11:31 ids.alfacom.it ids-list-fetcher[10401]: ============================================================
|
|
||||||
Jan 02 16:11:31 ids.alfacom.it systemd[1]: ids-list-fetcher.service: Deactivated successfully.
|
|
||||||
Jan 02 16:11:31 ids.alfacom.it systemd[1]: Finished IDS Public Lists Fetcher Service.
|
|
||||||
Jan 02 16:15:04 ids.alfacom.it systemd[1]: Starting IDS Public Lists Fetcher Service...
|
|
||||||
Jan 02 16:15:04 ids.alfacom.it ids-list-fetcher[10801]: ============================================================
|
|
||||||
Jan 02 16:15:04 ids.alfacom.it ids-list-fetcher[10801]: [2026-01-02 16:15:04] PUBLIC LISTS SYNC
|
|
||||||
Jan 02 16:15:04 ids.alfacom.it ids-list-fetcher[10801]: ============================================================
|
|
||||||
Jan 02 16:15:04 ids.alfacom.it ids-list-fetcher[10801]: Found 2 enabled lists
|
|
||||||
Jan 02 16:15:04 ids.alfacom.it ids-list-fetcher[10801]: [16:15:04] Downloading Spamhaus from https://www.spamhaus.org/drop/drop_v4.json...
|
|
||||||
Jan 02 16:15:04 ids.alfacom.it ids-list-fetcher[10801]: [16:15:04] Downloading AWS from https://ip-ranges.amazonaws.com/ip-ranges.json...
|
|
||||||
Jan 02 16:15:04 ids.alfacom.it ids-list-fetcher[10801]: [16:15:04] Parsing Spamhaus...
|
|
||||||
Jan 02 16:15:04 ids.alfacom.it ids-list-fetcher[10801]: [16:15:04] Found 1468 IPs, syncing to database...
|
|
||||||
Jan 02 16:15:04 ids.alfacom.it ids-list-fetcher[10801]: [16:15:04] ✓ Spamhaus: +0 -0 ~1468
|
|
||||||
Jan 02 16:15:04 ids.alfacom.it ids-list-fetcher[10801]: [16:15:04] Parsing AWS...
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: [16:15:05] Found 9548 IPs, syncing to database...
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: [16:15:05] ✓ AWS: +9548 -0 ~0
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: ============================================================
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: SYNC SUMMARY
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: ============================================================
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: Success: 2/2
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: Errors: 0/2
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: Total IPs Added: 9548
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: Total IPs Removed: 0
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: ============================================================
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: ============================================================
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: RUNNING MERGE LOGIC
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: ============================================================
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: ERROR:merge_logic:Failed to sync detections: column "risk_score" is of type numeric but expression is of type text
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: LINE 13: '75',
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: ^
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: HINT: You will need to rewrite or cast the expression.
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: Traceback (most recent call last):
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: File "/opt/ids/python_ml/merge_logic.py", line 264, in sync_public_blacklist_detections
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: cur.execute("""
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: psycopg2.errors.DatatypeMismatch: column "risk_score" is of type numeric but expression is of type text
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: LINE 13: '75',
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: ^
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: HINT: You will need to rewrite or cast the expression.
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: Merge Logic Stats:
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: Created detections: 0
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: Cleaned invalid detections: 0
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: Skipped (whitelisted): 0
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it ids-list-fetcher[10801]: ============================================================
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it systemd[1]: ids-list-fetcher.service: Deactivated successfully.
|
|
||||||
Jan 02 16:15:05 ids.alfacom.it systemd[1]: Finished IDS Public Lists Fetcher Service.
|
|
||||||
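With the casts fixed, the remaining failure is the literal string `'75'` being inserted into the numeric `risk_score` column. Passing the value as a bound numeric parameter (or casting it, `'75'::numeric`) resolves it; a hypothetical sketch, since the real INSERT in merge_logic.py has more columns than shown here:

```python
import psycopg2

# Only risk_score and the 75 value come from the log above; the connection
# string and the other column names are placeholders.
conn = psycopg2.connect("dbname=ids")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        INSERT INTO detections (source_ip, anomaly_type, risk_score, confidence)
        VALUES (%s, %s, %s, %s)
        """,
        ("203.0.113.7", "blacklist", 75.0, 75.0),   # numeric, not the string '75'
    )
```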
@ -1,82 +0,0 @@
|
|||||||
netstat -tlnp | grep 8000
|
|
||||||
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 106309/python3.11
|
|
||||||
(venv) [root@ids python_ml]# lsof -i :8000
|
|
||||||
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
|
|
||||||
python3.1 106309 ids 7u IPv4 805799 0t0 TCP *:irdmi (LISTEN)
|
|
||||||
(venv) [root@ids python_ml]# kill -9 106309
|
|
||||||
(venv) [root@ids python_ml]# lsof -i :8000
|
|
||||||
(venv) [root@ids python_ml]# pkill -9 -f "python.*8000"
|
|
||||||
(venv) [root@ids python_ml]# pkill -9 -f "python.*main.py"
|
|
||||||
(venv) [root@ids python_ml]# sudo systemctl restart ids-ml-backend
|
|
||||||
Job for ids-ml-backend.service failed because the control process exited with error code.
|
|
||||||
See "systemctl status ids-ml-backend.service" and "journalctl -xeu ids-ml-backend.service" for details.
|
|
||||||
(venv) [root@ids python_ml]# sudo systemctl status ids-ml-backend
|
|
||||||
× ids-ml-backend.service - IDS ML Backend (FastAPI)
|
|
||||||
Loaded: loaded (/etc/systemd/system/ids-ml-backend.service; enabled; preset: disabled)
|
|
||||||
Active: failed (Result: exit-code) since Tue 2025-11-25 18:31:08 CET; 3min 37s ago
|
|
||||||
Duration: 2.490s
|
|
||||||
Process: 108530 ExecStart=/opt/ids/python_ml/venv/bin/python3 main.py (code=exited, status=1/FAILURE)
|
|
||||||
Main PID: 108530 (code=exited, status=1/FAILURE)
|
|
||||||
CPU: 3.987s
|
|
||||||
|
|
||||||
Nov 25 18:31:08 ids.alfacom.it systemd[1]: ids-ml-backend.service: Scheduled restart job, restart counter is at 5.
|
|
||||||
Nov 25 18:31:08 ids.alfacom.it systemd[1]: Stopped IDS ML Backend (FastAPI).
|
|
||||||
Nov 25 18:31:08 ids.alfacom.it systemd[1]: ids-ml-backend.service: Consumed 3.987s CPU time.
|
|
||||||
Nov 25 18:31:08 ids.alfacom.it systemd[1]: ids-ml-backend.service: Start request repeated too quickly.
|
|
||||||
Nov 25 18:31:08 ids.alfacom.it systemd[1]: ids-ml-backend.service: Failed with result 'exit-code'.
|
|
||||||
Nov 25 18:31:08 ids.alfacom.it systemd[1]: Failed to start IDS ML Backend (FastAPI).
|
|
||||||
Nov 25 18:34:35 ids.alfacom.it systemd[1]: ids-ml-backend.service: Start request repeated too quickly.
|
|
||||||
Nov 25 18:34:35 ids.alfacom.it systemd[1]: ids-ml-backend.service: Failed with result 'exit-code'.
|
|
||||||
Nov 25 18:34:35 ids.alfacom.it systemd[1]: Failed to start IDS ML Backend (FastAPI).
|
|
||||||
(venv) [root@ids python_ml]# tail -n 50 /var/log/ids/ml_backend.log
|
|
||||||
[HYBRID] Selected features: 18/25
|
|
||||||
[HYBRID] Mode: Hybrid (IF + Ensemble)
|
|
||||||
[ML] ✓ Hybrid detector models loaded and ready
|
|
||||||
🚀 Starting IDS API on http://0.0.0.0:8000
|
|
||||||
📚 Docs available at http://0.0.0.0:8000/docs
|
|
||||||
INFO: Started server process [108413]
|
|
||||||
INFO: Waiting for application startup.
|
|
||||||
INFO: Application startup complete.
|
|
||||||
ERROR: [Errno 98] error while attempting to bind on address ('0.0.0.0', 8000): address already in use
|
|
||||||
INFO: Waiting for application shutdown.
|
|
||||||
INFO: Application shutdown complete.
|
|
||||||
[WARNING] Extended Isolation Forest not available, using standard IF
|
|
||||||
[ML] Using Hybrid ML Detector (Extended Isolation Forest + Feature Selection)
|
|
||||||
[HYBRID] Ensemble classifier loaded
|
|
||||||
[HYBRID] Models loaded (version: latest)
|
|
||||||
[HYBRID] Selected features: 18/25
|
|
||||||
[HYBRID] Mode: Hybrid (IF + Ensemble)
|
|
||||||
[ML] ✓ Hybrid detector models loaded and ready
|
|
||||||
🚀 Starting IDS API on http://0.0.0.0:8000
|
|
||||||
📚 Docs available at http://0.0.0.0:8000/docs
|
|
||||||
INFO: Started server process [108452]
|
|
||||||
INFO: Waiting for application startup.
|
|
||||||
INFO: Application startup complete.
|
|
||||||
ERROR: [Errno 98] error while attempting to bind on address ('0.0.0.0', 8000): address already in use
|
|
||||||
INFO: Waiting for application shutdown.
|
|
||||||
INFO: Application shutdown complete.
|
|
||||||
[WARNING] Extended Isolation Forest not available, using standard IF
|
|
||||||
[ML] Using Hybrid ML Detector (Extended Isolation Forest + Feature Selection)
|
|
||||||
[HYBRID] Ensemble classifier loaded
|
|
||||||
[HYBRID] Models loaded (version: latest)
|
|
||||||
[HYBRID] Selected features: 18/25
|
|
||||||
[HYBRID] Mode: Hybrid (IF + Ensemble)
|
|
||||||
[ML] ✓ Hybrid detector models loaded and ready
|
|
||||||
🚀 Starting IDS API on http://0.0.0.0:8000
|
|
||||||
📚 Docs available at http://0.0.0.0:8000/docs
|
|
||||||
INFO: Started server process [108530]
|
|
||||||
INFO: Waiting for application startup.
|
|
||||||
INFO: Application startup complete.
|
|
||||||
ERROR: [Errno 98] error while attempting to bind on address ('0.0.0.0', 8000): address already in use
|
|
||||||
INFO: Waiting for application shutdown.
|
|
||||||
INFO: Application shutdown complete.
|
|
||||||
[WARNING] Extended Isolation Forest not available, using standard IF
|
|
||||||
[ML] Using Hybrid ML Detector (Extended Isolation Forest + Feature Selection)
|
|
||||||
[HYBRID] Ensemble classifier loaded
|
|
||||||
[HYBRID] Models loaded (version: latest)
|
|
||||||
[HYBRID] Selected features: 18/25
|
|
||||||
[HYBRID] Mode: Hybrid (IF + Ensemble)
|
|
||||||
[ML] ✓ Hybrid detector models loaded and ready
|
|
||||||
🚀 Starting IDS API on http://0.0.0.0:8000
|
|
||||||
📚 Docs available at http://0.0.0.0:8000/docs
|
|
||||||
(venv) [root@ids python_ml]#
|
|
||||||
@ -1,51 +0,0 @@
|
|||||||
journalctl -u ids-list-fetcher -n 50 --no-pager
|
|
||||||
Jan 02 12:30:01 ids.alfacom.it ids-list-fetcher[5571]: Cleaned invalid detections: 0
|
|
||||||
Jan 02 12:30:01 ids.alfacom.it ids-list-fetcher[5571]: Skipped (whitelisted): 0
|
|
||||||
Jan 02 12:30:01 ids.alfacom.it ids-list-fetcher[5571]: ============================================================
|
|
||||||
Jan 02 12:30:01 ids.alfacom.it systemd[1]: ids-list-fetcher.service: Deactivated successfully.
|
|
||||||
Jan 02 12:30:01 ids.alfacom.it systemd[1]: Finished IDS Public Lists Fetcher Service.
|
|
||||||
Jan 02 12:40:01 ids.alfacom.it systemd[1]: Starting IDS Public Lists Fetcher Service...
|
|
||||||
Jan 02 12:40:01 ids.alfacom.it ids-list-fetcher[5730]: ============================================================
|
|
||||||
Jan 02 12:40:01 ids.alfacom.it ids-list-fetcher[5730]: [2026-01-02 12:40:01] PUBLIC LISTS SYNC
|
|
||||||
Jan 02 12:40:01 ids.alfacom.it ids-list-fetcher[5730]: ============================================================
|
|
||||||
Jan 02 12:40:01 ids.alfacom.it ids-list-fetcher[5730]: Found 2 enabled lists
|
|
||||||
Jan 02 12:40:01 ids.alfacom.it ids-list-fetcher[5730]: [12:40:01] Downloading Spamhaus from https://www.spamhaus.org/drop/drop_v4.json...
|
|
||||||
Jan 02 12:40:01 ids.alfacom.it ids-list-fetcher[5730]: [12:40:01] Downloading AWS from https://ip-ranges.amazonaws.com/ip-ranges.json...
|
|
||||||
Jan 02 12:40:01 ids.alfacom.it ids-list-fetcher[5730]: [12:40:01] Parsing AWS...
|
|
||||||
Jan 02 12:40:01 ids.alfacom.it ids-list-fetcher[5730]: [12:40:01] Found 9548 IPs, syncing to database...
|
|
||||||
Jan 02 12:40:02 ids.alfacom.it ids-list-fetcher[5730]: [12:40:02] ✓ AWS: +9511 -0 ~0
|
|
||||||
Jan 02 12:40:02 ids.alfacom.it ids-list-fetcher[5730]: [12:40:02] Parsing Spamhaus...
|
|
||||||
Jan 02 12:40:02 ids.alfacom.it ids-list-fetcher[5730]: [12:40:02] ✗ Spamhaus: No valid IPs found in list
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: ============================================================
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: SYNC SUMMARY
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: ============================================================
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: Success: 1/2
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: Errors: 1/2
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: Total IPs Added: 9511
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: Total IPs Removed: 0
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: ============================================================
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: ============================================================
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: RUNNING MERGE LOGIC
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: ============================================================
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: ERROR:merge_logic:Failed to cleanup detections: operator does not exist: inet = text
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: LINE 9: d.source_ip::inet = wl.ip_inet
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: ^
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: ERROR:merge_logic:Failed to sync detections: operator does not exist: text <<= text
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: LINE 30: OR bl.ip_inet <<= wl.ip_inet
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: ^
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: Traceback (most recent call last):
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: File "/opt/ids/python_ml/merge_logic.py", line 264, in sync_public_blacklist_detections
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: cur.execute("""
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: psycopg2.errors.UndefinedFunction: operator does not exist: text <<= text
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: LINE 30: OR bl.ip_inet <<= wl.ip_inet
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: ^
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: HINT: No operator matches the given name and argument types. You might need to add explicit type casts.
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: Merge Logic Stats:
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: Created detections: 0
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: Cleaned invalid detections: 0
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: Skipped (whitelisted): 0
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it ids-list-fetcher[5730]: ============================================================
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it systemd[1]: ids-list-fetcher.service: Deactivated successfully.
|
|
||||||
Jan 02 12:40:03 ids.alfacom.it systemd[1]: Finished IDS Public Lists Fetcher Service.
|
|
||||||
@ -1,54 +0,0 @@
python train_hybrid.py --test
[WARNING] Extended Isolation Forest not available, using standard IF

======================================================================
IDS HYBRID ML TEST - SYNTHETIC DATA
======================================================================
INFO:dataset_loader:Creating sample dataset (10000 samples)...
INFO:dataset_loader:Sample dataset created: 10000 rows
INFO:dataset_loader:Attack distribution:
attack_type
normal         8981
brute_force     273
suspicious      258
ddos            257
port_scan       231
Name: count, dtype: int64

[TEST] Created synthetic dataset: 10000 samples
  Normal: 8,981 (89.8%)
  Attacks: 1,019 (10.2%)

[TEST] Training on 6,281 normal samples...
[HYBRID] Training hybrid model on 6281 logs...

❌ Error: 'timestamp'
Traceback (most recent call last):
  File "/opt/ids/python_ml/venv/lib64/python3.11/site-packages/pandas/core/indexes/base.py", line 3790, in get_loc
    return self._engine.get_loc(casted_key)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "index.pyx", line 152, in pandas._libs.index.IndexEngine.get_loc
  File "index.pyx", line 181, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 7080, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 7088, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'timestamp'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/ids/python_ml/train_hybrid.py", line 361, in main
    test_on_synthetic(args)
  File "/opt/ids/python_ml/train_hybrid.py", line 249, in test_on_synthetic
    detector.train_unsupervised(normal_train)
  File "/opt/ids/python_ml/ml_hybrid_detector.py", line 204, in train_unsupervised
    features_df = self.extract_features(logs_df)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ids/python_ml/ml_hybrid_detector.py", line 98, in extract_features
    logs_df['timestamp'] = pd.to_datetime(logs_df['timestamp'])
                                          ~~~~~~~^^^^^^^^^^^^^
  File "/opt/ids/python_ml/venv/lib64/python3.11/site-packages/pandas/core/frame.py", line 3893, in __getitem__
    indexer = self.columns.get_loc(key)
              ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/ids/python_ml/venv/lib64/python3.11/site-packages/pandas/core/indexes/base.py", line 3797, in get_loc
    raise KeyError(key) from err
KeyError: 'timestamp'
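`extract_features()` hard-requires a `timestamp` column (line 98 of `ml_hybrid_detector.py`), and the synthetic dataset from `dataset_loader` evidently does not provide one under that name. A guard along these lines, applied to the synthetic frame before calling `train_unsupervised()`, would avoid the KeyError; the alternative column names are assumptions:

```python
import pandas as pd

# Illustrative guard for the KeyError above.
def ensure_timestamp(logs_df: pd.DataFrame) -> pd.DataFrame:
    if "timestamp" not in logs_df.columns:
        for alt in ("time", "datetime", "ts"):          # possible synthetic names
            if alt in logs_df.columns:
                logs_df = logs_df.rename(columns={alt: "timestamp"})
                break
        else:
            # Last resort: synthesize evenly spaced timestamps for the test run.
            logs_df = logs_df.assign(
                timestamp=pd.date_range("2025-01-01", periods=len(logs_df), freq="s")
            )
    logs_df["timestamp"] = pd.to_datetime(logs_df["timestamp"])
    return logs_df
```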
Binary file not shown. (deleted image; previously 42 KiB)
@ -4,14 +4,13 @@ import { QueryClientProvider } from "@tanstack/react-query";
|
|||||||
import { Toaster } from "@/components/ui/toaster";
|
import { Toaster } from "@/components/ui/toaster";
|
||||||
import { TooltipProvider } from "@/components/ui/tooltip";
|
import { TooltipProvider } from "@/components/ui/tooltip";
|
||||||
import { SidebarProvider, Sidebar, SidebarContent, SidebarGroup, SidebarGroupContent, SidebarGroupLabel, SidebarMenu, SidebarMenuButton, SidebarMenuItem, SidebarTrigger } from "@/components/ui/sidebar";
|
import { SidebarProvider, Sidebar, SidebarContent, SidebarGroup, SidebarGroupContent, SidebarGroupLabel, SidebarMenu, SidebarMenuButton, SidebarMenuItem, SidebarTrigger } from "@/components/ui/sidebar";
|
||||||
import { LayoutDashboard, AlertTriangle, Server, Shield, Brain, Menu, Activity, BarChart3, TrendingUp, List } from "lucide-react";
|
import { LayoutDashboard, AlertTriangle, Server, Shield, Brain, Menu, Activity, BarChart3, TrendingUp } from "lucide-react";
|
||||||
import Dashboard from "@/pages/Dashboard";
|
import Dashboard from "@/pages/Dashboard";
|
||||||
import Detections from "@/pages/Detections";
|
import Detections from "@/pages/Detections";
|
||||||
import DashboardLive from "@/pages/DashboardLive";
|
import DashboardLive from "@/pages/DashboardLive";
|
||||||
import AnalyticsHistory from "@/pages/AnalyticsHistory";
|
import AnalyticsHistory from "@/pages/AnalyticsHistory";
|
||||||
import Routers from "@/pages/Routers";
|
import Routers from "@/pages/Routers";
|
||||||
import Whitelist from "@/pages/Whitelist";
|
import Whitelist from "@/pages/Whitelist";
|
||||||
import PublicLists from "@/pages/PublicLists";
|
|
||||||
import Training from "@/pages/Training";
|
import Training from "@/pages/Training";
|
||||||
import Services from "@/pages/Services";
|
import Services from "@/pages/Services";
|
||||||
import NotFound from "@/pages/not-found";
|
import NotFound from "@/pages/not-found";
|
||||||
@ -24,7 +23,6 @@ const menuItems = [
|
|||||||
{ title: "Training ML", url: "/training", icon: Brain },
|
{ title: "Training ML", url: "/training", icon: Brain },
|
||||||
{ title: "Router", url: "/routers", icon: Server },
|
{ title: "Router", url: "/routers", icon: Server },
|
||||||
{ title: "Whitelist", url: "/whitelist", icon: Shield },
|
{ title: "Whitelist", url: "/whitelist", icon: Shield },
|
||||||
{ title: "Liste Pubbliche", url: "/public-lists", icon: List },
|
|
||||||
{ title: "Servizi", url: "/services", icon: TrendingUp },
|
{ title: "Servizi", url: "/services", icon: TrendingUp },
|
||||||
];
|
];
|
||||||
|
|
||||||
@ -64,7 +62,6 @@ function Router() {
|
|||||||
<Route path="/training" component={Training} />
|
<Route path="/training" component={Training} />
|
||||||
<Route path="/routers" component={Routers} />
|
<Route path="/routers" component={Routers} />
|
||||||
<Route path="/whitelist" component={Whitelist} />
|
<Route path="/whitelist" component={Whitelist} />
|
||||||
<Route path="/public-lists" component={PublicLists} />
|
|
||||||
<Route path="/services" component={Services} />
|
<Route path="/services" component={Services} />
|
||||||
<Route component={NotFound} />
|
<Route component={NotFound} />
|
||||||
</Switch>
|
</Switch>
|
||||||
|
|||||||
@ -1,133 +1,25 @@
|
|||||||
import { useQuery, useMutation } from "@tanstack/react-query";
|
import { useQuery } from "@tanstack/react-query";
|
||||||
import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card";
|
import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card";
|
||||||
import { Badge } from "@/components/ui/badge";
|
import { Badge } from "@/components/ui/badge";
|
||||||
import { Button } from "@/components/ui/button";
|
import { Button } from "@/components/ui/button";
|
||||||
import { Input } from "@/components/ui/input";
|
import { Input } from "@/components/ui/input";
|
||||||
import { Select, SelectContent, SelectItem, SelectTrigger, SelectValue } from "@/components/ui/select";
|
import { AlertTriangle, Search, Shield, Eye, Globe, MapPin, Building2 } from "lucide-react";
|
||||||
import { Slider } from "@/components/ui/slider";
|
|
||||||
import { AlertTriangle, Search, Shield, Globe, MapPin, Building2, ShieldPlus, ShieldCheck, Unlock, ChevronLeft, ChevronRight } from "lucide-react";
|
|
||||||
import { format } from "date-fns";
|
import { format } from "date-fns";
|
||||||
import { useState, useEffect, useMemo } from "react";
|
import { useState } from "react";
|
||||||
import type { Detection, Whitelist } from "@shared/schema";
|
import type { Detection } from "@shared/schema";
|
||||||
import { getFlag } from "@/lib/country-flags";
|
import { getFlag } from "@/lib/country-flags";
|
||||||
import { apiRequest, queryClient } from "@/lib/queryClient";
|
|
||||||
import { useToast } from "@/hooks/use-toast";
|
|
||||||
|
|
||||||
const ITEMS_PER_PAGE = 50;
|
|
||||||
|
|
||||||
interface DetectionsResponse {
|
|
||||||
detections: Detection[];
|
|
||||||
total: number;
|
|
||||||
}
|
|
||||||
|
|
||||||
export default function Detections() {
|
export default function Detections() {
|
||||||
const [searchInput, setSearchInput] = useState("");
|
const [searchQuery, setSearchQuery] = useState("");
|
||||||
const [debouncedSearch, setDebouncedSearch] = useState("");
|
const { data: detections, isLoading } = useQuery<Detection[]>({
|
||||||
const [anomalyTypeFilter, setAnomalyTypeFilter] = useState<string>("all");
|
queryKey: ["/api/detections?limit=100"],
|
||||||
const [minScore, setMinScore] = useState(0);
|
refetchInterval: 5000,
|
||||||
const [maxScore, setMaxScore] = useState(100);
|
|
||||||
const [currentPage, setCurrentPage] = useState(1);
|
|
||||||
const { toast } = useToast();
|
|
||||||
|
|
||||||
// Debounce search input
|
|
||||||
useEffect(() => {
|
|
||||||
const timer = setTimeout(() => {
|
|
||||||
setDebouncedSearch(searchInput);
|
|
||||||
setCurrentPage(1); // Reset to first page on search
|
|
||||||
}, 300);
|
|
||||||
return () => clearTimeout(timer);
|
|
||||||
}, [searchInput]);
|
|
||||||
|
|
||||||
// Reset page on filter change
|
|
||||||
useEffect(() => {
|
|
||||||
setCurrentPage(1);
|
|
||||||
}, [anomalyTypeFilter, minScore, maxScore]);
|
|
||||||
|
|
||||||
// Build query params with pagination and search
|
|
||||||
const queryParams = useMemo(() => {
|
|
||||||
const params = new URLSearchParams();
|
|
||||||
params.set("limit", ITEMS_PER_PAGE.toString());
|
|
||||||
params.set("offset", ((currentPage - 1) * ITEMS_PER_PAGE).toString());
|
|
||||||
if (anomalyTypeFilter !== "all") {
|
|
||||||
params.set("anomalyType", anomalyTypeFilter);
|
|
||||||
}
|
|
||||||
if (minScore > 0) {
|
|
||||||
params.set("minScore", minScore.toString());
|
|
||||||
}
|
|
||||||
if (maxScore < 100) {
|
|
||||||
params.set("maxScore", maxScore.toString());
|
|
||||||
}
|
|
||||||
if (debouncedSearch.trim()) {
|
|
||||||
params.set("search", debouncedSearch.trim());
|
|
||||||
}
|
|
||||||
return params.toString();
|
|
||||||
}, [currentPage, anomalyTypeFilter, minScore, maxScore, debouncedSearch]);
|
|
||||||
|
|
||||||
const { data, isLoading } = useQuery<DetectionsResponse>({
|
|
||||||
queryKey: ["/api/detections", currentPage, anomalyTypeFilter, minScore, maxScore, debouncedSearch],
|
|
||||||
queryFn: () => fetch(`/api/detections?${queryParams}`).then(r => r.json()),
|
|
||||||
refetchInterval: 10000,
|
|
||||||
});
|
});
|
||||||
|
|
||||||
const detections = data?.detections || [];
|
const filteredDetections = detections?.filter((d) =>
|
||||||
const totalCount = data?.total || 0;
|
d.sourceIp.toLowerCase().includes(searchQuery.toLowerCase()) ||
|
||||||
const totalPages = Math.ceil(totalCount / ITEMS_PER_PAGE);
|
d.anomalyType.toLowerCase().includes(searchQuery.toLowerCase())
|
||||||
|
);
|
||||||
// Fetch whitelist to check if IP is already whitelisted
|
|
||||||
const { data: whitelistData } = useQuery<Whitelist[]>({
|
|
||||||
queryKey: ["/api/whitelist"],
|
|
||||||
});
|
|
||||||
|
|
||||||
// Create a Set of whitelisted IPs for fast lookup
|
|
||||||
const whitelistedIps = new Set(whitelistData?.map(w => w.ipAddress) || []);
|
|
||||||
|
|
||||||
// Mutation per aggiungere a whitelist
|
|
||||||
const addToWhitelistMutation = useMutation({
|
|
||||||
mutationFn: async (detection: Detection) => {
|
|
||||||
return await apiRequest("POST", "/api/whitelist", {
|
|
||||||
ipAddress: detection.sourceIp,
|
|
||||||
reason: `Auto-added from detection: ${detection.anomalyType} (Risk: ${parseFloat(detection.riskScore).toFixed(1)})`
|
|
||||||
});
|
|
||||||
},
|
|
||||||
onSuccess: (_, detection) => {
|
|
||||||
toast({
|
|
||||||
title: "IP aggiunto alla whitelist",
|
|
||||||
description: `${detection.sourceIp} è stato aggiunto alla whitelist e sbloccato dai router.`,
|
|
||||||
});
|
|
||||||
queryClient.invalidateQueries({ queryKey: ["/api/whitelist"] });
|
|
||||||
queryClient.invalidateQueries({ queryKey: ["/api/detections"] });
|
|
||||||
},
|
|
||||||
onError: (error: any, detection) => {
|
|
||||||
toast({
|
|
||||||
title: "Errore",
|
|
||||||
description: error.message || `Impossibile aggiungere ${detection.sourceIp} alla whitelist.`,
|
|
||||||
variant: "destructive",
|
|
||||||
});
|
|
||||||
}
|
|
||||||
});
|
|
||||||
|
|
||||||
// Mutation per sbloccare IP dai router
|
|
||||||
const unblockMutation = useMutation({
|
|
||||||
mutationFn: async (detection: Detection) => {
|
|
||||||
return await apiRequest("POST", "/api/unblock-ip", {
|
|
||||||
ipAddress: detection.sourceIp
|
|
||||||
});
|
|
||||||
},
|
|
||||||
onSuccess: (data: any, detection) => {
|
|
||||||
toast({
|
|
||||||
title: "IP sbloccato",
|
|
||||||
description: `${detection.sourceIp} è stato rimosso dalla blocklist di ${data.unblocked_from || 0} router.`,
|
|
||||||
});
|
|
||||||
queryClient.invalidateQueries({ queryKey: ["/api/detections"] });
|
|
||||||
},
|
|
||||||
onError: (error: any, detection) => {
|
|
||||||
toast({
|
|
||||||
title: "Errore sblocco",
|
|
||||||
description: error.message || `Impossibile sbloccare ${detection.sourceIp} dai router.`,
|
|
||||||
variant: "destructive",
|
|
||||||
});
|
|
||||||
}
|
|
||||||
});
|
|
||||||
|
|
||||||
const getRiskBadge = (riskScore: string) => {
|
const getRiskBadge = (riskScore: string) => {
|
||||||
const score = parseFloat(riskScore);
|
const score = parseFloat(riskScore);
|
||||||
@ -161,58 +53,20 @@ export default function Detections() {
|
|||||||
{/* Search and Filters */}
|
{/* Search and Filters */}
|
||||||
<Card data-testid="card-filters">
|
<Card data-testid="card-filters">
|
||||||
<CardContent className="pt-6">
|
<CardContent className="pt-6">
|
||||||
<div className="flex flex-col gap-4">
|
<div className="flex items-center gap-4">
|
||||||
<div className="flex items-center gap-4 flex-wrap">
|
<div className="relative flex-1">
|
||||||
<div className="relative flex-1 min-w-[200px]">
|
<Search className="absolute left-3 top-1/2 -translate-y-1/2 h-4 w-4 text-muted-foreground" />
|
||||||
<Search className="absolute left-3 top-1/2 -translate-y-1/2 h-4 w-4 text-muted-foreground" />
|
<Input
|
||||||
<Input
|
placeholder="Cerca per IP o tipo anomalia..."
|
||||||
placeholder="Cerca per IP, paese, organizzazione..."
|
value={searchQuery}
|
||||||
value={searchInput}
|
onChange={(e) => setSearchQuery(e.target.value)}
|
||||||
onChange={(e) => setSearchInput(e.target.value)}
|
className="pl-9"
|
||||||
className="pl-9"
|
data-testid="input-search"
|
||||||
data-testid="input-search"
|
/>
|
||||||
/>
|
|
||||||
</div>
|
|
||||||
|
|
||||||
<Select value={anomalyTypeFilter} onValueChange={setAnomalyTypeFilter}>
|
|
||||||
<SelectTrigger className="w-[200px]" data-testid="select-anomaly-type">
|
|
||||||
<SelectValue placeholder="Tipo attacco" />
|
|
||||||
</SelectTrigger>
|
|
||||||
<SelectContent>
|
|
||||||
<SelectItem value="all">Tutti i tipi</SelectItem>
|
|
||||||
<SelectItem value="ddos">DDoS Attack</SelectItem>
|
|
||||||
<SelectItem value="port_scan">Port Scanning</SelectItem>
|
|
||||||
<SelectItem value="brute_force">Brute Force</SelectItem>
|
|
||||||
<SelectItem value="botnet">Botnet Activity</SelectItem>
|
|
||||||
<SelectItem value="suspicious">Suspicious Activity</SelectItem>
|
|
||||||
</SelectContent>
|
|
||||||
</Select>
|
|
||||||
</div>
|
|
||||||
|
|
||||||
<div className="space-y-2">
|
|
||||||
<div className="flex items-center justify-between text-sm">
|
|
||||||
<span className="text-muted-foreground">Risk Score:</span>
|
|
||||||
<span className="font-medium" data-testid="text-score-range">
|
|
||||||
{minScore} - {maxScore}
|
|
||||||
</span>
|
|
||||||
</div>
|
|
||||||
<div className="flex items-center gap-4">
|
|
||||||
<span className="text-xs text-muted-foreground w-8">0</span>
|
|
||||||
<Slider
|
|
||||||
min={0}
|
|
||||||
max={100}
|
|
||||||
step={5}
|
|
||||||
value={[minScore, maxScore]}
|
|
||||||
onValueChange={([min, max]) => {
|
|
||||||
setMinScore(min);
|
|
||||||
setMaxScore(max);
|
|
||||||
}}
|
|
||||||
className="flex-1"
|
|
||||||
data-testid="slider-risk-score"
|
|
||||||
/>
|
|
||||||
<span className="text-xs text-muted-foreground w-8">100</span>
|
|
||||||
</div>
|
|
||||||
</div>
|
</div>
|
||||||
|
<Button variant="outline" data-testid="button-refresh">
|
||||||
|
Aggiorna
|
||||||
|
</Button>
|
||||||
</div>
|
</div>
|
||||||
</CardContent>
|
</CardContent>
|
||||||
</Card>
|
</Card>
|
||||||
@ -220,36 +74,9 @@ export default function Detections() {
|
|||||||
{/* Detections List */}
|
{/* Detections List */}
|
||||||
<Card data-testid="card-detections-list">
|
<Card data-testid="card-detections-list">
|
||||||
<CardHeader>
|
<CardHeader>
|
||||||
<CardTitle className="flex items-center justify-between gap-2 flex-wrap">
|
<CardTitle className="flex items-center gap-2">
|
||||||
<div className="flex items-center gap-2">
|
<AlertTriangle className="h-5 w-5" />
|
||||||
<AlertTriangle className="h-5 w-5" />
|
Rilevamenti ({filteredDetections?.length || 0})
|
||||||
Rilevamenti ({totalCount})
|
|
||||||
</div>
|
|
||||||
{totalPages > 1 && (
|
|
||||||
<div className="flex items-center gap-2 text-sm font-normal">
|
|
||||||
<Button
|
|
||||||
variant="outline"
|
|
||||||
size="icon"
|
|
||||||
onClick={() => setCurrentPage(p => Math.max(1, p - 1))}
|
|
||||||
disabled={currentPage === 1}
|
|
||||||
data-testid="button-prev-page"
|
|
||||||
>
|
|
||||||
<ChevronLeft className="h-4 w-4" />
|
|
||||||
</Button>
|
|
||||||
<span data-testid="text-pagination">
|
|
||||||
Pagina {currentPage} di {totalPages}
|
|
||||||
</span>
|
|
||||||
<Button
|
|
||||||
variant="outline"
|
|
||||||
size="icon"
|
|
||||||
onClick={() => setCurrentPage(p => Math.min(totalPages, p + 1))}
|
|
||||||
disabled={currentPage === totalPages}
|
|
||||||
data-testid="button-next-page"
|
|
||||||
>
|
|
||||||
<ChevronRight className="h-4 w-4" />
|
|
||||||
</Button>
|
|
||||||
</div>
|
|
||||||
)}
|
|
||||||
</CardTitle>
|
</CardTitle>
|
||||||
</CardHeader>
|
</CardHeader>
|
||||||
<CardContent>
|
<CardContent>
|
||||||
@ -257,9 +84,9 @@ export default function Detections() {
|
|||||||
<div className="text-center py-8 text-muted-foreground" data-testid="text-loading">
|
<div className="text-center py-8 text-muted-foreground" data-testid="text-loading">
|
||||||
Caricamento...
|
Caricamento...
|
||||||
</div>
|
</div>
|
||||||
) : detections.length > 0 ? (
|
) : filteredDetections && filteredDetections.length > 0 ? (
|
||||||
<div className="space-y-3">
|
<div className="space-y-3">
|
||||||
{detections.map((detection) => (
|
{filteredDetections.map((detection) => (
|
||||||
<div
|
<div
|
||||||
key={detection.id}
|
key={detection.id}
|
||||||
className="p-4 rounded-lg border hover-elevate"
|
className="p-4 rounded-lg border hover-elevate"
|
||||||
@ -365,44 +192,12 @@ export default function Detections() {
|
|||||||
</Badge>
|
</Badge>
|
||||||
)}
|
)}
|
||||||
|
|
||||||
{whitelistedIps.has(detection.sourceIp) ? (
|
<Button variant="outline" size="sm" asChild data-testid={`button-details-${detection.id}`}>
|
||||||
<Button
|
<a href={`/logs?ip=${detection.sourceIp}`}>
|
||||||
variant="outline"
|
<Eye className="h-3 w-3 mr-1" />
|
||||||
size="sm"
|
Dettagli
|
||||||
disabled
|
</a>
|
||||||
className="w-full bg-green-500/10 border-green-500 text-green-600 dark:text-green-400"
|
</Button>
|
||||||
data-testid={`button-whitelist-${detection.id}`}
|
|
||||||
>
|
|
||||||
<ShieldCheck className="h-3 w-3 mr-1" />
|
|
||||||
In Whitelist
|
|
||||||
</Button>
|
|
||||||
) : (
|
|
||||||
<Button
|
|
||||||
variant="outline"
|
|
||||||
size="sm"
|
|
||||||
onClick={() => addToWhitelistMutation.mutate(detection)}
|
|
||||||
disabled={addToWhitelistMutation.isPending}
|
|
||||||
className="w-full"
|
|
||||||
data-testid={`button-whitelist-${detection.id}`}
|
|
||||||
>
|
|
||||||
<ShieldPlus className="h-3 w-3 mr-1" />
|
|
||||||
Whitelist
|
|
||||||
</Button>
|
|
||||||
)}
|
|
||||||
|
|
||||||
{detection.blocked && (
|
|
||||||
<Button
|
|
||||||
variant="outline"
|
|
||||||
size="sm"
|
|
||||||
onClick={() => unblockMutation.mutate(detection)}
|
|
||||||
disabled={unblockMutation.isPending}
|
|
||||||
className="w-full"
|
|
||||||
data-testid={`button-unblock-${detection.id}`}
|
|
||||||
>
|
|
||||||
<Unlock className="h-3 w-3 mr-1" />
|
|
||||||
Sblocca Router
|
|
||||||
</Button>
|
|
||||||
)}
|
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
</div>
|
</div>
|
||||||
@ -412,40 +207,11 @@ export default function Detections() {
|
|||||||
<div className="text-center py-12 text-muted-foreground" data-testid="text-no-results">
|
<div className="text-center py-12 text-muted-foreground" data-testid="text-no-results">
|
||||||
<AlertTriangle className="h-12 w-12 mx-auto mb-2 opacity-50" />
|
<AlertTriangle className="h-12 w-12 mx-auto mb-2 opacity-50" />
|
||||||
<p>Nessun rilevamento trovato</p>
|
<p>Nessun rilevamento trovato</p>
|
||||||
{debouncedSearch && (
|
{searchQuery && (
|
||||||
<p className="text-sm">Prova con un altro termine di ricerca</p>
|
<p className="text-sm">Prova con un altro termine di ricerca</p>
|
||||||
)}
|
)}
|
||||||
</div>
|
</div>
|
||||||
)}
|
)}
|
||||||
|
|
||||||
{/* Bottom pagination */}
|
|
||||||
{totalPages > 1 && detections.length > 0 && (
|
|
||||||
<div className="flex items-center justify-center gap-4 mt-6 pt-4 border-t">
|
|
||||||
<Button
|
|
||||||
variant="outline"
|
|
||||||
size="sm"
|
|
||||||
onClick={() => setCurrentPage(p => Math.max(1, p - 1))}
|
|
||||||
disabled={currentPage === 1}
|
|
||||||
data-testid="button-prev-page-bottom"
|
|
||||||
>
|
|
||||||
<ChevronLeft className="h-4 w-4 mr-1" />
|
|
||||||
Precedente
|
|
||||||
</Button>
|
|
||||||
<span className="text-sm text-muted-foreground" data-testid="text-pagination-bottom">
|
|
||||||
Pagina {currentPage} di {totalPages} ({totalCount} totali)
|
|
||||||
</span>
|
|
||||||
<Button
|
|
||||||
variant="outline"
|
|
||||||
size="sm"
|
|
||||||
onClick={() => setCurrentPage(p => Math.min(totalPages, p + 1))}
|
|
||||||
disabled={currentPage === totalPages}
|
|
||||||
data-testid="button-next-page-bottom"
|
|
||||||
>
|
|
||||||
Successiva
|
|
||||||
<ChevronRight className="h-4 w-4 ml-1" />
|
|
||||||
</Button>
|
|
||||||
</div>
|
|
||||||
)}
|
|
||||||
</CardContent>
|
</CardContent>
|
||||||
</Card>
|
</Card>
|
||||||
</div>
|
</div>
|
||||||
|
|||||||
@ -1,372 +0,0 @@
|
|||||||
import { useQuery, useMutation } from "@tanstack/react-query";
|
|
||||||
import { Card, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card";
|
|
||||||
import { Button } from "@/components/ui/button";
|
|
||||||
import { Badge } from "@/components/ui/badge";
|
|
||||||
import { Table, TableBody, TableCell, TableHead, TableHeader, TableRow } from "@/components/ui/table";
|
|
||||||
import { Dialog, DialogContent, DialogDescription, DialogHeader, DialogTitle, DialogTrigger } from "@/components/ui/dialog";
|
|
||||||
import { Form, FormControl, FormField, FormItem, FormLabel, FormMessage } from "@/components/ui/form";
|
|
||||||
import { Input } from "@/components/ui/input";
|
|
||||||
import { Select, SelectContent, SelectItem, SelectTrigger, SelectValue } from "@/components/ui/select";
|
|
||||||
import { Switch } from "@/components/ui/switch";
|
|
||||||
import { useForm } from "react-hook-form";
|
|
||||||
import { zodResolver } from "@hookform/resolvers/zod";
|
|
||||||
import { z } from "zod";
|
|
||||||
import { RefreshCw, Plus, Trash2, Edit, CheckCircle2, XCircle, AlertTriangle, Clock } from "lucide-react";
|
|
||||||
import { apiRequest, queryClient } from "@/lib/queryClient";
|
|
||||||
import { useToast } from "@/hooks/use-toast";
|
|
||||||
import { formatDistanceToNow } from "date-fns";
|
|
||||||
import { it } from "date-fns/locale";
|
|
||||||
import { useState } from "react";
|
|
||||||
|
|
||||||
const listFormSchema = z.object({
|
|
||||||
name: z.string().min(1, "Nome richiesto"),
|
|
||||||
type: z.enum(["blacklist", "whitelist"], {
|
|
||||||
required_error: "Tipo richiesto",
|
|
||||||
}),
|
|
||||||
url: z.string().url("URL non valida"),
|
|
||||||
enabled: z.boolean().default(true),
|
|
||||||
fetchIntervalMinutes: z.number().min(1).max(1440).default(10),
|
|
||||||
});
|
|
||||||
|
|
||||||
type ListFormValues = z.infer<typeof listFormSchema>;
|
|
||||||
|
|
||||||
export default function PublicLists() {
|
|
||||||
const { toast } = useToast();
|
|
||||||
const [isAddDialogOpen, setIsAddDialogOpen] = useState(false);
|
|
||||||
const [editingList, setEditingList] = useState<any>(null);
|
|
||||||
|
|
||||||
const { data: lists, isLoading } = useQuery({
|
|
||||||
queryKey: ["/api/public-lists"],
|
|
||||||
});
|
|
||||||
|
|
||||||
const form = useForm<ListFormValues>({
|
|
||||||
resolver: zodResolver(listFormSchema),
|
|
||||||
defaultValues: {
|
|
||||||
name: "",
|
|
||||||
type: "blacklist",
|
|
||||||
url: "",
|
|
||||||
enabled: true,
|
|
||||||
fetchIntervalMinutes: 10,
|
|
||||||
},
|
|
||||||
});
|
|
||||||
|
|
||||||
const createMutation = useMutation({
|
|
||||||
mutationFn: (data: ListFormValues) =>
|
|
||||||
apiRequest("POST", "/api/public-lists", data),
|
|
||||||
onSuccess: () => {
|
|
||||||
queryClient.invalidateQueries({ queryKey: ["/api/public-lists"] });
|
|
||||||
toast({
|
|
||||||
title: "Lista creata",
|
|
||||||
description: "La lista è stata aggiunta con successo",
|
|
||||||
});
|
|
||||||
setIsAddDialogOpen(false);
|
|
||||||
form.reset();
|
|
||||||
},
|
|
||||||
onError: (error: any) => {
|
|
||||||
toast({
|
|
||||||
title: "Errore",
|
|
||||||
description: error.message || "Impossibile creare la lista",
|
|
||||||
variant: "destructive",
|
|
||||||
});
|
|
||||||
},
|
|
||||||
});
|
|
||||||
|
|
||||||
const updateMutation = useMutation({
|
|
||||||
mutationFn: ({ id, data }: { id: string; data: Partial<ListFormValues> }) =>
|
|
||||||
apiRequest("PATCH", `/api/public-lists/${id}`, data),
|
|
||||||
onSuccess: () => {
|
|
||||||
queryClient.invalidateQueries({ queryKey: ["/api/public-lists"] });
|
|
||||||
toast({
|
|
||||||
title: "Lista aggiornata",
|
|
||||||
description: "Le modifiche sono state salvate",
|
|
||||||
});
|
|
||||||
setEditingList(null);
|
|
||||||
},
|
|
||||||
});
|
|
||||||
|
|
||||||
const deleteMutation = useMutation({
|
|
||||||
mutationFn: (id: string) =>
|
|
||||||
apiRequest("DELETE", `/api/public-lists/${id}`),
|
|
||||||
onSuccess: () => {
|
|
||||||
queryClient.invalidateQueries({ queryKey: ["/api/public-lists"] });
|
|
||||||
toast({
|
|
||||||
title: "Lista eliminata",
|
|
||||||
description: "La lista è stata rimossa",
|
|
||||||
});
|
|
||||||
},
|
|
||||||
onError: (error: any) => {
|
|
||||||
toast({
|
|
||||||
title: "Errore",
|
|
||||||
description: error.message || "Impossibile eliminare la lista",
|
|
||||||
variant: "destructive",
|
|
||||||
});
|
|
||||||
},
|
|
||||||
});
|
|
||||||
|
|
||||||
const syncMutation = useMutation({
|
|
||||||
mutationFn: (id: string) =>
|
|
||||||
apiRequest("POST", `/api/public-lists/${id}/sync`),
|
|
||||||
onSuccess: () => {
|
|
||||||
toast({
|
|
||||||
title: "Sync avviato",
|
|
||||||
description: "La sincronizzazione manuale è stata richiesta",
|
|
||||||
});
|
|
||||||
},
|
|
||||||
});
|
|
||||||
|
|
||||||
const toggleEnabled = (id: string, enabled: boolean) => {
|
|
||||||
updateMutation.mutate({ id, data: { enabled } });
|
|
||||||
};
|
|
||||||
|
|
||||||
const onSubmit = (data: ListFormValues) => {
|
|
||||||
createMutation.mutate(data);
|
|
||||||
};
|
|
||||||
|
|
||||||
const getStatusBadge = (list: any) => {
|
|
||||||
if (!list.enabled) {
|
|
||||||
return <Badge variant="outline" className="gap-1"><XCircle className="w-3 h-3" />Disabilitata</Badge>;
|
|
||||||
}
|
|
||||||
|
|
||||||
if (list.errorCount > 5) {
|
|
||||||
return <Badge variant="destructive" className="gap-1"><AlertTriangle className="w-3 h-3" />Errori</Badge>;
|
|
||||||
}
|
|
||||||
|
|
||||||
if (list.lastSuccess) {
|
|
||||||
return <Badge variant="default" className="gap-1 bg-green-600"><CheckCircle2 className="w-3 h-3" />OK</Badge>;
|
|
||||||
}
|
|
||||||
|
|
||||||
return <Badge variant="secondary" className="gap-1"><Clock className="w-3 h-3" />In attesa</Badge>;
|
|
||||||
};
|
|
||||||
|
|
||||||
const getTypeBadge = (type: string) => {
|
|
||||||
if (type === "blacklist") {
|
|
||||||
return <Badge variant="destructive">Blacklist</Badge>;
|
|
||||||
}
|
|
||||||
return <Badge variant="default" className="bg-blue-600">Whitelist</Badge>;
|
|
||||||
};
|
|
||||||
|
|
||||||
if (isLoading) {
|
|
||||||
return (
|
|
||||||
<div className="p-6">
|
|
||||||
<Card>
|
|
||||||
<CardHeader>
|
|
||||||
<CardTitle>Caricamento...</CardTitle>
|
|
||||||
</CardHeader>
|
|
||||||
</Card>
|
|
||||||
</div>
|
|
||||||
);
|
|
||||||
}
|
|
||||||
|
|
||||||
return (
|
|
||||||
<div className="p-6 space-y-6">
|
|
||||||
<div className="flex items-center justify-between">
|
|
||||||
<div>
|
|
||||||
<h1 className="text-3xl font-bold">Liste Pubbliche</h1>
|
|
||||||
<p className="text-muted-foreground mt-2">
|
|
||||||
Gestione sorgenti blacklist e whitelist esterne (aggiornamento ogni 10 minuti)
|
|
||||||
</p>
|
|
||||||
</div>
|
|
||||||
<Dialog open={isAddDialogOpen} onOpenChange={setIsAddDialogOpen}>
|
|
||||||
<DialogTrigger asChild>
|
|
||||||
<Button data-testid="button-add-list">
|
|
||||||
<Plus className="w-4 h-4 mr-2" />
|
|
||||||
Aggiungi Lista
|
|
||||||
</Button>
|
|
||||||
</DialogTrigger>
|
|
||||||
<DialogContent className="max-w-2xl">
|
|
||||||
<DialogHeader>
|
|
||||||
<DialogTitle>Aggiungi Lista Pubblica</DialogTitle>
|
|
||||||
<DialogDescription>
|
|
||||||
Configura una nuova sorgente blacklist o whitelist
|
|
||||||
</DialogDescription>
|
|
||||||
</DialogHeader>
|
|
||||||
<Form {...form}>
|
|
||||||
<form onSubmit={form.handleSubmit(onSubmit)} className="space-y-4">
|
|
||||||
<FormField
|
|
||||||
control={form.control}
|
|
||||||
name="name"
|
|
||||||
render={({ field }) => (
|
|
||||||
<FormItem>
|
|
||||||
<FormLabel>Nome</FormLabel>
|
|
||||||
<FormControl>
|
|
||||||
<Input placeholder="es. Spamhaus DROP" {...field} data-testid="input-list-name" />
|
|
||||||
</FormControl>
|
|
||||||
<FormMessage />
|
|
||||||
</FormItem>
|
|
||||||
)}
|
|
||||||
/>
|
|
||||||
<FormField
|
|
||||||
control={form.control}
|
|
||||||
name="type"
|
|
||||||
render={({ field }) => (
|
|
||||||
<FormItem>
|
|
||||||
<FormLabel>Tipo</FormLabel>
|
|
||||||
<Select onValueChange={field.onChange} defaultValue={field.value}>
|
|
||||||
<FormControl>
|
|
||||||
<SelectTrigger data-testid="select-list-type">
|
|
||||||
<SelectValue placeholder="Seleziona tipo" />
|
|
||||||
</SelectTrigger>
|
|
||||||
</FormControl>
|
|
||||||
<SelectContent>
|
|
||||||
<SelectItem value="blacklist">Blacklist</SelectItem>
|
|
||||||
<SelectItem value="whitelist">Whitelist</SelectItem>
|
|
||||||
</SelectContent>
|
|
||||||
</Select>
|
|
||||||
<FormMessage />
|
|
||||||
</FormItem>
|
|
||||||
)}
|
|
||||||
/>
|
|
||||||
<FormField
|
|
||||||
control={form.control}
|
|
||||||
name="url"
|
|
||||||
render={({ field }) => (
|
|
||||||
<FormItem>
|
|
||||||
<FormLabel>URL</FormLabel>
|
|
||||||
<FormControl>
|
|
||||||
<Input placeholder="https://example.com/list.txt" {...field} data-testid="input-list-url" />
|
|
||||||
</FormControl>
|
|
||||||
<FormMessage />
|
|
||||||
</FormItem>
|
|
||||||
)}
|
|
||||||
/>
|
|
||||||
<FormField
|
|
||||||
control={form.control}
|
|
||||||
name="fetchIntervalMinutes"
|
|
||||||
render={({ field }) => (
|
|
||||||
<FormItem>
|
|
||||||
<FormLabel>Intervallo Sync (minuti)</FormLabel>
|
|
||||||
<FormControl>
|
|
||||||
<Input
|
|
||||||
type="number"
|
|
||||||
{...field}
|
|
||||||
onChange={(e) => field.onChange(parseInt(e.target.value))}
|
|
||||||
data-testid="input-list-interval"
|
|
||||||
/>
|
|
||||||
</FormControl>
|
|
||||||
<FormMessage />
|
|
||||||
</FormItem>
|
|
||||||
)}
|
|
||||||
/>
|
|
||||||
<FormField
|
|
||||||
control={form.control}
|
|
||||||
name="enabled"
|
|
||||||
render={({ field }) => (
|
|
||||||
<FormItem className="flex items-center justify-between">
|
|
||||||
<FormLabel>Abilitata</FormLabel>
|
|
||||||
<FormControl>
|
|
||||||
<Switch
|
|
||||||
checked={field.value}
|
|
||||||
onCheckedChange={field.onChange}
|
|
||||||
data-testid="switch-list-enabled"
|
|
||||||
/>
|
|
||||||
</FormControl>
|
|
||||||
</FormItem>
|
|
||||||
)}
|
|
||||||
/>
|
|
||||||
<div className="flex justify-end gap-2 pt-4">
|
|
||||||
<Button type="button" variant="outline" onClick={() => setIsAddDialogOpen(false)}>
|
|
||||||
Annulla
|
|
||||||
</Button>
|
|
||||||
<Button type="submit" disabled={createMutation.isPending} data-testid="button-save-list">
|
|
||||||
{createMutation.isPending ? "Salvataggio..." : "Salva"}
|
|
||||||
</Button>
|
|
||||||
</div>
|
|
||||||
</form>
|
|
||||||
</Form>
|
|
||||||
</DialogContent>
|
|
||||||
</Dialog>
|
|
||||||
</div>
|
|
||||||
|
|
||||||
<Card>
|
|
||||||
<CardHeader>
|
|
||||||
<CardTitle>Sorgenti Configurate</CardTitle>
|
|
||||||
<CardDescription>
|
|
||||||
{lists?.length || 0} liste configurate
|
|
||||||
</CardDescription>
|
|
||||||
</CardHeader>
|
|
||||||
<CardContent>
|
|
||||||
<Table>
|
|
||||||
<TableHeader>
|
|
||||||
<TableRow>
|
|
||||||
<TableHead>Nome</TableHead>
|
|
||||||
<TableHead>Tipo</TableHead>
|
|
||||||
<TableHead>Stato</TableHead>
|
|
||||||
<TableHead>IP Totali</TableHead>
|
|
||||||
<TableHead>IP Attivi</TableHead>
|
|
||||||
<TableHead>Ultimo Sync</TableHead>
|
|
||||||
<TableHead className="text-right">Azioni</TableHead>
|
|
||||||
</TableRow>
|
|
||||||
</TableHeader>
|
|
||||||
<TableBody>
|
|
||||||
{lists?.map((list: any) => (
|
|
||||||
<TableRow key={list.id} data-testid={`row-list-${list.id}`}>
|
|
||||||
<TableCell className="font-medium">
|
|
||||||
<div>
|
|
||||||
<div>{list.name}</div>
|
|
||||||
<div className="text-xs text-muted-foreground truncate max-w-xs">
|
|
||||||
{list.url}
|
|
||||||
</div>
|
|
||||||
</div>
|
|
||||||
</TableCell>
|
|
||||||
<TableCell>{getTypeBadge(list.type)}</TableCell>
|
|
||||||
<TableCell>{getStatusBadge(list)}</TableCell>
|
|
||||||
<TableCell data-testid={`text-total-ips-${list.id}`}>{list.totalIps?.toLocaleString() || 0}</TableCell>
|
|
||||||
<TableCell data-testid={`text-active-ips-${list.id}`}>{list.activeIps?.toLocaleString() || 0}</TableCell>
|
|
||||||
<TableCell>
|
|
||||||
{list.lastSuccess ? (
|
|
||||||
<span className="text-sm">
|
|
||||||
{formatDistanceToNow(new Date(list.lastSuccess), {
|
|
||||||
addSuffix: true,
|
|
||||||
locale: it,
|
|
||||||
})}
|
|
||||||
</span>
|
|
||||||
) : (
|
|
||||||
<span className="text-sm text-muted-foreground">Mai</span>
|
|
||||||
)}
|
|
||||||
</TableCell>
|
|
||||||
<TableCell className="text-right">
|
|
||||||
<div className="flex items-center justify-end gap-2">
|
|
||||||
<Switch
|
|
||||||
checked={list.enabled}
|
|
||||||
onCheckedChange={(checked) => toggleEnabled(list.id, checked)}
|
|
||||||
data-testid={`switch-enable-${list.id}`}
|
|
||||||
/>
|
|
||||||
<Button
|
|
||||||
variant="outline"
|
|
||||||
size="icon"
|
|
||||||
onClick={() => syncMutation.mutate(list.id)}
|
|
||||||
disabled={syncMutation.isPending}
|
|
||||||
data-testid={`button-sync-${list.id}`}
|
|
||||||
>
|
|
||||||
<RefreshCw className="w-4 h-4" />
|
|
||||||
</Button>
|
|
||||||
<Button
|
|
||||||
variant="destructive"
|
|
||||||
size="icon"
|
|
||||||
onClick={() => {
|
|
||||||
if (confirm(`Eliminare la lista "${list.name}"?`)) {
|
|
||||||
deleteMutation.mutate(list.id);
|
|
||||||
}
|
|
||||||
}}
|
|
||||||
data-testid={`button-delete-${list.id}`}
|
|
||||||
>
|
|
||||||
<Trash2 className="w-4 h-4" />
|
|
||||||
</Button>
|
|
||||||
</div>
|
|
||||||
</TableCell>
|
|
||||||
</TableRow>
|
|
||||||
))}
|
|
||||||
{(!lists || lists.length === 0) && (
|
|
||||||
<TableRow>
|
|
||||||
<TableCell colSpan={7} className="text-center text-muted-foreground py-8">
|
|
||||||
Nessuna lista configurata. Aggiungi la prima lista.
|
|
||||||
</TableCell>
|
|
||||||
</TableRow>
|
|
||||||
)}
|
|
||||||
</TableBody>
|
|
||||||
</Table>
|
|
||||||
</CardContent>
|
|
||||||
</Card>
|
|
||||||
</div>
|
|
||||||
);
|
|
||||||
}
|
|
||||||
@ -1,108 +1,19 @@
|
|||||||
import { useState } from "react";
|
|
||||||
import { useQuery, useMutation } from "@tanstack/react-query";
|
import { useQuery, useMutation } from "@tanstack/react-query";
|
||||||
import { queryClient, apiRequest } from "@/lib/queryClient";
|
import { queryClient, apiRequest } from "@/lib/queryClient";
|
||||||
import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card";
|
import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card";
|
||||||
import { Badge } from "@/components/ui/badge";
|
import { Badge } from "@/components/ui/badge";
|
||||||
import { Button } from "@/components/ui/button";
|
import { Button } from "@/components/ui/button";
|
||||||
import {
|
import { Server, Plus, Trash2 } from "lucide-react";
|
||||||
Dialog,
|
|
||||||
DialogContent,
|
|
||||||
DialogDescription,
|
|
||||||
DialogHeader,
|
|
||||||
DialogTitle,
|
|
||||||
DialogTrigger,
|
|
||||||
DialogFooter,
|
|
||||||
} from "@/components/ui/dialog";
|
|
||||||
import {
|
|
||||||
Form,
|
|
||||||
FormControl,
|
|
||||||
FormDescription,
|
|
||||||
FormField,
|
|
||||||
FormItem,
|
|
||||||
FormLabel,
|
|
||||||
FormMessage,
|
|
||||||
} from "@/components/ui/form";
|
|
||||||
import { Input } from "@/components/ui/input";
|
|
||||||
import { Switch } from "@/components/ui/switch";
|
|
||||||
import { Server, Plus, Trash2, Edit } from "lucide-react";
|
|
||||||
import { format } from "date-fns";
|
import { format } from "date-fns";
|
||||||
import { useForm } from "react-hook-form";
|
|
||||||
import { zodResolver } from "@hookform/resolvers/zod";
|
|
||||||
import { insertRouterSchema, type InsertRouter } from "@shared/schema";
|
|
||||||
import type { Router } from "@shared/schema";
|
import type { Router } from "@shared/schema";
|
||||||
import { useToast } from "@/hooks/use-toast";
|
import { useToast } from "@/hooks/use-toast";
|
||||||
|
|
||||||
export default function Routers() {
|
export default function Routers() {
|
||||||
const { toast } = useToast();
|
const { toast } = useToast();
|
||||||
const [addDialogOpen, setAddDialogOpen] = useState(false);
|
|
||||||
const [editDialogOpen, setEditDialogOpen] = useState(false);
|
|
||||||
const [editingRouter, setEditingRouter] = useState<Router | null>(null);
|
|
||||||
|
|
||||||
const { data: routers, isLoading } = useQuery<Router[]>({
|
const { data: routers, isLoading } = useQuery<Router[]>({
|
||||||
queryKey: ["/api/routers"],
|
queryKey: ["/api/routers"],
|
||||||
});
|
});
|
||||||
|
|
||||||
const addForm = useForm<InsertRouter>({
|
|
||||||
resolver: zodResolver(insertRouterSchema),
|
|
||||||
defaultValues: {
|
|
||||||
name: "",
|
|
||||||
ipAddress: "",
|
|
||||||
apiPort: 8729,
|
|
||||||
username: "",
|
|
||||||
password: "",
|
|
||||||
enabled: true,
|
|
||||||
},
|
|
||||||
});
|
|
||||||
|
|
||||||
const editForm = useForm<InsertRouter>({
|
|
||||||
resolver: zodResolver(insertRouterSchema),
|
|
||||||
});
|
|
||||||
|
|
||||||
const addMutation = useMutation({
|
|
||||||
mutationFn: async (data: InsertRouter) => {
|
|
||||||
return await apiRequest("POST", "/api/routers", data);
|
|
||||||
},
|
|
||||||
onSuccess: () => {
|
|
||||||
queryClient.invalidateQueries({ queryKey: ["/api/routers"] });
|
|
||||||
toast({
|
|
||||||
title: "Router aggiunto",
|
|
||||||
description: "Il router è stato configurato con successo",
|
|
||||||
});
|
|
||||||
setAddDialogOpen(false);
|
|
||||||
addForm.reset();
|
|
||||||
},
|
|
||||||
onError: (error: any) => {
|
|
||||||
toast({
|
|
||||||
title: "Errore",
|
|
||||||
description: error.message || "Impossibile aggiungere il router",
|
|
||||||
variant: "destructive",
|
|
||||||
});
|
|
||||||
},
|
|
||||||
});
|
|
||||||
|
|
||||||
const updateMutation = useMutation({
|
|
||||||
mutationFn: async ({ id, data }: { id: string; data: InsertRouter }) => {
|
|
||||||
return await apiRequest("PUT", `/api/routers/${id}`, data);
|
|
||||||
},
|
|
||||||
onSuccess: () => {
|
|
||||||
queryClient.invalidateQueries({ queryKey: ["/api/routers"] });
|
|
||||||
toast({
|
|
||||||
title: "Router aggiornato",
|
|
||||||
description: "Le modifiche sono state salvate con successo",
|
|
||||||
});
|
|
||||||
setEditDialogOpen(false);
|
|
||||||
setEditingRouter(null);
|
|
||||||
editForm.reset();
|
|
||||||
},
|
|
||||||
onError: (error: any) => {
|
|
||||||
toast({
|
|
||||||
title: "Errore",
|
|
||||||
description: error.message || "Impossibile aggiornare il router",
|
|
||||||
variant: "destructive",
|
|
||||||
});
|
|
||||||
},
|
|
||||||
});
|
|
||||||
|
|
||||||
const deleteMutation = useMutation({
|
const deleteMutation = useMutation({
|
||||||
mutationFn: async (id: string) => {
|
mutationFn: async (id: string) => {
|
||||||
await apiRequest("DELETE", `/api/routers/${id}`);
|
await apiRequest("DELETE", `/api/routers/${id}`);
|
||||||
@ -123,29 +34,6 @@ export default function Routers() {
|
|||||||
},
|
},
|
||||||
});
|
});
|
||||||
|
|
||||||
const handleAddSubmit = (data: InsertRouter) => {
|
|
||||||
addMutation.mutate(data);
|
|
||||||
};
|
|
||||||
|
|
||||||
const handleEditSubmit = (data: InsertRouter) => {
|
|
||||||
if (editingRouter) {
|
|
||||||
updateMutation.mutate({ id: editingRouter.id, data });
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
const handleEdit = (router: Router) => {
|
|
||||||
setEditingRouter(router);
|
|
||||||
editForm.reset({
|
|
||||||
name: router.name,
|
|
||||||
ipAddress: router.ipAddress,
|
|
||||||
apiPort: router.apiPort,
|
|
||||||
username: router.username,
|
|
||||||
password: router.password,
|
|
||||||
enabled: router.enabled,
|
|
||||||
});
|
|
||||||
setEditDialogOpen(true);
|
|
||||||
};
|
|
||||||
|
|
||||||
return (
|
return (
|
||||||
<div className="flex flex-col gap-6 p-6" data-testid="page-routers">
|
<div className="flex flex-col gap-6 p-6" data-testid="page-routers">
|
||||||
<div className="flex items-center justify-between">
|
<div className="flex items-center justify-between">
|
||||||
@ -155,152 +43,10 @@ export default function Routers() {
|
|||||||
Gestisci i router connessi al sistema IDS
|
Gestisci i router connessi al sistema IDS
|
||||||
</p>
|
</p>
|
||||||
</div>
|
</div>
|
||||||
|
<Button data-testid="button-add-router">
|
||||||
<Dialog open={addDialogOpen} onOpenChange={setAddDialogOpen}>
|
<Plus className="h-4 w-4 mr-2" />
|
||||||
<DialogTrigger asChild>
|
Aggiungi Router
|
||||||
<Button data-testid="button-add-router">
|
</Button>
|
||||||
<Plus className="h-4 w-4 mr-2" />
|
|
||||||
Aggiungi Router
|
|
||||||
</Button>
|
|
||||||
</DialogTrigger>
|
|
||||||
<DialogContent className="sm:max-w-[500px]" data-testid="dialog-add-router">
|
|
||||||
<DialogHeader>
|
|
||||||
<DialogTitle>Aggiungi Router MikroTik</DialogTitle>
|
|
||||||
<DialogDescription>
|
|
||||||
Configura un nuovo router MikroTik per il sistema IDS. Assicurati che l'API RouterOS (porta 8729/8728) sia abilitata.
|
|
||||||
</DialogDescription>
|
|
||||||
</DialogHeader>
|
|
||||||
|
|
||||||
<Form {...addForm}>
|
|
||||||
<form onSubmit={addForm.handleSubmit(handleAddSubmit)} className="space-y-4">
|
|
||||||
<FormField
|
|
||||||
control={addForm.control}
|
|
||||||
name="name"
|
|
||||||
render={({ field }) => (
|
|
||||||
<FormItem>
|
|
||||||
<FormLabel>Nome Router</FormLabel>
|
|
||||||
<FormControl>
|
|
||||||
<Input placeholder="es. MikroTik Ufficio" {...field} data-testid="input-name" />
|
|
||||||
</FormControl>
|
|
||||||
<FormDescription>
|
|
||||||
Nome descrittivo per identificare il router
|
|
||||||
</FormDescription>
|
|
||||||
<FormMessage />
|
|
||||||
</FormItem>
|
|
||||||
)}
|
|
||||||
/>
|
|
||||||
|
|
||||||
<FormField
|
|
||||||
control={addForm.control}
|
|
||||||
name="ipAddress"
|
|
||||||
render={({ field }) => (
|
|
||||||
<FormItem>
|
|
||||||
<FormLabel>Indirizzo IP</FormLabel>
|
|
||||||
<FormControl>
|
|
||||||
<Input placeholder="es. 192.168.1.1" {...field} data-testid="input-ip" />
|
|
||||||
</FormControl>
|
|
||||||
<FormDescription>
|
|
||||||
Indirizzo IP o hostname del router
|
|
||||||
</FormDescription>
|
|
||||||
<FormMessage />
|
|
||||||
</FormItem>
|
|
||||||
)}
|
|
||||||
/>
|
|
||||||
|
|
||||||
<FormField
|
|
||||||
control={addForm.control}
|
|
||||||
name="apiPort"
|
|
||||||
render={({ field }) => (
|
|
||||||
<FormItem>
|
|
||||||
<FormLabel>Porta API</FormLabel>
|
|
||||||
<FormControl>
|
|
||||||
<Input
|
|
||||||
type="number"
|
|
||||||
placeholder="8729"
|
|
||||||
{...field}
|
|
||||||
onChange={(e) => field.onChange(parseInt(e.target.value))}
|
|
||||||
data-testid="input-port"
|
|
||||||
/>
|
|
||||||
</FormControl>
|
|
||||||
<FormDescription>
|
|
||||||
Porta RouterOS API MikroTik (8729 per API-SSL, 8728 per API)
|
|
||||||
</FormDescription>
|
|
||||||
<FormMessage />
|
|
||||||
</FormItem>
|
|
||||||
)}
|
|
||||||
/>
|
|
||||||
|
|
||||||
<FormField
|
|
||||||
control={addForm.control}
|
|
||||||
name="username"
|
|
||||||
render={({ field }) => (
|
|
||||||
<FormItem>
|
|
||||||
<FormLabel>Username</FormLabel>
|
|
||||||
<FormControl>
|
|
||||||
<Input placeholder="admin" {...field} data-testid="input-username" />
|
|
||||||
</FormControl>
|
|
||||||
<FormMessage />
|
|
||||||
</FormItem>
|
|
||||||
)}
|
|
||||||
/>
|
|
||||||
|
|
||||||
<FormField
|
|
||||||
control={addForm.control}
|
|
||||||
name="password"
|
|
||||||
render={({ field }) => (
|
|
||||||
<FormItem>
|
|
||||||
<FormLabel>Password</FormLabel>
|
|
||||||
<FormControl>
|
|
||||||
<Input type="password" placeholder="••••••••" {...field} data-testid="input-password" />
|
|
||||||
</FormControl>
|
|
||||||
<FormMessage />
|
|
||||||
</FormItem>
|
|
||||||
)}
|
|
||||||
/>
|
|
||||||
|
|
||||||
<FormField
|
|
||||||
control={addForm.control}
|
|
||||||
name="enabled"
|
|
||||||
render={({ field }) => (
|
|
||||||
<FormItem className="flex flex-row items-center justify-between rounded-lg border p-3">
|
|
||||||
<div className="space-y-0.5">
|
|
||||||
<FormLabel>Abilitato</FormLabel>
|
|
||||||
<FormDescription>
|
|
||||||
Attiva il router per il blocco automatico degli IP
|
|
||||||
</FormDescription>
|
|
||||||
</div>
|
|
||||||
<FormControl>
|
|
||||||
<Switch
|
|
||||||
checked={field.value}
|
|
||||||
onCheckedChange={field.onChange}
|
|
||||||
data-testid="switch-enabled"
|
|
||||||
/>
|
|
||||||
</FormControl>
|
|
||||||
</FormItem>
|
|
||||||
)}
|
|
||||||
/>
|
|
||||||
|
|
||||||
<DialogFooter>
|
|
||||||
<Button
|
|
||||||
type="button"
|
|
||||||
variant="outline"
|
|
||||||
onClick={() => setAddDialogOpen(false)}
|
|
||||||
data-testid="button-cancel"
|
|
||||||
>
|
|
||||||
Annulla
|
|
||||||
</Button>
|
|
||||||
<Button
|
|
||||||
type="submit"
|
|
||||||
disabled={addMutation.isPending}
|
|
||||||
data-testid="button-submit"
|
|
||||||
>
|
|
||||||
{addMutation.isPending ? "Salvataggio..." : "Salva Router"}
|
|
||||||
</Button>
|
|
||||||
</DialogFooter>
|
|
||||||
</form>
|
|
||||||
</Form>
|
|
||||||
</DialogContent>
|
|
||||||
</Dialog>
|
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
<Card data-testid="card-routers">
|
<Card data-testid="card-routers">
|
||||||
@ -368,11 +114,9 @@ export default function Routers() {
|
|||||||
variant="outline"
|
variant="outline"
|
||||||
size="sm"
|
size="sm"
|
||||||
className="flex-1"
|
className="flex-1"
|
||||||
onClick={() => handleEdit(router)}
|
data-testid={`button-test-${router.id}`}
|
||||||
data-testid={`button-edit-${router.id}`}
|
|
||||||
>
|
>
|
||||||
<Edit className="h-4 w-4 mr-2" />
|
Test Connessione
|
||||||
Modifica
|
|
||||||
</Button>
|
</Button>
|
||||||
<Button
|
<Button
|
||||||
variant="outline"
|
variant="outline"
|
||||||
@ -396,140 +140,6 @@ export default function Routers() {
|
|||||||
)}
|
)}
|
||||||
</CardContent>
|
</CardContent>
|
||||||
</Card>
|
</Card>
|
||||||
|
|
||||||
<Dialog open={editDialogOpen} onOpenChange={setEditDialogOpen}>
|
|
||||||
<DialogContent className="sm:max-w-[500px]" data-testid="dialog-edit-router">
|
|
||||||
<DialogHeader>
|
|
||||||
<DialogTitle>Modifica Router</DialogTitle>
|
|
||||||
<DialogDescription>
|
|
||||||
Modifica le impostazioni del router {editingRouter?.name}
|
|
||||||
</DialogDescription>
|
|
||||||
</DialogHeader>
|
|
||||||
|
|
||||||
<Form {...editForm}>
|
|
||||||
<form onSubmit={editForm.handleSubmit(handleEditSubmit)} className="space-y-4">
|
|
||||||
<FormField
|
|
||||||
control={editForm.control}
|
|
||||||
name="name"
|
|
||||||
render={({ field }) => (
|
|
||||||
<FormItem>
|
|
||||||
<FormLabel>Nome Router</FormLabel>
|
|
||||||
<FormControl>
|
|
||||||
<Input placeholder="es. MikroTik Ufficio" {...field} data-testid="input-edit-name" />
|
|
||||||
</FormControl>
|
|
||||||
<FormMessage />
|
|
||||||
</FormItem>
|
|
||||||
)}
|
|
||||||
/>
|
|
||||||
|
|
||||||
<FormField
|
|
||||||
control={editForm.control}
|
|
||||||
name="ipAddress"
|
|
||||||
render={({ field }) => (
|
|
||||||
<FormItem>
|
|
||||||
<FormLabel>Indirizzo IP</FormLabel>
|
|
||||||
<FormControl>
|
|
||||||
<Input placeholder="es. 192.168.1.1" {...field} data-testid="input-edit-ip" />
|
|
||||||
</FormControl>
|
|
||||||
<FormMessage />
|
|
||||||
</FormItem>
|
|
||||||
)}
|
|
||||||
/>
|
|
||||||
|
|
||||||
<FormField
|
|
||||||
control={editForm.control}
|
|
||||||
name="apiPort"
|
|
||||||
render={({ field }) => (
|
|
||||||
<FormItem>
|
|
||||||
<FormLabel>Porta API</FormLabel>
|
|
||||||
<FormControl>
|
|
||||||
<Input
|
|
||||||
type="number"
|
|
||||||
placeholder="8729"
|
|
||||||
{...field}
|
|
||||||
onChange={(e) => field.onChange(parseInt(e.target.value))}
|
|
||||||
data-testid="input-edit-port"
|
|
||||||
/>
|
|
||||||
</FormControl>
|
|
||||||
<FormDescription>
|
|
||||||
Porta RouterOS API MikroTik (8729 per API-SSL, 8728 per API)
|
|
||||||
</FormDescription>
|
|
||||||
<FormMessage />
|
|
||||||
</FormItem>
|
|
||||||
)}
|
|
||||||
/>
|
|
||||||
|
|
||||||
<FormField
|
|
||||||
control={editForm.control}
|
|
||||||
name="username"
|
|
||||||
render={({ field }) => (
|
|
||||||
<FormItem>
|
|
||||||
<FormLabel>Username</FormLabel>
|
|
||||||
<FormControl>
|
|
||||||
<Input placeholder="admin" {...field} data-testid="input-edit-username" />
|
|
||||||
</FormControl>
|
|
||||||
<FormMessage />
|
|
||||||
</FormItem>
|
|
||||||
)}
|
|
||||||
/>
|
|
||||||
|
|
||||||
<FormField
|
|
||||||
control={editForm.control}
|
|
||||||
name="password"
|
|
||||||
render={({ field }) => (
|
|
||||||
<FormItem>
|
|
||||||
<FormLabel>Password</FormLabel>
|
|
||||||
<FormControl>
|
|
||||||
<Input type="password" placeholder="••••••••" {...field} data-testid="input-edit-password" />
|
|
||||||
</FormControl>
|
|
||||||
<FormMessage />
|
|
||||||
</FormItem>
|
|
||||||
)}
|
|
||||||
/>
|
|
||||||
|
|
||||||
<FormField
|
|
||||||
control={editForm.control}
|
|
||||||
name="enabled"
|
|
||||||
render={({ field }) => (
|
|
||||||
<FormItem className="flex flex-row items-center justify-between rounded-lg border p-3">
|
|
||||||
<div className="space-y-0.5">
|
|
||||||
<FormLabel>Abilitato</FormLabel>
|
|
||||||
<FormDescription>
|
|
||||||
Attiva il router per il blocco automatico degli IP
|
|
||||||
</FormDescription>
|
|
||||||
</div>
|
|
||||||
<FormControl>
|
|
||||||
<Switch
|
|
||||||
checked={field.value}
|
|
||||||
onCheckedChange={field.onChange}
|
|
||||||
data-testid="switch-edit-enabled"
|
|
||||||
/>
|
|
||||||
</FormControl>
|
|
||||||
</FormItem>
|
|
||||||
)}
|
|
||||||
/>
|
|
||||||
|
|
||||||
<DialogFooter>
|
|
||||||
<Button
|
|
||||||
type="button"
|
|
||||||
variant="outline"
|
|
||||||
onClick={() => setEditDialogOpen(false)}
|
|
||||||
data-testid="button-edit-cancel"
|
|
||||||
>
|
|
||||||
Annulla
|
|
||||||
</Button>
|
|
||||||
<Button
|
|
||||||
type="submit"
|
|
||||||
disabled={updateMutation.isPending}
|
|
||||||
data-testid="button-edit-submit"
|
|
||||||
>
|
|
||||||
{updateMutation.isPending ? "Salvataggio..." : "Salva Modifiche"}
|
|
||||||
</Button>
|
|
||||||
</DialogFooter>
|
|
||||||
</form>
|
|
||||||
</Form>
|
|
||||||
</DialogContent>
|
|
||||||
</Dialog>
|
|
||||||
</div>
|
</div>
|
||||||
);
|
);
|
||||||
}
|
}
|
||||||
|
|||||||
@ -198,19 +198,14 @@ export default function TrainingPage() {
|
|||||||
<div className="grid grid-cols-1 md:grid-cols-2 gap-4">
|
<div className="grid grid-cols-1 md:grid-cols-2 gap-4">
|
||||||
<Card data-testid="card-train-action">
|
<Card data-testid="card-train-action">
|
||||||
<CardHeader>
|
<CardHeader>
|
||||||
<div className="flex items-center justify-between">
|
<CardTitle className="flex items-center gap-2">
|
||||||
<CardTitle className="flex items-center gap-2">
|
<Brain className="h-5 w-5" />
|
||||||
<Brain className="h-5 w-5" />
|
Addestramento Modello
|
||||||
Addestramento Modello
|
</CardTitle>
|
||||||
</CardTitle>
|
|
||||||
<Badge variant="secondary" className="bg-blue-50 text-blue-700 dark:bg-blue-950 dark:text-blue-300" data-testid="badge-model-version">
|
|
||||||
Hybrid ML v2.0.0
|
|
||||||
</Badge>
|
|
||||||
</div>
|
|
||||||
</CardHeader>
|
</CardHeader>
|
||||||
<CardContent className="space-y-4">
|
<CardContent className="space-y-4">
|
||||||
<p className="text-sm text-muted-foreground">
|
<p className="text-sm text-muted-foreground">
|
||||||
Addestra il modello Hybrid ML (Isolation Forest + Ensemble Classifier) analizzando i log recenti per rilevare pattern di traffico normale.
|
Addestra il modello Isolation Forest analizzando i log recenti per rilevare pattern di traffico normale.
|
||||||
</p>
|
</p>
|
||||||
<Dialog open={isTrainDialogOpen} onOpenChange={setIsTrainDialogOpen}>
|
<Dialog open={isTrainDialogOpen} onOpenChange={setIsTrainDialogOpen}>
|
||||||
<DialogTrigger asChild>
|
<DialogTrigger asChild>
|
||||||
@ -278,19 +273,14 @@ export default function TrainingPage() {
|
|||||||
|
|
||||||
<Card data-testid="card-detect-action">
|
<Card data-testid="card-detect-action">
|
||||||
<CardHeader>
|
<CardHeader>
|
||||||
<div className="flex items-center justify-between">
|
<CardTitle className="flex items-center gap-2">
|
||||||
<CardTitle className="flex items-center gap-2">
|
<Search className="h-5 w-5" />
|
||||||
<Search className="h-5 w-5" />
|
Rilevamento Anomalie
|
||||||
Rilevamento Anomalie
|
</CardTitle>
|
||||||
</CardTitle>
|
|
||||||
<Badge variant="secondary" className="bg-green-50 text-green-700 dark:bg-green-950 dark:text-green-300" data-testid="badge-detection-version">
|
|
||||||
Hybrid ML v2.0.0
|
|
||||||
</Badge>
|
|
||||||
</div>
|
|
||||||
</CardHeader>
|
</CardHeader>
|
||||||
<CardContent className="space-y-4">
|
<CardContent className="space-y-4">
|
||||||
<p className="text-sm text-muted-foreground">
|
<p className="text-sm text-muted-foreground">
|
||||||
Analizza i log recenti per rilevare anomalie e IP sospetti con il modello Hybrid ML. Blocca automaticamente gli IP critici (risk_score ≥ 80).
|
Analizza i log recenti per rilevare anomalie e IP sospetti. Opzionalmente blocca automaticamente gli IP critici.
|
||||||
</p>
|
</p>
|
||||||
<Dialog open={isDetectDialogOpen} onOpenChange={setIsDetectDialogOpen}>
|
<Dialog open={isDetectDialogOpen} onOpenChange={setIsDetectDialogOpen}>
|
||||||
<DialogTrigger asChild>
|
<DialogTrigger asChild>
|
||||||
|
|||||||
@ -2,7 +2,7 @@ import { useQuery, useMutation } from "@tanstack/react-query";
|
|||||||
import { queryClient, apiRequest } from "@/lib/queryClient";
|
import { queryClient, apiRequest } from "@/lib/queryClient";
|
||||||
import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card";
|
import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card";
|
||||||
import { Button } from "@/components/ui/button";
|
import { Button } from "@/components/ui/button";
|
||||||
import { Shield, Plus, Trash2, CheckCircle2, XCircle, Search } from "lucide-react";
|
import { Shield, Plus, Trash2, CheckCircle2, XCircle } from "lucide-react";
|
||||||
import { format } from "date-fns";
|
import { format } from "date-fns";
|
||||||
import { useState } from "react";
|
import { useState } from "react";
|
||||||
import { useForm } from "react-hook-form";
|
import { useForm } from "react-hook-form";
|
||||||
@ -44,7 +44,6 @@ const whitelistFormSchema = insertWhitelistSchema.extend({
|
|||||||
export default function WhitelistPage() {
|
export default function WhitelistPage() {
|
||||||
const { toast } = useToast();
|
const { toast } = useToast();
|
||||||
const [isAddDialogOpen, setIsAddDialogOpen] = useState(false);
|
const [isAddDialogOpen, setIsAddDialogOpen] = useState(false);
|
||||||
const [searchQuery, setSearchQuery] = useState("");
|
|
||||||
|
|
||||||
const form = useForm<z.infer<typeof whitelistFormSchema>>({
|
const form = useForm<z.infer<typeof whitelistFormSchema>>({
|
||||||
resolver: zodResolver(whitelistFormSchema),
|
resolver: zodResolver(whitelistFormSchema),
|
||||||
@ -60,13 +59,6 @@ export default function WhitelistPage() {
|
|||||||
queryKey: ["/api/whitelist"],
|
queryKey: ["/api/whitelist"],
|
||||||
});
|
});
|
||||||
|
|
||||||
// Filter whitelist based on search query
|
|
||||||
const filteredWhitelist = whitelist?.filter((item) =>
|
|
||||||
item.ipAddress.toLowerCase().includes(searchQuery.toLowerCase()) ||
|
|
||||||
item.reason?.toLowerCase().includes(searchQuery.toLowerCase()) ||
|
|
||||||
item.comment?.toLowerCase().includes(searchQuery.toLowerCase())
|
|
||||||
);
|
|
||||||
|
|
||||||
const addMutation = useMutation({
|
const addMutation = useMutation({
|
||||||
mutationFn: async (data: z.infer<typeof whitelistFormSchema>) => {
|
mutationFn: async (data: z.infer<typeof whitelistFormSchema>) => {
|
||||||
return await apiRequest("POST", "/api/whitelist", data);
|
return await apiRequest("POST", "/api/whitelist", data);
|
||||||
@ -197,27 +189,11 @@ export default function WhitelistPage() {
|
|||||||
</Dialog>
|
</Dialog>
|
||||||
</div>
|
</div>
|
||||||
|
|
||||||
{/* Search Bar */}
|
|
||||||
<Card data-testid="card-search">
|
|
||||||
<CardContent className="pt-6">
|
|
||||||
<div className="relative">
|
|
||||||
<Search className="absolute left-3 top-1/2 -translate-y-1/2 h-4 w-4 text-muted-foreground" />
|
|
||||||
<Input
|
|
||||||
placeholder="Cerca per IP, motivo o note..."
|
|
||||||
value={searchQuery}
|
|
||||||
onChange={(e) => setSearchQuery(e.target.value)}
|
|
||||||
className="pl-9"
|
|
||||||
data-testid="input-search-whitelist"
|
|
||||||
/>
|
|
||||||
</div>
|
|
||||||
</CardContent>
|
|
||||||
</Card>
|
|
||||||
|
|
||||||
<Card data-testid="card-whitelist">
|
<Card data-testid="card-whitelist">
|
||||||
<CardHeader>
|
<CardHeader>
|
||||||
<CardTitle className="flex items-center gap-2">
|
<CardTitle className="flex items-center gap-2">
|
||||||
<Shield className="h-5 w-5" />
|
<Shield className="h-5 w-5" />
|
||||||
IP Protetti ({filteredWhitelist?.length || 0}{searchQuery && whitelist ? ` di ${whitelist.length}` : ''})
|
IP Protetti ({whitelist?.length || 0})
|
||||||
</CardTitle>
|
</CardTitle>
|
||||||
</CardHeader>
|
</CardHeader>
|
||||||
<CardContent>
|
<CardContent>
|
||||||
@ -225,9 +201,9 @@ export default function WhitelistPage() {
|
|||||||
<div className="text-center py-8 text-muted-foreground" data-testid="text-loading">
|
<div className="text-center py-8 text-muted-foreground" data-testid="text-loading">
|
||||||
Caricamento...
|
Caricamento...
|
||||||
</div>
|
</div>
|
||||||
) : filteredWhitelist && filteredWhitelist.length > 0 ? (
|
) : whitelist && whitelist.length > 0 ? (
|
||||||
<div className="space-y-3">
|
<div className="space-y-3">
|
||||||
{filteredWhitelist.map((item) => (
|
{whitelist.map((item) => (
|
||||||
<div
|
<div
|
||||||
key={item.id}
|
key={item.id}
|
||||||
className="p-4 rounded-lg border hover-elevate"
|
className="p-4 rounded-lg border hover-elevate"
|
||||||
|
|||||||
@ -13,7 +13,6 @@ set -e
|
|||||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||||
MIGRATIONS_DIR="$SCRIPT_DIR/migrations"
|
MIGRATIONS_DIR="$SCRIPT_DIR/migrations"
|
||||||
IDS_DIR="$(dirname "$SCRIPT_DIR")"
|
IDS_DIR="$(dirname "$SCRIPT_DIR")"
|
||||||
DEPLOYMENT_MIGRATIONS_DIR="$IDS_DIR/deployment/migrations"
|
|
||||||
|
|
||||||
# Carica variabili ambiente ed esportale
|
# Carica variabili ambiente ed esportale
|
||||||
if [ -f "$IDS_DIR/.env" ]; then
|
if [ -f "$IDS_DIR/.env" ]; then
|
||||||
@ -80,25 +79,9 @@ echo -e "${CYAN}📊 Versione database corrente: ${YELLOW}${CURRENT_VERSION}${NC
|
|||||||
# STEP 3: Trova migrazioni da applicare
|
# STEP 3: Trova migrazioni da applicare
|
||||||
# =============================================================================
|
# =============================================================================
|
||||||
# Formato migrazioni: 001_description.sql, 002_another.sql, etc.
|
# Formato migrazioni: 001_description.sql, 002_another.sql, etc.
|
||||||
# Cerca in ENTRAMBE le cartelle: database-schema/migrations E deployment/migrations
|
|
||||||
MIGRATIONS_TO_APPLY=()
|
MIGRATIONS_TO_APPLY=()
|
||||||
|
|
||||||
# Combina migrations da entrambe le cartelle e ordina per numero
|
for migration_file in $(find "$MIGRATIONS_DIR" -name "[0-9][0-9][0-9]_*.sql" | sort); do
|
||||||
ALL_MIGRATIONS=""
|
|
||||||
if [ -d "$MIGRATIONS_DIR" ]; then
|
|
||||||
ALL_MIGRATIONS+=$(find "$MIGRATIONS_DIR" -name "[0-9][0-9][0-9]_*.sql" 2>/dev/null || true)
|
|
||||||
fi
|
|
||||||
if [ -d "$DEPLOYMENT_MIGRATIONS_DIR" ]; then
|
|
||||||
if [ -n "$ALL_MIGRATIONS" ]; then
|
|
||||||
ALL_MIGRATIONS+=$'\n'
|
|
||||||
fi
|
|
||||||
ALL_MIGRATIONS+=$(find "$DEPLOYMENT_MIGRATIONS_DIR" -name "[0-9][0-9][0-9]_*.sql" 2>/dev/null || true)
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Ordina le migrations per nome file (NNN_*.sql) estraendo il basename
|
|
||||||
SORTED_MIGRATIONS=$(echo "$ALL_MIGRATIONS" | grep -v '^$' | while read f; do echo "$(basename "$f"):$f"; done | sort | cut -d':' -f2)
|
|
||||||
|
|
||||||
for migration_file in $SORTED_MIGRATIONS; do
|
|
||||||
MIGRATION_NAME=$(basename "$migration_file")
|
MIGRATION_NAME=$(basename "$migration_file")
|
||||||
|
|
||||||
# Estrai numero versione dal nome file (001, 002, etc.)
|
# Estrai numero versione dal nome file (001, 002, etc.)
|
||||||
|
|||||||
@ -2,9 +2,9 @@
|
|||||||
-- PostgreSQL database dump
|
-- PostgreSQL database dump
|
||||||
--
|
--
|
||||||
|
|
||||||
\restrict Jq3ohS02Qcz3l9bNbeQprTZolEFbFh84eEwk4en2HkAqc2Xojxrd4AFqHJvBETG
|
\restrict a7qFLOPqkT1ff2fMYtHLTQ9mIFB4wXd50hXWp5fAFi2WKdrj19FkjOUs6F4BkGH
|
||||||
|
|
||||||
-- Dumped from database version 16.11 (74c6bb6)
|
-- Dumped from database version 16.9 (415ebe8)
|
||||||
-- Dumped by pg_dump version 16.10
|
-- Dumped by pg_dump version 16.10
|
||||||
|
|
||||||
SET statement_timeout = 0;
|
SET statement_timeout = 0;
|
||||||
@ -45,9 +45,7 @@ CREATE TABLE public.detections (
|
|||||||
organization text,
|
organization text,
|
||||||
as_number text,
|
as_number text,
|
||||||
as_name text,
|
as_name text,
|
||||||
isp text,
|
isp text
|
||||||
detection_source text DEFAULT 'ml_model'::text,
|
|
||||||
blacklist_id character varying
|
|
||||||
);
|
);
|
||||||
|
|
||||||
|
|
||||||
@ -98,44 +96,6 @@ CREATE TABLE public.network_logs (
|
|||||||
);
|
);
|
||||||
|
|
||||||
|
|
||||||
--
|
|
||||||
-- Name: public_blacklist_ips; Type: TABLE; Schema: public; Owner: -
|
|
||||||
--
|
|
||||||
|
|
||||||
CREATE TABLE public.public_blacklist_ips (
|
|
||||||
id character varying DEFAULT (gen_random_uuid())::text NOT NULL,
|
|
||||||
ip_address text NOT NULL,
|
|
||||||
cidr_range text,
|
|
||||||
ip_inet text,
|
|
||||||
cidr_inet text,
|
|
||||||
list_id character varying NOT NULL,
|
|
||||||
first_seen timestamp without time zone DEFAULT now() NOT NULL,
|
|
||||||
last_seen timestamp without time zone DEFAULT now() NOT NULL,
|
|
||||||
is_active boolean DEFAULT true NOT NULL
|
|
||||||
);
|
|
||||||
|
|
||||||
|
|
||||||
--
|
|
||||||
-- Name: public_lists; Type: TABLE; Schema: public; Owner: -
|
|
||||||
--
|
|
||||||
|
|
||||||
CREATE TABLE public.public_lists (
|
|
||||||
id character varying DEFAULT (gen_random_uuid())::text NOT NULL,
|
|
||||||
name text NOT NULL,
|
|
||||||
type text NOT NULL,
|
|
||||||
url text NOT NULL,
|
|
||||||
enabled boolean DEFAULT true NOT NULL,
|
|
||||||
fetch_interval_minutes integer DEFAULT 10 NOT NULL,
|
|
||||||
last_fetch timestamp without time zone,
|
|
||||||
last_success timestamp without time zone,
|
|
||||||
total_ips integer DEFAULT 0 NOT NULL,
|
|
||||||
active_ips integer DEFAULT 0 NOT NULL,
|
|
||||||
error_count integer DEFAULT 0 NOT NULL,
|
|
||||||
last_error text,
|
|
||||||
created_at timestamp without time zone DEFAULT now() NOT NULL
|
|
||||||
);
|
|
||||||
|
|
||||||
|
|
||||||
--
|
--
|
||||||
-- Name: routers; Type: TABLE; Schema: public; Owner: -
|
-- Name: routers; Type: TABLE; Schema: public; Owner: -
|
||||||
--
|
--
|
||||||
@ -193,10 +153,7 @@ CREATE TABLE public.whitelist (
|
|||||||
reason text,
|
reason text,
|
||||||
created_by text,
|
created_by text,
|
||||||
active boolean DEFAULT true NOT NULL,
|
active boolean DEFAULT true NOT NULL,
|
||||||
created_at timestamp without time zone DEFAULT now() NOT NULL,
|
created_at timestamp without time zone DEFAULT now() NOT NULL
|
||||||
source text DEFAULT 'manual'::text,
|
|
||||||
list_id character varying,
|
|
||||||
ip_inet text
|
|
||||||
);
|
);
|
||||||
|
|
||||||
|
|
||||||
@ -232,30 +189,6 @@ ALTER TABLE ONLY public.network_logs
|
|||||||
ADD CONSTRAINT network_logs_pkey PRIMARY KEY (id);
|
ADD CONSTRAINT network_logs_pkey PRIMARY KEY (id);
|
||||||
|
|
||||||
|
|
||||||
--
|
|
||||||
-- Name: public_blacklist_ips public_blacklist_ips_ip_address_list_id_key; Type: CONSTRAINT; Schema: public; Owner: -
|
|
||||||
--
|
|
||||||
|
|
||||||
ALTER TABLE ONLY public.public_blacklist_ips
|
|
||||||
ADD CONSTRAINT public_blacklist_ips_ip_address_list_id_key UNIQUE (ip_address, list_id);
|
|
||||||
|
|
||||||
|
|
||||||
--
|
|
||||||
-- Name: public_blacklist_ips public_blacklist_ips_pkey; Type: CONSTRAINT; Schema: public; Owner: -
|
|
||||||
--
|
|
||||||
|
|
||||||
ALTER TABLE ONLY public.public_blacklist_ips
|
|
||||||
ADD CONSTRAINT public_blacklist_ips_pkey PRIMARY KEY (id);
|
|
||||||
|
|
||||||
|
|
||||||
--
|
|
||||||
-- Name: public_lists public_lists_pkey; Type: CONSTRAINT; Schema: public; Owner: -
|
|
||||||
--
|
|
||||||
|
|
||||||
ALTER TABLE ONLY public.public_lists
|
|
||||||
ADD CONSTRAINT public_lists_pkey PRIMARY KEY (id);
|
|
||||||
|
|
||||||
|
|
||||||
--
|
--
|
||||||
-- Name: routers routers_ip_address_unique; Type: CONSTRAINT; Schema: public; Owner: -
|
-- Name: routers routers_ip_address_unique; Type: CONSTRAINT; Schema: public; Owner: -
|
||||||
--
|
--
|
||||||
@ -375,17 +308,9 @@ ALTER TABLE ONLY public.network_logs
|
|||||||
ADD CONSTRAINT network_logs_router_id_routers_id_fk FOREIGN KEY (router_id) REFERENCES public.routers(id);
|
ADD CONSTRAINT network_logs_router_id_routers_id_fk FOREIGN KEY (router_id) REFERENCES public.routers(id);
|
||||||
|
|
||||||
|
|
||||||
--
|
|
||||||
-- Name: public_blacklist_ips public_blacklist_ips_list_id_fkey; Type: FK CONSTRAINT; Schema: public; Owner: -
|
|
||||||
--
|
|
||||||
|
|
||||||
ALTER TABLE ONLY public.public_blacklist_ips
|
|
||||||
ADD CONSTRAINT public_blacklist_ips_list_id_fkey FOREIGN KEY (list_id) REFERENCES public.public_lists(id) ON DELETE CASCADE;
|
|
||||||
|
|
||||||
|
|
||||||
--
|
--
|
||||||
-- PostgreSQL database dump complete
|
-- PostgreSQL database dump complete
|
||||||
--
|
--
|
||||||
|
|
||||||
\unrestrict Jq3ohS02Qcz3l9bNbeQprTZolEFbFh84eEwk4en2HkAqc2Xojxrd4AFqHJvBETG
|
\unrestrict a7qFLOPqkT1ff2fMYtHLTQ9mIFB4wXd50hXWp5fAFi2WKdrj19FkjOUs6F4BkGH
|

@ -1,260 +0,0 @@
# Auto-Blocking Setup - IDS MikroTik

## 📋 Overview

Automatic blocking system that detects and blocks IPs with **risk_score >= 80** every 5 minutes.

**Components**:
1. `python_ml/auto_block.py` - Python script that calls the ML API
2. `deployment/systemd/ids-auto-block.service` - Systemd service
3. `deployment/systemd/ids-auto-block.timer` - Timer that triggers the service every 5 minutes

The script's flow is illustrated by the sketch right after this list.
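The real `python_ml/auto_block.py` is not reproduced in this compare view. Purely as an illustration of the flow described above, here is a minimal Python sketch: the endpoint path, request payload, and response keys are assumptions for illustration, not the project's actual API contract.

```python
#!/usr/bin/env python3
"""Minimal sketch of the auto-block flow (endpoint and field names are assumptions)."""
import sys
from datetime import datetime

import requests

ML_API_URL = "http://127.0.0.1:8000"          # assumption: ML backend reachable locally
DETECT_ENDPOINT = f"{ML_API_URL}/api/detect"  # hypothetical route name
RISK_THRESHOLD = 80                           # block only critical IPs (risk_score >= 80)


def run_auto_block() -> int:
    print(f"[{datetime.now():%Y-%m-%d %H:%M:%S}] Starting auto-block detection...")
    # Ask the ML backend to analyse recent logs and block critical IPs.
    response = requests.post(
        DETECT_ENDPOINT,
        json={"auto_block": True, "min_risk_score": RISK_THRESHOLD},
        timeout=120,
    )
    response.raise_for_status()
    result = response.json()
    # Assumed response keys; adapt to the real API contract.
    anomalies = result.get("anomalies_detected", 0)
    blocked = result.get("ips_blocked", 0)
    print(f"Detection completed: {anomalies} anomalies detected, {blocked} IPs blocked")
    return 0


if __name__ == "__main__":
    sys.exit(run_auto_block())
```

In the real deployment this logic runs unattended, triggered by the systemd timer described below rather than by hand.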
---

## 🚀 Installation on AlmaLinux

### 1️⃣ Prerequisites

Check that these services are running:
```bash
sudo systemctl status ids-ml-backend   # ML Backend FastAPI
sudo systemctl status postgresql-16    # PostgreSQL database
```

A small scripted pre-flight check is sketched below.
### 2️⃣ Copia File Systemd
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Service file
|
|
||||||
sudo cp /opt/ids/deployment/systemd/ids-auto-block.service /etc/systemd/system/
|
|
||||||
|
|
||||||
# Timer file
|
|
||||||
sudo cp /opt/ids/deployment/systemd/ids-auto-block.timer /etc/systemd/system/
|
|
||||||
|
|
||||||
# Verifica permessi
|
|
||||||
sudo chown root:root /etc/systemd/system/ids-auto-block.*
|
|
||||||
sudo chmod 644 /etc/systemd/system/ids-auto-block.*
|
|
||||||
```
|
|
||||||
|
|
||||||
### 3️⃣ Rendi Eseguibile Script Python
|
|
||||||
|
|
||||||
```bash
|
|
||||||
chmod +x /opt/ids/python_ml/auto_block.py
|
|
||||||
```
|
|
||||||
|
|
||||||
### 4️⃣ Installa Dipendenza Python (requests)
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Attiva virtual environment
|
|
||||||
cd /opt/ids/python_ml
|
|
||||||
source venv/bin/activate
|
|
||||||
|
|
||||||
# Installa requests
|
|
||||||
pip install requests
|
|
||||||
|
|
||||||
# Esci da venv
|
|
||||||
deactivate
|
|
||||||
```
|
|
||||||
|
|
||||||
### 5️⃣ Crea Directory Log
|
|
||||||
|
|
||||||
```bash
|
|
||||||
sudo mkdir -p /var/log/ids
|
|
||||||
sudo chown ids:ids /var/log/ids
|
|
||||||
```
|
|
||||||
|
|
||||||
### 6️⃣ Ricarica Systemd e Avvia Timer
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Ricarica systemd
|
|
||||||
sudo systemctl daemon-reload
|
|
||||||
|
|
||||||
# Abilita timer (autostart al boot)
|
|
||||||
sudo systemctl enable ids-auto-block.timer
|
|
||||||
|
|
||||||
# Avvia timer
|
|
||||||
sudo systemctl start ids-auto-block.timer
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ✅ Verifica Funzionamento
|
|
||||||
|
|
||||||
### Test Manuale (esegui subito)
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Esegui auto-blocking adesso (non aspettare 5 min)
|
|
||||||
sudo systemctl start ids-auto-block.service
|
|
||||||
|
|
||||||
# Controlla log output
|
|
||||||
journalctl -u ids-auto-block -n 30
|
|
||||||
```
|
|
||||||
|
|
||||||
**Output atteso**:
|
|
||||||
```
|
|
||||||
[2024-11-25 12:00:00] 🔍 Starting auto-block detection...
|
|
||||||
✓ Detection completata: 14 anomalie rilevate, 14 IP bloccati
|
|
||||||
```
|
|
||||||
|
|
||||||
### Verifica Timer Attivo
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Status timer
|
|
||||||
systemctl status ids-auto-block.timer
|
|
||||||
|
|
||||||
# Prossime esecuzioni
|
|
||||||
systemctl list-timers ids-auto-block.timer
|
|
||||||
|
|
||||||
# Ultima esecuzione
|
|
||||||
journalctl -u ids-auto-block.service -n 1
|
|
||||||
```
|
|
||||||
|
|
||||||
### Verifica IP Bloccati
|
|
||||||
|
|
||||||
**Database**:
|
|
||||||
```sql
|
|
||||||
SELECT COUNT(*) FROM detections WHERE blocked = true;
|
|
||||||
```
|
|
||||||
|
|
||||||
**MikroTik Router**:
|
|
||||||
```
|
|
||||||
/ip firewall address-list print where list=blocked_ips
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📊 Monitoring
|
|
||||||
|
|
||||||
### Log in Tempo Reale
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Log auto-blocking
|
|
||||||
tail -f /var/log/ids/auto_block.log
|
|
||||||
|
|
||||||
# O via journalctl
|
|
||||||
journalctl -u ids-auto-block -f
|
|
||||||
```
|
|
||||||
|
|
||||||
### Statistiche Blocchi
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Conta esecuzioni ultimo giorno
|
|
||||||
journalctl -u ids-auto-block --since "1 day ago" | grep "Detection completata" | wc -l
|
|
||||||
|
|
||||||
# Totale IP bloccati oggi
|
|
||||||
journalctl -u ids-auto-block --since today | grep "IP bloccati"
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ⚙️ Configurazione
|
|
||||||
|
|
||||||
### Modifica Frequenza Esecuzione
|
|
||||||
|
|
||||||
Edita `/etc/systemd/system/ids-auto-block.timer`:
|
|
||||||
|
|
||||||
```ini
|
|
||||||
[Timer]
|
|
||||||
# Cambia 5min con frequenza desiderata (es: 10min, 1h, 30s)
|
|
||||||
OnUnitActiveSec=10min # Esegui ogni 10 minuti
|
|
||||||
```
|
|
||||||
|
|
||||||
Poi ricarica:
|
|
||||||
```bash
|
|
||||||
sudo systemctl daemon-reload
|
|
||||||
sudo systemctl restart ids-auto-block.timer
|
|
||||||
```
|
|
||||||
|
|
||||||
### Modifica Threshold Risk Score
|
|
||||||
|
|
||||||
Edita `python_ml/auto_block.py`:
|
|
||||||
|
|
||||||
```python
|
|
||||||
"risk_threshold": 80.0, # Cambia soglia (80, 90, 100, etc)
|
|
||||||
```
|
|
||||||
|
|
||||||
Poi riavvia timer:
|
|
||||||
```bash
|
|
||||||
sudo systemctl restart ids-auto-block.timer
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🛠️ Troubleshooting
|
|
||||||
|
|
||||||
### Problema: Nessun IP bloccato
|
|
||||||
|
|
||||||
**Verifica ML Backend attivo**:
|
|
||||||
```bash
|
|
||||||
systemctl status ids-ml-backend
|
|
||||||
curl http://localhost:8000/health
|
|
||||||
```
|
|
||||||
|
|
||||||
**Verifica router configurati**:
|
|
||||||
```sql
|
|
||||||
SELECT * FROM routers WHERE enabled = true;
|
|
||||||
```
|
|
||||||
|
|
||||||
Deve esserci almeno 1 router!
|
|
||||||
|
|
||||||
### Problema: Errore "Connection refused"
|
|
||||||
|
|
||||||
ML Backend non risponde su porta 8000:
|
|
||||||
```bash
|
|
||||||
# Riavvia ML backend
|
|
||||||
sudo systemctl restart ids-ml-backend
|
|
||||||
|
|
||||||
# Verifica porta listening
|
|
||||||
netstat -tlnp | grep 8000
|
|
||||||
```
|
|
||||||
|
|
||||||
### Problema: Script non eseguito
|
|
||||||
|
|
||||||
**Verifica timer attivo**:
|
|
||||||
```bash
|
|
||||||
systemctl status ids-auto-block.timer
|
|
||||||
```
|
|
||||||
|
|
||||||
**Forza esecuzione manuale**:
|
|
||||||
```bash
|
|
||||||
sudo systemctl start ids-auto-block.service
|
|
||||||
journalctl -u ids-auto-block -n 50
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🔄 Disinstallazione
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Stop e disabilita timer
|
|
||||||
sudo systemctl stop ids-auto-block.timer
|
|
||||||
sudo systemctl disable ids-auto-block.timer
|
|
||||||
|
|
||||||
# Rimuovi file systemd
|
|
||||||
sudo rm /etc/systemd/system/ids-auto-block.*
|
|
||||||
|
|
||||||
# Ricarica systemd
|
|
||||||
sudo systemctl daemon-reload
|
|
||||||
```
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📝 Note
|
|
||||||
|
|
||||||
- **Frequenza**: 5 minuti (configurabile)
|
|
||||||
- **Risk Threshold**: 80 (solo IP critici)
|
|
||||||
- **Timeout**: 180 secondi (3 minuti max per detection)
|
|
||||||
- **Logs**: `/var/log/ids/auto_block.log` + journalctl
|
|
||||||
- **Dipendenze**: ids-ml-backend.service, postgresql-16.service
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ✅ Checklist Post-Installazione
|
|
||||||
|
|
||||||
- [ ] File copiati in `/etc/systemd/system/`
|
|
||||||
- [ ] Script `auto_block.py` eseguibile
|
|
||||||
- [ ] Dipendenza `requests` installata in venv
|
|
||||||
- [ ] Directory log creata (`/var/log/ids`)
|
|
||||||
- [ ] Timer abilitato e avviato
|
|
||||||
- [ ] Test manuale eseguito con successo
|
|
||||||
- [ ] IP bloccati su MikroTik verificati
|
|
||||||
- [ ] Monitoring attivo (journalctl -f)
|
|
||||||
@ -1,549 +0,0 @@
# Deployment Checklist - Hybrid ML Detector

Advanced ML system targeting an 80-90% reduction in false positives with Extended Isolation Forest

## 📋 Prerequisites

- [ ] AlmaLinux 9 server with SSH access
- [ ] PostgreSQL with the IDS database active
- [ ] Python 3.11+ installed
- [ ] Active venv: `/opt/ids/python_ml/venv`
- [ ] At least 7 days of real traffic in the database (for training on real data)

---

## 🔧 Step 1: Install Dependencies

✅ **SIMPLIFIED**: No compilation required, only pre-built wheels!

```bash
# SSH to the server
ssh user@ids.alfacom.it

# Run the ML dependencies install script
cd /opt/ids
chmod +x deployment/install_ml_deps.sh
./deployment/install_ml_deps.sh

# Expected output:
# 🔧 Attivazione virtual environment...
# 📍 Python in uso: /opt/ids/python_ml/venv/bin/python
# ✅ pip/setuptools/wheel aggiornati
# ✅ Dipendenze ML installate con successo
# ✅ sklearn IsolationForest OK
# ✅ XGBoost OK
# ✅ TUTTO OK! Hybrid ML Detector pronto per l'uso
# ℹ️ INFO: Sistema usa sklearn.IsolationForest (compatibile Python 3.11+)
```

**ML dependencies**:
- `xgboost==2.0.3` - Gradient Boosting for the ensemble classifier
- `joblib==1.3.2` - Model persistence and serialization
- `sklearn.IsolationForest` - Anomaly detection (already in scikit-learn==1.3.2)

**Why sklearn.IsolationForest instead of Extended IF?**
1. **Python 3.11+ compatibility**: pre-built wheels, zero compilation
2. **Production-grade**: maintained, stable library
3. **Achievable metrics**: target 95% precision, 88-92% recall with standard IF + ensemble
4. **Fallback already implemented**: the code already supported standard IF as a fallback

---

## 🧪 Step 2: Quick Test (Synthetic Dataset)

Test the system on a synthetic dataset to verify that it works:

```bash
cd /opt/ids/python_ml

# Quick test with 10k synthetic samples
python train_hybrid.py --test

# What to expect:
# - Dataset created: 10000 samples (90% normal, 10% attacks)
# - Training completed on ~7000 normal samples
# - Detection results with confidence scoring
# - Validation metrics (Precision, Recall, F1, FPR)
```

**Expected output**:
```
[TEST] Created synthetic dataset: 10,000 samples
  Normal: 9,000 (90.0%)
  Attacks: 1,000 (10.0%)

[TEST] Training on 6,300 normal samples...
[HYBRID] Training unsupervised model on 6,300 logs...
[HYBRID] Extracted features for X unique IPs
[HYBRID] Feature selection: 25 → 18 features
[HYBRID] Training Extended Isolation Forest...
[HYBRID] Training completed! X/Y IPs flagged as anomalies

[TEST] Detection results:
  Total detections: XX
  High confidence: XX
  Medium confidence: XX
  Low confidence: XX

╔══════════════════════════════════════════════════════════════╗
║                    Synthetic Test Results                      ║
╚══════════════════════════════════════════════════════════════╝

🎯 Primary Metrics:
  Precision: XX.XX% (of 100 flagged, how many are real attacks)
  Recall:    XX.XX% (of 100 attacks, how many detected)
  F1-Score:  XX.XX% (harmonic mean of P&R)

⚠️ False Positive Analysis:
  FP Rate: XX.XX% (normal traffic flagged as attack)
```

**Success criteria**:
- Precision ≥ 70% (synthetic test)
- FPR ≤ 10%
- No crashes

---

## 🎯 Step 3: Training on Real Traffic

Train the model on real logs (last 7 days):

```bash
cd /opt/ids/python_ml

# Training from the database (last 7 days)
python train_hybrid.py --train --source database \
  --db-host localhost \
  --db-port 5432 \
  --db-name ids \
  --db-user postgres \
  --db-password "YOUR_PASSWORD" \
  --days 7

# Models saved in: python_ml/models/
# - isolation_forest_latest.pkl
# - scaler_latest.pkl
# - feature_selector_latest.pkl
# - metadata_latest.json
```

**What happens** (a rough sketch of these steps follows the list):
1. Loads the last 7 days of `network_logs` (up to 1M records)
2. Extracts 25 features for each source_ip
3. Applies Chi-Square feature selection → 18 features
4. Trains the Extended Isolation Forest (contamination=3%)
5. Saves the models to `models/`
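The sketch below illustrates steps 2-5 with scikit-learn; it is not the actual `train_hybrid.py`. The feature extraction, the proxy labels used only for the chi-square step, and the function name are assumptions; the model file names follow the list above.

```python
# Sketch of the training steps above (assumption: the real logic lives in
# train_hybrid.py / ml_hybrid_detector.py and differs in detail).
import joblib
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler

def train_from_features(X: np.ndarray, y_proxy: np.ndarray) -> None:
    """X: one row per source_ip with 25 features; y_proxy: rough labels used only for chi2."""
    scaler = MinMaxScaler()                      # chi2 requires non-negative features
    X_scaled = scaler.fit_transform(X)

    selector = SelectKBest(chi2, k=18)           # 25 -> 18 features
    X_sel = selector.fit_transform(X_scaled, y_proxy)

    model = IsolationForest(n_estimators=250, contamination=0.03, random_state=42)
    model.fit(X_sel)                             # unsupervised anomaly detection

    joblib.dump(model, "models/isolation_forest_latest.pkl")
    joblib.dump(scaler, "models/scaler_latest.pkl")
    joblib.dump(selector, "models/feature_selector_latest.pkl")
```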
**Success criteria**:
- Training completed without errors
- Model files created in `python_ml/models/`
- The log shows "✅ Training completed!"

---

## 📊 Step 4: (Optional) CICIDS2017 Validation

To validate against a scientific dataset (only if you want an accurate benchmark):

### 4.1 Download CICIDS2017

```bash
# Create the dataset directory
mkdir -p /opt/ids/python_ml/datasets/cicids2017

# Download manually from:
# https://www.unb.ca/cic/datasets/ids-2017.html
# Extract the CSV files to: /opt/ids/python_ml/datasets/cicids2017/

# Required files (8 days):
# - Monday-WorkingHours.pcap_ISCX.csv
# - Tuesday-WorkingHours.pcap_ISCX.csv
# - ... (all the CSV files)
```

### 4.2 Validation (10% sample for testing)

```bash
cd /opt/ids/python_ml

# Validation with 10% of the dataset (quick test)
python train_hybrid.py --validate --sample 0.1

# Full validation (SLOW - can take hours!)
# python train_hybrid.py --validate
```

**Expected output**:
```
╔══════════════════════════════════════════════════════════════╗
║                 CICIDS2017 Validation Results                  ║
╚══════════════════════════════════════════════════════════════╝

🎯 Primary Metrics:
  Precision: ≥90.00% ✅ TARGET
  Recall:    ≥80.00% ✅ TARGET
  F1-Score:  ≥85.00% ✅ TARGET

⚠️ False Positive Analysis:
  FP Rate: ≤5.00% ✅ TARGET

[VALIDATE] Checking production deployment criteria...
✅ Model ready for production deployment!
```

**Production success criteria** (the metric definitions are sketched right after this list):
- Precision ≥ 90%
- Recall ≥ 80%
- FPR ≤ 5%
- F1-Score ≥ 85%
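For reference, the four metrics are derived from the confusion counts as follows; this is a generic sketch with made-up example numbers, not output of `train_hybrid.py`.

```python
# Generic metric definitions used by the validation step (sketch).
def validation_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    precision = tp / (tp + fp) if tp + fp else 0.0   # of flagged IPs, how many are real attacks
    recall = tp / (tp + fn) if tp + fn else 0.0      # of real attacks, how many were flagged
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0         # normal traffic flagged as attack
    return {"precision": precision, "recall": recall, "f1": f1, "fpr": fpr}

# Example: 900 attacks caught, 50 normal IPs flagged, 100 attacks missed, 8950 normal IPs clean
print(validation_metrics(tp=900, fp=50, fn=100, tn=8950))
# precision ≈ 0.947, recall = 0.90, f1 ≈ 0.92, fpr ≈ 0.006 -> meets the production targets above
```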
---

## 🚀 Step 5: Production Deployment

### 5.1 Configure the Environment Variable

```bash
# Add to the ML backend .env
echo "USE_HYBRID_DETECTOR=true" >> /opt/ids/python_ml/.env

# Or export manually
export USE_HYBRID_DETECTOR=true
```

**Default**: `USE_HYBRID_DETECTOR=true` (new detector active)

For rollback: `USE_HYBRID_DETECTOR=false` (uses the legacy detector)

### 5.2 Restart the ML Backend

```bash
# Systemd service
sudo systemctl restart ids-ml-backend

# Check startup
sudo systemctl status ids-ml-backend
sudo journalctl -u ids-ml-backend -f

# Look for the log lines:
# "[ML] Using Hybrid ML Detector (Extended Isolation Forest + Feature Selection)"
# "[HYBRID] Models loaded (version: latest)"
```

### 5.3 Test the API

```bash
# Health check
curl http://localhost:8000/health

# Expected output:
{
  "status": "healthy",
  "database": "connected",
  "ml_model": "loaded",
  "ml_model_type": "hybrid (EIF + Feature Selection)",
  "timestamp": "2025-11-24T18:30:00"
}

# Root endpoint
curl http://localhost:8000/

# Expected output:
{
  "service": "IDS API",
  "version": "2.0.0",
  "status": "running",
  "model_type": "hybrid",
  "model_loaded": true,
  "use_hybrid": true
}
```

---

## 📈 Step 6: Monitoring & Validation

### 6.1 First Detection Run

```bash
# API call for detection (with API key if configured)
curl -X POST http://localhost:8000/detect \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY" \
  -d '{
    "max_records": 5000,
    "hours_back": 1,
    "risk_threshold": 60.0,
    "auto_block": false
  }'
```

### 6.2 Check Detections

```bash
# PostgreSQL query to inspect detections
psql -d ids -c "
SELECT
  source_ip,
  risk_score,
  confidence,
  anomaly_type,
  detected_at
FROM detections
ORDER BY detected_at DESC
LIMIT 10;
"
```

### 6.3 Monitoring Logs

```bash
# Monitor the ML backend log
sudo journalctl -u ids-ml-backend -f | grep -E "(HYBRID|DETECT|TRAIN)"

# Key log lines:
# - "[HYBRID] Models loaded" - model loaded OK
# - "[DETECT] Using Hybrid ML Detector" - detection with the new model
# - "[DETECT] Detected X unique IPs above threshold" - results
```

---

## 🔄 Step 7: Periodic Re-training

The model should be re-trained periodically (e.g. weekly) on recent traffic:

### Option A: Manual

```bash
# Every week
cd /opt/ids/python_ml
source venv/bin/activate

python train_hybrid.py --train --source database \
  --db-password "YOUR_PASSWORD" \
  --days 7
```

### Option B: Cron Job

```bash
# Create a wrapper script
cat > /opt/ids/scripts/retrain_ml.sh << 'EOF'
#!/bin/bash
set -e

cd /opt/ids/python_ml
source venv/bin/activate

python train_hybrid.py --train --source database \
  --db-host localhost \
  --db-port 5432 \
  --db-name ids \
  --db-user postgres \
  --db-password "$PGPASSWORD" \
  --days 7

# Restart the backend to load the new model
sudo systemctl restart ids-ml-backend

echo "[$(date)] ML model retrained successfully"
EOF

chmod +x /opt/ids/scripts/retrain_ml.sh

# Add to cron (every Sunday at 3:00 AM)
sudo crontab -e

# Add the line:
0 3 * * 0 /opt/ids/scripts/retrain_ml.sh >> /var/log/ids/ml_retrain.log 2>&1
```

---

## 📊 Step 8: Old vs New Comparison

Monitor before/after metrics for 1-2 weeks:

### Metrics to track:

1. **False Positive Rate** (target: -80%)
```sql
-- Weekly FP rate query
SELECT
  DATE(detected_at) as date,
  COUNT(*) FILTER (WHERE is_false_positive = true) as false_positives,
  COUNT(*) as total_detections,
  ROUND(100.0 * COUNT(*) FILTER (WHERE is_false_positive = true) / COUNT(*), 2) as fp_rate
FROM detections
WHERE detected_at >= NOW() - INTERVAL '7 days'
GROUP BY DATE(detected_at)
ORDER BY date;
```

2. **Detection Count per Confidence Level**
```sql
SELECT
  confidence,
  COUNT(*) as count
FROM detections
WHERE detected_at >= NOW() - INTERVAL '24 hours'
GROUP BY confidence
ORDER BY
  CASE confidence
    WHEN 'high' THEN 1
    WHEN 'medium' THEN 2
    WHEN 'low' THEN 3
  END;
```

3. **Blocked IPs Analysis**
```bash
# Query MikroTik to see blocked IPs
# Compare with high-confidence detections
```

---

## 🔧 Troubleshooting

### Problem: "ModuleNotFoundError: No module named 'eif'"

**Solution**:
```bash
cd /opt/ids/python_ml
source venv/bin/activate
pip install eif==2.0.0
```

### Problem: "Modello non addestrato. Esegui /train prima."

**Solution**:
```bash
# Check that the models exist
ls -lh /opt/ids/python_ml/models/

# If empty, run training
python train_hybrid.py --train --source database --db-password "PWD"
```

### Problem: API returns a 500 error

**Solution**:
```bash
# Check the logs
sudo journalctl -u ids-ml-backend -n 100

# Check USE_HYBRID_DETECTOR
grep USE_HYBRID_DETECTOR /opt/ids/python_ml/.env

# Fall back to legacy
echo "USE_HYBRID_DETECTOR=false" >> /opt/ids/python_ml/.env
sudo systemctl restart ids-ml-backend
```

### Problem: Metrics validation does not pass (Precision < 90%)

**Solution**: tune the hyperparameters
```python
# In ml_hybrid_detector.py, adjust the config:
'eif_contamination': 0.02,  # Try values 0.01-0.05
'chi2_top_k': 20,           # Try 15-25
'confidence_high': 97.0,    # Raise the confidence threshold
```

---

## ✅ Final Checklist

- [ ] Synthetic test passed (Precision ≥70%)
- [ ] Training on real data completed
- [ ] Models saved in `python_ml/models/`
- [ ] `USE_HYBRID_DETECTOR=true` configured
- [ ] ML backend restarted successfully
- [ ] The `/health` API shows `"ml_model_type": "hybrid"`
- [ ] First detection run completed
- [ ] Detections saved to the database with confidence levels
- [ ] (Optional) CICIDS2017 validation with target metrics reached
- [ ] Periodic re-training configured (cron or manual)
- [ ] Frontend dashboard shows detections with the new confidence levels

---

## 📚 Technical Documentation

### Architecture

```
┌─────────────────┐
│  Network Logs   │
│  (PostgreSQL)   │
└────────┬────────┘
         │
         v
┌─────────────────┐
│ Feature Extract │  25 features per IP
│  (25 features)  │  (volume, temporal, protocol, behavioral)
└────────┬────────┘
         │
         v
┌─────────────────┐
│ Chi-Square Test │  Feature selection
│ (Select Top 18) │  Reduces dimensionality
└────────┬────────┘
         │
         v
┌─────────────────┐
│  Extended IF    │  Unsupervised anomaly detection
│ (contamination  │  n_estimators=250
│    = 0.03)      │  anomaly_score: 0-100
└────────┬────────┘
         │
         v
┌─────────────────┐
│ Confidence Score│  3-tier system
│  High   ≥95%    │  - High: auto-block
│  Medium ≥70%    │  - Medium: manual review
│  Low    <70%    │  - Low: monitor
└────────┬────────┘
         │
         v
┌─────────────────┐
│   Detections    │  Saved to the DB
│   (Database)    │  With geo info + confidence
└─────────────────┘
```
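As a reading aid for the confidence stage in the diagram above, here is a minimal sketch of the 3-tier mapping; the thresholds follow `confidence_high` / `confidence_medium` in the table below, while the function itself is an illustration, not the code of `ml_hybrid_detector.py`.

```python
# Sketch: map a 0-100 anomaly score to the 3-tier confidence used above.
def confidence_tier(anomaly_score: float, high: float = 95.0, medium: float = 70.0) -> str:
    if anomaly_score >= high:
        return "high"     # candidate for auto-block
    if anomaly_score >= medium:
        return "medium"   # manual review
    return "low"          # monitor only

assert confidence_tier(97.2) == "high"
assert confidence_tier(81.0) == "medium"
assert confidence_tier(40.5) == "low"
```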
### Hyperparameters Tuning

| Parameter | Default Value | Recommended Range | Effect |
|-----------|---------------|-------------------|--------|
| `eif_contamination` | 0.03 | 0.01 - 0.05 | Expected % of anomalies. ↑ = more detections |
| `eif_n_estimators` | 250 | 100 - 500 | Number of trees. ↑ = more stable but slower |
| `chi2_top_k` | 18 | 15 - 25 | Number of selected features |
| `confidence_high` | 95.0 | 90.0 - 98.0 | Auto-block threshold. ↑ = more conservative |
| `confidence_medium` | 70.0 | 60.0 - 80.0 | Manual review threshold |

---

## 🎯 Target Metrics Recap

| Metric | Production Target | Synthetic Test | Notes |
|--------|-------------------|----------------|-------|
| **Precision** | ≥ 90% | ≥ 70% | Of 100 flagged, how many are real attacks |
| **Recall** | ≥ 80% | ≥ 60% | Of 100 attacks, how many are detected |
| **F1-Score** | ≥ 85% | ≥ 65% | Harmonic mean of Precision/Recall |
| **FPR** | ≤ 5% | ≤ 10% | False positives on normal traffic |

---

## 📞 Support

For problems or questions:
1. Check the logs: `sudo journalctl -u ids-ml-backend -f`
2. Check the models: `ls -lh /opt/ids/python_ml/models/`
3. Manual test: `python train_hybrid.py --test`
4. Rollback: `USE_HYBRID_DETECTOR=false` + restart

**Last updated**: 24 Nov 2025 - v2.0.0
@ -1,342 +0,0 @@
# IDS - Automatic Detections Cleanup Guide

## 📋 Overview

Automatic system for cleaning up detections and managing blocked IPs according to time-based rules:

1. **Cleanup Detections**: deletes non-blocked detections older than **48 hours**
2. **Auto-Unblock**: unblocks IPs that have been blocked for more than **2 hours** with no new anomalies

## ⚙️ Components

### 1. Python Script: `python_ml/cleanup_detections.py`
Main script that performs the cleanup operations:
- Deletes old detections from the database
- Marks IPs as "unblocked" in the DB (it does NOT remove them from the MikroTik firewall!)
- Full logging to `/var/log/ids/cleanup.log`

### 2. Bash Wrapper: `deployment/run_cleanup.sh`
Wrapper that loads the environment variables and runs the Python script.

### 3. Systemd Service: `ids-cleanup.service`
Oneshot service that runs the cleanup once.

### 4. Systemd Timer: `ids-cleanup.timer`
Timer that runs the cleanup **every hour at XX:10** (e.g. 10:10, 11:10, 12:10...).

## 🚀 Installation

### Prerequisites
Make sure the Python dependencies are installed:
```bash
# Install dependencies (if not already done)
sudo pip3 install psycopg2-binary python-dotenv

# Or use requirements.txt
sudo pip3 install -r python_ml/requirements.txt
```

### Automatic Setup
```bash
cd /opt/ids

# Run the automatic setup (installs dependencies + configures the timer)
sudo ./deployment/setup_cleanup_timer.sh

# Output:
# [1/7] Installazione dipendenze Python...
# [2/7] Creazione directory log...
# ...
# ✅ Cleanup timer installato e avviato con successo!
```

**Note**: the script automatically installs the required Python dependencies.

## 📊 Monitoring

### Timer Status
```bash
# Check that the timer is active
sudo systemctl status ids-cleanup.timer

# Next scheduled run
systemctl list-timers ids-cleanup.timer
```

### Logs
```bash
# Real-time log
tail -f /var/log/ids/cleanup.log

# Last 50 lines
tail -50 /var/log/ids/cleanup.log

# Full log
cat /var/log/ids/cleanup.log
```

## 🔧 Manual Use

### Run Immediately
```bash
# Via systemd (recommended)
sudo systemctl start ids-cleanup.service

# Or directly
sudo ./deployment/run_cleanup.sh
```

### Test with Verbose Output
```bash
cd /opt/ids
source .env
python3 python_ml/cleanup_detections.py
```

## 📝 Cleanup Rules

### Rule 1: Cleanup Detections (48 hours)
**SQL query**:
```sql
DELETE FROM detections
WHERE detected_at < NOW() - INTERVAL '48 hours'
AND blocked = false
```

**Logic**:
- If an IP was detected but **not blocked**
- And there have been no new detections for **48 hours**
- → Delete it from the database

**Example**:
- IP `1.2.3.4` detected on 23/11 at 10:00
- Not blocked (risk_score < 80)
- No new detections for 48 hours
- → **25/11 at 10:10** → IP deleted ✅

### Rule 2: Auto-Unblock (2 hours)
**SQL query**:
```sql
UPDATE detections
SET blocked = false, blocked_at = NULL
WHERE blocked = true
AND blocked_at < NOW() - INTERVAL '2 hours'
AND NOT EXISTS (
  SELECT 1 FROM detections d2
  WHERE d2.source_ip = detections.source_ip
  AND d2.detected_at > NOW() - INTERVAL '2 hours'
)
```

**Logic**:
- If an IP is **blocked**
- And it has been blocked for **more than 2 hours**
- And there are **no new detections** in the last 2 hours
- → Unblock it in the DB

**⚠️ WARNING**: this only unblocks the IP in the **database**; it does NOT remove it from the **MikroTik firewall lists**!

**Example** (a sketch of how the two rules combine into the script flow follows this example):
- IP `5.6.7.8` blocked on 25/11 at 08:00
- No new detections for 2 hours
- → **25/11 at 10:10** → `blocked=false` in the DB ✅
- → **STILL in the MikroTik firewall** ❌
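The real `cleanup_detections.py` splits this work into `cleanup_old_detections()` and `unblock_old_ips()` (see the Configuration section below) and adds logging, so the structure shown here is only an approximation under those assumptions.

```python
# Rough sketch of the cleanup flow (assumption: the real script wraps these
# steps in the helpers cleanup_old_detections / unblock_old_ips plus logging).
import os
import psycopg2

def run_cleanup() -> None:
    conn = psycopg2.connect(
        host=os.environ["PGHOST"], port=os.environ["PGPORT"],
        user=os.environ["PGUSER"], password=os.environ["PGPASSWORD"],
        dbname=os.environ["PGDATABASE"],
    )
    with conn, conn.cursor() as cur:
        # Rule 1: drop non-blocked detections older than 48 hours
        cur.execute(
            "DELETE FROM detections "
            "WHERE detected_at < NOW() - INTERVAL '48 hours' AND blocked = false"
        )
        deleted = cur.rowcount
        # Rule 2: unblock IPs blocked for > 2h with no new detections (DB only!)
        cur.execute(
            "UPDATE detections SET blocked = false, blocked_at = NULL "
            "WHERE blocked = true AND blocked_at < NOW() - INTERVAL '2 hours' "
            "AND NOT EXISTS (SELECT 1 FROM detections d2 "
            "  WHERE d2.source_ip = detections.source_ip "
            "  AND d2.detected_at > NOW() - INTERVAL '2 hours')"
        )
        unblocked = cur.rowcount
    conn.close()
    print(f"deleted={deleted} unblocked_in_db={unblocked}")
```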
### How to remove the IP from MikroTik
```bash
# Via the ML Backend API
curl -X POST http://localhost:8000/unblock-ip \
  -H "Content-Type: application/json" \
  -d '{"ip_address": "5.6.7.8"}'
```

## 🛠️ Configuration

### Change the Intervals

#### Change the cleanup threshold (e.g. 72 hours instead of 48)
Edit `python_ml/cleanup_detections.py`:
```python
# Line ~47
deleted_count = cleanup_old_detections(conn, hours=72)  # ← Change here
```

#### Change the unblock threshold (e.g. 4 hours instead of 2)
Edit `python_ml/cleanup_detections.py`:
```python
# Line ~51
unblocked_count = unblock_old_ips(conn, hours=4)  # ← Change here
```

### Change the Run Frequency
Edit `deployment/systemd/ids-cleanup.timer`:
```ini
[Timer]
# Every 6 hours instead of every hour
OnCalendar=00/6:10:00
```

After the changes:
```bash
sudo systemctl daemon-reload
sudo systemctl restart ids-cleanup.timer
```

## 📊 Example Output

```
============================================================
CLEANUP DETECTIONS - Avvio
============================================================
✅ Connesso al database

[1/2] Cleanup detections vecchie...
  Trovate 45 detections da eliminare (più vecchie di 48h)
  ✅ Eliminate 45 detections vecchie

[2/2] Sblocco IP vecchi...
  Trovati 3 IP da sbloccare (bloccati da più di 2h)
    - 1.2.3.4 (tipo: ddos, score: 85.2)
    - 5.6.7.8 (tipo: port_scan, score: 82.1)
    - 9.10.11.12 (tipo: brute_force, score: 90.5)
  ✅ Sbloccati 3 IP nel database
  ⚠️ ATTENZIONE: IP ancora presenti nelle firewall list MikroTik!
  💡 Per rimuoverli dai router, usa: curl -X POST http://localhost:8000/unblock-ip -d '{"ip_address": "X.X.X.X"}'

============================================================
CLEANUP COMPLETATO
  - Detections eliminate: 45
  - IP sbloccati (DB): 3
============================================================
```

## 🔍 Troubleshooting

### The timer does not start
```bash
# Check that the timer is enabled
sudo systemctl is-enabled ids-cleanup.timer

# If disabled, enable it
sudo systemctl enable ids-cleanup.timer
sudo systemctl start ids-cleanup.timer
```

### Errors in the log
```bash
# Check for errors
grep ERROR /var/log/ids/cleanup.log

# Check the DB connection
grep "Connesso al database" /var/log/ids/cleanup.log
```

### Test the DB connection
```bash
cd /opt/ids
source .env
python3 -c "
import psycopg2
conn = psycopg2.connect(
    host='$PGHOST',
    port=$PGPORT,
    user='$PGUSER',
    password='$PGPASSWORD',
    database='$PGDATABASE'
)
print('✅ DB connesso!')
conn.close()
"
```

## 📈 Metrics

### Queries for statistics
```sql
-- Detections by age
SELECT
  CASE
    WHEN detected_at > NOW() - INTERVAL '2 hours' THEN '< 2h'
    WHEN detected_at > NOW() - INTERVAL '24 hours' THEN '< 24h'
    WHEN detected_at > NOW() - INTERVAL '48 hours' THEN '< 48h'
    ELSE '> 48h'
  END as age_group,
  COUNT(*) as count,
  COUNT(CASE WHEN blocked THEN 1 END) as blocked_count
FROM detections
GROUP BY age_group
ORDER BY age_group;

-- Blocked IPs by duration
SELECT
  source_ip,
  blocked_at,
  EXTRACT(EPOCH FROM (NOW() - blocked_at)) / 3600 as hours_blocked,
  anomaly_type,
  risk_score::numeric
FROM detections
WHERE blocked = true
ORDER BY blocked_at DESC;
```

## ⚙️ Integration with Other Systems

### Email Notifications (optional)
Add to `python_ml/cleanup_detections.py`:
```python
import smtplib
from email.mime.text import MIMEText

if unblocked_count > 0:
    msg = MIMEText(f"Sbloccati {unblocked_count} IP")
    msg['Subject'] = 'IDS Cleanup Report'
    msg['From'] = 'ids@example.com'
    msg['To'] = 'admin@example.com'

    s = smtplib.SMTP('localhost')
    s.send_message(msg)
    s.quit()
```

### Webhook (optional)
```python
import requests

requests.post('https://hooks.slack.com/...', json={
    'text': f'IDS Cleanup: {deleted_count} detections eliminate, {unblocked_count} IP sbloccati'
})
```

## 🔒 Security

- The script runs as **root** (required for systemd)
- DB credentials are loaded from `.env` (NOT hardcoded)
- Logs in `/var/log/ids/` with `644` permissions
- Service with `NoNewPrivileges=true` and `PrivateTmp=true`

## 📅 Scheduler

The timer is configured to run:
- **Frequency**: every hour
- **Minute**: XX:10 (10 minutes past the hour)
- **Randomization**: ±5 minutes for load balancing
- **Persistent**: catches up on runs missed during downtime

**Example times**: 00:10, 01:10, 02:10, ..., 23:10

## ✅ Post-Installation Checklist

- [ ] Timer installed: `systemctl status ids-cleanup.timer`
- [ ] Next run visible: `systemctl list-timers`
- [ ] Manual test OK: `sudo ./deployment/run_cleanup.sh`
- [ ] Log created: `ls -la /var/log/ids/cleanup.log`
- [ ] No errors in the log: `grep ERROR /var/log/ids/cleanup.log`
- [ ] Cleanup working: check the detections count before/after

## 🆘 Support

For problems or questions:
1. Check the log: `tail -f /var/log/ids/cleanup.log`
2. Check the timer: `systemctl status ids-cleanup.timer`
3. Manual test: `sudo ./deployment/run_cleanup.sh`
4. Open an issue on GitHub or contact the team
@ -1,182 +0,0 @@
# 🔧 TROUBLESHOOTING: Syslog Parser Stuck

## 📊 Quick Diagnosis (On the Server)

### 1. Check the Service Status
```bash
sudo systemctl status ids-syslog-parser
journalctl -u ids-syslog-parser -n 100 --no-pager
```

**What to look for:**
- ❌ `[ERROR] Errore processamento file:`
- ❌ `OperationalError: database connection`
- ❌ `ProgrammingError:`
- ✅ `[INFO] Processate X righe, salvate Y log` (it must keep increasing!)

---

### 2. Check the Database Connection
```bash
# Test the DB connection
psql -h 127.0.0.1 -U $PGUSER -d $PGDATABASE -c "SELECT COUNT(*) FROM network_logs WHERE timestamp > NOW() - INTERVAL '5 minutes';"
```

**If it returns 0** → the parser is not writing!

---

### 3. Check the Syslog Log File
```bash
# Are syslog entries arriving?
tail -f /var/log/mikrotik/raw.log | head -20

# File size
ls -lh /var/log/mikrotik/raw.log

# Last logs received
tail -5 /var/log/mikrotik/raw.log
```

**If there are no new logs** → rsyslog or router problem!

---

## 🐛 Common Causes of Blocking

### **Cause #1: Database Connection Timeout**
```python
# syslog_parser.py uses a persistent connection
self.conn = psycopg2.connect()  # ← can expire after hours!
```

**Solution:** restart the service
```bash
sudo systemctl restart ids-syslog-parser
```

---

### **Cause #2: Unhandled Exception**
```python
# The loop stops if an exception is not handled
except Exception as e:
    print(f"[ERROR] Errore processamento file: {e}")
    # ← Loop terminated!
```

**Fix:** the parser now keeps going even after errors (v2.0+)
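A minimal sketch of how auto-reconnect and error recovery can be combined in the parser loop; this is an illustration under stated assumptions, not the literal v2.0 `syslog_parser.py`.

```python
# Sketch: keep the loop alive across DB drops and unexpected errors (assumption).
import time
import psycopg2

def get_connection():
    return psycopg2.connect()  # reads the PG* environment variables

def process_next_batch(conn) -> None:
    """Placeholder for the real tail/parse/INSERT step of the parser."""
    pass

conn = get_connection()
while True:
    try:
        process_next_batch(conn)
        time.sleep(1)
    except psycopg2.OperationalError:
        # Connection dropped or timed out: reopen it instead of letting the loop die.
        try:
            conn.close()
        except Exception:
            pass
        time.sleep(5)
        conn = get_connection()
    except Exception as e:
        # Any other error: log it and keep processing (error recovery).
        print(f"[ERROR] Errore processamento file: {e}")
        time.sleep(1)
```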
---

### **Cause #3: Log File Rotated by Rsyslog**
If rsyslog rotates `/var/log/mikrotik/raw.log`, the parser keeps reading the old file (different inode).

**Solution:** use logrotate + a postrotate signal
```bash
# /etc/logrotate.d/mikrotik
/var/log/mikrotik/raw.log {
    daily
    rotate 7
    compress
    postrotate
        systemctl restart ids-syslog-parser
    endscript
}
```
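Alternatively, the parser itself can notice the rotation by comparing inodes and reopening the file; a hedged sketch of that approach (not necessarily what v2.0 does):

```python
# Sketch: follow a log file across logrotate swaps by watching the inode.
import os
import time

PATH = "/var/log/mikrotik/raw.log"

def tail_lines(path: str):
    f = open(path, "r", errors="replace")
    f.seek(0, os.SEEK_END)                       # start from the end, like tail -f
    inode = os.fstat(f.fileno()).st_ino
    while True:
        line = f.readline()
        if line:
            yield line
            continue
        try:
            if os.stat(path).st_ino != inode:    # the file was rotated: reopen it
                f.close()
                f = open(path, "r", errors="replace")
                inode = os.fstat(f.fileno()).st_ino
        except FileNotFoundError:
            pass                                 # rotation in progress, retry shortly
        time.sleep(0.5)

# for line in tail_lines(PATH): ...  # parse each line here
```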
---

### **Cause #4: DB Cleanup Too Slow**
```python
# Cleanup every ~16 minutes
if cleanup_counter >= 10000:
    self.cleanup_old_logs(days_to_keep=3)  # ← DELETE over millions of records!
```

If the cleanup takes too long, it blocks the loop.

**Fix:** it now uses batched deletes with LIMIT (v2.0+)
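Since a plain PostgreSQL `DELETE` has no `LIMIT`, one common way to batch it is a `ctid` subquery; the sketch below shows that pattern and is an assumption about the shape of the fix, not the actual v2.0 code.

```python
# Sketch: delete old rows in bounded batches so the parsing loop is never
# blocked for long (assumption: the real cleanup_old_logs differs in detail).
def cleanup_old_logs_batched(conn, days_to_keep: int = 3, batch_size: int = 50_000) -> int:
    total = 0
    while True:
        with conn.cursor() as cur:
            cur.execute(
                """
                DELETE FROM network_logs
                WHERE ctid IN (
                    SELECT ctid FROM network_logs
                    WHERE timestamp < NOW() - make_interval(days => %s)
                    LIMIT %s
                )
                """,
                (days_to_keep, batch_size),
            )
            deleted = cur.rowcount
        conn.commit()                 # commit each batch to keep transactions short
        total += deleted
        if deleted < batch_size:      # nothing (or little) left to delete
            return total
```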
---

## 🚑 QUICK FIX (Now)

```bash
# 1. Restart the parser
sudo systemctl restart ids-syslog-parser

# 2. Check that it comes back up
sudo journalctl -u ids-syslog-parser -f

# 3. After 1-2 min, check for new logs in the DB
psql -h 127.0.0.1 -U $PGUSER -d $PGDATABASE -c \
  "SELECT COUNT(*) FROM network_logs WHERE timestamp > NOW() - INTERVAL '2 minutes';"
```

**Expected output:**
```
 count
-------
  1234   ← Growing number = OK!
```

---

## 🔒 PERMANENT FIX (v2.0)

### **Improvements Implemented:**

1. **Auto-Reconnect** on DB timeout
2. **Error Recovery** - keeps going after exceptions
3. **Batch Cleanup** - does not block processing
4. **Health Metrics** - built-in monitoring

### **Deploy the Fix:**
```bash
cd /opt/ids
sudo ./update_from_git.sh
sudo systemctl restart ids-syslog-parser
```

---

## 📈 Metrics to Monitor

1. **Logs/sec processed**
```sql
SELECT COUNT(*) / 60.0 AS logs_per_sec
FROM network_logs
WHERE timestamp > NOW() - INTERVAL '1 minute';
```

2. **Last log received**
```sql
SELECT MAX(timestamp) AS last_log FROM network_logs;
```

3. **Gap detection** (if the last log is > 5 min old → problem!)
```sql
SELECT NOW() - MAX(timestamp) AS time_since_last_log
FROM network_logs;
```
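These three queries can be wrapped into a tiny check of their own; a hedged Python sketch using the gap-detection threshold from this section (alerting is left as a print):

```python
# Sketch: gap detection built on the queries above.
import os
import psycopg2

ALERT_THRESHOLD_MINUTES = 5

conn = psycopg2.connect(
    host="127.0.0.1",
    user=os.environ["PGUSER"],
    password=os.environ.get("PGPASSWORD", ""),
    dbname=os.environ["PGDATABASE"],
)
with conn.cursor() as cur:
    cur.execute(
        "SELECT EXTRACT(EPOCH FROM (NOW() - MAX(timestamp))) / 60 FROM network_logs"
    )
    minutes_ago = cur.fetchone()[0]
conn.close()

if minutes_ago is None:
    print("WARNING: network_logs is empty")
elif minutes_ago > ALERT_THRESHOLD_MINUTES:
    print(f"ALERT: last log is {minutes_ago:.1f} minutes old")
else:
    print(f"OK: last log {minutes_ago:.1f} minutes ago")
```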
---

## ✅ Post-Fix Checklist

- [ ] Service running and active
- [ ] New logs in the DB (latest < 1 min ago)
- [ ] No errors in journalctl
- [ ] ML backend detects new anomalies
- [ ] Dashboard shows real-time traffic

---

## 📞 Escalation

If the problem persists after these fixes:
1. Check the rsyslog configuration
2. Check the router firewall (UDP:514)
3. Manual test: `logger -p local7.info "TEST MESSAGE"`
4. Analyze the full logs: `journalctl -u ids-syslog-parser --since "1 hour ago" > parser.log`
@ -1,80 +0,0 @@
#!/bin/bash
###############################################################################
# Syslog Parser Health Check Script
# Checks that the parser is processing logs regularly
# Usage: ./check_parser_health.sh
# Cron: */5 * * * * /opt/ids/deployment/check_parser_health.sh
###############################################################################

set -e

# Load environment
if [ -f /opt/ids/.env ]; then
    export $(grep -v '^#' /opt/ids/.env | xargs)
fi

ALERT_THRESHOLD_MINUTES=5
LOG_FILE="/var/log/ids/parser-health.log"

mkdir -p /var/log/ids
touch "$LOG_FILE"

echo "[$(date '+%Y-%m-%d %H:%M:%S')] === Health Check Start ===" >> "$LOG_FILE"

# Check 1: Service running?
if ! systemctl is-active --quiet ids-syslog-parser; then
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] ❌ CRITICAL: Parser service NOT running!" >> "$LOG_FILE"
    echo "Attempting automatic restart..." >> "$LOG_FILE"
    systemctl restart ids-syslog-parser
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] Service restarted" >> "$LOG_FILE"
    exit 1
fi

# Check 2: Recent logs in database?
LAST_LOG_AGE=$(psql -h 127.0.0.1 -U "$PGUSER" -d "$PGDATABASE" -t -c \
    "SELECT EXTRACT(EPOCH FROM (NOW() - MAX(timestamp)))/60 AS minutes_ago FROM network_logs;" | tr -d ' ')

if [ -z "$LAST_LOG_AGE" ] || [ "$LAST_LOG_AGE" = "" ]; then
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] ⚠️ WARNING: Cannot determine last log age (empty database?)" >> "$LOG_FILE"
    exit 0
fi

# Convert to integer (bash doesn't handle floats)
LAST_LOG_AGE_INT=$(echo "$LAST_LOG_AGE" | cut -d'.' -f1)

if [ "$LAST_LOG_AGE_INT" -gt "$ALERT_THRESHOLD_MINUTES" ]; then
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] ❌ ALERT: Last log is $LAST_LOG_AGE_INT minutes old (threshold: $ALERT_THRESHOLD_MINUTES min)" >> "$LOG_FILE"
    echo "Checking syslog file..." >> "$LOG_FILE"

    # Check if syslog file has new data
    if [ -f "/var/log/mikrotik/raw.log" ]; then
        SYSLOG_SIZE=$(stat -f%z "/var/log/mikrotik/raw.log" 2>/dev/null || stat -c%s "/var/log/mikrotik/raw.log" 2>/dev/null)
        echo "Syslog file size: $SYSLOG_SIZE bytes" >> "$LOG_FILE"

        # Restart parser
        echo "Restarting parser service..." >> "$LOG_FILE"
        systemctl restart ids-syslog-parser
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] Parser service restarted" >> "$LOG_FILE"
    else
        echo "⚠️ Syslog file not found: /var/log/mikrotik/raw.log" >> "$LOG_FILE"
    fi
else
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] ✅ OK: Last log ${LAST_LOG_AGE_INT} minutes ago" >> "$LOG_FILE"
fi

# Check 3: Parser errors?
ERROR_COUNT=$(journalctl -u ids-syslog-parser --since "5 minutes ago" | grep -c "\[ERROR\]" || echo "0")

if [ "$ERROR_COUNT" -gt 10 ]; then
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] ⚠️ WARNING: $ERROR_COUNT errors in last 5 minutes" >> "$LOG_FILE"
    journalctl -u ids-syslog-parser --since "5 minutes ago" | grep "\[ERROR\]" | tail -5 >> "$LOG_FILE"
fi

echo "[$(date '+%Y-%m-%d %H:%M:%S')] === Health Check Complete ===" >> "$LOG_FILE"
echo "" >> "$LOG_FILE"

# Keep only last 1000 lines of log
tail -1000 "$LOG_FILE" > "${LOG_FILE}.tmp"
mv "${LOG_FILE}.tmp" "$LOG_FILE"

exit 0
@ -12,7 +12,7 @@ echo "=========================================" >> "$LOG_FILE"

 curl -X POST http://localhost:8000/train \
   -H "Content-Type: application/json" \
-  -d '{"max_records": 1000000, "hours_back": 24}' \
+  -d '{"max_records": 100000, "hours_back": 24}' \
   --max-time 300 >> "$LOG_FILE" 2>&1

 EXIT_CODE=$?
@ -1,48 +0,0 @@
# Public Lists - Known Limitations (v2.0.0)

## CIDR Range Matching

**Current Status**: MVP with exact IP matching
**Impact**: CIDR ranges (e.g., Spamhaus /24 blocks) are stored but not yet matched against detections

### Details:
- `public_blacklist_ips.cidr_range` field exists and is populated by parsers
- Detections currently use **exact IP matching only**
- Whitelist entries with CIDR notation are not expanded

### Future Iteration:
Requires PostgreSQL INET/CIDR column types and query optimizations:
1. Add dedicated `inet` columns to `public_blacklist_ips` and `whitelist`
2. Rewrite merge logic with CIDR containment operators (`<<=`, `>>=`)
3. Index optimization for network range queries

### Workaround (Production):
Most critical single IPs are still caught. For CIDR-heavy feeds, the parser can be extended to expand ranges to individual IPs (trade-off: storage vs query performance); a sketch of such an expansion follows.
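A minimal sketch of such an expansion using Python's standard `ipaddress` module; the helper name and the size guard are hypothetical, and large prefixes explode quickly (a /16 is ~65k rows), which is exactly the storage trade-off mentioned above.

```python
# Sketch: expand a CIDR feed entry into individual host IPs (hypothetical helper).
import ipaddress

def expand_cidr(entry: str, max_hosts: int = 1024) -> list[str]:
    net = ipaddress.ip_network(entry, strict=False)
    if net.num_addresses > max_hosts:
        raise ValueError(f"{entry} expands to {net.num_addresses} addresses, refusing")
    if net.num_addresses == 1:
        return [str(net.network_address)]        # plain single IP
    return [str(ip) for ip in net.hosts()]       # usable host addresses in the range

print(len(expand_cidr("192.0.2.0/24")))  # 254 usable host addresses
```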
---

## Integration Status

✅ **Working**:
- Fetcher syncs every 10 minutes (systemd timer)
- Manual whitelist > Public whitelist > Blacklist priority
- Automatic cleanup of invalid detections

⚠️ **Manual Sync**:
- UI manual sync works by resetting the `lastAttempt` timestamp
- The actual sync occurs on the next fetcher cycle (max 10 min delay)
- For an immediate sync: `sudo systemctl start ids-list-fetcher.service`

---

## Performance Notes

- Bulk SQL operations avoid O(N) per-IP queries
- Tested with 186M+ network_logs records
- Query optimization ongoing for CIDR expansion

---

**Version**: 2.0.0 MVP
**Date**: 2025-11-26
**Next Iteration**: Full CIDR matching support
@ -1,295 +0,0 @@
# Public Lists v2.0.0 - CIDR Complete Implementation

## Overview
Complete public-list integration system with CIDR support for matching network ranges via PostgreSQL INET operators.

## Database Schema v7

### Migration 007: CIDR Support
```sql
-- Added INET/CIDR columns
ALTER TABLE public_blacklist_ips
  ADD COLUMN ip_inet inet,
  ADD COLUMN cidr_inet cidr;

ALTER TABLE whitelist
  ADD COLUMN ip_inet inet;

-- GiST indexes for network operators
CREATE INDEX public_blacklist_ip_inet_idx ON public_blacklist_ips USING gist(ip_inet inet_ops);
CREATE INDEX public_blacklist_cidr_inet_idx ON public_blacklist_ips USING gist(cidr_inet inet_ops);
CREATE INDEX whitelist_ip_inet_idx ON whitelist USING gist(ip_inet inet_ops);
```

### Added Columns
| Table | Column | Type | Purpose |
|-------|--------|------|---------|
| public_blacklist_ips | ip_inet | inet | Single IP for exact matching |
| public_blacklist_ips | cidr_inet | cidr | Network range for containment |
| whitelist | ip_inet | inet | IP/range for a CIDR-aware whitelist |

## CIDR Matching Logic

### PostgreSQL INET Operators
```sql
-- Containment: is the IP contained in the CIDR range?
'192.168.1.50'::inet <<= '192.168.1.0/24'::inet  -- TRUE

-- Practical examples
'8.8.8.8'::inet <<= '8.8.8.0/24'::inet      -- TRUE
'1.1.1.1'::inet <<= '8.8.8.0/24'::inet      -- FALSE
'52.94.10.5'::inet <<= '52.94.0.0/16'::inet -- TRUE (AWS range)
```
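The same containment checks can be reproduced application-side with Python's standard `ipaddress` module, which is handy for spot-checking feed entries before they reach the database; the production matching stays in PostgreSQL.

```python
# Mirror of the SQL examples above using the standard library (illustrative only).
from ipaddress import ip_address, ip_network

print(ip_address("192.168.1.50") in ip_network("192.168.1.0/24"))  # True
print(ip_address("8.8.8.8") in ip_network("8.8.8.0/24"))           # True
print(ip_address("1.1.1.1") in ip_network("8.8.8.0/24"))           # False
print(ip_address("52.94.10.5") in ip_network("52.94.0.0/16"))      # True (AWS range)
```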
### Priority Logic con CIDR
|
|
||||||
```sql
|
|
||||||
-- Creazione detections con priorità CIDR-aware
|
|
||||||
INSERT INTO detections (source_ip, risk_score, ...)
|
|
||||||
SELECT bl.ip_address, 75, ...
|
|
||||||
FROM public_blacklist_ips bl
|
|
||||||
WHERE bl.is_active = true
|
|
||||||
AND bl.ip_inet IS NOT NULL
|
|
||||||
-- Priorità 1: Whitelist manuale (massima)
|
|
||||||
AND NOT EXISTS (
|
|
||||||
SELECT 1 FROM whitelist wl
|
|
||||||
WHERE wl.active = true
|
|
||||||
AND wl.source = 'manual'
|
|
||||||
AND (bl.ip_inet = wl.ip_inet OR bl.ip_inet <<= wl.ip_inet)
|
|
||||||
)
|
|
||||||
-- Priorità 2: Whitelist pubblica
|
|
||||||
AND NOT EXISTS (
|
|
||||||
SELECT 1 FROM whitelist wl
|
|
||||||
WHERE wl.active = true
|
|
||||||
AND wl.source != 'manual'
|
|
||||||
AND (bl.ip_inet = wl.ip_inet OR bl.ip_inet <<= wl.ip_inet)
|
|
||||||
)
|
|
||||||
```
|
|
||||||
|
|
||||||
### Cleanup CIDR-Aware
|
|
||||||
```sql
|
|
||||||
-- Rimuove detections per IP in whitelist ranges
|
|
||||||
DELETE FROM detections d
|
|
||||||
WHERE d.detection_source = 'public_blacklist'
|
|
||||||
AND EXISTS (
|
|
||||||
SELECT 1 FROM whitelist wl
|
|
||||||
WHERE wl.active = true
|
|
||||||
AND wl.ip_inet IS NOT NULL
|
|
||||||
AND (d.source_ip::inet = wl.ip_inet
|
|
||||||
OR d.source_ip::inet <<= wl.ip_inet)
|
|
||||||
)
|
|
||||||
```
|
|
||||||
|
|
||||||
## Performance
|
|
||||||
|
|
||||||
### Index Strategy
|
|
||||||
- **GiST indexes** ottimizzati per operatori `<<=` e `>>=`
|
|
||||||
- Query log(n) anche con 186M+ record
|
|
||||||
- Bulk operations mantenute per efficienza

### Benchmark

| Operation | Complexity | Avg Time |
|-----------|------------|----------|
| Exact IP lookup | O(log n) | ~5ms |
| CIDR containment | O(log n) | ~15ms |
| Bulk detection (10k IPs) | O(n) | ~2s |
| Priority filtering (100k) | O(n log m) | ~500ms |

## Testing Matrix

| Scenario | Implementation | Status |
|----------|----------------|--------|
| Exact IP (8.8.8.8) | inet equality | ✅ Complete |
| CIDR range (192.168.1.0/24) | `<<=` operator | ✅ Complete |
| Mixed exact + CIDR | Combined query | ✅ Complete |
| Manual whitelist priority | Source-based exclusion | ✅ Complete |
| Public whitelist priority | Nested NOT EXISTS | ✅ Complete |
| Performance (186M+ rows) | Bulk + indexes | ✅ Complete |

## Deployment on AlmaLinux 9

### Pre-Deployment

```bash
# Backup the database
sudo -u postgres pg_dump ids_production > /opt/ids/backups/pre_v2_$(date +%Y%m%d).sql

# Check the schema version
sudo -u postgres psql ids_production -c "SELECT version FROM schema_version;"
```

### Running the Migration

```bash
cd /opt/ids
sudo -u postgres psql ids_production < deployment/migrations/007_add_cidr_support.sql

# Verify success
sudo -u postgres psql ids_production -c "
SELECT version, applied_at FROM schema_version WHERE id = 1;
SELECT COUNT(*) FROM public_blacklist_ips WHERE ip_inet IS NOT NULL;
SELECT COUNT(*) FROM whitelist WHERE ip_inet IS NOT NULL;
"
```

### Updating the Python Code

```bash
# Pull from GitLab
./update_from_git.sh

# Restart services
sudo systemctl restart ids-list-fetcher
sudo systemctl restart ids-ml-backend

# Check logs
journalctl -u ids-list-fetcher -n 50
journalctl -u ids-ml-backend -n 50
```

### Post-Deploy Validation

```bash
# Test CIDR matching
sudo -u postgres psql ids_production -c "
-- Verify that the INET columns are populated
SELECT
  COUNT(*) as total_blacklist,
  COUNT(ip_inet) as with_inet,
  COUNT(cidr_inet) as with_cidr
FROM public_blacklist_ips;

-- Test a containment query
SELECT * FROM whitelist
WHERE active = true
  AND '192.168.1.50'::inet <<= ip_inet
LIMIT 5;

-- Verify the priority logic
SELECT source, COUNT(*)
FROM whitelist
WHERE active = true
GROUP BY source;
"
```

## Monitoring

### Service Health Checks

```bash
# Fetcher status
systemctl status ids-list-fetcher
systemctl list-timers ids-list-fetcher

# Real-time logs
journalctl -u ids-list-fetcher -f
```

### Database Queries

```sql
-- List sync status
SELECT
  name,
  type,
  last_success,
  total_ips,
  active_ips,
  error_count,
  last_error
FROM public_lists
ORDER BY last_success DESC;

-- CIDR coverage
SELECT
  COUNT(*) as total,
  COUNT(CASE WHEN cidr_range IS NOT NULL THEN 1 END) as with_cidr,
  COUNT(CASE WHEN ip_inet IS NOT NULL THEN 1 END) as with_inet,
  COUNT(CASE WHEN cidr_inet IS NOT NULL THEN 1 END) as cidr_inet_populated
FROM public_blacklist_ips;

-- Detection sources
SELECT
  detection_source,
  COUNT(*) as count,
  AVG(risk_score) as avg_score
FROM detections
GROUP BY detection_source;
```
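
A useful companion query for alerting is one that flags lists whose sync has gone stale. A minimal sketch, assuming the `fetch_interval_minutes` column from migration 006 and treating anything older than twice the configured interval as stale:

```sql
-- Enabled lists that have not synced successfully within 2x their fetch interval
SELECT name, type, last_success, error_count, last_error
FROM public_lists
WHERE enabled = true
  AND (last_success IS NULL
       OR last_success < NOW() - (fetch_interval_minutes * 2) * INTERVAL '1 minute');
```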

## Usage Examples

### Scenario 1: AWS Range Whitelist

```sql
-- Whitelist the AWS range 52.94.0.0/16
INSERT INTO whitelist (ip_address, ip_inet, source, comment)
VALUES ('52.94.0.0/16', '52.94.0.0/16'::inet, 'aws', 'AWS us-east-1 range');

-- Verify matching
SELECT * FROM detections
WHERE source_ip::inet <<= '52.94.0.0/16'::inet
  AND detection_source = 'public_blacklist';
-- These detections will be cleaned up automatically
```

### Scenario 2: Priority Override

```sql
-- Spamhaus blacklist: 1.2.3.4
-- GCP public whitelist: 1.2.3.0/24
-- Manual user whitelist: NONE

-- Result: 1.2.3.4 does NOT generate a detection (the public whitelist wins)

-- If you add a manual whitelist entry:
INSERT INTO whitelist (ip_address, ip_inet, source)
VALUES ('1.2.3.4', '1.2.3.4'::inet, 'manual');

-- Now 1.2.3.4 is protected at the highest priority (manual > public > blacklist)
```
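
The priority described in this scenario is enforced inside the detection query itself. Here is a simplified sketch of that filter, based on the `NOT EXISTS` fragment shown earlier; the production query splits manual and public whitelist sources to implement the full priority chain, which is collapsed into a single `NOT EXISTS` here for readability:

```sql
-- Simplified: active blacklist entries not covered by any active whitelist entry
SELECT bl.ip_address, bl.cidr_range
FROM public_blacklist_ips bl
WHERE bl.is_active = true
  AND NOT EXISTS (
    SELECT 1 FROM whitelist wl
    WHERE wl.active = true
      AND wl.ip_inet IS NOT NULL
      AND (bl.ip_inet = wl.ip_inet OR bl.ip_inet <<= wl.ip_inet)
  );
```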

## Troubleshooting

### INET Columns Not Populated

```sql
-- Manually populate them if needed
UPDATE public_blacklist_ips
SET ip_inet = ip_address::inet,
    cidr_inet = COALESCE(cidr_range::cidr, (ip_address || '/32')::cidr)
WHERE ip_inet IS NULL;

UPDATE whitelist
SET ip_inet = ip_address::inet
WHERE ip_inet IS NULL;
```
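
After the manual backfill, a quick count of rows still missing INET values shows whether any addresses failed to cast. A minimal check, assuming the same tables:

```sql
-- Rows still missing INET values after the backfill
SELECT 'public_blacklist_ips' AS table_name, COUNT(*) AS missing_inet
FROM public_blacklist_ips WHERE ip_inet IS NULL
UNION ALL
SELECT 'whitelist', COUNT(*)
FROM whitelist WHERE ip_inet IS NULL;
```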

### Missing Indexes

```sql
-- Recreate the indexes if missing
CREATE INDEX IF NOT EXISTS public_blacklist_ip_inet_idx
  ON public_blacklist_ips USING gist(ip_inet inet_ops);
CREATE INDEX IF NOT EXISTS public_blacklist_cidr_inet_idx
  ON public_blacklist_ips USING gist(cidr_inet inet_ops);
CREATE INDEX IF NOT EXISTS whitelist_ip_inet_idx
  ON whitelist USING gist(ip_inet inet_ops);
```
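
To see which GiST indexes already exist before recreating them, the standard `pg_indexes` view can be queried. A minimal sketch:

```sql
-- List existing GiST indexes on the two tables
SELECT tablename, indexname, indexdef
FROM pg_indexes
WHERE tablename IN ('public_blacklist_ips', 'whitelist')
  AND indexdef ILIKE '%USING gist%';
```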

### Performance Degradation

```bash
# Reindex GiST indexes
sudo -u postgres psql ids_production -c "REINDEX INDEX CONCURRENTLY public_blacklist_ip_inet_idx;"

# Vacuum analyze
sudo -u postgres psql ids_production -c "VACUUM ANALYZE public_blacklist_ips;"
sudo -u postgres psql ids_production -c "VACUUM ANALYZE whitelist;"
```
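
If degradation persists, it is worth checking whether the GiST indexes are actually being used; zero scans usually point back to the TEXT-column issue fixed by migration 008. A minimal sketch using the standard statistics view:

```sql
-- Scan counts for the CIDR-related indexes (idx_scan = 0 means the index is never used)
SELECT relname AS table_name, indexrelname AS index_name, idx_scan, idx_tup_read
FROM pg_stat_user_indexes
WHERE indexrelname IN ('public_blacklist_ip_inet_idx',
                       'public_blacklist_cidr_inet_idx',
                       'whitelist_ip_inet_idx');
```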

## Known Issues

None known. The system is production-ready with full CIDR support.

## Future Enhancements (v2.1+)

- Incremental sync (delta updates)
- Redis caching for frequent queries
- Additional threat feeds (SANS ISC, AbuseIPDB)
- Table partitioning for scalability

## References

- PostgreSQL INET/CIDR docs: https://www.postgresql.org/docs/current/datatype-net-types.html
- GiST indexes: https://www.postgresql.org/docs/current/gist.html
- Network operators: https://www.postgresql.org/docs/current/functions-net.html
@ -1,105 +0,0 @@
#!/bin/bash
# =============================================================================
# IDS - List Fetcher Service Installation
# =============================================================================
# Installs and configures the systemd service for the public lists fetcher
# Run as ROOT: ./install_list_fetcher.sh
# =============================================================================

set -e

RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

echo -e "${BLUE}"
echo "╔═══════════════════════════════════════════════╗"
echo "║     📋 IDS LIST FETCHER INSTALLATION           ║"
echo "╚═══════════════════════════════════════════════╝"
echo -e "${NC}"

IDS_DIR="/opt/ids"
SYSTEMD_DIR="/etc/systemd/system"

# Make sure we are running as root
if [ "$EUID" -ne 0 ]; then
    echo -e "${RED}❌ This script must be run as root${NC}"
    echo -e "${YELLOW}   Run: sudo ./install_list_fetcher.sh${NC}"
    exit 1
fi

# Check that the source files exist
SERVICE_SRC="$IDS_DIR/deployment/systemd/ids-list-fetcher.service"
TIMER_SRC="$IDS_DIR/deployment/systemd/ids-list-fetcher.timer"

if [ ! -f "$SERVICE_SRC" ]; then
    echo -e "${RED}❌ Service file not found: $SERVICE_SRC${NC}"
    exit 1
fi

if [ ! -f "$TIMER_SRC" ]; then
    echo -e "${RED}❌ Timer file not found: $TIMER_SRC${NC}"
    exit 1
fi

# Check that the Python virtual environment exists
VENV_PYTHON="$IDS_DIR/python_ml/venv/bin/python3"
if [ ! -f "$VENV_PYTHON" ]; then
    echo -e "${YELLOW}⚠️  Virtual environment not found, creating it...${NC}"
    cd "$IDS_DIR/python_ml"
    python3.11 -m venv venv
    ./venv/bin/pip install --upgrade pip
    ./venv/bin/pip install -r requirements.txt
    echo -e "${GREEN}✅ Virtual environment created${NC}"
fi

# Check that run_fetcher.py exists
FETCHER_SCRIPT="$IDS_DIR/python_ml/list_fetcher/run_fetcher.py"
if [ ! -f "$FETCHER_SCRIPT" ]; then
    echo -e "${RED}❌ Fetcher script not found: $FETCHER_SCRIPT${NC}"
    exit 1
fi

# Copy systemd files
echo -e "${BLUE}📦 Installing systemd files...${NC}"

cp "$SERVICE_SRC" "$SYSTEMD_DIR/ids-list-fetcher.service"
cp "$TIMER_SRC" "$SYSTEMD_DIR/ids-list-fetcher.timer"

echo -e "${GREEN}   ✅ ids-list-fetcher.service installed${NC}"
echo -e "${GREEN}   ✅ ids-list-fetcher.timer installed${NC}"

# Reload systemd
echo -e "${BLUE}🔄 Reloading systemd configuration...${NC}"
systemctl daemon-reload
echo -e "${GREEN}✅ Daemon reloaded${NC}"

# Enable and start the timer
echo -e "${BLUE}⏱️  Enabling timer (every 10 minutes)...${NC}"
systemctl enable ids-list-fetcher.timer
systemctl start ids-list-fetcher.timer
echo -e "${GREEN}✅ Timer enabled and started${NC}"

# Test a manual run
echo -e "${BLUE}🧪 Testing fetcher execution...${NC}"
if systemctl start ids-list-fetcher.service; then
    echo -e "${GREEN}✅ Fetcher executed successfully${NC}"
else
    echo -e "${YELLOW}⚠️  The first run may fail if no lists are configured yet${NC}"
fi

# Show status
echo ""
echo -e "${GREEN}╔═══════════════════════════════════════════════╗${NC}"
echo -e "${GREEN}║        ✅ INSTALLATION COMPLETE                ║${NC}"
echo -e "${GREEN}╚═══════════════════════════════════════════════╝${NC}"
echo ""
echo -e "${BLUE}📋 USEFUL COMMANDS:${NC}"
echo -e "   • Timer status:    ${YELLOW}systemctl status ids-list-fetcher.timer${NC}"
echo -e "   • Service status:  ${YELLOW}systemctl status ids-list-fetcher.service${NC}"
echo -e "   • Run manually:    ${YELLOW}systemctl start ids-list-fetcher.service${NC}"
echo -e "   • View logs:       ${YELLOW}journalctl -u ids-list-fetcher -n 50${NC}"
echo -e "   • Active timers:   ${YELLOW}systemctl list-timers | grep ids${NC}"
echo ""
@ -1,81 +0,0 @@
#!/bin/bash

# Script to install the ML Hybrid Detector dependencies
# SIMPLIFIED: uses sklearn.IsolationForest (no compilation required!)

set -e

echo "╔═══════════════════════════════════════════════╗"
echo "║     ML HYBRID DEPENDENCY INSTALLATION          ║"
echo "╚═══════════════════════════════════════════════╝"
echo ""

# Move to the python_ml directory
cd "$(dirname "$0")/../python_ml" || exit 1

echo "📍 Current directory: $(pwd)"
echo ""

# Check the venv
if [ ! -d "venv" ]; then
    echo "❌ ERROR: Virtual environment not found in $(pwd)/venv"
    echo "   Run first: python3 -m venv venv"
    exit 1
fi

# Activate the venv
echo "🔧 Activating virtual environment..."
source venv/bin/activate

# Check that we are actually using the venv
PYTHON_PATH=$(which python)
echo "📍 Python in use: $PYTHON_PATH"
if [[ ! "$PYTHON_PATH" =~ "venv" ]]; then
    echo "⚠️  WARNING: the venv is not being used correctly!"
fi

echo ""

# STEP 1: Upgrade pip/setuptools/wheel
echo "📦 Step 1/2: Upgrading pip/setuptools/wheel..."
python -m pip install --upgrade pip setuptools wheel

if [ $? -eq 0 ]; then
    echo "✅ pip/setuptools/wheel upgraded"
else
    echo "❌ Error while upgrading pip"
    exit 1
fi

echo ""

# STEP 2: Install the ML dependencies
echo "📦 Step 2/2: Installing ML dependencies..."
python -m pip install xgboost==2.0.3 joblib==1.3.2

if [ $? -eq 0 ]; then
    echo "✅ ML dependencies installed successfully"
else
    echo "❌ Error while installing ML dependencies"
    exit 1
fi

echo ""
echo "✅ INSTALLATION COMPLETE!"
echo ""
echo "🧪 Testing ML component imports..."
python -c "from sklearn.ensemble import IsolationForest; from xgboost import XGBClassifier; print('✅ sklearn IsolationForest OK'); print('✅ XGBoost OK')"

if [ $? -eq 0 ]; then
    echo ""
    echo "✅ ALL OK! The Hybrid ML Detector is ready to use"
    echo ""
    echo "ℹ️  INFO: the system uses sklearn.IsolationForest (compatible with Python 3.11+)"
    echo ""
    echo "📋 Next steps:"
    echo "   1. Quick test:    python train_hybrid.py --mode test"
    echo "   2. Full training: python train_hybrid.py --mode train"
else
    echo "❌ Error while testing ML component imports"
    exit 1
fi
@ -1,116 +0,0 @@
-- Migration 006: Add Public Lists Integration
-- Description: Adds blacklist/whitelist public sources with auto-sync support
-- Author: IDS System
-- Date: 2024-11-26
-- NOTE: Fully idempotent - safe to run multiple times

BEGIN;

-- ============================================================================
-- 1. CREATE NEW TABLES
-- ============================================================================

-- Public threat/whitelist sources configuration
CREATE TABLE IF NOT EXISTS public_lists (
    id VARCHAR PRIMARY KEY DEFAULT gen_random_uuid(),
    name TEXT NOT NULL,
    type TEXT NOT NULL CHECK (type IN ('blacklist', 'whitelist')),
    url TEXT NOT NULL,
    enabled BOOLEAN NOT NULL DEFAULT true,
    fetch_interval_minutes INTEGER NOT NULL DEFAULT 10,
    last_fetch TIMESTAMP,
    last_success TIMESTAMP,
    total_ips INTEGER NOT NULL DEFAULT 0,
    active_ips INTEGER NOT NULL DEFAULT 0,
    error_count INTEGER NOT NULL DEFAULT 0,
    last_error TEXT,
    created_at TIMESTAMP NOT NULL DEFAULT NOW()
);

CREATE INDEX IF NOT EXISTS public_lists_type_idx ON public_lists(type);
CREATE INDEX IF NOT EXISTS public_lists_enabled_idx ON public_lists(enabled);

-- Public blacklist IPs from external sources
CREATE TABLE IF NOT EXISTS public_blacklist_ips (
    id VARCHAR PRIMARY KEY DEFAULT gen_random_uuid(),
    ip_address TEXT NOT NULL,
    cidr_range TEXT,
    list_id VARCHAR NOT NULL REFERENCES public_lists(id) ON DELETE CASCADE,
    first_seen TIMESTAMP NOT NULL DEFAULT NOW(),
    last_seen TIMESTAMP NOT NULL DEFAULT NOW(),
    is_active BOOLEAN NOT NULL DEFAULT true
);

CREATE INDEX IF NOT EXISTS public_blacklist_ip_idx ON public_blacklist_ips(ip_address);
CREATE INDEX IF NOT EXISTS public_blacklist_list_idx ON public_blacklist_ips(list_id);
CREATE INDEX IF NOT EXISTS public_blacklist_active_idx ON public_blacklist_ips(is_active);

-- Create unique constraint only if not exists
DO $$
BEGIN
    IF NOT EXISTS (
        SELECT 1 FROM pg_indexes
        WHERE indexname = 'public_blacklist_ip_list_key'
    ) THEN
        CREATE UNIQUE INDEX public_blacklist_ip_list_key ON public_blacklist_ips(ip_address, list_id);
    END IF;
END $$;

-- ============================================================================
-- 2. ALTER EXISTING TABLES
-- ============================================================================

-- Extend detections table with public list source tracking
ALTER TABLE detections
    ADD COLUMN IF NOT EXISTS detection_source TEXT NOT NULL DEFAULT 'ml_model',
    ADD COLUMN IF NOT EXISTS blacklist_id VARCHAR;

CREATE INDEX IF NOT EXISTS detection_source_idx ON detections(detection_source);

-- Add check constraint for valid detection sources
DO $$
BEGIN
    IF NOT EXISTS (
        SELECT 1 FROM pg_constraint
        WHERE conname = 'detections_source_check'
    ) THEN
        ALTER TABLE detections
            ADD CONSTRAINT detections_source_check
            CHECK (detection_source IN ('ml_model', 'public_blacklist', 'hybrid'));
    END IF;
END $$;

-- Extend whitelist table with source tracking
ALTER TABLE whitelist
    ADD COLUMN IF NOT EXISTS source TEXT NOT NULL DEFAULT 'manual',
    ADD COLUMN IF NOT EXISTS list_id VARCHAR;

CREATE INDEX IF NOT EXISTS whitelist_source_idx ON whitelist(source);

-- Add check constraint for valid whitelist sources
DO $$
BEGIN
    IF NOT EXISTS (
        SELECT 1 FROM pg_constraint
        WHERE conname = 'whitelist_source_check'
    ) THEN
        ALTER TABLE whitelist
            ADD CONSTRAINT whitelist_source_check
            CHECK (source IN ('manual', 'aws', 'gcp', 'cloudflare', 'iana', 'ntp', 'other'));
    END IF;
END $$;

-- ============================================================================
-- 3. UPDATE SCHEMA VERSION
-- ============================================================================

INSERT INTO schema_version (id, version, description)
VALUES (1, 6, 'Add public lists integration (blacklist/whitelist sources)')
ON CONFLICT (id) DO UPDATE
SET version = 6,
    description = 'Add public lists integration (blacklist/whitelist sources)',
    applied_at = NOW();

COMMIT;

SELECT 'Migration 006 completed successfully' as status;
@ -1,88 +0,0 @@
-- Migration 007: Add INET/CIDR support for proper network range matching
-- Required for public lists integration (Spamhaus /24, AWS ranges, etc.)
-- Date: 2025-11-26
-- NOTE: Handles case where columns exist as TEXT type (from Drizzle)

BEGIN;

-- ============================================================================
-- FIX: Drop TEXT columns and recreate as proper INET/CIDR types
-- ============================================================================

-- Check column type and fix if needed for public_blacklist_ips
DO $$
DECLARE
    col_type text;
BEGIN
    -- Check ip_inet column type
    SELECT data_type INTO col_type
    FROM information_schema.columns
    WHERE table_name = 'public_blacklist_ips' AND column_name = 'ip_inet';

    IF col_type = 'text' THEN
        -- Drop the wrong type columns
        ALTER TABLE public_blacklist_ips DROP COLUMN IF EXISTS ip_inet;
        ALTER TABLE public_blacklist_ips DROP COLUMN IF EXISTS cidr_inet;
        RAISE NOTICE 'Dropped TEXT columns, will recreate as INET/CIDR';
    END IF;
END $$;

-- Add INET/CIDR columns with correct types
ALTER TABLE public_blacklist_ips
    ADD COLUMN IF NOT EXISTS ip_inet inet,
    ADD COLUMN IF NOT EXISTS cidr_inet cidr;

-- Populate new columns from existing text data
UPDATE public_blacklist_ips
SET ip_inet = ip_address::inet,
    cidr_inet = CASE
        WHEN cidr_range IS NOT NULL THEN cidr_range::cidr
        ELSE (ip_address || '/32')::cidr
    END
WHERE ip_inet IS NULL OR cidr_inet IS NULL;

-- Create GiST indexes for INET operators
CREATE INDEX IF NOT EXISTS public_blacklist_ip_inet_idx ON public_blacklist_ips USING gist(ip_inet inet_ops);
CREATE INDEX IF NOT EXISTS public_blacklist_cidr_inet_idx ON public_blacklist_ips USING gist(cidr_inet inet_ops);

-- ============================================================================
-- Fix whitelist table
-- ============================================================================

DO $$
DECLARE
    col_type text;
BEGIN
    SELECT data_type INTO col_type
    FROM information_schema.columns
    WHERE table_name = 'whitelist' AND column_name = 'ip_inet';

    IF col_type = 'text' THEN
        ALTER TABLE whitelist DROP COLUMN IF EXISTS ip_inet;
        RAISE NOTICE 'Dropped TEXT column from whitelist, will recreate as INET';
    END IF;
END $$;

-- Add INET column to whitelist
ALTER TABLE whitelist
    ADD COLUMN IF NOT EXISTS ip_inet inet;

-- Populate whitelist INET column
UPDATE whitelist
SET ip_inet = CASE
        WHEN ip_address ~ '/' THEN ip_address::inet
        ELSE ip_address::inet
    END
WHERE ip_inet IS NULL;

-- Create index for whitelist INET matching
CREATE INDEX IF NOT EXISTS whitelist_ip_inet_idx ON whitelist USING gist(ip_inet inet_ops);

-- Update schema version
UPDATE schema_version SET version = 7, applied_at = NOW() WHERE id = 1;

COMMIT;

-- Verification
SELECT 'Migration 007 completed successfully' as status;
SELECT version, applied_at FROM schema_version WHERE id = 1;
@ -1,92 +0,0 @@
-- Migration 008: Force INET/CIDR types (unconditional)
-- Fixes issues where columns remained TEXT after conditional migration 007
-- Date: 2026-01-02

BEGIN;

-- ============================================================================
-- FORCE DROP AND RECREATE ALL INET COLUMNS
-- This is unconditional - always executes regardless of current state
-- ============================================================================

-- Drop indexes first (if exist)
DROP INDEX IF EXISTS public_blacklist_ip_inet_idx;
DROP INDEX IF EXISTS public_blacklist_cidr_inet_idx;
DROP INDEX IF EXISTS whitelist_ip_inet_idx;

-- ============================================================================
-- FIX public_blacklist_ips TABLE
-- ============================================================================

-- Drop columns unconditionally
ALTER TABLE public_blacklist_ips DROP COLUMN IF EXISTS ip_inet;
ALTER TABLE public_blacklist_ips DROP COLUMN IF EXISTS cidr_inet;

-- Recreate with correct INET/CIDR types
ALTER TABLE public_blacklist_ips ADD COLUMN ip_inet inet;
ALTER TABLE public_blacklist_ips ADD COLUMN cidr_inet cidr;

-- Populate from existing text data
UPDATE public_blacklist_ips
SET
    ip_inet = CASE
        WHEN ip_address ~ '/' THEN ip_address::inet
        ELSE ip_address::inet
    END,
    cidr_inet = CASE
        WHEN cidr_range IS NOT NULL AND cidr_range != '' THEN cidr_range::cidr
        WHEN ip_address ~ '/' THEN ip_address::cidr
        ELSE (ip_address || '/32')::cidr
    END
WHERE ip_inet IS NULL;

-- Create GiST indexes for fast INET/CIDR containment operators
CREATE INDEX public_blacklist_ip_inet_idx ON public_blacklist_ips USING gist(ip_inet inet_ops);
CREATE INDEX public_blacklist_cidr_inet_idx ON public_blacklist_ips USING gist(cidr_inet inet_ops);

-- ============================================================================
-- FIX whitelist TABLE
-- ============================================================================

-- Drop column unconditionally
ALTER TABLE whitelist DROP COLUMN IF EXISTS ip_inet;

-- Recreate with correct INET type
ALTER TABLE whitelist ADD COLUMN ip_inet inet;

-- Populate from existing text data
UPDATE whitelist
SET ip_inet = CASE
        WHEN ip_address ~ '/' THEN ip_address::inet
        ELSE ip_address::inet
    END
WHERE ip_inet IS NULL;

-- Create index for whitelist
CREATE INDEX whitelist_ip_inet_idx ON whitelist USING gist(ip_inet inet_ops);

-- ============================================================================
-- UPDATE SCHEMA VERSION
-- ============================================================================

UPDATE schema_version SET version = 8, applied_at = NOW() WHERE id = 1;

COMMIT;

-- ============================================================================
-- VERIFICATION
-- ============================================================================

SELECT 'Migration 008 completed successfully' as status;
SELECT version, applied_at FROM schema_version WHERE id = 1;

-- Verify column types
SELECT
    table_name,
    column_name,
    data_type
FROM information_schema.columns
WHERE
    (table_name = 'public_blacklist_ips' AND column_name IN ('ip_inet', 'cidr_inet'))
    OR (table_name = 'whitelist' AND column_name = 'ip_inet')
ORDER BY table_name, column_name;
@ -1,33 +0,0 @@
-- Migration 009: Add Microsoft Azure and Meta/Facebook public lists
-- Date: 2026-01-02

-- Microsoft Azure IP ranges (whitelist - cloud provider)
INSERT INTO public_lists (name, url, type, format, enabled, description, fetch_interval)
VALUES (
    'Microsoft Azure',
    'https://raw.githubusercontent.com/femueller/cloud-ip-ranges/master/microsoft-azure-ip-ranges.json',
    'whitelist',
    'json',
    true,
    'Microsoft Azure cloud IP ranges - auto-updated from Azure Service Tags',
    3600
) ON CONFLICT (name) DO UPDATE SET
    url = EXCLUDED.url,
    description = EXCLUDED.description;

-- Meta/Facebook IP ranges (whitelist - major service provider)
INSERT INTO public_lists (name, url, type, format, enabled, description, fetch_interval)
VALUES (
    'Meta (Facebook)',
    'https://raw.githubusercontent.com/parseword/util-misc/master/block-facebook/facebook-ip-ranges.txt',
    'whitelist',
    'plain',
    true,
    'Meta/Facebook IP ranges (includes Instagram, WhatsApp, Oculus) from BGP AS32934/AS54115/AS63293',
    3600
) ON CONFLICT (name) DO UPDATE SET
    url = EXCLUDED.url,
    description = EXCLUDED.description;

-- Verify insertion
SELECT id, name, type, enabled, url FROM public_lists WHERE name IN ('Microsoft Azure', 'Meta (Facebook)');
@ -1,48 +0,0 @@
#!/bin/bash
# =========================================================
# IDS - Cleanup Detections Runner
# =========================================================
# Runs automatic cleanup of detections according to these rules:
# - Delete non-anomalous detections after 48h
# - Unblock blocked IPs that are no longer anomalous after 2h
#
# Usage: ./run_cleanup.sh
# =========================================================

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"

# Load environment variables
if [ -f "$PROJECT_ROOT/.env" ]; then
    set -a
    source "$PROJECT_ROOT/.env"
    set +a
else
    echo "❌ .env file not found in $PROJECT_ROOT"
    exit 1
fi

# Log
LOG_FILE="/var/log/ids/cleanup.log"
mkdir -p /var/log/ids

echo "=========================================" >> "$LOG_FILE"
echo "[$(date)] Automatic cleanup started" >> "$LOG_FILE"
echo "=========================================" >> "$LOG_FILE"

# Run cleanup
cd "$PROJECT_ROOT"
python3 python_ml/cleanup_detections.py >> "$LOG_FILE" 2>&1

EXIT_CODE=$?

if [ $EXIT_CODE -eq 0 ]; then
    echo "[$(date)] Cleanup completed successfully" >> "$LOG_FILE"
else
    echo "[$(date)] Cleanup failed (exit code: $EXIT_CODE)" >> "$LOG_FILE"
fi

echo "" >> "$LOG_FILE"
exit $EXIT_CODE
@ -1,92 +0,0 @@
#!/bin/bash
#
# ML Training Wrapper - Automatic Execution via systemd
# Loads credentials from .env securely
#

set -e

IDS_ROOT="/opt/ids"
ENV_FILE="$IDS_ROOT/.env"
PYTHON_ML_DIR="$IDS_ROOT/python_ml"
VENV_PYTHON="$PYTHON_ML_DIR/venv/bin/python"
LOG_DIR="/var/log/ids"

# Create log directory if it does not exist
mkdir -p "$LOG_DIR"

# Dedicated log file
LOG_FILE="$LOG_DIR/ml-training.log"

# Logging helper
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}

log "========================================="
log "ML Training - automatic start"
log "========================================="

# Check .env
if [ ! -f "$ENV_FILE" ]; then
    log "ERROR: .env file not found: $ENV_FILE"
    exit 1
fi

# Load environment variables
log "Loading database credentials..."
set -a
source "$ENV_FILE"
set +a

# Check credentials
if [ -z "$PGPASSWORD" ]; then
    log "ERROR: PGPASSWORD not found in .env"
    exit 1
fi

DB_HOST="${PGHOST:-localhost}"
DB_PORT="${PGPORT:-5432}"
DB_NAME="${PGDATABASE:-ids}"
DB_USER="${PGUSER:-postgres}"

log "Database: $DB_USER@$DB_HOST:$DB_PORT/$DB_NAME"

# Check venv
if [ ! -f "$VENV_PYTHON" ]; then
    log "ERROR: Python venv not found: $VENV_PYTHON"
    exit 1
fi

# Training parameters
DAYS="${ML_TRAINING_DAYS:-7}"  # Default 7 days, configurable via env var

log "Training on the last $DAYS days of traffic..."

# Run training
cd "$PYTHON_ML_DIR"
"$VENV_PYTHON" train_hybrid.py --train --source database \
    --db-host "$DB_HOST" \
    --db-port "$DB_PORT" \
    --db-name "$DB_NAME" \
    --db-user "$DB_USER" \
    --db-password "$PGPASSWORD" \
    --days "$DAYS" 2>&1 | tee -a "$LOG_FILE"

# Check exit code
if [ ${PIPESTATUS[0]} -eq 0 ]; then
    log "========================================="
    log "✅ Training completed successfully!"
    log "========================================="
    log "Models saved in: $PYTHON_ML_DIR/models/"
    log ""
    log "The ML backend will load the new models automatically at its next restart."
    log "To apply them immediately: sudo systemctl restart ids-ml-backend"
    exit 0
else
    log "========================================="
    log "❌ ERROR during training"
    log "========================================="
    log "Check the full log: $LOG_FILE"
    exit 1
fi
@ -1,50 +0,0 @@
#!/bin/bash
# Deploy Public Lists Integration (v2.0.0)
# Run on AlmaLinux 9 server after git pull

set -e

echo "=================================="
echo "PUBLIC LISTS DEPLOYMENT - v2.0.0"
echo "=================================="

# 1. Database Migration
echo -e "\n[1/5] Running database migration..."
sudo -u postgres psql -d ids_system -f deployment/migrations/006_add_public_lists.sql
echo "✓ Migration 006 applied"

# 2. Seed default lists
echo -e "\n[2/5] Seeding default public lists..."
cd python_ml/list_fetcher
DATABASE_URL=$DATABASE_URL python seed_lists.py
cd ../..
echo "✓ Default lists seeded"

# 3. Install systemd services
echo -e "\n[3/5] Installing systemd services..."
sudo cp deployment/systemd/ids-list-fetcher.service /etc/systemd/system/
sudo cp deployment/systemd/ids-list-fetcher.timer /etc/systemd/system/
sudo systemctl daemon-reload
echo "✓ Systemd services installed"

# 4. Enable and start
echo -e "\n[4/5] Enabling services..."
sudo systemctl enable ids-list-fetcher.timer
sudo systemctl start ids-list-fetcher.timer
echo "✓ Timer enabled (10-minute intervals)"

# 5. Initial sync
echo -e "\n[5/5] Running initial sync..."
sudo systemctl start ids-list-fetcher.service
echo "✓ Initial sync triggered"

echo -e "\n=================================="
echo "DEPLOYMENT COMPLETE"
echo "=================================="
echo ""
echo "Verify:"
echo "  journalctl -u ids-list-fetcher -n 50"
echo "  systemctl status ids-list-fetcher.timer"
echo ""
echo "Check UI: http://your-server/public-lists"
echo ""
@ -1,75 +0,0 @@
#!/bin/bash
# =========================================================
# IDS - Setup Cleanup Timer
# =========================================================
# Installs and starts the systemd timer for automatic cleanup
#
# Usage: sudo ./deployment/setup_cleanup_timer.sh
# =========================================================

set -e

if [ "$EUID" -ne 0 ]; then
    echo "❌ This script must be run as root (sudo)"
    exit 1
fi

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

echo "🔧 Setting up the IDS Cleanup Timer..."
echo ""

# 1. Install Python dependencies
echo "[1/7] Installing Python dependencies..."
pip3 install -q psycopg2-binary python-dotenv || {
    echo "⚠️  pip install failed, trying requirements.txt..."
    pip3 install -q -r "$SCRIPT_DIR/../python_ml/requirements.txt" || {
        echo "❌ Error installing dependencies!"
        echo "💡 Run manually: sudo pip3 install psycopg2-binary python-dotenv"
        exit 1
    }
}

# 2. Create log directory
echo "[2/7] Creating log directory..."
mkdir -p /var/log/ids
chmod 755 /var/log/ids

# 3. Make the scripts executable
echo "[3/7] Setting script execute permissions..."
chmod +x "$SCRIPT_DIR/run_cleanup.sh"
chmod +x "$SCRIPT_DIR/../python_ml/cleanup_detections.py"

# 4. Copy the service files
echo "[4/7] Installing service files..."
cp "$SCRIPT_DIR/systemd/ids-cleanup.service" /etc/systemd/system/
cp "$SCRIPT_DIR/systemd/ids-cleanup.timer" /etc/systemd/system/

# 5. Reload systemd
echo "[5/7] Reloading systemd daemon..."
systemctl daemon-reload

# 6. Enable the timer
echo "[6/7] Enabling timer..."
systemctl enable ids-cleanup.timer

# 7. Start the timer
echo "[7/7] Starting timer..."
systemctl start ids-cleanup.timer

echo ""
echo "✅ Cleanup timer installed and started successfully!"
echo ""
echo "📊 Status:"
systemctl status ids-cleanup.timer --no-pager -l
echo ""
echo "📅 Next run:"
systemctl list-timers ids-cleanup.timer --no-pager
echo ""
echo "💡 Useful commands:"
echo "   - Manual test:    sudo ./deployment/run_cleanup.sh"
echo "   - Run now:        sudo systemctl start ids-cleanup.service"
echo "   - Timer status:   sudo systemctl status ids-cleanup.timer"
echo "   - Cleanup log:    tail -f /var/log/ids/cleanup.log"
echo "   - Disable timer:  sudo systemctl stop ids-cleanup.timer && sudo systemctl disable ids-cleanup.timer"
echo ""
@ -1,98 +0,0 @@
#!/bin/bash
#
# Setup ML Training Systemd Timer
# Configures weekly automatic training of the hybrid ML model
#

set -e

echo "================================================================"
echo " SETUP ML TRAINING TIMER - Weekly Automatic Training"
echo "================================================================"
echo ""

# Check root
if [ "$EUID" -ne 0 ]; then
    echo "❌ ERROR: This script must be run as root"
    echo "   Use: sudo $0"
    exit 1
fi

IDS_ROOT="/opt/ids"
SYSTEMD_DIR="/etc/systemd/system"

# Check the IDS directory
if [ ! -d "$IDS_ROOT" ]; then
    echo "❌ ERROR: IDS directory not found: $IDS_ROOT"
    exit 1
fi

echo "📁 IDS directory: $IDS_ROOT"
echo ""

# 1. Copy systemd files
echo "📋 Step 1: Installing systemd units..."

cp "$IDS_ROOT/deployment/systemd/ids-ml-training.service" "$SYSTEMD_DIR/"
cp "$IDS_ROOT/deployment/systemd/ids-ml-training.timer" "$SYSTEMD_DIR/"

echo "   ✅ Service copied: $SYSTEMD_DIR/ids-ml-training.service"
echo "   ✅ Timer copied:   $SYSTEMD_DIR/ids-ml-training.timer"
echo ""

# 2. Make the script executable
echo "🔧 Step 2: Script permissions..."
chmod +x "$IDS_ROOT/deployment/run_ml_training.sh"
echo "   ✅ Script is executable: $IDS_ROOT/deployment/run_ml_training.sh"
echo ""

# 3. Reload systemd
echo "🔄 Step 3: Reloading systemd daemon..."
systemctl daemon-reload
echo "   ✅ Daemon reloaded"
echo ""

# 4. Enable and start the timer
echo "🚀 Step 4: Activating timer..."
systemctl enable ids-ml-training.timer
systemctl start ids-ml-training.timer
echo "   ✅ Timer enabled and started"
echo ""

# 5. Check status
echo "📊 Step 5: Checking configuration..."
echo ""
echo "Timer status:"
systemctl status ids-ml-training.timer --no-pager
echo ""
echo "Next run:"
systemctl list-timers ids-ml-training.timer --no-pager
echo ""

echo "================================================================"
echo "✅ SETUP COMPLETE!"
echo "================================================================"
echo ""
echo "📅 Schedule: every Monday at 03:00 AM"
echo "📁 Log: /var/log/ids/ml-training.log"
echo ""
echo "🔍 USEFUL COMMANDS:"
echo ""
echo "  # Check that the timer is active"
echo "  systemctl status ids-ml-training.timer"
echo ""
echo "  # See the next scheduled run"
echo "  systemctl list-timers ids-ml-training.timer"
echo ""
echo "  # Run training manually NOW"
echo "  sudo systemctl start ids-ml-training.service"
echo ""
echo "  # View training logs"
echo "  journalctl -u ids-ml-training.service -f"
echo "  tail -f /var/log/ids/ml-training.log"
echo ""
echo "  # Disable automatic training"
echo "  sudo systemctl stop ids-ml-training.timer"
echo "  sudo systemctl disable ids-ml-training.timer"
echo ""
echo "================================================================"
@ -1,44 +0,0 @@
#!/bin/bash
###############################################################################
# Setup Syslog Parser Monitoring
# Installs a cron job for an automatic health check every 5 minutes
# Usage: sudo ./deployment/setup_parser_monitoring.sh
###############################################################################

set -e

echo "📊 Setting up Syslog Parser Monitoring..."
echo

# Make health check script executable
chmod +x /opt/ids/deployment/check_parser_health.sh

# Setup cron job
CRON_JOB="*/5 * * * * /opt/ids/deployment/check_parser_health.sh >> /var/log/ids/parser-health-cron.log 2>&1"

# Check if cron job already exists
if crontab -l 2>/dev/null | grep -q "check_parser_health.sh"; then
    echo "✅ Cron job already configured"
else
    # Add cron job
    (crontab -l 2>/dev/null; echo "$CRON_JOB") | crontab -
    echo "✅ Cron job added (runs every 5 minutes)"
fi

echo
echo "📋 Configuration complete:"
echo "   - Health check script: /opt/ids/deployment/check_parser_health.sh"
echo "   - Log file: /var/log/ids/parser-health.log"
echo "   - Cron log: /var/log/ids/parser-health-cron.log"
echo "   - Schedule: Every 5 minutes"
echo
echo "🔍 Active monitoring:"
echo "   - Checks that the service is running"
echo "   - Verifies recent logs (threshold: 5 min)"
echo "   - Auto-restarts the service if needed"
echo "   - Logs recent errors"
echo
echo "📊 View status:"
echo "   tail -f /var/log/ids/parser-health.log"
echo
echo "✅ Setup complete!"
@ -1,30 +0,0 @@
[Unit]
Description=IDS Auto-Blocking Service - Detect and Block Malicious IPs
Documentation=https://github.com/yourusername/ids
After=network.target ids-ml-backend.service postgresql-16.service
Requires=ids-ml-backend.service

[Service]
Type=oneshot
User=ids
Group=ids
WorkingDirectory=/opt/ids
EnvironmentFile=/opt/ids/.env

# Run the auto-blocking script (uses the Python venv)
ExecStart=/opt/ids/python_ml/venv/bin/python3 /opt/ids/python_ml/auto_block.py

# Logging
StandardOutput=append:/var/log/ids/auto_block.log
StandardError=append:/var/log/ids/auto_block.log
SyslogIdentifier=ids-auto-block

# Security
NoNewPrivileges=true
PrivateTmp=true

# Timeout: max 3 minutes for detection + blocking
TimeoutStartSec=180

[Install]
WantedBy=multi-user.target
@ -1,20 +0,0 @@
[Unit]
Description=IDS Auto-Blocking Timer - Run every 5 minutes
Documentation=https://github.com/yourusername/ids
Requires=ids-auto-block.service

[Timer]
# Run 2 minutes after boot (to give the ML backend time to start)
OnBootSec=2min

# Then run every 5 minutes
OnUnitActiveSec=5min

# Accuracy: ±1 second
AccuracySec=1s

# Run immediately if the system was off at the scheduled time
Persistent=true

[Install]
WantedBy=timers.target
@ -1,26 +0,0 @@
[Unit]
Description=IDS Cleanup Detections Service
Documentation=https://github.com/yourusername/ids
After=network.target postgresql.service

[Service]
Type=oneshot
User=root
WorkingDirectory=/opt/ids
EnvironmentFile=/opt/ids/.env
ExecStart=/opt/ids/deployment/run_cleanup.sh

# Logging
StandardOutput=append:/var/log/ids/cleanup.log
StandardError=append:/var/log/ids/cleanup.log

# Security
NoNewPrivileges=true
PrivateTmp=true

# Restart policy (not needed for oneshot)
# Restart=on-failure
# RestartSec=30

[Install]
WantedBy=multi-user.target
@ -1,17 +0,0 @@
[Unit]
Description=IDS Cleanup Detections Timer
Documentation=https://github.com/yourusername/ids
Requires=ids-cleanup.service

[Timer]
# Run every hour at minute 10 (e.g. 00:10, 01:10, 02:10, ..., 23:10)
OnCalendar=*:10:00

# Run immediately if the system was off at the scheduled time
Persistent=true

# Randomize execution by up to 5 minutes to avoid load spikes
RandomizedDelaySec=300

[Install]
WantedBy=timers.target
@ -1,29 +0,0 @@
[Unit]
Description=IDS Public Lists Fetcher Service
Documentation=https://github.com/yourorg/ids
After=network.target postgresql.service

[Service]
Type=oneshot
User=root
WorkingDirectory=/opt/ids/python_ml
Environment="PYTHONUNBUFFERED=1"
EnvironmentFile=/opt/ids/.env

# Run list fetcher with virtual environment
ExecStart=/opt/ids/python_ml/venv/bin/python3 /opt/ids/python_ml/list_fetcher/run_fetcher.py

# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=ids-list-fetcher

# Security settings
PrivateTmp=true
NoNewPrivileges=true

# Restart policy
Restart=no

[Install]
WantedBy=multi-user.target
@ -1,13 +0,0 @@
[Unit]
Description=IDS Public Lists Fetcher Timer (every 10 minutes)
Documentation=https://github.com/yourorg/ids

[Timer]
# Run every 10 minutes
OnCalendar=*:0/10
OnBootSec=2min
AccuracySec=1min
Persistent=true

[Install]
WantedBy=timers.target
@ -1,30 +0,0 @@
[Unit]
Description=IDS ML Hybrid Detector Training
Documentation=https://github.com/your-repo/ids
After=network.target postgresql.service
Requires=postgresql.service

[Service]
Type=oneshot
User=root
WorkingDirectory=/opt/ids/python_ml

# Load environment file for database credentials
EnvironmentFile=/opt/ids/.env

# Run training
ExecStart=/opt/ids/deployment/run_ml_training.sh

# Generous timeout (training can take up to 30 min)
TimeoutStartSec=1800

# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=ids-ml-training

# Restart policy
Restart=no

[Install]
WantedBy=multi-user.target
@ -1,17 +0,0 @@
[Unit]
Description=IDS ML Training - Weekly Retraining
Documentation=https://github.com/your-repo/ids
Requires=ids-ml-training.service

[Timer]
# Weekly run: every Monday at 03:00 AM
OnCalendar=Mon *-*-* 03:00:00

# Persistence: if the server was off, run at the next boot
Persistent=true

# Accuracy: 5 minutes of tolerance
AccuracySec=5min

[Install]
WantedBy=timers.target
@ -1,125 +0,0 @@
#!/bin/bash
#
# Training the Hybrid ML Detector on Real Data
# Reads credentials from /opt/ids/.env automatically
#

set -e  # Exit on error

echo "======================================================================="
echo " TRAINING HYBRID ML DETECTOR - REAL DATA"
echo "======================================================================="
echo ""

# Paths
IDS_ROOT="/opt/ids"
ENV_FILE="$IDS_ROOT/.env"
PYTHON_ML_DIR="$IDS_ROOT/python_ml"
VENV_PYTHON="$PYTHON_ML_DIR/venv/bin/python"

# Check that the .env file exists
if [ ! -f "$ENV_FILE" ]; then
    echo "❌ ERROR: .env file not found at $ENV_FILE"
    exit 1
fi

# Load variables from .env
echo "📂 Loading database credentials from .env..."
source "$ENV_FILE"

# Extract database credentials
DB_HOST="${PGHOST:-localhost}"
DB_PORT="${PGPORT:-5432}"
DB_NAME="${PGDATABASE:-ids}"
DB_USER="${PGUSER:-postgres}"
DB_PASSWORD="${PGPASSWORD}"

# Check that the password was extracted
if [ -z "$DB_PASSWORD" ]; then
    echo "❌ ERROR: PGPASSWORD not found in the .env file"
    echo "   Add: PGPASSWORD=your_password_here"
    exit 1
fi

echo "✅ Credentials loaded:"
echo "   Host:     $DB_HOST"
echo "   Port:     $DB_PORT"
echo "   Database: $DB_NAME"
echo "   User:     $DB_USER"
echo "   Password: ****** (hidden)"
echo ""

# Training parameters
DAYS="${1:-7}"              # Default 7 days, can be passed as an argument
MAX_SAMPLES="${2:-1000000}" # Default 1M records max

echo "🎯 Training parameters:"
echo "   Period: last $DAYS days"
echo "   Max records: $MAX_SAMPLES"
echo ""

# Check the Python venv
if [ ! -f "$VENV_PYTHON" ]; then
    echo "❌ ERROR: Virtual environment not found at $VENV_PYTHON"
    echo "   Run first: cd $IDS_ROOT && python3 -m venv python_ml/venv"
    exit 1
fi

echo "🐍 Python: $VENV_PYTHON"
echo ""

# Check the data available in the database
echo "📊 Checking the data available in the database..."
PGPASSWORD="$DB_PASSWORD" psql -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -d "$DB_NAME" -c "
SELECT
    TO_CHAR(MIN(timestamp), 'YYYY-MM-DD HH24:MI:SS') as primo_log,
    TO_CHAR(MAX(timestamp), 'YYYY-MM-DD HH24:MI:SS') as ultimo_log,
    EXTRACT(DAY FROM (MAX(timestamp) - MIN(timestamp))) || ' giorni' as periodo_totale,
    TO_CHAR(COUNT(*), 'FM999,999,999') as totale_records
FROM network_logs;
" 2>/dev/null

if [ $? -ne 0 ]; then
    echo "⚠️  WARNING: could not check the database data (continuing anyway...)"
fi

echo ""
echo "🚀 Starting training..."
echo ""
echo "======================================================================="

# Change directory
cd "$PYTHON_ML_DIR"

# Run training
"$VENV_PYTHON" train_hybrid.py --train --source database \
    --db-host "$DB_HOST" \
    --db-port "$DB_PORT" \
    --db-name "$DB_NAME" \
    --db-user "$DB_USER" \
    --db-password "$DB_PASSWORD" \
    --days "$DAYS"

# Check exit code
if [ $? -eq 0 ]; then
    echo ""
    echo "======================================================================="
    echo "✅ TRAINING COMPLETED SUCCESSFULLY!"
    echo "======================================================================="
    echo ""
    echo "📁 Models saved in: $PYTHON_ML_DIR/models/"
    echo ""
    echo "🔄 NEXT STEPS:"
    echo "   1. Restart the ML backend: sudo systemctl restart ids-ml-backend"
    echo "   2. Verify model loading:   sudo journalctl -u ids-ml-backend -f"
    echo "   3. Test the API:           curl http://localhost:8000/health"
    echo ""
else
    echo ""
    echo "======================================================================="
    echo "❌ ERROR DURING TRAINING"
    echo "======================================================================="
    echo ""
    echo "Check the logs above for error details."
    exit 1
fi
@ -158,20 +158,6 @@ if [ -f "./deployment/setup_rsyslog.sh" ]; then
    fi
fi

# Check and install the list-fetcher service if it is missing
echo -e "\n${BLUE}📋 Checking list-fetcher service...${NC}"
if ! systemctl list-unit-files | grep -q "ids-list-fetcher"; then
    echo -e "${YELLOW}   ids-list-fetcher service not installed, installing...${NC}"
    if [ -f "./deployment/install_list_fetcher.sh" ]; then
        chmod +x ./deployment/install_list_fetcher.sh
        ./deployment/install_list_fetcher.sh
    else
        echo -e "${RED}   ❌ install_list_fetcher.sh script not found${NC}"
    fi
else
    echo -e "${GREEN}   ✅ ids-list-fetcher service already installed${NC}"
fi

# Restart services
echo -e "\n${BLUE}🔄 Restarting services...${NC}"
if [ -f "./deployment/restart_all.sh" ]; then
6 main.py
@ -1,6 +0,0 @@
def main():
    print("Hello from repl-nix-workspace!")


if __name__ == "__main__":
    main()
@ -1,8 +0,0 @@
[project]
name = "repl-nix-workspace"
version = "0.1.0"
description = "Add your description here"
requires-python = ">=3.11"
dependencies = [
    "httpx>=0.28.1",
]
@ -1,63 +0,0 @@
#!/usr/bin/env python3
"""
IDS Auto-Blocking Script
Automatically detects and blocks IPs with risk_score >= 80
Run periodically by a systemd timer (every 5 minutes)
"""
import requests
import sys
from datetime import datetime

ML_API_URL = "http://localhost:8000"

def auto_block():
    """Run detection and automatic blocking of critical IPs"""
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    print(f"[{timestamp}] 🔍 Starting auto-block detection...")

    try:
        # Call the ML /detect endpoint with auto_block=true
        response = requests.post(
            f"{ML_API_URL}/detect",
            json={
                "max_records": 5000,      # Analyze the last 5000 logs
                "hours_back": 1.0,        # Last hour
                "risk_threshold": 80.0,   # Only critical IPs (score >= 80)
                "auto_block": True        # BLOCK AUTOMATICALLY
            },
            timeout=120  # 2 minute timeout
        )

        if response.status_code == 200:
            data = response.json()
            detections = len(data.get("detections", []))
            blocked = data.get("blocked", 0)

            if blocked > 0:
                print(f"✓ Detection complete: {detections} anomalies found, {blocked} IPs blocked")
            else:
                print(f"✓ Detection complete: {detections} anomalies found, no new IPs to block")

            return 0
        else:
            print(f"✗ API error: HTTP {response.status_code}")
            print(f"  Response: {response.text}")
            return 1

    except requests.exceptions.ConnectionError:
        print("✗ ERROR: ML Backend unreachable at http://localhost:8000")
        print("  Check that ids-ml-backend.service is active:")
        print("  sudo systemctl status ids-ml-backend")
        return 1
    except requests.exceptions.Timeout:
        print("✗ ERROR: Timeout after 120 seconds. Detection too slow?")
        return 1
    except Exception as e:
        print(f"✗ Unexpected ERROR: {type(e).__name__}: {e}")
        import traceback
        traceback.print_exc()
        return 1

if __name__ == "__main__":
    exit_code = auto_block()
    sys.exit(exit_code)
@ -1,172 +0,0 @@
|
|||||||
#!/usr/bin/env python3
|
|
||||||
"""
|
|
||||||
IDS - Cleanup Detections Script
|
|
||||||
================================
|
|
||||||
Automatizza la pulizia delle detections e lo sblocco degli IP secondo le regole:
|
|
||||||
1. Cancella detections non anomale dopo 48 ore
|
|
||||||
2. Sblocca IP bloccati se non più anomali dopo 2 ore
|
|
||||||
|
|
||||||
Esecuzione: Ogni ora via cron/systemd timer
|
|
||||||
"""
|
|
||||||
|
|
||||||
import os
|
|
||||||
import sys
|
|
||||||
import logging
|
|
||||||
from datetime import datetime, timedelta
|
|
||||||
import psycopg2
|
|
||||||
from psycopg2.extras import RealDictCursor
|
|
||||||
from dotenv import load_dotenv
|
|
||||||
|
|
||||||
# Setup logging
|
|
||||||
logging.basicConfig(
|
|
||||||
level=logging.INFO,
|
|
||||||
format='[%(asctime)s] %(levelname)s: %(message)s',
|
|
||||||
handlers=[
|
|
||||||
logging.FileHandler('/var/log/ids/cleanup.log'),
|
|
||||||
logging.StreamHandler(sys.stdout)
|
|
||||||
]
|
|
||||||
)
|
|
||||||
logger = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
# Load environment
|
|
||||||
load_dotenv()
|
|
||||||
|
|
||||||
def get_db_connection():
|
|
||||||
"""Connessione al database PostgreSQL"""
|
|
||||||
return psycopg2.connect(
|
|
||||||
host=os.getenv('PGHOST', 'localhost'),
|
|
||||||
port=int(os.getenv('PGPORT', 5432)),
|
|
||||||
user=os.getenv('PGUSER'),
|
|
||||||
password=os.getenv('PGPASSWORD'),
|
|
||||||
database=os.getenv('PGDATABASE')
|
|
||||||
)
|
|
||||||
|
|
||||||
def cleanup_old_detections(conn, hours=48):
|
|
||||||
"""
|
|
||||||
Cancella detections vecchie di più di N ore.
|
|
||||||
|
|
||||||
Logica: Se un IP è stato rilevato ma dopo 48 ore non è più
|
|
||||||
considerato anomalo (non appare in nuove detections), eliminalo.
|
|
||||||
"""
|
|
||||||
cursor = conn.cursor(cursor_factory=RealDictCursor)
|
|
||||||
|
|
||||||
cutoff_time = datetime.now() - timedelta(hours=hours)
|
|
||||||
|
|
||||||
# Conta detections da eliminare
|
|
||||||
cursor.execute("""
|
|
||||||
SELECT COUNT(*) as count
|
|
||||||
FROM detections
|
|
||||||
WHERE detected_at < %s
|
|
||||||
AND blocked = false
|
|
||||||
""", (cutoff_time,))
|
|
||||||
|
|
||||||
count = cursor.fetchone()['count']
|
|
||||||
|
|
||||||
if count > 0:
|
|
||||||
logger.info(f"Trovate {count} detections da eliminare (più vecchie di {hours}h)")
|
|
||||||
|
|
||||||
# Elimina
|
|
||||||
cursor.execute("""
|
|
||||||
DELETE FROM detections
|
|
||||||
WHERE detected_at < %s
|
|
||||||
AND blocked = false
|
|
||||||
""", (cutoff_time,))
|
|
||||||
|
|
||||||
conn.commit()
|
|
||||||
logger.info(f"✅ Eliminate {cursor.rowcount} detections vecchie")
|
|
||||||
else:
|
|
||||||
logger.info(f"Nessuna detection da eliminare (soglia: {hours}h)")
|
|
||||||
|
|
||||||
cursor.close()
|
|
||||||
return count
|
|
||||||
|
|
||||||
def unblock_old_ips(conn, hours=2):
|
|
||||||
"""
|
|
||||||
Sblocca IP bloccati da più di N ore.
|
|
||||||
|
|
||||||
Logica: Se un IP è stato bloccato ma dopo 2 ore non è più
|
|
||||||
anomalo (nessuna nuova detection), sbloccalo dal DB.
|
|
||||||
|
|
||||||
NOTA: Questo NON rimuove l'IP dalle firewall list dei router MikroTik.
|
|
||||||
Per quello serve chiamare l'API /unblock-ip del ML backend.
|
|
||||||
"""
|
|
||||||
cursor = conn.cursor(cursor_factory=RealDictCursor)
|
|
||||||
|
|
||||||
cutoff_time = datetime.now() - timedelta(hours=hours)
|
|
||||||
|
|
||||||
# Trova IP bloccati da più di N ore senza nuove detections
|
|
||||||
cursor.execute("""
|
|
||||||
SELECT d.source_ip, d.blocked_at, d.anomaly_type, d.risk_score
|
|
||||||
FROM detections d
|
|
||||||
WHERE d.blocked = true
|
|
||||||
AND d.blocked_at < %s
|
|
||||||
AND NOT EXISTS (
|
|
||||||
SELECT 1 FROM detections d2
|
|
||||||
WHERE d2.source_ip = d.source_ip
|
|
||||||
AND d2.detected_at > %s
|
|
||||||
)
|
|
||||||
""", (cutoff_time, cutoff_time))
|
|
||||||
|
|
||||||
ips_to_unblock = cursor.fetchall()
|
|
||||||
|
|
||||||
if ips_to_unblock:
|
|
||||||
logger.info(f"Trovati {len(ips_to_unblock)} IP da sbloccare (bloccati da più di {hours}h)")
|
|
||||||
|
|
||||||
for ip_data in ips_to_unblock:
|
|
||||||
ip = ip_data['source_ip']
|
|
||||||
logger.info(f" - {ip} (tipo: {ip_data['anomaly_type']}, score: {ip_data['risk_score']})")
|
|
||||||
|
|
||||||
# Aggiorna DB - SOLO i record bloccati da più di N ore
|
|
||||||
# NON sbloccate record recenti dello stesso IP!
|
|
||||||
cursor.execute("""
|
|
||||||
UPDATE detections
|
|
||||||
SET blocked = false, blocked_at = NULL
|
|
||||||
WHERE source_ip = %s
|
|
||||||
AND blocked = true
|
|
||||||
AND blocked_at < %s
|
|
||||||
""", (ip, cutoff_time))
|
|
||||||
|
|
||||||
conn.commit()
|
|
||||||
logger.info(f"✅ Sbloccati {len(ips_to_unblock)} IP nel database")
|
|
||||||
logger.warning("⚠️ ATTENZIONE: IP ancora presenti nelle firewall list MikroTik!")
|
|
||||||
logger.info("💡 Per rimuoverli dai router, usa: curl -X POST http://localhost:8000/unblock-ip -d '{\"ip_address\": \"X.X.X.X\"}'")
|
|
||||||
else:
|
|
||||||
logger.info(f"Nessun IP da sbloccare (soglia: {hours}h)")
|
|
||||||
|
|
||||||
cursor.close()
|
|
||||||
return len(ips_to_unblock)
|
|
||||||
|
|
||||||
def main():
|
|
||||||
"""Esecuzione cleanup completo"""
|
|
||||||
logger.info("=" * 60)
|
|
||||||
logger.info("CLEANUP DETECTIONS - Avvio")
|
|
||||||
logger.info("=" * 60)
|
|
||||||
|
|
||||||
try:
|
|
||||||
conn = get_db_connection()
|
|
||||||
logger.info("✅ Connesso al database")
|
|
||||||
|
|
||||||
# 1. Cleanup detections vecchie (48h)
|
|
||||||
logger.info("\n[1/2] Cleanup detections vecchie...")
|
|
||||||
deleted_count = cleanup_old_detections(conn, hours=48)
|
|
||||||
|
|
||||||
# 2. Sblocco IP vecchi (2h)
|
|
||||||
logger.info("\n[2/2] Sblocco IP vecchi...")
|
|
||||||
unblocked_count = unblock_old_ips(conn, hours=2)
|
|
||||||
|
|
||||||
conn.close()
|
|
||||||
|
|
||||||
logger.info("\n" + "=" * 60)
|
|
||||||
logger.info("CLEANUP COMPLETATO")
|
|
||||||
logger.info(f" - Detections eliminate: {deleted_count}")
|
|
||||||
logger.info(f" - IP sbloccati (DB): {unblocked_count}")
|
|
||||||
logger.info("=" * 60)
|
|
||||||
|
|
||||||
return 0
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"❌ Errore durante cleanup: {e}", exc_info=True)
|
|
||||||
return 1
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
sys.exit(main())
|
|
||||||
@ -1,265 +0,0 @@
|
|||||||
#!/usr/bin/env python3
|
|
||||||
"""
|
|
||||||
IDS Model Comparison Script
|
|
||||||
Confronta detection del vecchio modello (1.0.0) con il nuovo Hybrid Detector (2.0.0)
|
|
||||||
"""
|
|
||||||
|
|
||||||
import psycopg2
|
|
||||||
from psycopg2.extras import RealDictCursor
|
|
||||||
import pandas as pd
|
|
||||||
from datetime import datetime
|
|
||||||
import os
|
|
||||||
from dotenv import load_dotenv
|
|
||||||
from ml_hybrid_detector import MLHybridDetector
|
|
||||||
from ml_analyzer import MLAnalyzer
|
|
||||||
|
|
||||||
load_dotenv()
|
|
||||||
|
|
||||||
|
|
||||||
def get_db_connection():
|
|
||||||
"""Connect to PostgreSQL database"""
|
|
||||||
return psycopg2.connect(
|
|
||||||
host=os.getenv('PGHOST', 'localhost'),
|
|
||||||
port=os.getenv('PGPORT', 5432),
|
|
||||||
database=os.getenv('PGDATABASE', 'ids'),
|
|
||||||
user=os.getenv('PGUSER', 'postgres'),
|
|
||||||
password=os.getenv('PGPASSWORD')
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
def load_old_detections(limit=100):
|
|
||||||
"""
|
|
||||||
Carica le detection esistenti dal database
|
|
||||||
(non filtriamo per model_version perché la colonna non esiste)
|
|
||||||
"""
|
|
||||||
print("\n[1] Caricamento detection esistenti dal database...")
|
|
||||||
|
|
||||||
conn = get_db_connection()
|
|
||||||
cursor = conn.cursor(cursor_factory=RealDictCursor)
|
|
||||||
|
|
||||||
query = """
|
|
||||||
SELECT
|
|
||||||
d.id,
|
|
||||||
d.source_ip,
|
|
||||||
d.risk_score,
|
|
||||||
d.anomaly_type,
|
|
||||||
d.log_count,
|
|
||||||
d.last_seen,
|
|
||||||
d.blocked,
|
|
||||||
d.detected_at
|
|
||||||
FROM detections d
|
|
||||||
ORDER BY d.risk_score DESC
|
|
||||||
LIMIT %s
|
|
||||||
"""
|
|
||||||
|
|
||||||
cursor.execute(query, (limit,))
|
|
||||||
detections = cursor.fetchall()
|
|
||||||
cursor.close()
|
|
||||||
conn.close()
|
|
||||||
|
|
||||||
print(f" Trovate {len(detections)} detection nel database")
|
|
||||||
|
|
||||||
return detections
|
|
||||||
|
|
||||||
|
|
||||||
def get_network_logs_for_ip(ip_address, days=7):
|
|
||||||
"""
|
|
||||||
Recupera i log di rete per un IP specifico (ultimi N giorni)
|
|
||||||
"""
|
|
||||||
conn = get_db_connection()
|
|
||||||
cursor = conn.cursor(cursor_factory=RealDictCursor)
|
|
||||||
|
|
||||||
query = """
|
|
||||||
SELECT
|
|
||||||
timestamp,
|
|
||||||
source_ip,
|
|
||||||
destination_ip as dest_ip,
|
|
||||||
destination_port as dest_port,
|
|
||||||
protocol,
|
|
||||||
packet_length,
|
|
||||||
action
|
|
||||||
FROM network_logs
|
|
||||||
WHERE source_ip = %s
|
|
||||||
AND timestamp > NOW() - INTERVAL '1 day' * %s
|
|
||||||
ORDER BY timestamp DESC
|
|
||||||
LIMIT 10000
|
|
||||||
"""
|
|
||||||
|
|
||||||
cursor.execute(query, (ip_address, days))
|
|
||||||
rows = cursor.fetchall()
|
|
||||||
cursor.close()
|
|
||||||
conn.close()
|
|
||||||
|
|
||||||
return rows
|
|
||||||
|
|
||||||
|
|
||||||
def reanalyze_with_hybrid(detector, ip_address, old_detection):
|
|
||||||
"""
|
|
||||||
Rianalizza un IP con il nuovo Hybrid Detector
|
|
||||||
"""
|
|
||||||
# Recupera log per questo IP
|
|
||||||
logs = get_network_logs_for_ip(ip_address, days=7)
|
|
||||||
|
|
||||||
if not logs:
|
|
||||||
return None
|
|
||||||
|
|
||||||
df = pd.DataFrame(logs)
|
|
||||||
|
|
||||||
# Il metodo detect() fa già l'extraction delle feature internamente
|
|
||||||
# Passiamo direttamente i log grezzi
|
|
||||||
result = detector.detect(df, mode='all') # mode='all' per vedere tutti i risultati
|
|
||||||
|
|
||||||
if not result or len(result) == 0:
|
|
||||||
return None
|
|
||||||
|
|
||||||
# Il detector raggruppa per source_ip, quindi dovrebbe esserci 1 risultato
|
|
||||||
new_detection = result[0]
|
|
||||||
|
|
||||||
# Confronto
|
|
||||||
new_score = new_detection.get('risk_score', 0)
|
|
||||||
new_type = new_detection.get('anomaly_type', 'unknown')
|
|
||||||
new_confidence = new_detection.get('confidence_level', 'unknown')
|
|
||||||
|
|
||||||
# Determina se è anomalia (score >= 80 = critical threshold)
|
|
||||||
new_is_anomaly = new_score >= 80
|
|
||||||
|
|
||||||
comparison = {
|
|
||||||
'ip_address': ip_address,
|
|
||||||
'logs_count': len(logs),
|
|
||||||
|
|
||||||
# Detection corrente nel DB
|
|
||||||
'old_score': float(old_detection['risk_score']),
|
|
||||||
'old_anomaly_type': old_detection['anomaly_type'],
|
|
||||||
'old_blocked': old_detection['blocked'],
|
|
||||||
|
|
||||||
# Nuovo modello Hybrid (rianalisi)
|
|
||||||
'new_score': new_score,
|
|
||||||
'new_anomaly_type': new_type,
|
|
||||||
'new_confidence': new_confidence,
|
|
||||||
'new_is_anomaly': new_is_anomaly,
|
|
||||||
|
|
||||||
# Delta
|
|
||||||
'score_delta': new_score - float(old_detection['risk_score']),
|
|
||||||
'type_changed': old_detection['anomaly_type'] != new_type,
|
|
||||||
}
|
|
||||||
|
|
||||||
return comparison
|
|
||||||
|
|
||||||
|
|
||||||
def main():
|
|
||||||
print("\n" + "="*80)
|
|
||||||
print(" IDS MODEL COMPARISON - DB Current vs Hybrid Detector v2.0.0")
|
|
||||||
print("="*80)
|
|
||||||
|
|
||||||
# Carica detection esistenti
|
|
||||||
old_detections = load_old_detections(limit=50)
|
|
||||||
|
|
||||||
if not old_detections:
|
|
||||||
print("\n❌ Nessuna detection trovata nel database!")
|
|
||||||
return
|
|
||||||
|
|
||||||
# Carica nuovo modello Hybrid
|
|
||||||
print("\n[2] Caricamento nuovo Hybrid Detector (v2.0.0)...")
|
|
||||||
detector = MLHybridDetector(model_dir="models")
|
|
||||||
|
|
||||||
if not detector.load_models():
|
|
||||||
print("\n❌ Modelli Hybrid non trovati! Esegui prima il training:")
|
|
||||||
print(" sudo /opt/ids/deployment/run_ml_training.sh")
|
|
||||||
return
|
|
||||||
|
|
||||||
print(f" ✅ Hybrid Detector caricato (18 feature selezionate)")
|
|
||||||
|
|
||||||
# Rianalizza ogni IP con nuovo modello
|
|
||||||
print(f"\n[3] Rianalisi di {len(old_detections)} IP con nuovo modello Hybrid...")
|
|
||||||
print(" (Questo può richiedere alcuni minuti...)")
|
|
||||||
|
|
||||||
comparisons = []
|
|
||||||
|
|
||||||
for i, old_det in enumerate(old_detections):
|
|
||||||
ip = old_det['source_ip']
|
|
||||||
|
|
||||||
print(f"\n [{i+1}/{len(old_detections)}] Analisi IP: {ip}")
|
|
||||||
print(f" Current: score={float(old_det['risk_score']):.1f}, type={old_det['anomaly_type']}, blocked={old_det['blocked']}")
|
|
||||||
|
|
||||||
comparison = reanalyze_with_hybrid(detector, ip, old_det)
|
|
||||||
|
|
||||||
if comparison:
|
|
||||||
comparisons.append(comparison)
|
|
||||||
print(f" Hybrid: score={comparison['new_score']:.1f}, type={comparison['new_anomaly_type']}, confidence={comparison['new_confidence']}")
|
|
||||||
print(f" Δ: {comparison['score_delta']:+.1f} score")
|
|
||||||
else:
|
|
||||||
print(f" ⚠ Nessun log recente trovato per questo IP")
|
|
||||||
|
|
||||||
# Riepilogo
|
|
||||||
print("\n" + "="*80)
|
|
||||||
print(" RISULTATI CONFRONTO")
|
|
||||||
print("="*80)
|
|
||||||
|
|
||||||
if not comparisons:
|
|
||||||
print("\n❌ Nessun IP rianalizzato (log non disponibili)")
|
|
||||||
return
|
|
||||||
|
|
||||||
df_comp = pd.DataFrame(comparisons)
|
|
||||||
|
|
||||||
# Statistiche
|
|
||||||
print(f"\nIP rianalizzati: {len(comparisons)}/{len(old_detections)}")
|
|
||||||
print(f"\nScore medio:")
|
|
||||||
print(f" Detection correnti: {df_comp['old_score'].mean():.1f}")
|
|
||||||
print(f" Hybrid Detector: {df_comp['new_score'].mean():.1f}")
|
|
||||||
print(f" Delta medio: {df_comp['score_delta'].mean():+.1f}")
|
|
||||||
|
|
||||||
# False Positives (DB aveva score alto, Hybrid dice normale)
|
|
||||||
false_positives = df_comp[
|
|
||||||
(df_comp['old_score'] >= 80) &
|
|
||||||
(~df_comp['new_is_anomaly'])
|
|
||||||
]
|
|
||||||
|
|
||||||
print(f"\n🎯 Possibili False Positives ridotti: {len(false_positives)}")
|
|
||||||
if len(false_positives) > 0:
|
|
||||||
print("\n IP con score alto nel DB ma ritenuti normali dal Hybrid Detector:")
|
|
||||||
for _, row in false_positives.iterrows():
|
|
||||||
print(f" • {row['ip_address']} (DB={row['old_score']:.0f}, Hybrid={row['new_score']:.0f})")
|
|
||||||
|
|
||||||
# True Positives confermati
|
|
||||||
true_positives = df_comp[
|
|
||||||
(df_comp['old_score'] >= 80) &
|
|
||||||
(df_comp['new_is_anomaly'])
|
|
||||||
]
|
|
||||||
|
|
||||||
print(f"\n✅ Anomalie confermate da Hybrid Detector: {len(true_positives)}")
|
|
||||||
|
|
||||||
# Confidence breakdown (solo nuovo modello)
|
|
||||||
if 'new_confidence' in df_comp.columns:
|
|
||||||
print(f"\n📊 Confidence Level distribuzione (Hybrid Detector):")
|
|
||||||
conf_counts = df_comp['new_confidence'].value_counts()
|
|
||||||
for conf, count in conf_counts.items():
|
|
||||||
print(f" • {conf}: {count} IP")
|
|
||||||
|
|
||||||
# Type changes
|
|
||||||
type_changes = df_comp[df_comp['type_changed']]
|
|
||||||
print(f"\n🔄 IP con cambio tipo anomalia: {len(type_changes)}")
|
|
||||||
|
|
||||||
# Top 10 maggiori riduzioni score
|
|
||||||
print(f"\n📉 Top 10 riduzioni score (possibili FP corretti):")
|
|
||||||
top_reductions = df_comp.nsmallest(10, 'score_delta')
|
|
||||||
for i, row in enumerate(top_reductions.itertuples(), 1):
|
|
||||||
print(f" {i}. {row.ip_address}: {row.old_score:.0f} → {row.new_score:.0f} ({row.score_delta:+.0f})")
|
|
||||||
|
|
||||||
# Top 10 maggiori aumenti score
|
|
||||||
print(f"\n📈 Top 10 aumenti score (nuove anomalie scoperte):")
|
|
||||||
top_increases = df_comp.nlargest(10, 'score_delta')
|
|
||||||
for i, row in enumerate(top_increases.itertuples(), 1):
|
|
||||||
print(f" {i}. {row.ip_address}: {row.old_score:.0f} → {row.new_score:.0f} ({row.score_delta:+.0f})")
|
|
||||||
|
|
||||||
# Salva CSV per analisi dettagliata
|
|
||||||
output_file = f"model_comparison_{datetime.now().strftime('%Y%m%d_%H%M%S')}.csv"
|
|
||||||
df_comp.to_csv(output_file, index=False)
|
|
||||||
print(f"\n💾 Risultati completi salvati in: {output_file}")
|
|
||||||
|
|
||||||
print("\n" + "="*80)
|
|
||||||
print("✅ Confronto completato!")
|
|
||||||
print("="*80 + "\n")
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
main()
|
|
||||||
@ -1,395 +0,0 @@
|
|||||||
"""
|
|
||||||
CICIDS2017 Dataset Loader and Preprocessor
|
|
||||||
Downloads, cleans, and maps CICIDS2017 features to IDS feature space
|
|
||||||
"""
|
|
||||||
|
|
||||||
import pandas as pd
|
|
||||||
import numpy as np
|
|
||||||
from pathlib import Path
|
|
||||||
from typing import Dict, Tuple, Optional
|
|
||||||
import logging
|
|
||||||
|
|
||||||
logging.basicConfig(level=logging.INFO)
|
|
||||||
logger = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
|
|
||||||
class CICIDS2017Loader:
|
|
||||||
"""
|
|
||||||
Loads and preprocesses CICIDS2017 dataset
|
|
||||||
Maps 80 CIC features to 25 IDS features
|
|
||||||
"""
|
|
||||||
|
|
||||||
DATASET_INFO = {
|
|
||||||
'name': 'CICIDS2017',
|
|
||||||
'source': 'Canadian Institute for Cybersecurity',
|
|
||||||
'url': 'https://www.unb.ca/cic/datasets/ids-2017.html',
|
|
||||||
'size_gb': 7.8,
|
|
||||||
'files': [
|
|
||||||
'Monday-WorkingHours.pcap_ISCX.csv',
|
|
||||||
'Tuesday-WorkingHours.pcap_ISCX.csv',
|
|
||||||
'Wednesday-workingHours.pcap_ISCX.csv',
|
|
||||||
'Thursday-WorkingHours-Morning-WebAttacks.pcap_ISCX.csv',
|
|
||||||
'Thursday-WorkingHours-Afternoon-Infilteration.pcap_ISCX.csv',
|
|
||||||
'Friday-WorkingHours-Morning.pcap_ISCX.csv',
|
|
||||||
'Friday-WorkingHours-Afternoon-PortScan.pcap_ISCX.csv',
|
|
||||||
'Friday-WorkingHours-Afternoon-DDos.pcap_ISCX.csv',
|
|
||||||
]
|
|
||||||
}
|
|
||||||
|
|
||||||
# Mapping CIC feature names → IDS feature names
|
|
||||||
FEATURE_MAPPING = {
|
|
||||||
# Volume features
|
|
||||||
'Total Fwd Packets': 'total_packets',
|
|
||||||
'Total Backward Packets': 'total_packets', # Combined
|
|
||||||
'Total Length of Fwd Packets': 'total_bytes',
|
|
||||||
'Total Length of Bwd Packets': 'total_bytes', # Combined
|
|
||||||
'Flow Duration': 'time_span_seconds',
|
|
||||||
|
|
||||||
# Temporal features
|
|
||||||
'Flow Packets/s': 'conn_per_second',
|
|
||||||
'Flow Bytes/s': 'bytes_per_second',
|
|
||||||
'Fwd Packets/s': 'packets_per_conn',
|
|
||||||
|
|
||||||
# Protocol diversity
|
|
||||||
'Protocol': 'unique_protocols',
|
|
||||||
'Destination Port': 'unique_dest_ports',
|
|
||||||
|
|
||||||
# Port scanning
|
|
||||||
'Fwd PSH Flags': 'port_scan_score',
|
|
||||||
'Fwd URG Flags': 'port_scan_score',
|
|
||||||
|
|
||||||
# Behavioral
|
|
||||||
'Fwd Packet Length Mean': 'avg_packet_size',
|
|
||||||
'Fwd Packet Length Std': 'packet_size_variance',
|
|
||||||
'Bwd Packet Length Mean': 'avg_packet_size',
|
|
||||||
'Bwd Packet Length Std': 'packet_size_variance',
|
|
||||||
|
|
||||||
# Burst patterns
|
|
||||||
'Subflow Fwd Packets': 'max_burst',
|
|
||||||
'Subflow Fwd Bytes': 'burst_variance',
|
|
||||||
}
|
|
||||||
|
|
||||||
# Attack type mapping
|
|
||||||
ATTACK_LABELS = {
|
|
||||||
'BENIGN': 'normal',
|
|
||||||
'DoS Hulk': 'ddos',
|
|
||||||
'DoS GoldenEye': 'ddos',
|
|
||||||
'DoS slowloris': 'ddos',
|
|
||||||
'DoS Slowhttptest': 'ddos',
|
|
||||||
'DDoS': 'ddos',
|
|
||||||
'PortScan': 'port_scan',
|
|
||||||
'FTP-Patator': 'brute_force',
|
|
||||||
'SSH-Patator': 'brute_force',
|
|
||||||
'Bot': 'botnet',
|
|
||||||
'Web Attack – Brute Force': 'brute_force',
|
|
||||||
'Web Attack – XSS': 'suspicious',
|
|
||||||
'Web Attack – Sql Injection': 'suspicious',
|
|
||||||
'Infiltration': 'suspicious',
|
|
||||||
'Heartbleed': 'suspicious',
|
|
||||||
}
|
|
||||||
|
|
||||||
def __init__(self, data_dir: str = "datasets/cicids2017"):
|
|
||||||
self.data_dir = Path(data_dir)
|
|
||||||
self.data_dir.mkdir(parents=True, exist_ok=True)
|
|
||||||
|
|
||||||
def download_instructions(self) -> str:
|
|
||||||
"""Return download instructions for CICIDS2017"""
|
|
||||||
instructions = f"""
|
|
||||||
╔══════════════════════════════════════════════════════════════════╗
|
|
||||||
║ CICIDS2017 Dataset Download Instructions ║
|
|
||||||
╚══════════════════════════════════════════════════════════════════╝
|
|
||||||
|
|
||||||
Dataset: {self.DATASET_INFO['name']}
|
|
||||||
Source: {self.DATASET_INFO['source']}
|
|
||||||
Size: {self.DATASET_INFO['size_gb']} GB
|
|
||||||
URL: {self.DATASET_INFO['url']}
|
|
||||||
|
|
||||||
MANUAL DOWNLOAD (Recommended):
|
|
||||||
1. Visit: {self.DATASET_INFO['url']}
|
|
||||||
2. Register/Login (free account required)
|
|
||||||
3. Download CSV files for all days (Monday-Friday)
|
|
||||||
4. Extract to: {self.data_dir.absolute()}
|
|
||||||
|
|
||||||
Expected files:
|
|
||||||
"""
|
|
||||||
for i, fname in enumerate(self.DATASET_INFO['files'], 1):
|
|
||||||
instructions += f" {i}. {fname}\n"
|
|
||||||
|
|
||||||
instructions += f"\nAfter download, run: python_ml/train_hybrid.py --validate\n"
|
|
||||||
instructions += "=" * 66
|
|
||||||
|
|
||||||
return instructions
|
|
||||||
|
|
||||||
def check_dataset_exists(self) -> Tuple[bool, list]:
|
|
||||||
"""Check if dataset files exist"""
|
|
||||||
missing_files = []
|
|
||||||
for fname in self.DATASET_INFO['files']:
|
|
||||||
fpath = self.data_dir / fname
|
|
||||||
if not fpath.exists():
|
|
||||||
missing_files.append(fname)
|
|
||||||
|
|
||||||
exists = len(missing_files) == 0
|
|
||||||
return exists, missing_files
|
|
||||||
|
|
||||||
def load_day(self, day_file: str, sample_frac: float = 1.0) -> pd.DataFrame:
|
|
||||||
"""
|
|
||||||
Load single day CSV file
|
|
||||||
sample_frac: fraction to sample (0.1 = 10% for testing)
|
|
||||||
"""
|
|
||||||
fpath = self.data_dir / day_file
|
|
||||||
|
|
||||||
if not fpath.exists():
|
|
||||||
raise FileNotFoundError(f"Dataset file not found: {fpath}")
|
|
||||||
|
|
||||||
logger.info(f"Loading {day_file}...")
|
|
||||||
|
|
||||||
# CICIDS2017 has known issues: extra space before column names, inf values
|
|
||||||
df = pd.read_csv(fpath, skipinitialspace=True)
|
|
||||||
|
|
||||||
# Strip whitespace from column names
|
|
||||||
df.columns = df.columns.str.strip()
|
|
||||||
|
|
||||||
# Sample if requested
|
|
||||||
if sample_frac < 1.0:
|
|
||||||
df = df.sample(frac=sample_frac, random_state=42)
|
|
||||||
logger.info(f"Sampled {len(df)} rows ({sample_frac*100:.0f}%)")
|
|
||||||
|
|
||||||
return df
|
|
||||||
|
|
||||||
def preprocess(self, df: pd.DataFrame) -> pd.DataFrame:
|
|
||||||
"""
|
|
||||||
Clean and preprocess CICIDS2017 data
|
|
||||||
- Remove NaN and Inf values
|
|
||||||
- Fix data types
|
|
||||||
- Map labels
|
|
||||||
"""
|
|
||||||
logger.info(f"Preprocessing {len(df)} rows...")
|
|
||||||
|
|
||||||
# Replace inf with NaN, then drop
|
|
||||||
df = df.replace([np.inf, -np.inf], np.nan)
|
|
||||||
df = df.dropna()
|
|
||||||
|
|
||||||
# Map attack labels
|
|
||||||
if ' Label' in df.columns:
|
|
||||||
df['attack_type'] = df[' Label'].map(self.ATTACK_LABELS)
|
|
||||||
df['is_attack'] = (df['attack_type'] != 'normal').astype(int)
|
|
||||||
elif 'Label' in df.columns:
|
|
||||||
df['attack_type'] = df['Label'].map(self.ATTACK_LABELS)
|
|
||||||
df['is_attack'] = (df['attack_type'] != 'normal').astype(int)
|
|
||||||
else:
|
|
||||||
logger.warning("No label column found, assuming all BENIGN")
|
|
||||||
df['attack_type'] = 'normal'
|
|
||||||
df['is_attack'] = 0
|
|
||||||
|
|
||||||
# Remove unknown attack types
|
|
||||||
df = df[df['attack_type'].notna()]
|
|
||||||
|
|
||||||
logger.info(f"After preprocessing: {len(df)} rows")
|
|
||||||
logger.info(f"Attack distribution:\n{df['attack_type'].value_counts()}")
|
|
||||||
|
|
||||||
return df
|
|
||||||
|
|
||||||
def map_to_ids_features(self, df: pd.DataFrame) -> pd.DataFrame:
|
|
||||||
"""
|
|
||||||
Map 80 CICIDS2017 features → 25 IDS features
|
|
||||||
This is approximate mapping for validation purposes
|
|
||||||
"""
|
|
||||||
logger.info("Mapping CICIDS features to IDS feature space...")
|
|
||||||
|
|
||||||
ids_features = {}
|
|
||||||
|
|
||||||
# Volume features (combine fwd+bwd)
|
|
||||||
ids_features['total_packets'] = (
|
|
||||||
df.get('Total Fwd Packets', 0) +
|
|
||||||
df.get('Total Backward Packets', 0)
|
|
||||||
)
|
|
||||||
ids_features['total_bytes'] = (
|
|
||||||
df.get('Total Length of Fwd Packets', 0) +
|
|
||||||
df.get('Total Length of Bwd Packets', 0)
|
|
||||||
)
|
|
||||||
ids_features['conn_count'] = 1 # Each row = 1 flow
|
|
||||||
ids_features['avg_packet_size'] = df.get('Fwd Packet Length Mean', 0)
|
|
||||||
ids_features['bytes_per_second'] = df.get('Flow Bytes/s', 0)
|
|
||||||
|
|
||||||
# Temporal features
|
|
||||||
ids_features['time_span_seconds'] = df.get('Flow Duration', 0) / 1_000_000 # Microseconds to seconds
|
|
||||||
ids_features['conn_per_second'] = df.get('Flow Packets/s', 0)
|
|
||||||
ids_features['hour_of_day'] = 12 # Unknown, use midday
|
|
||||||
ids_features['day_of_week'] = 3 # Unknown, use Wednesday
|
|
||||||
|
|
||||||
# Burst detection (approximate)
|
|
||||||
ids_features['max_burst'] = df.get('Subflow Fwd Packets', 0)
|
|
||||||
ids_features['avg_burst'] = df.get('Subflow Fwd Packets', 0)
|
|
||||||
ids_features['burst_variance'] = df.get('Subflow Fwd Bytes', 0).apply(lambda x: max(0, x))
|
|
||||||
ids_features['avg_interval'] = 1.0 # Unknown
|
|
||||||
|
|
||||||
# Protocol diversity
|
|
||||||
ids_features['unique_protocols'] = 1 # Each row = single protocol
|
|
||||||
ids_features['unique_dest_ports'] = 1
|
|
||||||
ids_features['unique_dest_ips'] = 1
|
|
||||||
ids_features['protocol_entropy'] = 0
|
|
||||||
ids_features['tcp_ratio'] = (df.get('Protocol', 6) == 6).astype(int)
|
|
||||||
ids_features['udp_ratio'] = (df.get('Protocol', 17) == 17).astype(int)
|
|
||||||
|
|
||||||
# Port scanning detection
|
|
||||||
ids_features['unique_ports_contacted'] = df.get('Destination Port', 0).apply(lambda x: 1 if x > 0 else 0)
|
|
||||||
ids_features['port_scan_score'] = (df.get('Fwd PSH Flags', 0) + df.get('Fwd URG Flags', 0)) / 2
|
|
||||||
ids_features['sequential_ports'] = 0
|
|
||||||
|
|
||||||
# Behavioral anomalies
|
|
||||||
ids_features['packets_per_conn'] = ids_features['total_packets']
|
|
||||||
ids_features['packet_size_variance'] = df.get('Fwd Packet Length Std', 0)
|
|
||||||
ids_features['blocked_ratio'] = 0
|
|
||||||
|
|
||||||
# Add labels
|
|
||||||
ids_features['attack_type'] = df['attack_type']
|
|
||||||
ids_features['is_attack'] = df['is_attack']
|
|
||||||
|
|
||||||
# Add synthetic source_ip for validation (CICIDS doesn't have this field)
|
|
||||||
# Generate unique IPs: 10.0.x.y format
|
|
||||||
n_samples = len(df)
|
|
||||||
source_ips = [f"10.0.{i//256}.{i%256}" for i in range(n_samples)]
|
|
||||||
ids_features['source_ip'] = source_ips
|
|
||||||
|
|
||||||
ids_df = pd.DataFrame(ids_features)
|
|
||||||
|
|
||||||
# Clip negative values
|
|
||||||
numeric_cols = ids_df.select_dtypes(include=[np.number]).columns
|
|
||||||
ids_df[numeric_cols] = ids_df[numeric_cols].clip(lower=0)
|
|
||||||
|
|
||||||
logger.info(f"Mapped to {len(ids_df.columns)} IDS features")
|
|
||||||
return ids_df
|
|
||||||
|
|
||||||
def load_and_process_all(
|
|
||||||
self,
|
|
||||||
sample_frac: float = 1.0,
|
|
||||||
train_ratio: float = 0.7,
|
|
||||||
val_ratio: float = 0.15
|
|
||||||
) -> Tuple[pd.DataFrame, pd.DataFrame, pd.DataFrame]:
|
|
||||||
"""
|
|
||||||
Load all days, preprocess, map to IDS features, and split
|
|
||||||
Returns: train_df, val_df, test_df
|
|
||||||
"""
|
|
||||||
exists, missing = self.check_dataset_exists()
|
|
||||||
if not exists:
|
|
||||||
raise FileNotFoundError(
|
|
||||||
f"Missing dataset files: {missing}\n\n"
|
|
||||||
f"{self.download_instructions()}"
|
|
||||||
)
|
|
||||||
|
|
||||||
all_data = []
|
|
||||||
for fname in self.DATASET_INFO['files']:
|
|
||||||
try:
|
|
||||||
df = self.load_day(fname, sample_frac=sample_frac)
|
|
||||||
df = self.preprocess(df)
|
|
||||||
df_ids = self.map_to_ids_features(df)
|
|
||||||
all_data.append(df_ids)
|
|
||||||
except Exception as e:
|
|
||||||
logger.error(f"Failed to load {fname}: {e}")
|
|
||||||
continue
|
|
||||||
|
|
||||||
if not all_data:
|
|
||||||
raise ValueError("No data loaded successfully")
|
|
||||||
|
|
||||||
# Combine all days
|
|
||||||
combined = pd.concat(all_data, ignore_index=True)
|
|
||||||
logger.info(f"Combined dataset: {len(combined)} rows")
|
|
||||||
|
|
||||||
# Shuffle
|
|
||||||
combined = combined.sample(frac=1, random_state=42).reset_index(drop=True)
|
|
||||||
|
|
||||||
# Split train/val/test
|
|
||||||
n = len(combined)
|
|
||||||
n_train = int(n * train_ratio)
|
|
||||||
n_val = int(n * val_ratio)
|
|
||||||
|
|
||||||
train_df = combined.iloc[:n_train]
|
|
||||||
val_df = combined.iloc[n_train:n_train+n_val]
|
|
||||||
test_df = combined.iloc[n_train+n_val:]
|
|
||||||
|
|
||||||
logger.info(f"Split: train={len(train_df)}, val={len(val_df)}, test={len(test_df)}")
|
|
||||||
|
|
||||||
return train_df, val_df, test_df
|
|
||||||
|
|
||||||
def create_sample_dataset(self, n_samples: int = 10000) -> pd.DataFrame:
|
|
||||||
"""
|
|
||||||
Create synthetic sample dataset for testing
|
|
||||||
Mimics CICIDS2017 structure
|
|
||||||
"""
|
|
||||||
logger.info(f"Creating sample dataset ({n_samples} samples)...")
|
|
||||||
|
|
||||||
np.random.seed(42)
|
|
||||||
|
|
||||||
# Generate synthetic features
|
|
||||||
data = {
|
|
||||||
'total_packets': np.random.lognormal(3, 1.5, n_samples).astype(int),
|
|
||||||
'total_bytes': np.random.lognormal(8, 2, n_samples).astype(int),
|
|
||||||
'conn_count': np.ones(n_samples, dtype=int),
|
|
||||||
'avg_packet_size': np.random.normal(500, 200, n_samples),
|
|
||||||
'bytes_per_second': np.random.lognormal(6, 2, n_samples),
|
|
||||||
'time_span_seconds': np.random.exponential(10, n_samples),
|
|
||||||
'conn_per_second': np.random.exponential(5, n_samples),
|
|
||||||
'hour_of_day': np.random.randint(0, 24, n_samples),
|
|
||||||
'day_of_week': np.random.randint(0, 7, n_samples),
|
|
||||||
'max_burst': np.random.poisson(20, n_samples),
|
|
||||||
'avg_burst': np.random.poisson(15, n_samples),
|
|
||||||
'burst_variance': np.random.exponential(5, n_samples),
|
|
||||||
'avg_interval': np.random.exponential(0.1, n_samples),
|
|
||||||
'unique_protocols': np.ones(n_samples, dtype=int),
|
|
||||||
'unique_dest_ports': np.ones(n_samples, dtype=int),
|
|
||||||
'unique_dest_ips': np.ones(n_samples, dtype=int),
|
|
||||||
'protocol_entropy': np.zeros(n_samples),
|
|
||||||
'tcp_ratio': np.random.choice([0, 1], n_samples, p=[0.3, 0.7]),
|
|
||||||
'udp_ratio': np.random.choice([0, 1], n_samples, p=[0.7, 0.3]),
|
|
||||||
'unique_ports_contacted': np.ones(n_samples, dtype=int),
|
|
||||||
'port_scan_score': np.random.beta(1, 10, n_samples),
|
|
||||||
'sequential_ports': np.zeros(n_samples, dtype=int),
|
|
||||||
'packets_per_conn': np.random.lognormal(3, 1.5, n_samples),
|
|
||||||
'packet_size_variance': np.random.exponential(100, n_samples),
|
|
||||||
'blocked_ratio': np.zeros(n_samples),
|
|
||||||
}
|
|
||||||
|
|
||||||
# Generate labels: 90% normal, 10% attacks
|
|
||||||
is_attack = np.random.choice([0, 1], n_samples, p=[0.9, 0.1])
|
|
||||||
attack_types = np.where(
|
|
||||||
is_attack == 1,
|
|
||||||
np.random.choice(['ddos', 'port_scan', 'brute_force', 'suspicious'], n_samples),
|
|
||||||
'normal'
|
|
||||||
)
|
|
||||||
|
|
||||||
data['is_attack'] = is_attack
|
|
||||||
data['attack_type'] = attack_types
|
|
||||||
|
|
||||||
# Add synthetic source_ip (simulate real traffic from 100 unique IPs)
|
|
||||||
unique_ips = [f"192.168.{i//256}.{i%256}" for i in range(100)]
|
|
||||||
data['source_ip'] = np.random.choice(unique_ips, n_samples)
|
|
||||||
|
|
||||||
# Add timestamp column (simulate last 7 days of traffic)
|
|
||||||
from datetime import datetime, timedelta
|
|
||||||
now = datetime.now()
|
|
||||||
start_time = now - timedelta(days=7)
|
|
||||||
|
|
||||||
# Generate timestamps randomly distributed over last 7 days
|
|
||||||
time_range_seconds = 7 * 24 * 3600 # 7 days in seconds
|
|
||||||
random_offsets = np.random.uniform(0, time_range_seconds, n_samples)
|
|
||||||
timestamps = [start_time + timedelta(seconds=offset) for offset in random_offsets]
|
|
||||||
data['timestamp'] = timestamps
|
|
||||||
|
|
||||||
df = pd.DataFrame(data)
|
|
||||||
|
|
||||||
# Make attacks more extreme
|
|
||||||
attack_mask = df['is_attack'] == 1
|
|
||||||
df.loc[attack_mask, 'total_packets'] *= 10
|
|
||||||
df.loc[attack_mask, 'total_bytes'] *= 15
|
|
||||||
df.loc[attack_mask, 'conn_per_second'] *= 20
|
|
||||||
|
|
||||||
logger.info(f"Sample dataset created: {len(df)} rows")
|
|
||||||
logger.info(f"Attack distribution:\n{df['attack_type'].value_counts()}")
|
|
||||||
|
|
||||||
return df
|
|
||||||
|
|
||||||
|
|
||||||
# Utility function
|
|
||||||
def get_cicids2017_loader(data_dir: str = "datasets/cicids2017") -> CICIDS2017Loader:
|
|
||||||
"""Factory function to get loader instance"""
|
|
||||||
return CICIDS2017Loader(data_dir)
|
|
||||||
@ -1,2 +0,0 @@
|
|||||||
# Public Lists Fetcher Module
|
|
||||||
# Handles download, parsing, and sync of public blacklist/whitelist sources
|
|
||||||
@ -1,401 +0,0 @@
|
|||||||
import asyncio
|
|
||||||
import httpx
|
|
||||||
from datetime import datetime
|
|
||||||
from typing import Dict, List, Set, Tuple, Optional
|
|
||||||
import psycopg2
|
|
||||||
from psycopg2.extras import execute_values
|
|
||||||
import os
|
|
||||||
import sys
|
|
||||||
|
|
||||||
# Add parent directory to path for imports
|
|
||||||
sys.path.append(os.path.dirname(os.path.dirname(__file__)))
|
|
||||||
|
|
||||||
from list_fetcher.parsers import parse_list
|
|
||||||
|
|
||||||
|
|
||||||
class ListFetcher:
|
|
||||||
"""Fetches and synchronizes public IP lists"""
|
|
||||||
|
|
||||||
def __init__(self, database_url: str):
|
|
||||||
self.database_url = database_url
|
|
||||||
self.timeout = 30.0
|
|
||||||
self.max_retries = 3
|
|
||||||
|
|
||||||
def get_db_connection(self):
|
|
||||||
"""Create database connection"""
|
|
||||||
return psycopg2.connect(self.database_url)
|
|
||||||
|
|
||||||
async def fetch_url(self, url: str) -> Optional[str]:
|
|
||||||
"""Download content from URL with retry logic"""
|
|
||||||
async with httpx.AsyncClient(timeout=self.timeout, follow_redirects=True) as client:
|
|
||||||
for attempt in range(self.max_retries):
|
|
||||||
try:
|
|
||||||
response = await client.get(url)
|
|
||||||
response.raise_for_status()
|
|
||||||
return response.text
|
|
||||||
except httpx.HTTPError as e:
|
|
||||||
if attempt == self.max_retries - 1:
|
|
||||||
raise Exception(f"HTTP error after {self.max_retries} attempts: {e}")
|
|
||||||
await asyncio.sleep(2 ** attempt) # Exponential backoff
|
|
||||||
except Exception as e:
|
|
||||||
if attempt == self.max_retries - 1:
|
|
||||||
raise Exception(f"Download failed: {e}")
|
|
||||||
await asyncio.sleep(2 ** attempt)
|
|
||||||
return None
|
|
||||||
|
|
||||||
def get_enabled_lists(self) -> List[Dict]:
|
|
||||||
"""Get all enabled public lists from database"""
|
|
||||||
conn = self.get_db_connection()
|
|
||||||
try:
|
|
||||||
with conn.cursor() as cur:
|
|
||||||
cur.execute("""
|
|
||||||
SELECT id, name, type, url, fetch_interval_minutes
|
|
||||||
FROM public_lists
|
|
||||||
WHERE enabled = true
|
|
||||||
ORDER BY type, name
|
|
||||||
""")
|
|
||||||
if cur.description is None:
|
|
||||||
return []
|
|
||||||
columns = [desc[0] for desc in cur.description]
|
|
||||||
return [dict(zip(columns, row)) for row in cur.fetchall()]
|
|
||||||
finally:
|
|
||||||
conn.close()
|
|
||||||
|
|
||||||
def get_existing_ips(self, list_id: str, list_type: str) -> Set[str]:
|
|
||||||
"""Get existing IPs for a list from database"""
|
|
||||||
conn = self.get_db_connection()
|
|
||||||
try:
|
|
||||||
with conn.cursor() as cur:
|
|
||||||
if list_type == 'blacklist':
|
|
||||||
cur.execute("""
|
|
||||||
SELECT ip_address
|
|
||||||
FROM public_blacklist_ips
|
|
||||||
WHERE list_id = %s AND is_active = true
|
|
||||||
""", (list_id,))
|
|
||||||
else: # whitelist
|
|
||||||
cur.execute("""
|
|
||||||
SELECT ip_address
|
|
||||||
FROM whitelist
|
|
||||||
WHERE list_id = %s AND active = true
|
|
||||||
""", (list_id,))
|
|
||||||
|
|
||||||
return {row[0] for row in cur.fetchall()}
|
|
||||||
finally:
|
|
||||||
conn.close()
|
|
||||||
|
|
||||||
def sync_blacklist_ips(self, list_id: str, new_ips: Set[Tuple[str, Optional[str]]]):
|
|
||||||
"""Sync blacklist IPs: add new, mark inactive old ones"""
|
|
||||||
conn = self.get_db_connection()
|
|
||||||
try:
|
|
||||||
with conn.cursor() as cur:
|
|
||||||
# Get existing IPs
|
|
||||||
existing = self.get_existing_ips(list_id, 'blacklist')
|
|
||||||
new_ip_addresses = {ip for ip, _ in new_ips}
|
|
||||||
|
|
||||||
# Calculate diff
|
|
||||||
to_add = new_ip_addresses - existing
|
|
||||||
to_deactivate = existing - new_ip_addresses
|
|
||||||
to_update = existing & new_ip_addresses
|
|
||||||
|
|
||||||
# Mark old IPs as inactive
|
|
||||||
if to_deactivate:
|
|
||||||
cur.execute("""
|
|
||||||
UPDATE public_blacklist_ips
|
|
||||||
SET is_active = false
|
|
||||||
WHERE list_id = %s AND ip_address = ANY(%s)
|
|
||||||
""", (list_id, list(to_deactivate)))
|
|
||||||
|
|
||||||
# Update last_seen for existing active IPs
|
|
||||||
if to_update:
|
|
||||||
cur.execute("""
|
|
||||||
UPDATE public_blacklist_ips
|
|
||||||
SET last_seen = NOW()
|
|
||||||
WHERE list_id = %s AND ip_address = ANY(%s)
|
|
||||||
""", (list_id, list(to_update)))
|
|
||||||
|
|
||||||
# Add new IPs with INET/CIDR support
|
|
||||||
if to_add:
|
|
||||||
values = []
|
|
||||||
for ip, cidr in new_ips:
|
|
||||||
if ip in to_add:
|
|
||||||
# Compute INET values for CIDR matching
|
|
||||||
cidr_inet = cidr if cidr else f"{ip}/32"
|
|
||||||
values.append((ip, cidr, ip, cidr_inet, list_id))
|
|
||||||
|
|
||||||
execute_values(cur, """
|
|
||||||
INSERT INTO public_blacklist_ips
|
|
||||||
(ip_address, cidr_range, ip_inet, cidr_inet, list_id)
|
|
||||||
VALUES %s
|
|
||||||
ON CONFLICT (ip_address, list_id) DO UPDATE
|
|
||||||
SET is_active = true, last_seen = NOW(),
|
|
||||||
ip_inet = EXCLUDED.ip_inet,
|
|
||||||
cidr_inet = EXCLUDED.cidr_inet
|
|
||||||
""", values)
|
|
||||||
|
|
||||||
# Update list stats
|
|
||||||
cur.execute("""
|
|
||||||
UPDATE public_lists
|
|
||||||
SET total_ips = %s,
|
|
||||||
active_ips = %s,
|
|
||||||
last_success = NOW()
|
|
||||||
WHERE id = %s
|
|
||||||
""", (len(new_ip_addresses), len(new_ip_addresses), list_id))
|
|
||||||
|
|
||||||
conn.commit()
|
|
||||||
return len(to_add), len(to_deactivate), len(to_update)
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
conn.rollback()
|
|
||||||
raise e
|
|
||||||
finally:
|
|
||||||
conn.close()
|
|
||||||
|
|
||||||
def sync_whitelist_ips(self, list_id: str, list_name: str, new_ips: Set[Tuple[str, Optional[str]]]):
|
|
||||||
"""Sync whitelist IPs: add new, deactivate old ones"""
|
|
||||||
conn = self.get_db_connection()
|
|
||||||
try:
|
|
||||||
with conn.cursor() as cur:
|
|
||||||
# Get existing IPs
|
|
||||||
existing = self.get_existing_ips(list_id, 'whitelist')
|
|
||||||
new_ip_addresses = {ip for ip, _ in new_ips}
|
|
||||||
|
|
||||||
# Calculate diff
|
|
||||||
to_add = new_ip_addresses - existing
|
|
||||||
to_deactivate = existing - new_ip_addresses
|
|
||||||
to_update = existing & new_ip_addresses
|
|
||||||
|
|
||||||
# Determine source name from list name
|
|
||||||
source = 'other'
|
|
||||||
list_lower = list_name.lower()
|
|
||||||
if 'aws' in list_lower:
|
|
||||||
source = 'aws'
|
|
||||||
elif 'gcp' in list_lower or 'google' in list_lower:
|
|
||||||
source = 'gcp'
|
|
||||||
elif 'cloudflare' in list_lower:
|
|
||||||
source = 'cloudflare'
|
|
||||||
elif 'iana' in list_lower:
|
|
||||||
source = 'iana'
|
|
||||||
elif 'ntp' in list_lower:
|
|
||||||
source = 'ntp'
|
|
||||||
|
|
||||||
# Mark old IPs as inactive
|
|
||||||
if to_deactivate:
|
|
||||||
cur.execute("""
|
|
||||||
UPDATE whitelist
|
|
||||||
SET active = false
|
|
||||||
WHERE list_id = %s AND ip_address = ANY(%s)
|
|
||||||
""", (list_id, list(to_deactivate)))
|
|
||||||
|
|
||||||
# Add new IPs with INET support for CIDR matching
|
|
||||||
if to_add:
|
|
||||||
values = []
|
|
||||||
for ip, cidr in new_ips:
|
|
||||||
if ip in to_add:
|
|
||||||
comment = f"Auto-imported from {list_name}"
|
|
||||||
if cidr:
|
|
||||||
comment += f" (CIDR: {cidr})"
|
|
||||||
# Compute ip_inet for CIDR-aware whitelisting
|
|
||||||
ip_inet = cidr if cidr else ip
|
|
||||||
values.append((ip, ip_inet, comment, source, list_id))
|
|
||||||
|
|
||||||
execute_values(cur, """
|
|
||||||
INSERT INTO whitelist (ip_address, ip_inet, comment, source, list_id)
|
|
||||||
VALUES %s
|
|
||||||
ON CONFLICT (ip_address) DO UPDATE
|
|
||||||
SET active = true,
|
|
||||||
ip_inet = EXCLUDED.ip_inet,
|
|
||||||
source = EXCLUDED.source,
|
|
||||||
list_id = EXCLUDED.list_id
|
|
||||||
""", values)
|
|
||||||
|
|
||||||
# Update list stats
|
|
||||||
cur.execute("""
|
|
||||||
UPDATE public_lists
|
|
||||||
SET total_ips = %s,
|
|
||||||
active_ips = %s,
|
|
||||||
last_success = NOW()
|
|
||||||
WHERE id = %s
|
|
||||||
""", (len(new_ip_addresses), len(new_ip_addresses), list_id))
|
|
||||||
|
|
||||||
conn.commit()
|
|
||||||
return len(to_add), len(to_deactivate), len(to_update)
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
conn.rollback()
|
|
||||||
raise e
|
|
||||||
finally:
|
|
||||||
conn.close()
|
|
||||||
|
|
||||||
async def fetch_and_sync_list(self, list_config: Dict) -> Dict:
|
|
||||||
"""Fetch and sync a single list"""
|
|
||||||
list_id = list_config['id']
|
|
||||||
list_name = list_config['name']
|
|
||||||
list_type = list_config['type']
|
|
||||||
url = list_config['url']
|
|
||||||
|
|
||||||
result = {
|
|
||||||
'list_id': list_id,
|
|
||||||
'list_name': list_name,
|
|
||||||
'success': False,
|
|
||||||
'added': 0,
|
|
||||||
'removed': 0,
|
|
||||||
'updated': 0,
|
|
||||||
'error': None
|
|
||||||
}
|
|
||||||
|
|
||||||
conn = self.get_db_connection()
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Update last_fetch timestamp
|
|
||||||
with conn.cursor() as cur:
|
|
||||||
cur.execute("""
|
|
||||||
UPDATE public_lists
|
|
||||||
SET last_fetch = NOW()
|
|
||||||
WHERE id = %s
|
|
||||||
""", (list_id,))
|
|
||||||
conn.commit()
|
|
||||||
|
|
||||||
# Download content
|
|
||||||
print(f"[{datetime.now().strftime('%H:%M:%S')}] Downloading {list_name} from {url}...")
|
|
||||||
content = await self.fetch_url(url)
|
|
||||||
|
|
||||||
if not content:
|
|
||||||
raise Exception("Empty response from server")
|
|
||||||
|
|
||||||
# Parse IPs
|
|
||||||
print(f"[{datetime.now().strftime('%H:%M:%S')}] Parsing {list_name}...")
|
|
||||||
ips = parse_list(list_name, content)
|
|
||||||
|
|
||||||
if not ips:
|
|
||||||
raise Exception("No valid IPs found in list")
|
|
||||||
|
|
||||||
print(f"[{datetime.now().strftime('%H:%M:%S')}] Found {len(ips)} IPs, syncing to database...")
|
|
||||||
|
|
||||||
# Sync to database
|
|
||||||
if list_type == 'blacklist':
|
|
||||||
added, removed, updated = self.sync_blacklist_ips(list_id, ips)
|
|
||||||
else:
|
|
||||||
added, removed, updated = self.sync_whitelist_ips(list_id, list_name, ips)
|
|
||||||
|
|
||||||
result.update({
|
|
||||||
'success': True,
|
|
||||||
'added': added,
|
|
||||||
'removed': removed,
|
|
||||||
'updated': updated
|
|
||||||
})
|
|
||||||
|
|
||||||
print(f"[{datetime.now().strftime('%H:%M:%S')}] ✓ {list_name}: +{added} -{removed} ~{updated}")
|
|
||||||
|
|
||||||
# Reset error count on success
|
|
||||||
with conn.cursor() as cur:
|
|
||||||
cur.execute("""
|
|
||||||
UPDATE public_lists
|
|
||||||
SET error_count = 0, last_error = NULL
|
|
||||||
WHERE id = %s
|
|
||||||
""", (list_id,))
|
|
||||||
conn.commit()
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
error_msg = str(e)
|
|
||||||
result['error'] = error_msg
|
|
||||||
print(f"[{datetime.now().strftime('%H:%M:%S')}] ✗ {list_name}: {error_msg}")
|
|
||||||
|
|
||||||
# Increment error count
|
|
||||||
with conn.cursor() as cur:
|
|
||||||
cur.execute("""
|
|
||||||
UPDATE public_lists
|
|
||||||
SET error_count = error_count + 1,
|
|
||||||
last_error = %s
|
|
||||||
WHERE id = %s
|
|
||||||
""", (error_msg[:500], list_id))
|
|
||||||
conn.commit()
|
|
||||||
|
|
||||||
finally:
|
|
||||||
conn.close()
|
|
||||||
|
|
||||||
return result
|
|
||||||
|
|
||||||
async def fetch_all_lists(self) -> List[Dict]:
|
|
||||||
"""Fetch and sync all enabled lists"""
|
|
||||||
print(f"\n{'='*60}")
|
|
||||||
print(f"[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')}] PUBLIC LISTS SYNC")
|
|
||||||
print(f"{'='*60}\n")
|
|
||||||
|
|
||||||
# Get enabled lists
|
|
||||||
lists = self.get_enabled_lists()
|
|
||||||
|
|
||||||
if not lists:
|
|
||||||
print("No enabled lists found")
|
|
||||||
return []
|
|
||||||
|
|
||||||
print(f"Found {len(lists)} enabled lists\n")
|
|
||||||
|
|
||||||
# Fetch all lists in parallel
|
|
||||||
tasks = [self.fetch_and_sync_list(list_config) for list_config in lists]
|
|
||||||
results = await asyncio.gather(*tasks, return_exceptions=True)
|
|
||||||
|
|
||||||
# Summary
|
|
||||||
print(f"\n{'='*60}")
|
|
||||||
print("SYNC SUMMARY")
|
|
||||||
print(f"{'='*60}")
|
|
||||||
|
|
||||||
success_count = sum(1 for r in results if isinstance(r, dict) and r.get('success'))
|
|
||||||
error_count = len(results) - success_count
|
|
||||||
total_added = sum(r.get('added', 0) for r in results if isinstance(r, dict))
|
|
||||||
total_removed = sum(r.get('removed', 0) for r in results if isinstance(r, dict))
|
|
||||||
|
|
||||||
print(f"Success: {success_count}/{len(results)}")
|
|
||||||
print(f"Errors: {error_count}/{len(results)}")
|
|
||||||
print(f"Total IPs Added: {total_added}")
|
|
||||||
print(f"Total IPs Removed: {total_removed}")
|
|
||||||
print(f"{'='*60}\n")
|
|
||||||
|
|
||||||
return [r for r in results if isinstance(r, dict)]
|
|
||||||
|
|
||||||
|
|
||||||
async def main():
|
|
||||||
"""Main entry point for list fetcher"""
|
|
||||||
database_url = os.getenv('DATABASE_URL')
|
|
||||||
|
|
||||||
if not database_url:
|
|
||||||
print("ERROR: DATABASE_URL environment variable not set")
|
|
||||||
return 1
|
|
||||||
|
|
||||||
fetcher = ListFetcher(database_url)
|
|
||||||
|
|
||||||
try:
|
|
||||||
# Fetch and sync all lists
|
|
||||||
await fetcher.fetch_all_lists()
|
|
||||||
|
|
||||||
# Run merge logic to sync detections with blacklist/whitelist priority
|
|
||||||
print("\n" + "="*60)
|
|
||||||
print("RUNNING MERGE LOGIC")
|
|
||||||
print("="*60 + "\n")
|
|
||||||
|
|
||||||
# Import merge logic (avoid circular imports)
|
|
||||||
import sys
|
|
||||||
from pathlib import Path
|
|
||||||
merge_logic_path = Path(__file__).parent.parent
|
|
||||||
sys.path.insert(0, str(merge_logic_path))
|
|
||||||
from merge_logic import MergeLogic
|
|
||||||
|
|
||||||
merge = MergeLogic(database_url)
|
|
||||||
stats = merge.sync_public_blacklist_detections()
|
|
||||||
|
|
||||||
print(f"\nMerge Logic Stats:")
|
|
||||||
print(f" Created detections: {stats['created']}")
|
|
||||||
print(f" Cleaned invalid detections: {stats['cleaned']}")
|
|
||||||
print(f" Skipped (whitelisted): {stats['skipped_whitelisted']}")
|
|
||||||
print("="*60 + "\n")
|
|
||||||
|
|
||||||
return 0
|
|
||||||
except Exception as e:
|
|
||||||
print(f"FATAL ERROR: {e}")
|
|
||||||
import traceback
|
|
||||||
traceback.print_exc()
|
|
||||||
return 1
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
exit_code = asyncio.run(main())
|
|
||||||
sys.exit(exit_code)
|
|
||||||
@ -1,362 +0,0 @@
|
|||||||
import re
|
|
||||||
import json
|
|
||||||
from typing import List, Dict, Set, Optional
|
|
||||||
from datetime import datetime
|
|
||||||
import ipaddress
|
|
||||||
|
|
||||||
|
|
||||||
class ListParser:
|
|
||||||
"""Base parser for public IP lists"""
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def validate_ip(ip_str: str) -> bool:
|
|
||||||
"""Validate IP address or CIDR range"""
|
|
||||||
try:
|
|
||||||
ipaddress.ip_network(ip_str, strict=False)
|
|
||||||
return True
|
|
||||||
except ValueError:
|
|
||||||
return False
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def normalize_cidr(ip_str: str) -> tuple[str, Optional[str]]:
|
|
||||||
"""
|
|
||||||
Normalize IP/CIDR to (ip_address, cidr_range)
|
|
||||||
For CIDR ranges, use the full CIDR notation as ip_address to ensure uniqueness
|
|
||||||
Example: '1.2.3.0/24' -> ('1.2.3.0/24', '1.2.3.0/24')
|
|
||||||
'1.2.3.4' -> ('1.2.3.4', None)
|
|
||||||
"""
|
|
||||||
try:
|
|
||||||
network = ipaddress.ip_network(ip_str, strict=False)
|
|
||||||
if '/' in ip_str:
|
|
||||||
normalized_cidr = str(network)
|
|
||||||
return (normalized_cidr, normalized_cidr)
|
|
||||||
else:
|
|
||||||
return (ip_str, None)
|
|
||||||
except ValueError:
|
|
||||||
return (ip_str, None)
|
|
||||||
|
|
||||||
|
|
||||||
class SpamhausParser(ListParser):
|
|
||||||
"""Parser for Spamhaus DROP list"""
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def parse(content: str) -> Set[tuple[str, Optional[str]]]:
|
|
||||||
"""
|
|
||||||
Parse Spamhaus DROP format:
|
|
||||||
- NDJSON (new): {"cidr":"1.2.3.0/24","sblid":"SBL12345","rir":"apnic"}
|
|
||||||
- Text (old): 1.2.3.0/24 ; SBL12345
|
|
||||||
"""
|
|
||||||
ips = set()
|
|
||||||
lines = content.strip().split('\n')
|
|
||||||
|
|
||||||
for line in lines:
|
|
||||||
line = line.strip()
|
|
||||||
|
|
||||||
# Skip comments and empty lines
|
|
||||||
if not line or line.startswith(';') or line.startswith('#'):
|
|
||||||
continue
|
|
||||||
|
|
||||||
# Try NDJSON format first (new Spamhaus format)
|
|
||||||
if line.startswith('{'):
|
|
||||||
try:
|
|
||||||
data = json.loads(line)
|
|
||||||
cidr = data.get('cidr')
|
|
||||||
if cidr and ListParser.validate_ip(cidr):
|
|
||||||
ips.add(ListParser.normalize_cidr(cidr))
|
|
||||||
continue
|
|
||||||
except json.JSONDecodeError:
|
|
||||||
pass
|
|
||||||
|
|
||||||
# Fallback: old text format
|
|
||||||
parts = line.split(';')
|
|
||||||
if parts:
|
|
||||||
ip_part = parts[0].strip()
|
|
||||||
if ip_part and ListParser.validate_ip(ip_part):
|
|
||||||
ips.add(ListParser.normalize_cidr(ip_part))
|
|
||||||
|
|
||||||
return ips
|
|
||||||
|
|
||||||
|
|
||||||
class TalosParser(ListParser):
|
|
||||||
"""Parser for Talos Intelligence blacklist"""
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def parse(content: str) -> Set[tuple[str, Optional[str]]]:
|
|
||||||
"""
|
|
||||||
Parse Talos format (plain IP list):
|
|
||||||
1.2.3.4
|
|
||||||
5.6.7.0/24
|
|
||||||
"""
|
|
||||||
ips = set()
|
|
||||||
lines = content.strip().split('\n')
|
|
||||||
|
|
||||||
for line in lines:
|
|
||||||
line = line.strip()
|
|
||||||
|
|
||||||
# Skip comments and empty lines
|
|
||||||
if not line or line.startswith('#') or line.startswith('//'):
|
|
||||||
continue
|
|
||||||
|
|
||||||
# Validate and add
|
|
||||||
if ListParser.validate_ip(line):
|
|
||||||
ips.add(ListParser.normalize_cidr(line))
|
|
||||||
|
|
||||||
return ips
|
|
||||||
|
|
||||||
|
|
||||||
class AWSParser(ListParser):
|
|
||||||
"""Parser for AWS IP ranges JSON"""
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def parse(content: str) -> Set[tuple[str, Optional[str]]]:
|
|
||||||
"""
|
|
||||||
Parse AWS JSON format:
|
|
||||||
{
|
|
||||||
"prefixes": [
|
|
||||||
{"ip_prefix": "1.2.3.0/24", "region": "us-east-1", "service": "EC2"}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
"""
|
|
||||||
ips = set()
|
|
||||||
|
|
||||||
try:
|
|
||||||
data = json.loads(content)
|
|
||||||
|
|
||||||
# IPv4 prefixes
|
|
||||||
for prefix in data.get('prefixes', []):
|
|
||||||
ip_prefix = prefix.get('ip_prefix')
|
|
||||||
if ip_prefix and ListParser.validate_ip(ip_prefix):
|
|
||||||
ips.add(ListParser.normalize_cidr(ip_prefix))
|
|
||||||
|
|
||||||
# IPv6 prefixes (optional)
|
|
||||||
for prefix in data.get('ipv6_prefixes', []):
|
|
||||||
ipv6_prefix = prefix.get('ipv6_prefix')
|
|
||||||
if ipv6_prefix and ListParser.validate_ip(ipv6_prefix):
|
|
||||||
ips.add(ListParser.normalize_cidr(ipv6_prefix))
|
|
||||||
|
|
||||||
except json.JSONDecodeError:
|
|
||||||
pass
|
|
||||||
|
|
||||||
return ips
|
|
||||||
|
|
||||||
|
|
||||||
class GCPParser(ListParser):
|
|
||||||
"""Parser for Google Cloud IP ranges JSON"""
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def parse(content: str) -> Set[tuple[str, Optional[str]]]:
|
|
||||||
"""
|
|
||||||
Parse GCP JSON format:
|
|
||||||
{
|
|
||||||
"prefixes": [
|
|
||||||
{"ipv4Prefix": "1.2.3.0/24"},
|
|
||||||
{"ipv6Prefix": "2001:db8::/32"}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
"""
|
|
||||||
ips = set()
|
|
||||||
|
|
||||||
try:
|
|
||||||
data = json.loads(content)
|
|
||||||
|
|
||||||
for prefix in data.get('prefixes', []):
|
|
||||||
# IPv4
|
|
||||||
ipv4 = prefix.get('ipv4Prefix')
|
|
||||||
if ipv4 and ListParser.validate_ip(ipv4):
|
|
||||||
ips.add(ListParser.normalize_cidr(ipv4))
|
|
||||||
|
|
||||||
# IPv6
|
|
||||||
ipv6 = prefix.get('ipv6Prefix')
|
|
||||||
if ipv6 and ListParser.validate_ip(ipv6):
|
|
||||||
ips.add(ListParser.normalize_cidr(ipv6))
|
|
||||||
|
|
||||||
except json.JSONDecodeError:
|
|
||||||
pass
|
|
||||||
|
|
||||||
return ips
|
|
||||||
|
|
||||||
|
|
||||||
class AzureParser(ListParser):
|
|
||||||
"""Parser for Microsoft Azure IP ranges JSON (Service Tags format)"""
|
|
||||||
|
|
||||||
@staticmethod
|
|
||||||
def parse(content: str) -> Set[tuple[str, Optional[str]]]:
|
|
||||||
"""
|
|
||||||
Parse Azure Service Tags JSON format:
|
|
||||||
{
|
|
||||||
"values": [
|
|
||||||
{
|
|
||||||
"name": "ActionGroup",
|
|
||||||
"properties": {
|
|
||||||
"addressPrefixes": ["1.2.3.0/24", "5.6.7.0/24"]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
"""
|
|
||||||
ips = set()
|
|
||||||
|
|
||||||
try:
|
|
||||||
data = json.loads(content)
|
|
||||||
|
|
||||||
for value in data.get('values', []):
|
|
||||||
properties = value.get('properties', {})
|
|
||||||
prefixes = properties.get('addressPrefixes', [])
|
|
||||||
|
|
||||||
for prefix in prefixes:
|
|
||||||
if prefix and ListParser.validate_ip(prefix):
|
|
||||||
ips.add(ListParser.normalize_cidr(prefix))
|
|
||||||
|
|
||||||
except json.JSONDecodeError:
|
|
||||||
pass
|
|
||||||
|
|
||||||
return ips
|
|
||||||
|
|
||||||
|
|
||||||
class MetaParser(ListParser):
    """Parser for Meta/Facebook IP ranges (plain CIDR list from BGP)"""

    @staticmethod
    def parse(content: str) -> Set[tuple[str, Optional[str]]]:
        """
        Parse Meta format (plain CIDR list):
        31.13.24.0/21
        31.13.64.0/18
        157.240.0.0/17
        """
        ips = set()
        lines = content.strip().split('\n')

        for line in lines:
            line = line.strip()

            # Skip empty lines and comments
            if not line or line.startswith('#') or line.startswith('//'):
                continue

            if ListParser.validate_ip(line):
                ips.add(ListParser.normalize_cidr(line))

        return ips

class CloudflareParser(ListParser):
    """Parser for Cloudflare IP list"""

    @staticmethod
    def parse(content: str) -> Set[tuple[str, Optional[str]]]:
        """
        Parse Cloudflare format (plain CIDR list):
        1.2.3.0/24
        5.6.7.0/24
        """
        ips = set()
        lines = content.strip().split('\n')

        for line in lines:
            line = line.strip()

            # Skip empty lines and comments
            if not line or line.startswith('#'):
                continue

            if ListParser.validate_ip(line):
                ips.add(ListParser.normalize_cidr(line))

        return ips

class IANAParser(ListParser):
    """Parser for IANA Root Servers"""

    @staticmethod
    def parse(content: str) -> Set[tuple[str, Optional[str]]]:
        """
        Parse IANA root servers (extract IPs from HTML/text)
        Look for IPv4 addresses in format XXX.XXX.XXX.XXX
        """
        ips = set()

        # Regex for IPv4 addresses
        ipv4_pattern = r'\b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b'
        matches = re.findall(ipv4_pattern, content)

        for ip in matches:
            if ListParser.validate_ip(ip):
                ips.add(ListParser.normalize_cidr(ip))

        return ips

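The regex above is deliberately loose (it would also match out-of-range quads such as 999.1.2.3); the validate_ip() call downstream is what discards invalid hits. A minimal sketch of the extraction step, using only the standard library and an invented HTML fragment:

```python
import re

ipv4_pattern = r'\b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b'
# Hypothetical markup similar to the IANA root servers page
sample_html = "<td>a.root-servers.net</td><td>198.41.0.4, 2001:503:ba3e::2:30</td>"

print(re.findall(ipv4_pattern, sample_html))  # ['198.41.0.4']
```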
class NTPPoolParser(ListParser):
    """Parser for NTP Pool servers"""

    @staticmethod
    def parse(content: str) -> Set[tuple[str, Optional[str]]]:
        """
        Parse NTP pool format (plain IP list or JSON)
        Tries multiple formats
        """
        ips = set()

        # Try JSON first
        try:
            data = json.loads(content)
            if isinstance(data, list):
                for item in data:
                    if isinstance(item, str) and ListParser.validate_ip(item):
                        ips.add(ListParser.normalize_cidr(item))
                    elif isinstance(item, dict):
                        ip = item.get('ip') or item.get('address')
                        if ip and ListParser.validate_ip(ip):
                            ips.add(ListParser.normalize_cidr(ip))
        except json.JSONDecodeError:
            # Fallback to plain text parsing
            lines = content.strip().split('\n')
            for line in lines:
                line = line.strip()
                if line and ListParser.validate_ip(line):
                    ips.add(ListParser.normalize_cidr(line))

        return ips

# Parser registry
PARSERS: Dict[str, type[ListParser]] = {
    'spamhaus': SpamhausParser,
    'talos': TalosParser,
    'aws': AWSParser,
    'gcp': GCPParser,
    'google': GCPParser,
    'azure': AzureParser,
    'microsoft': AzureParser,
    'meta': MetaParser,
    'facebook': MetaParser,
    'cloudflare': CloudflareParser,
    'iana': IANAParser,
    'ntp': NTPPoolParser,
}

def get_parser(list_name: str) -> Optional[type[ListParser]]:
    """Get parser by list name (case-insensitive match)"""
    list_name_lower = list_name.lower()

    for key, parser in PARSERS.items():
        if key in list_name_lower:
            return parser

    # Default fallback: try plain text parser
    return TalosParser

def parse_list(list_name: str, content: str) -> Set[tuple[str, Optional[str]]]:
    """
    Parse list content using appropriate parser
    Returns set of (ip_address, cidr_range) tuples
    """
    parser_class = get_parser(list_name)
    if parser_class:
        parser = parser_class()
        return parser.parse(content)
    return set()

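A quick sketch of how this dispatch is meant to be driven. The import path list_fetcher.parsers is an assumption, and the expected counts assume validate_ip() accepts CIDR strings, as the parsers above rely on:

```python
# Hypothetical import path - adjust to wherever these helpers actually live
from list_fetcher.parsers import get_parser, parse_list

print(get_parser("AWS IP Ranges").__name__)      # AWSParser ('aws' substring match)
print(get_parser("Some Unknown Feed").__name__)  # TalosParser (plain-text fallback)

# parse_list() returns whatever normalize_cidr() produces, i.e. (ip, cidr) tuples
entries = parse_list("Cloudflare IPv4", "198.51.100.0/24\n# comment line\n")
print(len(entries))  # 1 - the comment line is skipped
```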
@ -1,17 +0,0 @@
#!/usr/bin/env python3
"""
IDS List Fetcher Runner
Fetches and syncs public blacklist/whitelist sources every 10 minutes
"""
import asyncio
import sys
import os

# Add parent directory to path
sys.path.append(os.path.dirname(os.path.dirname(__file__)))

from list_fetcher.fetcher import main

if __name__ == "__main__":
    exit_code = asyncio.run(main())
    sys.exit(exit_code)
@ -1,174 +0,0 @@
#!/usr/bin/env python3
"""
Seed default public lists into database
Run after migration 006 to populate initial lists
"""
import psycopg2
import os
import sys
import argparse

# Add parent directory to path
sys.path.append(os.path.dirname(os.path.dirname(__file__)))

from list_fetcher.fetcher import ListFetcher
import asyncio


DEFAULT_LISTS = [
    # Blacklists
    {
        'name': 'Spamhaus DROP',
        'type': 'blacklist',
        'url': 'https://www.spamhaus.org/drop/drop.txt',
        'enabled': True,
        'fetch_interval_minutes': 10
    },
    {
        'name': 'Talos Intelligence IP Blacklist',
        'type': 'blacklist',
        'url': 'https://talosintelligence.com/documents/ip-blacklist',
        'enabled': False,  # Disabled by default - verify URL first
        'fetch_interval_minutes': 10
    },

    # Whitelists
    {
        'name': 'AWS IP Ranges',
        'type': 'whitelist',
        'url': 'https://ip-ranges.amazonaws.com/ip-ranges.json',
        'enabled': True,
        'fetch_interval_minutes': 10
    },
    {
        'name': 'Google Cloud IP Ranges',
        'type': 'whitelist',
        'url': 'https://www.gstatic.com/ipranges/cloud.json',
        'enabled': True,
        'fetch_interval_minutes': 10
    },
    {
        'name': 'Cloudflare IPv4',
        'type': 'whitelist',
        'url': 'https://www.cloudflare.com/ips-v4',
        'enabled': True,
        'fetch_interval_minutes': 10
    },
    {
        'name': 'IANA Root Servers',
        'type': 'whitelist',
        'url': 'https://www.iana.org/domains/root/servers',
        'enabled': True,
        'fetch_interval_minutes': 10
    },
    {
        'name': 'NTP Pool Servers',
        'type': 'whitelist',
        'url': 'https://www.ntppool.org/zone/@',
        'enabled': False,  # Disabled by default - zone parameter needed
        'fetch_interval_minutes': 10
    }
]

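If a feed that already has a parser (for example Meta/Facebook, handled by MetaParser above) should be seeded as well, it only takes one more entry in DEFAULT_LISTS. The sketch below is purely illustrative: the URL is a placeholder and the 'type' classification is an assumption, not part of the original defaults:

```python
# Hypothetical extra entry - URL is a placeholder, not a verified feed location
DEFAULT_LISTS.append({
    'name': 'Meta/Facebook IP Ranges',
    'type': 'whitelist',
    'url': 'https://example.com/meta-ip-ranges.txt',  # placeholder URL
    'enabled': False,  # keep disabled until the source URL is verified
    'fetch_interval_minutes': 10
})
```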
def seed_lists(database_url: str, dry_run: bool = False):
    """Insert default lists into database"""
    conn = psycopg2.connect(database_url)

    try:
        with conn.cursor() as cur:
            # Check if lists already exist
            cur.execute("SELECT COUNT(*) FROM public_lists")
            result = cur.fetchone()
            existing_count = result[0] if result else 0

            if existing_count > 0 and not dry_run:
                print(f"⚠️ Warning: {existing_count} lists already exist in database")
                response = input("Continue and add default lists? (y/n): ")
                if response.lower() != 'y':
                    print("Aborted")
                    return

            print(f"\n{'='*60}")
            print("SEEDING DEFAULT PUBLIC LISTS")
            print(f"{'='*60}\n")

            for list_config in DEFAULT_LISTS:
                if dry_run:
                    status = "✓ ENABLED" if list_config['enabled'] else "○ DISABLED"
                    print(f"{status} {list_config['type'].upper()}: {list_config['name']}")
                    print(f"  URL: {list_config['url']}")
                    print()
                else:
                    cur.execute("""
                        INSERT INTO public_lists (name, type, url, enabled, fetch_interval_minutes)
                        VALUES (%s, %s, %s, %s, %s)
                        RETURNING id, name
                    """, (
                        list_config['name'],
                        list_config['type'],
                        list_config['url'],
                        list_config['enabled'],
                        list_config['fetch_interval_minutes']
                    ))

                    result = cur.fetchone()
                    if result:
                        list_id, list_name = result
                        status = "✓" if list_config['enabled'] else "○"
                        print(f"{status} Added: {list_name} (ID: {list_id})")

            if not dry_run:
                conn.commit()
                print(f"\n✓ Successfully seeded {len(DEFAULT_LISTS)} lists")
                print(f"{'='*60}\n")
            else:
                print(f"\n{'='*60}")
                print(f"DRY RUN: Would seed {len(DEFAULT_LISTS)} lists")
                print(f"{'='*60}\n")

    except Exception as e:
        conn.rollback()
        print(f"✗ Error: {e}")
        import traceback
        traceback.print_exc()
        return 1
    finally:
        conn.close()

    return 0


async def sync_lists(database_url: str):
    """Run initial sync of all enabled lists"""
    print("\nRunning initial sync of enabled lists...\n")
    fetcher = ListFetcher(database_url)
    await fetcher.fetch_all_lists()


def main():
    parser = argparse.ArgumentParser(description='Seed default public lists')
    parser.add_argument('--dry-run', action='store_true', help='Show what would be added without inserting')
    parser.add_argument('--sync', action='store_true', help='Run initial sync after seeding')
    args = parser.parse_args()

    database_url = os.getenv('DATABASE_URL')
    if not database_url:
        print("ERROR: DATABASE_URL environment variable not set")
        return 1

    # Seed lists
    exit_code = seed_lists(database_url, dry_run=args.dry_run)

    if exit_code != 0:
        return exit_code

    # Optionally sync
    if args.sync and not args.dry_run:
        asyncio.run(sync_lists(database_url))

    return 0


if __name__ == "__main__":
    sys.exit(main())
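Typical usage of the two entry points above, assuming DATABASE_URL is already exported; the CLI flags mirror the argparse definition in main(), and the script filename is an assumption:

```python
# CLI equivalents (script name assumed): python seed_lists.py --dry-run
#                                        python seed_lists.py --sync
import asyncio
import os

db_url = os.environ["DATABASE_URL"]

seed_lists(db_url, dry_run=True)   # preview the entries without inserting
# seed_lists(db_url)               # actually insert DEFAULT_LISTS
# asyncio.run(sync_lists(db_url))  # optional: fetch all enabled lists once
```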
@ -18,7 +18,6 @@ import asyncio
|
|||||||
import secrets
|
import secrets
|
||||||
|
|
||||||
from ml_analyzer import MLAnalyzer
|
from ml_analyzer import MLAnalyzer
|
||||||
from ml_hybrid_detector import MLHybridDetector
|
|
||||||
from mikrotik_manager import MikroTikManager
|
from mikrotik_manager import MikroTikManager
|
||||||
from ip_geolocation import get_geo_service
|
from ip_geolocation import get_geo_service
|
||||||
|
|
||||||
@ -48,7 +47,7 @@ async def verify_api_key(api_key: str = Security(api_key_header)):
|
|||||||
)
|
)
|
||||||
return True
|
return True
|
||||||
|
|
||||||
app = FastAPI(title="IDS API", version="2.0.0")
|
app = FastAPI(title="IDS API", version="1.0.0")
|
||||||
|
|
||||||
# CORS
|
# CORS
|
||||||
app.add_middleware(
|
app.add_middleware(
|
||||||
@ -59,24 +58,8 @@ app.add_middleware(
|
|||||||
allow_headers=["*"],
|
allow_headers=["*"],
|
||||||
)
|
)
|
||||||
|
|
||||||
# Global instances - Try hybrid first, fallback to legacy
|
# Global instances
|
||||||
USE_HYBRID_DETECTOR = os.getenv("USE_HYBRID_DETECTOR", "true").lower() == "true"
|
ml_analyzer = MLAnalyzer(model_dir="models")
|
||||||
|
|
||||||
# Model version based on detector type
|
|
||||||
MODEL_VERSION = "2.0.0" if USE_HYBRID_DETECTOR else "1.0.0"
|
|
||||||
|
|
||||||
if USE_HYBRID_DETECTOR:
|
|
||||||
print("[ML] Using Hybrid ML Detector (Extended Isolation Forest + Feature Selection)")
|
|
||||||
ml_detector = MLHybridDetector(model_dir="models")
|
|
||||||
# Try to load existing model
|
|
||||||
if not ml_detector.load_models():
|
|
||||||
print("[ML] No hybrid model found, will use on-demand training")
|
|
||||||
ml_analyzer = None # Legacy disabled
|
|
||||||
else:
|
|
||||||
print("[ML] Using Legacy ML Analyzer (standard Isolation Forest)")
|
|
||||||
ml_analyzer = MLAnalyzer(model_dir="models")
|
|
||||||
ml_detector = None
|
|
||||||
|
|
||||||
mikrotik_manager = MikroTikManager()
|
mikrotik_manager = MikroTikManager()
|
||||||
|
|
||||||
# Database connection
|
# Database connection
|
||||||
@ -97,7 +80,7 @@ class TrainRequest(BaseModel):
|
|||||||
|
|
||||||
class DetectRequest(BaseModel):
|
class DetectRequest(BaseModel):
|
||||||
max_records: int = 5000
|
max_records: int = 5000
|
||||||
hours_back: float = 1.0 # Support fractional hours (e.g., 0.5 = 30 min)
|
hours_back: int = 1
|
||||||
risk_threshold: float = 60.0
|
risk_threshold: float = 60.0
|
||||||
auto_block: bool = False
|
auto_block: bool = False
|
||||||
|
|
||||||
@ -116,21 +99,11 @@ class UnblockIPRequest(BaseModel):
|
|||||||
|
|
||||||
@app.get("/")
|
@app.get("/")
|
||||||
async def root():
|
async def root():
|
||||||
# Check which detector is active
|
|
||||||
if USE_HYBRID_DETECTOR:
|
|
||||||
model_loaded = ml_detector.isolation_forest is not None
|
|
||||||
model_type = "hybrid"
|
|
||||||
else:
|
|
||||||
model_loaded = ml_analyzer.model is not None
|
|
||||||
model_type = "legacy"
|
|
||||||
|
|
||||||
return {
|
return {
|
||||||
"service": "IDS API",
|
"service": "IDS API",
|
||||||
"version": "2.0.0",
|
"version": "1.0.0",
|
||||||
"status": "running",
|
"status": "running",
|
||||||
"model_type": model_type,
|
"model_loaded": ml_analyzer.model is not None
|
||||||
"model_loaded": model_loaded,
|
|
||||||
"use_hybrid": USE_HYBRID_DETECTOR
|
|
||||||
}
|
}
|
||||||
|
|
||||||
@app.get("/health")
|
@app.get("/health")
|
||||||
@ -143,19 +116,10 @@ async def health_check():
|
|||||||
except Exception as e:
|
except Exception as e:
|
||||||
db_status = f"error: {str(e)}"
|
db_status = f"error: {str(e)}"
|
||||||
|
|
||||||
# Check model status
|
|
||||||
if USE_HYBRID_DETECTOR:
|
|
||||||
model_status = "loaded" if ml_detector.isolation_forest is not None else "not_loaded"
|
|
||||||
model_type = "hybrid (EIF + Feature Selection)"
|
|
||||||
else:
|
|
||||||
model_status = "loaded" if ml_analyzer.model is not None else "not_loaded"
|
|
||||||
model_type = "legacy (Isolation Forest)"
|
|
||||||
|
|
||||||
return {
|
return {
|
||||||
"status": "healthy",
|
"status": "healthy",
|
||||||
"database": db_status,
|
"database": db_status,
|
||||||
"ml_model": model_status,
|
"ml_model": "loaded" if ml_analyzer.model is not None else "not_loaded",
|
||||||
"ml_model_type": model_type,
|
|
||||||
"timestamp": datetime.now().isoformat()
|
"timestamp": datetime.now().isoformat()
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -193,49 +157,19 @@ async def train_model(request: TrainRequest, background_tasks: BackgroundTasks):
|
|||||||
# Converti in DataFrame
|
# Converti in DataFrame
|
||||||
df = pd.DataFrame(logs)
|
df = pd.DataFrame(logs)
|
||||||
|
|
||||||
# Training - usa detector appropriato
|
# Training
|
||||||
print("[TRAIN] Addestramento modello...")
|
print("[TRAIN] Addestramento modello...")
|
||||||
try:
|
result = ml_analyzer.train(df, contamination=request.contamination)
|
||||||
if USE_HYBRID_DETECTOR:
|
print(f"[TRAIN] Modello addestrato: {result}")
|
||||||
print("[TRAIN] Using Hybrid ML Detector")
|
|
||||||
result = ml_detector.train_unsupervised(df)
|
|
||||||
else:
|
|
||||||
print("[TRAIN] Using Legacy ML Analyzer")
|
|
||||||
result = ml_analyzer.train(df, contamination=request.contamination)
|
|
||||||
print(f"[TRAIN] Modello addestrato: {result}")
|
|
||||||
except ValueError as e:
|
|
||||||
# Training FAILED - ensemble could not be created
|
|
||||||
error_msg = str(e)
|
|
||||||
print(f"\n[TRAIN] ❌ TRAINING FAILED")
|
|
||||||
print(f"{error_msg}")
|
|
||||||
|
|
||||||
# Save failure to database
|
# Salva nel database
|
||||||
cursor.execute("""
|
|
||||||
INSERT INTO training_history
|
|
||||||
(model_version, records_processed, features_count, training_duration, status, notes)
|
|
||||||
VALUES (%s, %s, %s, %s, %s, %s)
|
|
||||||
""", (
|
|
||||||
MODEL_VERSION,
|
|
||||||
len(df),
|
|
||||||
0,
|
|
||||||
0,
|
|
||||||
'failed',
|
|
||||||
f"ERROR: {error_msg[:500]}" # Truncate if too long
|
|
||||||
))
|
|
||||||
conn.commit()
|
|
||||||
print("[TRAIN] ❌ Training failure logged to database")
|
|
||||||
|
|
||||||
# Re-raise to propagate error
|
|
||||||
raise
|
|
||||||
|
|
||||||
# Salva nel database (solo se training SUCCESS)
|
|
||||||
print("[TRAIN] Salvataggio training history nel database...")
|
print("[TRAIN] Salvataggio training history nel database...")
|
||||||
cursor.execute("""
|
cursor.execute("""
|
||||||
INSERT INTO training_history
|
INSERT INTO training_history
|
||||||
(model_version, records_processed, features_count, training_duration, status, notes)
|
(model_version, records_processed, features_count, training_duration, status, notes)
|
||||||
VALUES (%s, %s, %s, %s, %s, %s)
|
VALUES (%s, %s, %s, %s, %s, %s)
|
||||||
""", (
|
""", (
|
||||||
MODEL_VERSION,
|
"1.0.0",
|
||||||
result['records_processed'],
|
result['records_processed'],
|
||||||
result['features_count'],
|
result['features_count'],
|
||||||
0, # duration non ancora implementato
|
0, # duration non ancora implementato
|
||||||
@ -277,23 +211,13 @@ async def detect_anomalies(request: DetectRequest):
|
|||||||
Rileva anomalie nei log recenti
|
Rileva anomalie nei log recenti
|
||||||
Opzionalmente blocca automaticamente IP anomali
|
Opzionalmente blocca automaticamente IP anomali
|
||||||
"""
|
"""
|
||||||
# Check model loaded
|
if ml_analyzer.model is None:
|
||||||
if USE_HYBRID_DETECTOR:
|
# Prova a caricare modello salvato
|
||||||
if ml_detector.isolation_forest is None:
|
if not ml_analyzer.load_model():
|
||||||
# Try to load
|
raise HTTPException(
|
||||||
if not ml_detector.load_models():
|
status_code=400,
|
||||||
raise HTTPException(
|
detail="Modello non addestrato. Esegui /train prima."
|
||||||
status_code=400,
|
)
|
||||||
detail="Modello hybrid non addestrato. Esegui /train prima."
|
|
||||||
)
|
|
||||||
else:
|
|
||||||
if ml_analyzer.model is None:
|
|
||||||
# Prova a caricare modello salvato
|
|
||||||
if not ml_analyzer.load_model():
|
|
||||||
raise HTTPException(
|
|
||||||
status_code=400,
|
|
||||||
detail="Modello non addestrato. Esegui /train prima."
|
|
||||||
)
|
|
||||||
|
|
||||||
try:
|
try:
|
||||||
conn = get_db_connection()
|
conn = get_db_connection()
|
||||||
@ -316,23 +240,8 @@ async def detect_anomalies(request: DetectRequest):
|
|||||||
# Converti in DataFrame
|
# Converti in DataFrame
|
||||||
df = pd.DataFrame(logs)
|
df = pd.DataFrame(logs)
|
||||||
|
|
||||||
# Detection - usa detector appropriato
|
# Detection
|
||||||
if USE_HYBRID_DETECTOR:
|
detections = ml_analyzer.detect(df, risk_threshold=request.risk_threshold)
|
||||||
print("[DETECT] Using Hybrid ML Detector")
|
|
||||||
# Hybrid detector returns different format
|
|
||||||
detections = ml_detector.detect(df, mode='confidence')
|
|
||||||
# Convert to legacy format for compatibility
|
|
||||||
for det in detections:
|
|
||||||
# Map confidence_level string to numeric value for database
|
|
||||||
confidence_mapping = {
|
|
||||||
'high': 95.0,
|
|
||||||
'medium': 75.0,
|
|
||||||
'low': 50.0
|
|
||||||
}
|
|
||||||
det['confidence'] = confidence_mapping.get(det['confidence_level'], 50.0)
|
|
||||||
else:
|
|
||||||
print("[DETECT] Using Legacy ML Analyzer")
|
|
||||||
detections = ml_analyzer.detect(df, risk_threshold=request.risk_threshold)
|
|
||||||
|
|
||||||
# Geolocation lookup service - BATCH ASYNC per performance
|
# Geolocation lookup service - BATCH ASYNC per performance
|
||||||
geo_service = get_geo_service()
|
geo_service = get_geo_service()
|
||||||
@ -689,16 +598,7 @@ if __name__ == "__main__":
|
|||||||
import uvicorn
|
import uvicorn
|
||||||
|
|
||||||
# Prova a caricare modello esistente
|
# Prova a caricare modello esistente
|
||||||
if USE_HYBRID_DETECTOR:
|
ml_analyzer.load_model()
|
||||||
# Hybrid detector: già caricato all'inizializzazione (riga 69)
|
|
||||||
if ml_detector and ml_detector.isolation_forest is not None:
|
|
||||||
print("[ML] ✓ Hybrid detector models loaded and ready")
|
|
||||||
else:
|
|
||||||
print("[ML] ⚠ Hybrid detector initialized but no models found (will train on-demand)")
|
|
||||||
else:
|
|
||||||
# Legacy analyzer
|
|
||||||
if ml_analyzer:
|
|
||||||
ml_analyzer.load_model()
|
|
||||||
|
|
||||||
print("🚀 Starting IDS API on http://0.0.0.0:8000")
|
print("🚀 Starting IDS API on http://0.0.0.0:8000")
|
||||||
print("📚 Docs available at http://0.0.0.0:8000/docs")
|
print("📚 Docs available at http://0.0.0.0:8000/docs")
|
||||||
|
|||||||
@ -1,376 +0,0 @@
|
|||||||
#!/usr/bin/env python3
|
|
||||||
"""
|
|
||||||
Merge Logic for Public Lists Integration
|
|
||||||
Implements priority: Manual Whitelist > Public Whitelist > Public Blacklist
|
|
||||||
"""
|
|
||||||
import os
|
|
||||||
import psycopg2
|
|
||||||
from typing import Dict, Set, Optional
|
|
||||||
from datetime import datetime
|
|
||||||
import logging
|
|
||||||
import ipaddress
|
|
||||||
|
|
||||||
logging.basicConfig(level=logging.INFO)
|
|
||||||
logger = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
|
|
||||||
def ip_matches_cidr(ip_address: str, cidr_range: Optional[str]) -> bool:
|
|
||||||
"""
|
|
||||||
Check if IP address matches CIDR range
|
|
||||||
Returns True if cidr_range is None (exact match) or if IP is in range
|
|
||||||
"""
|
|
||||||
if not cidr_range:
|
|
||||||
return True # Exact match handling
|
|
||||||
|
|
||||||
try:
|
|
||||||
ip = ipaddress.ip_address(ip_address)
|
|
||||||
network = ipaddress.ip_network(cidr_range, strict=False)
|
|
||||||
return ip in network
|
|
||||||
except (ValueError, TypeError):
|
|
||||||
logger.warning(f"Invalid IP/CIDR: {ip_address}/{cidr_range}")
|
|
||||||
return False
|
|
||||||
|
|
||||||
|
|
||||||
class MergeLogic:
|
|
||||||
"""
|
|
||||||
Handles merge logic between manual entries and public lists
|
|
||||||
Priority: Manual whitelist > Public whitelist > Public blacklist
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, database_url: str):
|
|
||||||
self.database_url = database_url
|
|
||||||
|
|
||||||
def get_db_connection(self):
|
|
||||||
"""Create database connection"""
|
|
||||||
return psycopg2.connect(self.database_url)
|
|
||||||
|
|
||||||
def get_all_whitelisted_ips(self) -> Set[str]:
|
|
||||||
"""
|
|
||||||
Get all whitelisted IPs (manual + public)
|
|
||||||
Manual whitelist has higher priority than public whitelist
|
|
||||||
"""
|
|
||||||
conn = self.get_db_connection()
|
|
||||||
try:
|
|
||||||
with conn.cursor() as cur:
|
|
||||||
cur.execute("""
|
|
||||||
SELECT DISTINCT ip_address
|
|
||||||
FROM whitelist
|
|
||||||
WHERE active = true
|
|
||||||
""")
|
|
||||||
return {row[0] for row in cur.fetchall()}
|
|
||||||
finally:
|
|
||||||
conn.close()
|
|
||||||
|
|
||||||
def get_public_blacklist_ips(self) -> Set[str]:
|
|
||||||
"""Get all active public blacklist IPs"""
|
|
||||||
conn = self.get_db_connection()
|
|
||||||
try:
|
|
||||||
with conn.cursor() as cur:
|
|
||||||
cur.execute("""
|
|
||||||
SELECT DISTINCT ip_address
|
|
||||||
FROM public_blacklist_ips
|
|
||||||
WHERE is_active = true
|
|
||||||
""")
|
|
||||||
return {row[0] for row in cur.fetchall()}
|
|
||||||
finally:
|
|
||||||
conn.close()
|
|
||||||
|
|
||||||
def should_block_ip(self, ip_address: str) -> tuple[bool, str]:
|
|
||||||
"""
|
|
||||||
Determine if IP should be blocked based on merge logic
|
|
||||||
Returns: (should_block, reason)
|
|
||||||
|
|
||||||
Priority:
|
|
||||||
1. Manual whitelist (exact or CIDR) → DON'T block (highest priority)
|
|
||||||
2. Public whitelist (exact or CIDR) → DON'T block
|
|
||||||
3. Public blacklist (exact or CIDR) → DO block
|
|
||||||
4. Not in any list → DON'T block (only ML decides)
|
|
||||||
"""
|
|
||||||
conn = self.get_db_connection()
|
|
||||||
try:
|
|
||||||
with conn.cursor() as cur:
|
|
||||||
# Check manual whitelist (highest priority) - exact + CIDR matching
|
|
||||||
cur.execute("""
|
|
||||||
SELECT ip_address, list_id FROM whitelist
|
|
||||||
WHERE active = true
|
|
||||||
AND source = 'manual'
|
|
||||||
""")
|
|
||||||
for row in cur.fetchall():
|
|
||||||
wl_ip, wl_cidr = row[0], None
|
|
||||||
# Check if whitelist entry has CIDR notation
|
|
||||||
if '/' in wl_ip:
|
|
||||||
wl_cidr = wl_ip
|
|
||||||
if wl_ip == ip_address or ip_matches_cidr(ip_address, wl_cidr):
|
|
||||||
return (False, "manual_whitelist")
|
|
||||||
|
|
||||||
# Check public whitelist (any source except 'manual') - exact + CIDR
|
|
||||||
cur.execute("""
|
|
||||||
SELECT ip_address, list_id FROM whitelist
|
|
||||||
WHERE active = true
|
|
||||||
AND source != 'manual'
|
|
||||||
""")
|
|
||||||
for row in cur.fetchall():
|
|
||||||
wl_ip, wl_cidr = row[0], None
|
|
||||||
if '/' in wl_ip:
|
|
||||||
wl_cidr = wl_ip
|
|
||||||
if wl_ip == ip_address or ip_matches_cidr(ip_address, wl_cidr):
|
|
||||||
return (False, "public_whitelist")
|
|
||||||
|
|
||||||
# Check public blacklist - exact + CIDR matching
|
|
||||||
cur.execute("""
|
|
||||||
SELECT id, ip_address, cidr_range FROM public_blacklist_ips
|
|
||||||
WHERE is_active = true
|
|
||||||
""")
|
|
||||||
for row in cur.fetchall():
|
|
||||||
bl_id, bl_ip, bl_cidr = row
|
|
||||||
# Match exact IP or check if IP is in CIDR range
|
|
||||||
if bl_ip == ip_address or ip_matches_cidr(ip_address, bl_cidr):
|
|
||||||
return (True, f"public_blacklist:{bl_id}")
|
|
||||||
|
|
||||||
# Not in any list
|
|
||||||
return (False, "not_listed")
|
|
||||||
finally:
|
|
||||||
conn.close()
|
|
||||||
|
|
||||||
def create_detection_from_blacklist(
|
|
||||||
self,
|
|
||||||
ip_address: str,
|
|
||||||
blacklist_id: str,
|
|
||||||
risk_score: int = 75
|
|
||||||
) -> Optional[str]:
|
|
||||||
"""
|
|
||||||
Create detection record for public blacklist IP
|
|
||||||
Only if not whitelisted (priority check)
|
|
||||||
"""
|
|
||||||
should_block, reason = self.should_block_ip(ip_address)
|
|
||||||
|
|
||||||
if not should_block:
|
|
||||||
logger.info(f"IP {ip_address} not blocked - reason: {reason}")
|
|
||||||
return None
|
|
||||||
|
|
||||||
conn = self.get_db_connection()
|
|
||||||
try:
|
|
||||||
with conn.cursor() as cur:
|
|
||||||
# Check if detection already exists
|
|
||||||
cur.execute("""
|
|
||||||
SELECT id FROM detections
|
|
||||||
WHERE source_ip = %s
|
|
||||||
AND detection_source = 'public_blacklist'
|
|
||||||
LIMIT 1
|
|
||||||
""", (ip_address,))
|
|
||||||
|
|
||||||
existing = cur.fetchone()
|
|
||||||
if existing:
|
|
||||||
logger.info(f"Detection already exists for {ip_address}")
|
|
||||||
return existing[0]
|
|
||||||
|
|
||||||
# Create new detection
|
|
||||||
cur.execute("""
|
|
||||||
INSERT INTO detections (
|
|
||||||
source_ip,
|
|
||||||
risk_score,
|
|
||||||
confidence,
|
|
||||||
anomaly_type,
|
|
||||||
reason,
|
|
||||||
log_count,
|
|
||||||
first_seen,
|
|
||||||
last_seen,
|
|
||||||
detection_source,
|
|
||||||
blacklist_id,
|
|
||||||
detected_at,
|
|
||||||
blocked
|
|
||||||
) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)
|
|
||||||
RETURNING id
|
|
||||||
""", (
|
|
||||||
ip_address,
|
|
||||||
risk_score, # numeric, not string
|
|
||||||
100.0, # confidence
|
|
||||||
'public_blacklist',
|
|
||||||
'IP in public blacklist',
|
|
||||||
1, # log_count
|
|
||||||
datetime.utcnow(), # first_seen
|
|
||||||
datetime.utcnow(), # last_seen
|
|
||||||
'public_blacklist',
|
|
||||||
blacklist_id,
|
|
||||||
datetime.utcnow(),
|
|
||||||
False # Will be blocked by auto-block service if risk_score >= 80
|
|
||||||
))
|
|
||||||
|
|
||||||
result = cur.fetchone()
|
|
||||||
if not result:
|
|
||||||
logger.error(f"Failed to get detection ID after insert for {ip_address}")
|
|
||||||
return None
|
|
||||||
|
|
||||||
detection_id = result[0]
|
|
||||||
conn.commit()
|
|
||||||
|
|
||||||
logger.info(f"Created detection {detection_id} for blacklisted IP {ip_address}")
|
|
||||||
return detection_id
|
|
||||||
except Exception as e:
|
|
||||||
conn.rollback()
|
|
||||||
logger.error(f"Failed to create detection for {ip_address}: {e}")
|
|
||||||
return None
|
|
||||||
finally:
|
|
||||||
conn.close()
|
|
||||||
|
|
||||||
def cleanup_invalid_detections(self) -> int:
|
|
||||||
"""
|
|
||||||
Remove detections for IPs that are now whitelisted
|
|
||||||
CIDR-aware: checks both exact match and network containment
|
|
||||||
Respects priority: manual/public whitelist overrides blacklist
|
|
||||||
"""
|
|
||||||
conn = self.get_db_connection()
|
|
||||||
try:
|
|
||||||
with conn.cursor() as cur:
|
|
||||||
# Delete detections for IPs in whitelist ranges (CIDR-aware)
|
|
||||||
# Cast both sides to inet explicitly for type safety
|
|
||||||
cur.execute("""
|
|
||||||
DELETE FROM detections d
|
|
||||||
WHERE d.detection_source = 'public_blacklist'
|
|
||||||
AND EXISTS (
|
|
||||||
SELECT 1 FROM whitelist wl
|
|
||||||
WHERE wl.active = true
|
|
||||||
AND wl.ip_inet IS NOT NULL
|
|
||||||
AND (
|
|
||||||
d.source_ip::inet = wl.ip_inet::inet
|
|
||||||
OR d.source_ip::inet <<= wl.ip_inet::inet
|
|
||||||
)
|
|
||||||
)
|
|
||||||
""")
|
|
||||||
deleted = cur.rowcount
|
|
||||||
conn.commit()
|
|
||||||
|
|
||||||
if deleted > 0:
|
|
||||||
logger.info(f"Cleaned up {deleted} detections for whitelisted IPs (CIDR-aware)")
|
|
||||||
|
|
||||||
return deleted
|
|
||||||
except Exception as e:
|
|
||||||
conn.rollback()
|
|
||||||
logger.error(f"Failed to cleanup detections: {e}")
|
|
||||||
return 0
|
|
||||||
finally:
|
|
||||||
conn.close()
|
|
||||||
|
|
||||||
def sync_public_blacklist_detections(self) -> Dict[str, int]:
|
|
||||||
"""
|
|
||||||
Sync detections with current public blacklist state using BULK operations
|
|
||||||
Creates detections for blacklisted IPs (if not whitelisted)
|
|
||||||
Removes detections for IPs no longer blacklisted or now whitelisted
|
|
||||||
"""
|
|
||||||
stats = {
|
|
||||||
'created': 0,
|
|
||||||
'cleaned': 0,
|
|
||||||
'skipped_whitelisted': 0
|
|
||||||
}
|
|
||||||
|
|
||||||
conn = self.get_db_connection()
|
|
||||||
try:
|
|
||||||
with conn.cursor() as cur:
|
|
||||||
# Cleanup whitelisted IPs first (priority)
|
|
||||||
stats['cleaned'] = self.cleanup_invalid_detections()
|
|
||||||
|
|
||||||
# Bulk create detections with CIDR-aware matching
|
|
||||||
# Uses PostgreSQL INET operators for network containment
|
|
||||||
# Priority: Manual whitelist > Public whitelist > Blacklist
|
|
||||||
cur.execute("""
|
|
||||||
INSERT INTO detections (
|
|
||||||
source_ip,
|
|
||||||
risk_score,
|
|
||||||
confidence,
|
|
||||||
anomaly_type,
|
|
||||||
reason,
|
|
||||||
log_count,
|
|
||||||
first_seen,
|
|
||||||
last_seen,
|
|
||||||
detection_source,
|
|
||||||
blacklist_id,
|
|
||||||
detected_at,
|
|
||||||
blocked
|
|
||||||
)
|
|
||||||
SELECT DISTINCT
|
|
||||||
bl.ip_address,
|
|
||||||
75::numeric,
|
|
||||||
100::numeric,
|
|
||||||
'public_blacklist',
|
|
||||||
'IP in public blacklist',
|
|
||||||
1,
|
|
||||||
NOW(),
|
|
||||||
NOW(),
|
|
||||||
'public_blacklist',
|
|
||||||
bl.id,
|
|
||||||
NOW(),
|
|
||||||
false
|
|
||||||
FROM public_blacklist_ips bl
|
|
||||||
WHERE bl.is_active = true
|
|
||||||
AND bl.ip_inet IS NOT NULL
|
|
||||||
-- Priority 1: Exclude if in manual whitelist (highest priority)
|
|
||||||
-- Cast to inet explicitly for type safety
|
|
||||||
AND NOT EXISTS (
|
|
||||||
SELECT 1 FROM whitelist wl
|
|
||||||
WHERE wl.active = true
|
|
||||||
AND wl.source = 'manual'
|
|
||||||
AND wl.ip_inet IS NOT NULL
|
|
||||||
AND (
|
|
||||||
bl.ip_inet::inet = wl.ip_inet::inet
|
|
||||||
OR bl.ip_inet::inet <<= wl.ip_inet::inet
|
|
||||||
)
|
|
||||||
)
|
|
||||||
-- Priority 2: Exclude if in public whitelist
|
|
||||||
AND NOT EXISTS (
|
|
||||||
SELECT 1 FROM whitelist wl
|
|
||||||
WHERE wl.active = true
|
|
||||||
AND wl.source != 'manual'
|
|
||||||
AND wl.ip_inet IS NOT NULL
|
|
||||||
AND (
|
|
||||||
bl.ip_inet::inet = wl.ip_inet::inet
|
|
||||||
OR bl.ip_inet::inet <<= wl.ip_inet::inet
|
|
||||||
)
|
|
||||||
)
|
|
||||||
-- Avoid duplicate detections
|
|
||||||
AND NOT EXISTS (
|
|
||||||
SELECT 1 FROM detections d
|
|
||||||
WHERE d.source_ip = bl.ip_address
|
|
||||||
AND d.detection_source = 'public_blacklist'
|
|
||||||
)
|
|
||||||
RETURNING id
|
|
||||||
""")
|
|
||||||
|
|
||||||
created_ids = cur.fetchall()
|
|
||||||
stats['created'] = len(created_ids)
|
|
||||||
conn.commit()
|
|
||||||
|
|
||||||
logger.info(f"Bulk sync complete: {stats}")
|
|
||||||
return stats
|
|
||||||
except Exception as e:
|
|
||||||
conn.rollback()
|
|
||||||
logger.error(f"Failed to sync detections: {e}")
|
|
||||||
import traceback
|
|
||||||
traceback.print_exc()
|
|
||||||
return stats
|
|
||||||
finally:
|
|
||||||
conn.close()
|
|
||||||
|
|
||||||
|
|
||||||
def main():
|
|
||||||
"""Run merge logic sync"""
|
|
||||||
database_url = os.environ.get('DATABASE_URL')
|
|
||||||
if not database_url:
|
|
||||||
logger.error("DATABASE_URL environment variable not set")
|
|
||||||
return 1
|
|
||||||
|
|
||||||
merge = MergeLogic(database_url)
|
|
||||||
stats = merge.sync_public_blacklist_detections()
|
|
||||||
|
|
||||||
print(f"\n{'='*60}")
|
|
||||||
print("MERGE LOGIC SYNC COMPLETED")
|
|
||||||
print(f"{'='*60}")
|
|
||||||
print(f"Created detections: {stats['created']}")
|
|
||||||
print(f"Cleaned invalid detections: {stats['cleaned']}")
|
|
||||||
print(f"Skipped (whitelisted): {stats['skipped_whitelisted']}")
|
|
||||||
print(f"{'='*60}\n")
|
|
||||||
|
|
||||||
return 0
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
exit(main())
|
|
||||||
@ -5,7 +5,6 @@ Più veloce e affidabile di SSH per 10+ router
|
|||||||
|
|
||||||
import httpx
|
import httpx
|
||||||
import asyncio
|
import asyncio
|
||||||
import ssl
|
|
||||||
from typing import List, Dict, Optional
|
from typing import List, Dict, Optional
|
||||||
from datetime import datetime
|
from datetime import datetime
|
||||||
import hashlib
|
import hashlib
|
||||||
@ -22,55 +21,33 @@ class MikroTikManager:
|
|||||||
self.timeout = timeout
|
self.timeout = timeout
|
||||||
self.clients = {} # Cache di client HTTP per router
|
self.clients = {} # Cache di client HTTP per router
|
||||||
|
|
||||||
def _get_client(self, router_ip: str, username: str, password: str, port: int = 8728, use_ssl: bool = False) -> httpx.AsyncClient:
|
def _get_client(self, router_ip: str, username: str, password: str, port: int = 8728) -> httpx.AsyncClient:
|
||||||
"""Ottiene o crea client HTTP per un router"""
|
"""Ottiene o crea client HTTP per un router"""
|
||||||
key = f"{router_ip}:{port}:{use_ssl}"
|
key = f"{router_ip}:{port}"
|
||||||
if key not in self.clients:
|
if key not in self.clients:
|
||||||
# API REST MikroTik:
|
# API REST MikroTik usa porta HTTP/HTTPS (default 80/443)
|
||||||
# - Porta 8728: HTTP (default)
|
# Per semplicità useremo richieste HTTP dirette
|
||||||
# - Porta 8729: HTTPS (SSL)
|
|
||||||
protocol = "https" if use_ssl or port == 8729 else "http"
|
|
||||||
auth = base64.b64encode(f"{username}:{password}".encode()).decode()
|
auth = base64.b64encode(f"{username}:{password}".encode()).decode()
|
||||||
headers = {
|
headers = {
|
||||||
"Authorization": f"Basic {auth}",
|
"Authorization": f"Basic {auth}",
|
||||||
"Content-Type": "application/json"
|
"Content-Type": "application/json"
|
||||||
}
|
}
|
||||||
|
|
||||||
# SSL context per MikroTik (supporta protocolli TLS legacy)
|
|
||||||
ssl_context = None
|
|
||||||
if protocol == "https":
|
|
||||||
ssl_context = ssl.create_default_context()
|
|
||||||
ssl_context.check_hostname = False
|
|
||||||
ssl_context.verify_mode = ssl.CERT_NONE
|
|
||||||
# Abilita protocolli TLS legacy per MikroTik (TLS 1.0+)
|
|
||||||
try:
|
|
||||||
ssl_context.minimum_version = ssl.TLSVersion.TLSv1
|
|
||||||
except AttributeError:
|
|
||||||
# Python < 3.7 fallback
|
|
||||||
pass
|
|
||||||
# Abilita cipher suite legacy per compatibilità
|
|
||||||
ssl_context.set_ciphers('DEFAULT@SECLEVEL=1')
|
|
||||||
|
|
||||||
self.clients[key] = httpx.AsyncClient(
|
self.clients[key] = httpx.AsyncClient(
|
||||||
base_url=f"{protocol}://{router_ip}:{port}",
|
base_url=f"http://{router_ip}",
|
||||||
headers=headers,
|
headers=headers,
|
||||||
timeout=self.timeout,
|
timeout=self.timeout
|
||||||
verify=ssl_context if ssl_context else True
|
|
||||||
)
|
)
|
||||||
return self.clients[key]
|
return self.clients[key]
|
||||||
|
|
||||||
async def test_connection(self, router_ip: str, username: str, password: str, port: int = 8728, use_ssl: bool = False) -> bool:
|
async def test_connection(self, router_ip: str, username: str, password: str, port: int = 8728) -> bool:
|
||||||
"""Testa connessione a un router"""
|
"""Testa connessione a un router"""
|
||||||
try:
|
try:
|
||||||
# Auto-detect SSL: porta 8729 = SSL
|
client = self._get_client(router_ip, username, password, port)
|
||||||
if port == 8729:
|
|
||||||
use_ssl = True
|
|
||||||
client = self._get_client(router_ip, username, password, port, use_ssl)
|
|
||||||
# Prova a leggere system identity
|
# Prova a leggere system identity
|
||||||
response = await client.get("/rest/system/identity")
|
response = await client.get("/rest/system/identity")
|
||||||
return response.status_code == 200
|
return response.status_code == 200
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
print(f"[ERROR] Connessione a {router_ip}:{port} fallita: {e}")
|
print(f"[ERROR] Connessione a {router_ip} fallita: {e}")
|
||||||
return False
|
return False
|
||||||
|
|
||||||
async def add_address_list(
|
async def add_address_list(
|
||||||
@ -82,18 +59,14 @@ class MikroTikManager:
|
|||||||
list_name: str = "ddos_blocked",
|
list_name: str = "ddos_blocked",
|
||||||
comment: str = "",
|
comment: str = "",
|
||||||
timeout_duration: str = "1h",
|
timeout_duration: str = "1h",
|
||||||
port: int = 8728,
|
port: int = 8728
|
||||||
use_ssl: bool = False
|
|
||||||
) -> bool:
|
) -> bool:
|
||||||
"""
|
"""
|
||||||
Aggiunge IP alla address-list del router
|
Aggiunge IP alla address-list del router
|
||||||
timeout_duration: es. "1h", "30m", "1d"
|
timeout_duration: es. "1h", "30m", "1d"
|
||||||
"""
|
"""
|
||||||
try:
|
try:
|
||||||
# Auto-detect SSL: porta 8729 = SSL
|
client = self._get_client(router_ip, username, password, port)
|
||||||
if port == 8729:
|
|
||||||
use_ssl = True
|
|
||||||
client = self._get_client(router_ip, username, password, port, use_ssl)
|
|
||||||
|
|
||||||
# Controlla se IP già esiste
|
# Controlla se IP già esiste
|
||||||
response = await client.get("/rest/ip/firewall/address-list")
|
response = await client.get("/rest/ip/firewall/address-list")
|
||||||
@ -132,15 +105,11 @@ class MikroTikManager:
|
|||||||
password: str,
|
password: str,
|
||||||
ip_address: str,
|
ip_address: str,
|
||||||
list_name: str = "ddos_blocked",
|
list_name: str = "ddos_blocked",
|
||||||
port: int = 8728,
|
port: int = 8728
|
||||||
use_ssl: bool = False
|
|
||||||
) -> bool:
|
) -> bool:
|
||||||
"""Rimuove IP dalla address-list del router"""
|
"""Rimuove IP dalla address-list del router"""
|
||||||
try:
|
try:
|
||||||
# Auto-detect SSL: porta 8729 = SSL
|
client = self._get_client(router_ip, username, password, port)
|
||||||
if port == 8729:
|
|
||||||
use_ssl = True
|
|
||||||
client = self._get_client(router_ip, username, password, port, use_ssl)
|
|
||||||
|
|
||||||
# Trova ID dell'entry
|
# Trova ID dell'entry
|
||||||
response = await client.get("/rest/ip/firewall/address-list")
|
response = await client.get("/rest/ip/firewall/address-list")
|
||||||
@ -170,15 +139,11 @@ class MikroTikManager:
|
|||||||
username: str,
|
username: str,
|
||||||
password: str,
|
password: str,
|
||||||
list_name: Optional[str] = None,
|
list_name: Optional[str] = None,
|
||||||
port: int = 8728,
|
port: int = 8728
|
||||||
use_ssl: bool = False
|
|
||||||
) -> List[Dict]:
|
) -> List[Dict]:
|
||||||
"""Ottiene address-list da router"""
|
"""Ottiene address-list da router"""
|
||||||
try:
|
try:
|
||||||
# Auto-detect SSL: porta 8729 = SSL
|
client = self._get_client(router_ip, username, password, port)
|
||||||
if port == 8729:
|
|
||||||
use_ssl = True
|
|
||||||
client = self._get_client(router_ip, username, password, port, use_ssl)
|
|
||||||
response = await client.get("/rest/ip/firewall/address-list")
|
response = await client.get("/rest/ip/firewall/address-list")
|
||||||
|
|
||||||
if response.status_code == 200:
|
if response.status_code == 200:
|
||||||
|
|||||||
@ -1,719 +0,0 @@
|
|||||||
"""
|
|
||||||
IDS Hybrid ML Detector - Production-Grade System
|
|
||||||
Combines Extended Isolation Forest, Feature Selection, and Ensemble Classifier
|
|
||||||
Validated with CICIDS2017 dataset for high precision and low false positives
|
|
||||||
"""
|
|
||||||
|
|
||||||
import pandas as pd
|
|
||||||
import numpy as np
|
|
||||||
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
|
|
||||||
from sklearn.tree import DecisionTreeClassifier
|
|
||||||
from sklearn.preprocessing import StandardScaler
|
|
||||||
from sklearn.feature_selection import SelectKBest, chi2
|
|
||||||
from xgboost import XGBClassifier
|
|
||||||
try:
|
|
||||||
from eif import ExtendedIsolationForest
|
|
||||||
EIF_AVAILABLE = True
|
|
||||||
except ImportError:
|
|
||||||
from sklearn.ensemble import IsolationForest
|
|
||||||
EIF_AVAILABLE = False
|
|
||||||
print("[WARNING] Extended Isolation Forest not available, using standard IF")
|
|
||||||
|
|
||||||
from typing import List, Dict, Tuple, Optional, Literal
|
|
||||||
import joblib
|
|
||||||
import json
|
|
||||||
from pathlib import Path
|
|
||||||
from datetime import datetime
|
|
||||||
|
|
||||||
|
|
||||||
class MLHybridDetector:
|
|
||||||
"""
|
|
||||||
Hybrid ML Detector combining multiple techniques:
|
|
||||||
1. Extended Isolation Forest for unsupervised anomaly detection
|
|
||||||
2. Chi-Square feature selection for optimal feature subset
|
|
||||||
3. DRX Ensemble (DT+RF+XGBoost) for robust classification
|
|
||||||
4. Confidence scoring system (High/Medium/Low)
|
|
||||||
"""
|
|
||||||
|
|
||||||
def __init__(self, model_dir: str = "models"):
|
|
||||||
self.model_dir = Path(model_dir)
|
|
||||||
self.model_dir.mkdir(exist_ok=True)
|
|
||||||
|
|
||||||
# Models
|
|
||||||
self.isolation_forest = None
|
|
||||||
self.ensemble_classifier = None
|
|
||||||
self.feature_selector = None
|
|
||||||
self.scaler = None
|
|
||||||
|
|
||||||
# Feature metadata
|
|
||||||
self.feature_names = []
|
|
||||||
self.selected_feature_names = []
|
|
||||||
self.feature_importances = {}
|
|
||||||
|
|
||||||
# Configuration
|
|
||||||
self.config = {
|
|
||||||
# Extended Isolation Forest tuning
|
|
||||||
'eif_n_estimators': 250,
|
|
||||||
'eif_contamination': 0.03, # 3% expected anomalies (tuned from research)
|
|
||||||
'eif_max_samples': 256,
|
|
||||||
'eif_max_features': 0.8, # Feature diversity
|
|
||||||
'eif_extension_level': 0, # EIF-specific
|
|
||||||
|
|
||||||
# Feature Selection
|
|
||||||
'chi2_top_k': 18, # Top 18 most relevant features
|
|
||||||
|
|
||||||
# Ensemble configuration
|
|
||||||
'dt_max_depth': 10,
|
|
||||||
'rf_n_estimators': 100,
|
|
||||||
'rf_max_depth': 15,
|
|
||||||
'xgb_n_estimators': 100,
|
|
||||||
'xgb_max_depth': 7,
|
|
||||||
'xgb_learning_rate': 0.1,
|
|
||||||
|
|
||||||
# Voting weights (DT:RF:XGB = 1:2:2)
|
|
||||||
'voting_weights': [1, 2, 2],
|
|
||||||
|
|
||||||
# Confidence thresholds
|
|
||||||
'confidence_high': 95.0, # Auto-block
|
|
||||||
'confidence_medium': 70.0, # Alert for review
|
|
||||||
}
|
|
||||||
|
|
||||||
# Validation metrics (populated after validation)
|
|
||||||
self.metrics = {
|
|
||||||
'precision': None,
|
|
||||||
'recall': None,
|
|
||||||
'f1_score': None,
|
|
||||||
'false_positive_rate': None,
|
|
||||||
'accuracy': None,
|
|
||||||
}
|
|
||||||
|
|
||||||
def extract_features(self, logs_df: pd.DataFrame) -> pd.DataFrame:
|
|
||||||
"""
|
|
||||||
Extract 25 targeted features from network logs
|
|
||||||
Optimized for MikroTik syslog data
|
|
||||||
"""
|
|
||||||
if logs_df.empty:
|
|
||||||
return pd.DataFrame()
|
|
||||||
|
|
||||||
logs_df['timestamp'] = pd.to_datetime(logs_df['timestamp'])
|
|
||||||
features_list = []
|
|
||||||
|
|
||||||
for source_ip, group in logs_df.groupby('source_ip'):
|
|
||||||
group = group.sort_values('timestamp')
|
|
||||||
|
|
||||||
# Volume features (5)
|
|
||||||
# Handle different database schemas
|
|
||||||
if 'packets' in group.columns:
|
|
||||||
total_packets = group['packets'].sum()
|
|
||||||
else:
|
|
||||||
total_packets = len(group) # Each row = 1 packet
|
|
||||||
|
|
||||||
if 'bytes' in group.columns:
|
|
||||||
total_bytes = group['bytes'].sum()
|
|
||||||
elif 'packet_length' in group.columns:
|
|
||||||
total_bytes = group['packet_length'].sum() # Use packet_length from MikroTik logs
|
|
||||||
else:
|
|
||||||
total_bytes = 0
|
|
||||||
|
|
||||||
conn_count = len(group)
|
|
||||||
avg_packet_size = total_bytes / max(total_packets, 1)
|
|
||||||
bytes_per_second = total_bytes / max((group['timestamp'].max() - group['timestamp'].min()).total_seconds(), 1)
|
|
||||||
|
|
||||||
# Temporal features (8)
|
|
||||||
time_span_seconds = (group['timestamp'].max() - group['timestamp'].min()).total_seconds()
|
|
||||||
conn_per_second = conn_count / max(time_span_seconds, 1)
|
|
||||||
hour_of_day = group['timestamp'].dt.hour.mode()[0] if len(group) > 0 else 0
|
|
||||||
day_of_week = group['timestamp'].dt.dayofweek.mode()[0] if len(group) > 0 else 0
|
|
||||||
|
|
||||||
group['time_bucket'] = group['timestamp'].dt.floor('10s')
|
|
||||||
max_burst = group.groupby('time_bucket').size().max()
|
|
||||||
avg_burst = group.groupby('time_bucket').size().mean()
|
|
||||||
burst_variance = group.groupby('time_bucket').size().std()
|
|
||||||
|
|
||||||
time_diffs = group['timestamp'].diff().dt.total_seconds().dropna()
|
|
||||||
avg_interval = time_diffs.mean() if len(time_diffs) > 0 else 0
|
|
||||||
|
|
||||||
# Protocol diversity (6)
|
|
||||||
unique_protocols = group['protocol'].nunique() if 'protocol' in group.columns else 1
|
|
||||||
unique_dest_ports = group['dest_port'].nunique() if 'dest_port' in group.columns else 1
|
|
||||||
unique_dest_ips = group['dest_ip'].nunique() if 'dest_ip' in group.columns else 1
|
|
||||||
|
|
||||||
if 'protocol' in group.columns:
|
|
||||||
protocol_counts = group['protocol'].value_counts()
|
|
||||||
protocol_probs = protocol_counts / protocol_counts.sum()
|
|
||||||
protocol_entropy = -np.sum(protocol_probs * np.log2(protocol_probs + 1e-10))
|
|
||||||
tcp_ratio = (group['protocol'] == 'tcp').sum() / len(group)
|
|
||||||
udp_ratio = (group['protocol'] == 'udp').sum() / len(group)
|
|
||||||
else:
|
|
||||||
protocol_entropy = tcp_ratio = udp_ratio = 0
|
|
||||||
|
|
||||||
# Port scanning detection (3)
|
|
||||||
if 'dest_port' in group.columns:
|
|
||||||
unique_ports_contacted = group['dest_port'].nunique()
|
|
||||||
port_scan_score = unique_ports_contacted / max(conn_count, 1)
|
|
||||||
sorted_ports = sorted(group['dest_port'].dropna().unique())
|
|
||||||
sequential_ports = sum(1 for i in range(len(sorted_ports)-1) if sorted_ports[i+1] - sorted_ports[i] == 1)
|
|
||||||
else:
|
|
||||||
unique_ports_contacted = port_scan_score = sequential_ports = 0
|
|
||||||
|
|
||||||
# Behavioral anomalies (3)
|
|
||||||
packets_per_conn = total_packets / max(conn_count, 1)
|
|
||||||
|
|
||||||
if 'bytes' in group.columns and 'packets' in group.columns:
|
|
||||||
group['packet_size'] = group['bytes'] / group['packets'].replace(0, 1)
|
|
||||||
packet_size_variance = group['packet_size'].std()
|
|
||||||
elif 'packet_length' in group.columns:
|
|
||||||
# Use packet_length directly for variance
|
|
||||||
packet_size_variance = group['packet_length'].std()
|
|
||||||
else:
|
|
||||||
packet_size_variance = 0
|
|
||||||
|
|
||||||
if 'action' in group.columns:
|
|
||||||
blocked_ratio = (group['action'].str.contains('drop|reject|deny', case=False, na=False)).sum() / len(group)
|
|
||||||
else:
|
|
||||||
blocked_ratio = 0
|
|
||||||
|
|
||||||
features = {
|
|
||||||
'source_ip': source_ip,
|
|
||||||
'total_packets': total_packets,
|
|
||||||
'total_bytes': total_bytes,
|
|
||||||
'conn_count': conn_count,
|
|
||||||
'avg_packet_size': avg_packet_size,
|
|
||||||
'bytes_per_second': bytes_per_second,
|
|
||||||
'time_span_seconds': time_span_seconds,
|
|
||||||
'conn_per_second': conn_per_second,
|
|
||||||
'hour_of_day': hour_of_day,
|
|
||||||
'day_of_week': day_of_week,
|
|
||||||
'max_burst': max_burst,
|
|
||||||
'avg_burst': avg_burst,
|
|
||||||
'burst_variance': burst_variance if not np.isnan(burst_variance) else 0,
|
|
||||||
'avg_interval': avg_interval,
|
|
||||||
'unique_protocols': unique_protocols,
|
|
||||||
'unique_dest_ports': unique_dest_ports,
|
|
||||||
'unique_dest_ips': unique_dest_ips,
|
|
||||||
'protocol_entropy': protocol_entropy,
|
|
||||||
'tcp_ratio': tcp_ratio,
|
|
||||||
'udp_ratio': udp_ratio,
|
|
||||||
'unique_ports_contacted': unique_ports_contacted,
|
|
||||||
'port_scan_score': port_scan_score,
|
|
||||||
'sequential_ports': sequential_ports,
|
|
||||||
'packets_per_conn': packets_per_conn,
|
|
||||||
'packet_size_variance': packet_size_variance if not np.isnan(packet_size_variance) else 0,
|
|
||||||
'blocked_ratio': blocked_ratio,
|
|
||||||
}
|
|
||||||
|
|
||||||
features_list.append(features)
|
|
||||||
|
|
||||||
return pd.DataFrame(features_list)
|
|
||||||
|
|
||||||
def train_unsupervised(self, logs_df: pd.DataFrame) -> Dict:
|
|
||||||
"""
|
|
||||||
Train Hybrid System:
|
|
||||||
1. Extended Isolation Forest (unsupervised)
|
|
||||||
2. Pseudo-labeling from IF predictions
|
|
||||||
3. Ensemble Classifier (DT+RF+XGB) on pseudo-labels
|
|
||||||
"""
|
|
||||||
print(f"[HYBRID] Training hybrid model on {len(logs_df)} logs...")
|
|
||||||
|
|
||||||
features_df = self.extract_features(logs_df)
|
|
||||||
if features_df.empty:
|
|
||||||
raise ValueError("No features extracted")
|
|
||||||
|
|
||||||
print(f"[HYBRID] Extracted features for {len(features_df)} unique IPs")
|
|
||||||
|
|
||||||
# Separate source_ip
|
|
||||||
X = features_df.drop('source_ip', axis=1)
|
|
||||||
self.feature_names = X.columns.tolist()
|
|
||||||
|
|
||||||
# STEP 1: Initial IF training for pseudo-labels
|
|
||||||
print("[HYBRID] Pre-training Isolation Forest for feature selection...")
|
|
||||||
|
|
||||||
# Ensure non-negative values
|
|
||||||
X_positive = X.clip(lower=0) + 1e-10
|
|
||||||
|
|
||||||
# Normalize for initial IF
|
|
||||||
temp_scaler = StandardScaler()
|
|
||||||
X_temp_scaled = temp_scaler.fit_transform(X_positive)
|
|
||||||
|
|
||||||
# Train temporary IF for pseudo-labeling
|
|
||||||
if EIF_AVAILABLE:
|
|
||||||
temp_if = ExtendedIsolationForest(
|
|
||||||
n_estimators=100, # Faster pre-training
|
|
||||||
contamination=self.config['eif_contamination'],
|
|
||||||
random_state=42
|
|
||||||
)
|
|
||||||
else:
|
|
||||||
temp_if = IsolationForest(
|
|
||||||
n_estimators=100,
|
|
||||||
contamination=self.config['eif_contamination'],
|
|
||||||
random_state=42,
|
|
||||||
n_jobs=-1
|
|
||||||
)
|
|
||||||
|
|
||||||
temp_if.fit(X_temp_scaled)
|
|
||||||
temp_predictions = temp_if.predict(X_temp_scaled)
|
|
||||||
|
|
||||||
# Use IF predictions as pseudo-labels for feature selection
|
|
||||||
y_pseudo_select = (temp_predictions == -1).astype(int)
|
|
||||||
print(f"[HYBRID] Generated {y_pseudo_select.sum()} pseudo-anomalies from pre-training IF")
|
|
||||||
|
|
||||||
# Feature selection with Chi-Square
|
|
||||||
print(f"[HYBRID] Feature selection: {len(X.columns)} → {self.config['chi2_top_k']} features")
|
|
||||||
|
|
||||||
# Validate k is not larger than available features
|
|
||||||
k_select = min(self.config['chi2_top_k'], X_positive.shape[1])
|
|
||||||
if k_select < self.config['chi2_top_k']:
|
|
||||||
print(f"[HYBRID] Warning: Reducing k from {self.config['chi2_top_k']} to {k_select} (max available)")
|
|
||||||
|
|
||||||
self.feature_selector = SelectKBest(chi2, k=k_select)
|
|
||||||
X_selected = self.feature_selector.fit_transform(X_positive, y_pseudo_select)
|
|
||||||
|
|
||||||
# Get selected feature names
|
|
||||||
selected_indices = self.feature_selector.get_support(indices=True)
|
|
||||||
self.selected_feature_names = [self.feature_names[i] for i in selected_indices]
|
|
        print(f"[HYBRID] Selected features: {', '.join(self.selected_feature_names[:5])}... (+{len(self.selected_feature_names)-5} more)")

        # STEP 2: Normalize
        print("[HYBRID] Normalizing features...")
        self.scaler = StandardScaler()
        X_scaled = self.scaler.fit_transform(X_selected)

        # STEP 3: Train Extended Isolation Forest
        print(f"[HYBRID] Training Extended Isolation Forest (contamination={self.config['eif_contamination']})...")
        if EIF_AVAILABLE:
            self.isolation_forest = ExtendedIsolationForest(
                n_estimators=self.config['eif_n_estimators'],
                max_samples=self.config['eif_max_samples'],
                contamination=self.config['eif_contamination'],
                extension_level=self.config['eif_extension_level'],
                random_state=42,
            )
        else:
            self.isolation_forest = IsolationForest(
                n_estimators=self.config['eif_n_estimators'],
                max_samples=self.config['eif_max_samples'],
                contamination=self.config['eif_contamination'],
                max_features=self.config['eif_max_features'],
                random_state=42,
                n_jobs=-1
            )

        self.isolation_forest.fit(X_scaled)

        # STEP 4: Generate pseudo-labels from IF predictions
        print("[HYBRID] Generating pseudo-labels from Isolation Forest...")
        if_predictions = self.isolation_forest.predict(X_scaled)
        if_scores = self.isolation_forest.score_samples(X_scaled)

        # Convert IF predictions to pseudo-labels (1=anomaly, 0=normal)
        y_pseudo_train = (if_predictions == -1).astype(int)
        anomalies_count = y_pseudo_train.sum()

        # CRITICAL: Handle zero-anomaly case with ADAPTIVE PERCENTILES
        min_anomalies_required = max(10, int(len(y_pseudo_train) * 0.02))  # At least 2% or 10

        if anomalies_count < min_anomalies_required:
            print(f"[HYBRID] ⚠️ IF found only {anomalies_count} anomalies (need {min_anomalies_required})")
            print(f"[HYBRID] Applying ADAPTIVE percentile fallback...")

            # Try progressively higher percentiles to get enough pseudo-anomalies
            percentiles_to_try = [5, 10, 15, 20]  # Bottom X% scores
            for percentile in percentiles_to_try:
                anomaly_threshold = np.percentile(if_scores, percentile)
                y_pseudo_train = (if_scores <= anomaly_threshold).astype(int)
                anomalies_count = y_pseudo_train.sum()

                print(f"[HYBRID] Trying {percentile}% percentile → {anomalies_count} anomalies")

                if anomalies_count >= min_anomalies_required:
                    print(f"[HYBRID] ✅ Success with {percentile}% percentile")
                    break

        # Final check: FAIL if ensemble cannot be trained
        if anomalies_count < 2:
            error_msg = (
                f"HYBRID TRAINING FAILED: Insufficient pseudo-anomalies ({anomalies_count}) for ensemble training.\n\n"
                f"Dataset appears too clean for supervised ensemble classifier.\n"
                f"Attempted adaptive percentiles (5%, 10%, 15%, 20%) but still < 2 classes.\n\n"
                f"SOLUTIONS:\n"
                f" 1. Collect more diverse network traffic data\n"
                f" 2. Lower contamination threshold (currently {self.config['eif_contamination']})\n"
                f" 3. Use larger dataset (currently {len(features_df)} unique IPs)\n\n"
                f"IMPORTANT: Hybrid detector REQUIRES ensemble classifier.\n"
                f"Cannot deploy incomplete IF-only system when hybrid was requested."
            )
            print(f"\n[HYBRID] ❌ {error_msg}")

            raise ValueError(error_msg)

        print(f"[HYBRID] Pseudo-labels: {anomalies_count} anomalies, {len(y_pseudo_train)-anomalies_count} normal")

        # Use IF confidence: samples with extreme anomaly scores are labeled with higher confidence
        # High anomaly = low score, so invert
        score_min, score_max = if_scores.min(), if_scores.max()
        anomaly_confidence = 1 - (if_scores - score_min) / (score_max - score_min + 1e-10)

        # Weight samples: high confidence anomalies + random normal samples
        sample_weights = np.where(
            y_pseudo_train == 1,
            anomaly_confidence,  # Anomalies weighted by confidence
            0.5  # Normal traffic baseline weight
        )

        # STEP 5: Train Ensemble Classifier (DT + RF + XGBoost)
        print("[HYBRID] Training ensemble classifier (DT + RF + XGBoost)...")

        # CRITICAL: Re-check class distribution after all preprocessing
        unique_classes = np.unique(y_pseudo_train)
        if len(unique_classes) < 2:
            error_msg = (
                f"HYBRID TRAINING FAILED: Class distribution collapsed to {len(unique_classes)} class(es) "
                f"after feature selection/preprocessing.\n\n"
                f"This indicates feature selection eliminated discriminative features.\n\n"
                f"SOLUTIONS:\n"
                f" 1. Use larger dataset with more diverse traffic\n"
                f" 2. Lower contamination threshold\n"
                f" 3. Reduce chi2_top_k (currently {self.config['chi2_top_k']}) to keep more features\n\n"
                f"Hybrid detector REQUIRES ensemble classifier - cannot proceed with monoclasse."
            )
            print(f"\n[HYBRID] ❌ {error_msg}")
            raise ValueError(error_msg)

        print(f"[HYBRID] Class distribution OK: {unique_classes} (counts: {np.bincount(y_pseudo_train)})")

        # Decision Tree
        dt_classifier = DecisionTreeClassifier(
            max_depth=self.config['dt_max_depth'],
            random_state=42,
            class_weight='balanced'  # Handle imbalance
        )

        # Random Forest
        rf_classifier = RandomForestClassifier(
            n_estimators=self.config['rf_n_estimators'],
            max_depth=self.config['rf_max_depth'],
            random_state=42,
            n_jobs=-1,
            class_weight='balanced'
        )

        # XGBoost
        xgb_classifier = XGBClassifier(
            n_estimators=self.config['xgb_n_estimators'],
            max_depth=self.config['xgb_max_depth'],
            learning_rate=self.config['xgb_learning_rate'],
            random_state=42,
            use_label_encoder=False,
            eval_metric='logloss',
            scale_pos_weight=len(y_pseudo_train) / max(anomalies_count, 1)  # Handle imbalance
        )

        # Voting Classifier with weighted voting
        self.ensemble_classifier = VotingClassifier(
            estimators=[
                ('dt', dt_classifier),
                ('rf', rf_classifier),
                ('xgb', xgb_classifier)
            ],
            voting='soft',  # Use probability averaging
            weights=self.config['voting_weights']  # [1, 2, 2] - favor RF and XGB
        )

        # Train ensemble on pseudo-labeled data with error handling
        try:
            self.ensemble_classifier.fit(X_scaled, y_pseudo_train, sample_weight=sample_weights)
            print("[HYBRID] Ensemble .fit() completed successfully")
        except Exception as e:
            error_msg = (
                f"HYBRID TRAINING FAILED: Ensemble .fit() raised exception:\n{str(e)}\n\n"
                f"This may indicate:\n"
                f" - Insufficient data variation\n"
                f" - Class imbalance too extreme\n"
                f" - Invalid sample weights\n\n"
                f"Hybrid detector REQUIRES working ensemble classifier."
            )
            print(f"\n[HYBRID] ❌ {error_msg}")
            self.ensemble_classifier = None
            raise ValueError(error_msg) from e

        # Verify ensemble is functional
        if self.ensemble_classifier is None:
            error_msg = "HYBRID TRAINING FAILED: Ensemble classifier is None after fit()"
            print(f"\n[HYBRID] ❌ {error_msg}")
            raise ValueError(error_msg)

        # Verify ensemble has predict_proba method
        if not hasattr(self.ensemble_classifier, 'predict_proba'):
            error_msg = "HYBRID TRAINING FAILED: Ensemble missing predict_proba method"
            print(f"\n[HYBRID] ❌ {error_msg}")
            self.ensemble_classifier = None
            raise ValueError(error_msg)

        # Verify ensemble can make predictions
        try:
            test_proba = self.ensemble_classifier.predict_proba(X_scaled[:1])
            if test_proba.shape[1] < 2:
                raise ValueError(f"Ensemble produces {test_proba.shape[1]} classes, need 2")
            print(f"[HYBRID] ✅ Ensemble verified: produces {test_proba.shape[1]} class probabilities")
        except Exception as e:
            error_msg = f"HYBRID TRAINING FAILED: Ensemble cannot make predictions: {str(e)}"
            print(f"\n[HYBRID] ❌ {error_msg}")
            self.ensemble_classifier = None
            raise ValueError(error_msg) from e

        print("[HYBRID] Ensemble training completed and verified!")

        # Save models
        self.save_models()

        # FINAL VERIFICATION: Ensure ensemble is still set after save
        if self.ensemble_classifier is None:
            error_msg = "HYBRID TRAINING FAILED: Ensemble became None after save"
            print(f"\n[HYBRID] ❌ {error_msg}")
            raise ValueError(error_msg)

        # Calculate statistics - only after ALL verifications passed
        result = {
            'records_processed': len(logs_df),
            'unique_ips': len(features_df),
            'features_total': len(self.feature_names),
            'features_selected': len(self.selected_feature_names),
            'features_count': len(self.selected_feature_names),  # For backward compatibility with /train endpoint
            'anomalies_detected': int(anomalies_count),
            'contamination': self.config['eif_contamination'],
            'model_type': 'Hybrid (EIF + Ensemble)',
            'ensemble_models': ['DecisionTree', 'RandomForest', 'XGBoost'],
            'status': 'success',
            'ensemble_verified': True  # Explicit flag for verification
        }

        print(f"[HYBRID] ✅ Training completed successfully! {anomalies_count}/{len(features_df)} IPs flagged as anomalies")
        print(f"[HYBRID] ✅ Ensemble classifier verified and ready for production")

        return result

    def detect(
        self,
        logs_df: pd.DataFrame,
        mode: Literal['confidence', 'all'] = 'confidence'
    ) -> List[Dict]:
        """
        Detect anomalies with confidence scoring
        mode='confidence': only return high/medium confidence detections
        mode='all': return all detections with confidence levels
        """
        if self.isolation_forest is None or self.scaler is None:
            raise ValueError("Model not trained. Run train_unsupervised() first.")

        features_df = self.extract_features(logs_df)
        if features_df.empty:
            return []

        source_ips = features_df['source_ip'].values
        X = features_df.drop('source_ip', axis=1)

        # Apply same feature selection
        X_positive = X.clip(lower=0)
        X_positive = X_positive + 1e-10  # Add epsilon
        X_selected = self.feature_selector.transform(X_positive)
        X_scaled = self.scaler.transform(X_selected)

        # HYBRID SCORING: Combine Isolation Forest + Ensemble Classifier

        # Step 1: Isolation Forest score (unsupervised anomaly detection)
        if_predictions = self.isolation_forest.predict(X_scaled)
        if_scores = self.isolation_forest.score_samples(X_scaled)

        # Normalize IF scores to 0-100 (lower score = more anomalous)
        if_score_min, if_score_max = if_scores.min(), if_scores.max()
        if_risk_scores = 100 * (1 - (if_scores - if_score_min) / (if_score_max - if_score_min + 1e-10))

        # Step 2: Ensemble score (supervised classification on pseudo-labels)
        if self.ensemble_classifier is not None:
            print(f"[DETECT] Ensemble classifier available - computing hybrid score...")

            # Get ensemble probability predictions
            ensemble_proba = self.ensemble_classifier.predict_proba(X_scaled)
            # Probability of being anomaly (class 1)
            ensemble_anomaly_proba = ensemble_proba[:, 1]
            # Convert to 0-100 scale
            ensemble_risk_scores = ensemble_anomaly_proba * 100

            # Combine scores: weighted average (IF: 40%, Ensemble: 60%)
            # Ensemble gets more weight as it's trained on pseudo-labels
            risk_scores = 0.4 * if_risk_scores + 0.6 * ensemble_risk_scores

            # Debugging: show score distribution
            print(f"[DETECT] IF scores: min={if_risk_scores.min():.1f}, max={if_risk_scores.max():.1f}, mean={if_risk_scores.mean():.1f}")
            print(f"[DETECT] Ensemble scores: min={ensemble_risk_scores.min():.1f}, max={ensemble_risk_scores.max():.1f}, mean={ensemble_risk_scores.mean():.1f}")
            print(f"[DETECT] Combined scores: min={risk_scores.min():.1f}, max={risk_scores.max():.1f}, mean={risk_scores.mean():.1f}")
            print(f"[DETECT] ✅ Hybrid scoring active: 40% IF + 60% Ensemble")
        else:
            # Fallback to IF-only if ensemble not available
            risk_scores = if_risk_scores
            print(f"[DETECT] ⚠️ Ensemble NOT available - using IF-only scoring")
            print(f"[DETECT] IF scores: min={if_risk_scores.min():.1f}, max={if_risk_scores.max():.1f}, mean={if_risk_scores.mean():.1f}")

        # For backward compatibility
        predictions = if_predictions

        detections = []
        for i, (ip, pred, risk_score) in enumerate(zip(source_ips, predictions, risk_scores)):
            # Confidence scoring
            if risk_score >= self.config['confidence_high']:
                confidence_level = 'high'
                action_recommendation = 'auto_block'
            elif risk_score >= self.config['confidence_medium']:
                confidence_level = 'medium'
                action_recommendation = 'manual_review'
            else:
                confidence_level = 'low'
                action_recommendation = 'monitor'

            # Skip low confidence if mode='confidence'
            if mode == 'confidence' and confidence_level == 'low':
                continue

            # Classify anomaly type
            features = features_df.iloc[i]
            anomaly_type = self._classify_anomaly(features)
            reason = self._generate_reason(features, anomaly_type)

            # Get IP logs
            ip_logs = logs_df[logs_df['source_ip'] == ip]

            detection = {
                'source_ip': ip,
                'risk_score': float(risk_score),
                'confidence_level': confidence_level,
                'action_recommendation': action_recommendation,
                'anomaly_type': anomaly_type,
                'reason': reason,
                'log_count': len(ip_logs),
                'total_packets': int(features['total_packets']),
                'total_bytes': int(features['total_bytes']),
                'first_seen': ip_logs['timestamp'].min().isoformat(),
                'last_seen': ip_logs['timestamp'].max().isoformat(),
            }
            detections.append(detection)

        # Sort by risk_score descending
        detections.sort(key=lambda x: x['risk_score'], reverse=True)
        return detections

    def _classify_anomaly(self, features: pd.Series) -> str:
        """Classify anomaly type based on feature patterns"""
        # Use percentile-based thresholds instead of hardcoded
        # DDoS: extreme volume
        if features['bytes_per_second'] > 5000000 or features['conn_per_second'] > 200:
            return 'ddos'

        # Port scan: high port diversity + sequential patterns
        if features['port_scan_score'] > 0.6 or features['sequential_ports'] > 15:
            return 'port_scan'

        # Brute force: high connection rate to few ports
        if features['conn_per_second'] > 20 and features['unique_dest_ports'] < 5:
            return 'brute_force'

        # Botnet: regular patterns, low variance
        if features['burst_variance'] < 2 and features['conn_per_second'] > 5:
            return 'botnet'

        # Default: suspicious activity
        return 'suspicious'

    def _generate_reason(self, features: pd.Series, anomaly_type: str) -> str:
        """Generate human-readable reason"""
        reasons = []

        if features['bytes_per_second'] > 1000000:
            reasons.append(f"High bandwidth: {features['bytes_per_second']/1e6:.1f} MB/s")

        if features['conn_per_second'] > 50:
            reasons.append(f"High connection rate: {features['conn_per_second']:.1f} conn/s")

        if features['port_scan_score'] > 0.5:
            reasons.append(f"Port scanning: {features['unique_ports_contacted']:.0f} unique ports")

        if features['unique_dest_ips'] > 100:
            reasons.append(f"Multiple targets: {features['unique_dest_ips']:.0f} IPs")

        if not reasons:
            reasons.append(f"Anomalous pattern detected ({anomaly_type})")

        return " | ".join(reasons)

    def save_models(self):
        """Save all models and metadata"""
        timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')

        # Save models
        joblib.dump(self.isolation_forest, self.model_dir / f"isolation_forest_{timestamp}.pkl")
        joblib.dump(self.scaler, self.model_dir / f"scaler_{timestamp}.pkl")
        joblib.dump(self.feature_selector, self.model_dir / f"feature_selector_{timestamp}.pkl")

        # Save ensemble if available
        if self.ensemble_classifier is not None:
            joblib.dump(self.ensemble_classifier, self.model_dir / f"ensemble_classifier_{timestamp}.pkl")
            joblib.dump(self.ensemble_classifier, self.model_dir / "ensemble_classifier_latest.pkl")

        # Save latest (symlinks alternative)
        joblib.dump(self.isolation_forest, self.model_dir / "isolation_forest_latest.pkl")
        joblib.dump(self.scaler, self.model_dir / "scaler_latest.pkl")
        joblib.dump(self.feature_selector, self.model_dir / "feature_selector_latest.pkl")

        # Save metadata
        metadata = {
            'timestamp': timestamp,
            'feature_names': self.feature_names,
            'selected_feature_names': self.selected_feature_names,
            'config': self.config,
            'metrics': self.metrics,
            'has_ensemble': self.ensemble_classifier is not None,
        }

        with open(self.model_dir / f"metadata_{timestamp}.json", 'w') as f:
            json.dump(metadata, f, indent=2)

        with open(self.model_dir / "metadata_latest.json", 'w') as f:
            json.dump(metadata, f, indent=2)

        print(f"[HYBRID] Models saved to {self.model_dir}")
        if self.ensemble_classifier is not None:
            print(f"[HYBRID] Ensemble classifier included")

    def load_models(self, version: str = 'latest'):
        """Load models from disk"""
        try:
            self.isolation_forest = joblib.load(self.model_dir / f"isolation_forest_{version}.pkl")
            self.scaler = joblib.load(self.model_dir / f"scaler_{version}.pkl")
            self.feature_selector = joblib.load(self.model_dir / f"feature_selector_{version}.pkl")

            # Try to load ensemble if available
            ensemble_path = self.model_dir / f"ensemble_classifier_{version}.pkl"
            if ensemble_path.exists():
                self.ensemble_classifier = joblib.load(ensemble_path)
                print(f"[HYBRID] Ensemble classifier loaded")
            else:
                self.ensemble_classifier = None
                print(f"[HYBRID] No ensemble classifier found (IF-only mode)")

            with open(self.model_dir / f"metadata_{version}.json") as f:
                metadata = json.load(f)
                self.feature_names = metadata['feature_names']
                self.selected_feature_names = metadata['selected_feature_names']
                self.config.update(metadata['config'])
                self.metrics = metadata['metrics']

            print(f"[HYBRID] Models loaded (version: {version})")
            print(f"[HYBRID] Selected features: {len(self.selected_feature_names)}/{len(self.feature_names)}")

            if self.ensemble_classifier is not None:
                print(f"[HYBRID] Mode: Hybrid (IF + Ensemble)")
            else:
                print(f"[HYBRID] Mode: IF-only (Ensemble not available)")

            return True
        except Exception as e:
            print(f"[HYBRID] Failed to load models: {e}")
            return False
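For reference, a minimal usage sketch of the detector above (not part of the repository): it assumes the class is importable from ml_hybrid_detector, that the log DataFrame carries whatever columns extract_features() expects, and that "network_logs.csv" is a placeholder export. The risk_score returned by detect() blends the Isolation Forest score (40%) with the ensemble anomaly probability (60%), as implemented in the method above.

# Sketch (assumption: ml_hybrid_detector.MLHybridDetector is importable and
# logs_df has the columns extract_features() expects; CSV path is a placeholder)
import pandas as pd
from ml_hybrid_detector import MLHybridDetector

detector = MLHybridDetector(model_dir="models")
logs_df = pd.read_csv("network_logs.csv", parse_dates=["timestamp"])

# Unsupervised training: IF pseudo-labels feed the DT + RF + XGBoost ensemble
result = detector.train_unsupervised(logs_df)
print(f"{result['anomalies_detected']}/{result['unique_ips']} IPs used as pseudo-anomalies")

# mode='confidence' drops low-confidence detections; 'all' keeps everything
for det in detector.detect(logs_df, mode="confidence"):
    if det["action_recommendation"] == "auto_block":
        print(f"block {det['source_ip']} (risk {det['risk_score']:.0f}, {det['anomaly_type']})")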
@@ -7,5 +7,3 @@ psycopg2-binary==2.9.9
 python-dotenv==1.0.0
 pydantic==2.5.0
 httpx==0.25.1
-xgboost==2.0.3
-joblib==1.3.2
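The hunk above drops xgboost and joblib from requirements.txt on one side, while the hybrid detector imports both. A quick sanity check (a sketch, not part of the repository) that the pinned ML packages are actually installed before training:

# Sketch: verify the ML dependencies match the pins in requirements.txt above.
from importlib.metadata import version, PackageNotFoundError

pins = {"xgboost": "2.0.3", "joblib": "1.3.2", "httpx": "0.25.1"}
for pkg, wanted in pins.items():
    try:
        got = version(pkg)
        status = "OK" if got == wanted else f"expected {wanted}, found {got}"
    except PackageNotFoundError:
        status = "MISSING"
    print(f"{pkg}: {status}")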
@@ -165,19 +165,12 @@ class SyslogParser:
         """
         Processa file di log in modalità streaming (sicuro con rsyslog)
         follow: se True, segue il file come 'tail -f'

-        Resilient Features v2.0:
-        - Auto-reconnect on DB timeout
-        - Error recovery (continues after exceptions)
-        - Health metrics logging
         """
         print(f"[INFO] Processando {log_file} (follow={follow})")

         processed = 0
         saved = 0
         cleanup_counter = 0
-        errors = 0
-        last_health_check = time.time()

         try:
             with open(log_file, 'r') as f:
@@ -186,101 +179,49 @@ class SyslogParser:
                 f.seek(0, 2)  # Seek to end

                 while True:
-                    try:
-                        line = f.readline()
-
-                        if not line:
-                            if follow:
-                                time.sleep(0.1)  # Attendi nuove righe
-
-                                # Health check ogni 5 minuti
-                                if time.time() - last_health_check > 300:
-                                    print(f"[HEALTH] Parser alive: {processed} righe processate, {saved} salvate, {errors} errori")
-                                    last_health_check = time.time()
-
-                                # Commit batch ogni 100 righe processate
-                                if processed > 0 and processed % 100 == 0:
-                                    try:
-                                        self.conn.commit()
-                                    except Exception as commit_err:
-                                        print(f"[ERROR] Commit failed, reconnecting: {commit_err}")
-                                        self.reconnect_db()
-
-                                # Cleanup DB ogni ~16 minuti
-                                cleanup_counter += 1
-                                if cleanup_counter >= 10000:
-                                    self.cleanup_old_logs(days_to_keep=3)
-                                    cleanup_counter = 0
-
-                                continue
-                            else:
-                                break  # Fine file
-
-                        processed += 1
-
-                        # Parsa riga
-                        log_data = self.parse_log_line(line.strip())
-                        if log_data:
-                            try:
-                                self.save_to_db(log_data)
-                                saved += 1
-                            except Exception as save_err:
-                                errors += 1
-                                print(f"[ERROR] Save failed: {save_err}")
-                                # Try to reconnect and continue
-                                try:
-                                    self.reconnect_db()
-                                except:
-                                    pass
-
-                        # Commit ogni 100 righe
-                        if processed % 100 == 0:
-                            try:
-                                self.conn.commit()
-                                if saved > 0:
-                                    print(f"[INFO] Processate {processed} righe, salvate {saved} log, {errors} errori")
-                            except Exception as commit_err:
-                                print(f"[ERROR] Commit failed: {commit_err}")
-                                self.reconnect_db()
-
-                    except Exception as line_err:
-                        errors += 1
-                        if errors % 100 == 0:
-                            print(f"[ERROR] Error processing line ({errors} total errors): {line_err}")
-                        # Continue processing instead of crashing!
-                        continue
+                    line = f.readline()
+
+                    if not line:
+                        if follow:
+                            time.sleep(0.1)  # Attendi nuove righe
+
+                            # Commit batch ogni 100 righe processate
+                            if processed > 0 and processed % 100 == 0:
+                                self.conn.commit()
+
+                            # Cleanup DB ogni 1000 righe (~ ogni minuto)
+                            cleanup_counter += 1
+                            if cleanup_counter >= 10000:  # ~16 minuti
+                                self.cleanup_old_logs(days_to_keep=3)
+                                cleanup_counter = 0
+
+                            continue
+                        else:
+                            break  # Fine file
+
+                    processed += 1
+
+                    # Parsa riga
+                    log_data = self.parse_log_line(line.strip())
+                    if log_data:
+                        self.save_to_db(log_data)
+                        saved += 1
+
+                    # Commit ogni 100 righe
+                    if processed % 100 == 0:
+                        self.conn.commit()
+                        if saved > 0:
+                            print(f"[INFO] Processate {processed} righe, salvate {saved} log")

         except KeyboardInterrupt:
             print("\n[INFO] Interrotto dall'utente")
         except Exception as e:
-            print(f"[ERROR] Errore critico processamento file: {e}")
+            print(f"[ERROR] Errore processamento file: {e}")
             import traceback
             traceback.print_exc()
         finally:
-            try:
-                self.conn.commit()
-            except:
-                pass
-            print(f"[INFO] Totale: {processed} righe processate, {saved} log salvati, {errors} errori")
-
-    def reconnect_db(self):
-        """
-        Riconnette al database (auto-recovery on connection timeout)
-        """
-        print("[INFO] Tentativo riconnessione database...")
-        try:
-            self.disconnect_db()
-        except:
-            pass
-
-        time.sleep(2)
-
-        try:
-            self.connect_db()
-            print("[INFO] ✅ Riconnessione database riuscita")
-        except Exception as e:
-            print(f"[ERROR] ❌ Riconnessione fallita: {e}")
-            raise
+            self.conn.commit()
+            print(f"[INFO] Totale: {processed} righe processate, {saved} log salvati")

 def main():
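Both sides of the hunk above rely on the same tail-follow pattern (seek to end of file, poll readline(), sleep briefly when idle). A standalone sketch of just that loop, outside the parser class; the file path and line handler below are placeholders, not repository code:

# Sketch of the tail-follow loop used by process_file():
# seek to EOF, poll for new lines, sleep briefly when the file is idle.
import time

def follow(path: str, handle_line, poll_interval: float = 0.1):
    with open(path, "r") as f:
        f.seek(0, 2)  # start at end of file, like 'tail -f'
        while True:
            line = f.readline()
            if not line:
                time.sleep(poll_interval)  # wait for the syslog daemon to append more
                continue
            handle_line(line.strip())

# Example (placeholder path): follow("/var/log/mikrotik.log", print)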
|
|||||||
@ -1,240 +0,0 @@
|
|||||||
#!/usr/bin/env python3
|
|
||||||
"""
|
|
||||||
Script di test connessione MikroTik API
|
|
||||||
Verifica connessione a tutti i router configurati nel database
|
|
||||||
"""
|
|
||||||
|
|
||||||
import asyncio
|
|
||||||
import os
|
|
||||||
import sys
|
|
||||||
from dotenv import load_dotenv
|
|
||||||
import psycopg2
|
|
||||||
from mikrotik_manager import MikroTikManager
|
|
||||||
|
|
||||||
# Load environment variables
|
|
||||||
load_dotenv()
|
|
||||||
|
|
||||||
def get_routers_from_db():
|
|
||||||
"""Recupera router configurati dal database"""
|
|
||||||
try:
|
|
||||||
conn = psycopg2.connect(
|
|
||||||
host=os.getenv("PGHOST"),
|
|
||||||
port=os.getenv("PGPORT"),
|
|
||||||
database=os.getenv("PGDATABASE"),
|
|
||||||
user=os.getenv("PGUSER"),
|
|
||||||
password=os.getenv("PGPASSWORD")
|
|
||||||
)
|
|
||||||
cursor = conn.cursor()
|
|
||||||
|
|
||||||
cursor.execute("""
|
|
||||||
SELECT
|
|
||||||
id, name, ip_address, api_port,
|
|
||||||
username, password, enabled
|
|
||||||
FROM routers
|
|
||||||
ORDER BY name
|
|
||||||
""")
|
|
||||||
|
|
||||||
routers = []
|
|
||||||
for row in cursor.fetchall():
|
|
||||||
routers.append({
|
|
||||||
'id': row[0],
|
|
||||||
'name': row[1],
|
|
||||||
'ip_address': row[2],
|
|
||||||
'api_port': row[3],
|
|
||||||
'username': row[4],
|
|
||||||
'password': row[5],
|
|
||||||
'enabled': row[6]
|
|
||||||
})
|
|
||||||
|
|
||||||
cursor.close()
|
|
||||||
conn.close()
|
|
||||||
|
|
||||||
return routers
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f"❌ Errore connessione database: {e}")
|
|
||||||
return []
|
|
||||||
|
|
||||||
|
|
||||||
async def test_router_connection(manager, router):
|
|
||||||
"""Testa connessione a un singolo router"""
|
|
||||||
print(f"\n{'='*60}")
|
|
||||||
print(f"🔍 Test Router: {router['name']}")
|
|
||||||
print(f"{'='*60}")
|
|
||||||
print(f" IP: {router['ip_address']}")
|
|
||||||
print(f" Porta: {router['api_port']}")
|
|
||||||
print(f" Username: {router['username']}")
|
|
||||||
print(f" Enabled: {'✅ Sì' if router['enabled'] else '❌ No'}")
|
|
||||||
|
|
||||||
if not router['enabled']:
|
|
||||||
print(f" ⚠️ Router disabilitato - skip test")
|
|
||||||
return False
|
|
||||||
|
|
||||||
# Test connessione
|
|
||||||
print(f"\n 📡 Test connessione...")
|
|
||||||
try:
|
|
||||||
connected = await manager.test_connection(
|
|
||||||
router_ip=router['ip_address'],
|
|
||||||
username=router['username'],
|
|
||||||
password=router['password'],
|
|
||||||
port=router['api_port']
|
|
||||||
)
|
|
||||||
|
|
||||||
if connected:
|
|
||||||
print(f" ✅ Connessione OK!")
|
|
||||||
|
|
||||||
# Test lettura address-list
|
|
||||||
print(f" 📋 Lettura address-list...")
|
|
||||||
entries = await manager.get_address_list(
|
|
||||||
router_ip=router['ip_address'],
|
|
||||||
username=router['username'],
|
|
||||||
password=router['password'],
|
|
||||||
list_name="ddos_blocked",
|
|
||||||
port=router['api_port']
|
|
||||||
)
|
|
||||||
print(f" ✅ Trovati {len(entries)} IP in lista 'ddos_blocked'")
|
|
||||||
|
|
||||||
# Mostra primi 5 IP
|
|
||||||
if entries:
|
|
||||||
print(f"\n 📌 Primi 5 IP bloccati:")
|
|
||||||
for entry in entries[:5]:
|
|
||||||
ip = entry.get('address', 'N/A')
|
|
||||||
comment = entry.get('comment', 'N/A')
|
|
||||||
timeout = entry.get('timeout', 'N/A')
|
|
||||||
print(f" - {ip} | {comment} | timeout: {timeout}")
|
|
||||||
|
|
||||||
return True
|
|
||||||
else:
|
|
||||||
print(f" ❌ Connessione FALLITA")
|
|
||||||
print(f"\n 🔧 Suggerimenti:")
|
|
||||||
print(f" 1. Verifica che il router sia raggiungibile:")
|
|
||||||
print(f" ping {router['ip_address']}")
|
|
||||||
print(f" 2. Verifica che il servizio API sia abilitato sul router:")
|
|
||||||
print(f" /ip service print (deve mostrare 'api' o 'api-ssl' enabled)")
|
|
||||||
print(f" 3. Verifica firewall non blocchi porta {router['api_port']}")
|
|
||||||
print(f" 4. Verifica credenziali (username/password)")
|
|
||||||
return False
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f" ❌ Errore durante test: {e}")
|
|
||||||
print(f" 📋 Tipo errore: {type(e).__name__}")
|
|
||||||
import traceback
|
|
||||||
print(f" 📋 Stack trace:")
|
|
||||||
traceback.print_exc()
|
|
||||||
return False
|
|
||||||
|
|
||||||
|
|
||||||
async def test_block_unblock(manager, router, test_ip="1.2.3.4"):
|
|
||||||
"""Testa blocco/sblocco IP"""
|
|
||||||
print(f"\n 🧪 Test blocco/sblocco IP {test_ip}...")
|
|
||||||
|
|
||||||
# Test blocco
|
|
||||||
print(f" Blocco IP...")
|
|
||||||
blocked = await manager.add_address_list(
|
|
||||||
router_ip=router['ip_address'],
|
|
||||||
username=router['username'],
|
|
||||||
password=router['password'],
|
|
||||||
ip_address=test_ip,
|
|
||||||
list_name="ids_test",
|
|
||||||
comment="Test IDS API Fix",
|
|
||||||
timeout_duration="5m",
|
|
||||||
port=router['api_port']
|
|
||||||
)
|
|
||||||
|
|
||||||
if blocked:
|
|
||||||
print(f" ✅ IP bloccato con successo!")
|
|
||||||
|
|
||||||
# Aspetta 2 secondi
|
|
||||||
await asyncio.sleep(2)
|
|
||||||
|
|
||||||
# Test sblocco
|
|
||||||
print(f" Sblocco IP...")
|
|
||||||
unblocked = await manager.remove_address_list(
|
|
||||||
router_ip=router['ip_address'],
|
|
||||||
username=router['username'],
|
|
||||||
password=router['password'],
|
|
||||||
ip_address=test_ip,
|
|
||||||
list_name="ids_test",
|
|
||||||
port=router['api_port']
|
|
||||||
)
|
|
||||||
|
|
||||||
if unblocked:
|
|
||||||
print(f" ✅ IP sbloccato con successo!")
|
|
||||||
return True
|
|
||||||
else:
|
|
||||||
print(f" ⚠️ Sblocco fallito (ma blocco OK)")
|
|
||||||
return True
|
|
||||||
else:
|
|
||||||
print(f" ❌ Blocco IP fallito")
|
|
||||||
return False
|
|
||||||
|
|
||||||
|
|
||||||
async def main():
|
|
||||||
"""Test principale"""
|
|
||||||
print("╔════════════════════════════════════════════════════════════╗")
|
|
||||||
print("║ TEST CONNESSIONE MIKROTIK API REST ║")
|
|
||||||
print("║ IDS v2.0.0 - Hybrid Detector ║")
|
|
||||||
print("╚════════════════════════════════════════════════════════════╝")
|
|
||||||
|
|
||||||
# Recupera router dal database
|
|
||||||
print("\n📊 Caricamento router dal database...")
|
|
||||||
routers = get_routers_from_db()
|
|
||||||
|
|
||||||
if not routers:
|
|
||||||
print("❌ Nessun router trovato nel database!")
|
|
||||||
print("\n💡 Aggiungi router da: https://ids.alfacom.it/routers")
|
|
||||||
return
|
|
||||||
|
|
||||||
print(f"✅ Trovati {len(routers)} router configurati\n")
|
|
||||||
|
|
||||||
# Crea manager
|
|
||||||
manager = MikroTikManager(timeout=10)
|
|
||||||
|
|
||||||
# Test ogni router
|
|
||||||
results = []
|
|
||||||
for router in routers:
|
|
||||||
result = await test_router_connection(manager, router)
|
|
||||||
results.append({
|
|
||||||
'name': router['name'],
|
|
||||||
'ip': router['ip_address'],
|
|
||||||
'connected': result
|
|
||||||
})
|
|
||||||
|
|
||||||
# Se connesso, testa blocco/sblocco
|
|
||||||
if result and router['enabled']:
|
|
||||||
test_ok = await test_block_unblock(manager, router)
|
|
||||||
results[-1]['block_test'] = test_ok
|
|
||||||
|
|
||||||
# Riepilogo
|
|
||||||
print(f"\n{'='*60}")
|
|
||||||
print("📊 RIEPILOGO TEST")
|
|
||||||
print(f"{'='*60}\n")
|
|
||||||
|
|
||||||
for r in results:
|
|
||||||
conn_status = "✅ OK" if r['connected'] else "❌ FAIL"
|
|
||||||
block_status = ""
|
|
||||||
if 'block_test' in r:
|
|
||||||
block_status = " | Blocco: " + ("✅ OK" if r['block_test'] else "❌ FAIL")
|
|
||||||
print(f" {r['name']:20s} ({r['ip']:15s}): {conn_status}{block_status}")
|
|
||||||
|
|
||||||
success_count = sum(1 for r in results if r['connected'])
|
|
||||||
print(f"\n Totale: {success_count}/{len(results)} router connessi\n")
|
|
||||||
|
|
||||||
# Cleanup
|
|
||||||
await manager.close_all()
|
|
||||||
|
|
||||||
# Exit code
|
|
||||||
sys.exit(0 if success_count == len(results) else 1)
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
try:
|
|
||||||
asyncio.run(main())
|
|
||||||
except KeyboardInterrupt:
|
|
||||||
print("\n\n⚠️ Test interrotto dall'utente")
|
|
||||||
sys.exit(1)
|
|
||||||
except Exception as e:
|
|
||||||
print(f"\n\n❌ Errore critico: {e}")
|
|
||||||
import traceback
|
|
||||||
traceback.print_exc()
|
|
||||||
sys.exit(1)
|
|
||||||
@ -1,93 +0,0 @@
|
|||||||
#!/usr/bin/env python3
|
|
||||||
"""Test semplice connessione MikroTik - Debug"""
|
|
||||||
|
|
||||||
import httpx
|
|
||||||
import base64
|
|
||||||
import asyncio
|
|
||||||
|
|
||||||
async def test_simple():
|
|
||||||
print("🔍 Test Connessione MikroTik Semplificato\n")
|
|
||||||
|
|
||||||
# Configurazione
|
|
||||||
router_ip = "185.203.24.2"
|
|
||||||
port = 8728
|
|
||||||
username = "admin"
|
|
||||||
password = input(f"Password per {username}@{router_ip}: ")
|
|
||||||
|
|
||||||
# Test 1: Connessione TCP base
|
|
||||||
print(f"\n1️⃣ Test TCP porta {port}...")
|
|
||||||
try:
|
|
||||||
client = httpx.AsyncClient(timeout=5)
|
|
||||||
response = await client.get(f"http://{router_ip}:{port}")
|
|
||||||
print(f" ✅ Porta {port} aperta e risponde")
|
|
||||||
await client.aclose()
|
|
||||||
except Exception as e:
|
|
||||||
print(f" ❌ Porta {port} non raggiungibile: {e}")
|
|
||||||
return
|
|
||||||
|
|
||||||
# Test 2: Endpoint REST /rest/system/identity
|
|
||||||
print(f"\n2️⃣ Test endpoint REST /rest/system/identity...")
|
|
||||||
try:
|
|
||||||
auth = base64.b64encode(f"{username}:{password}".encode()).decode()
|
|
||||||
headers = {
|
|
||||||
"Authorization": f"Basic {auth}",
|
|
||||||
"Content-Type": "application/json"
|
|
||||||
}
|
|
||||||
|
|
||||||
client = httpx.AsyncClient(timeout=10)
|
|
||||||
url = f"http://{router_ip}:{port}/rest/system/identity"
|
|
||||||
print(f" URL: {url}")
|
|
||||||
|
|
||||||
response = await client.get(url, headers=headers)
|
|
||||||
print(f" Status Code: {response.status_code}")
|
|
||||||
print(f" Headers: {dict(response.headers)}")
|
|
||||||
|
|
||||||
if response.status_code == 200:
|
|
||||||
print(f" ✅ Autenticazione OK!")
|
|
||||||
print(f" Risposta: {response.text}")
|
|
||||||
elif response.status_code == 401:
|
|
||||||
print(f" ❌ Credenziali errate (401 Unauthorized)")
|
|
||||||
elif response.status_code == 404:
|
|
||||||
print(f" ❌ Endpoint non trovato (404) - API REST non abilitata?")
|
|
||||||
else:
|
|
||||||
print(f" ⚠️ Status inaspettato: {response.status_code}")
|
|
||||||
print(f" Risposta: {response.text}")
|
|
||||||
|
|
||||||
await client.aclose()
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f" ❌ Errore richiesta REST: {e}")
|
|
||||||
import traceback
|
|
||||||
traceback.print_exc()
|
|
||||||
return
|
|
||||||
|
|
||||||
# Test 3: Endpoint /rest/ip/firewall/address-list
|
|
||||||
print(f"\n3️⃣ Test endpoint address-list...")
|
|
||||||
try:
|
|
||||||
client = httpx.AsyncClient(timeout=10)
|
|
||||||
url = f"http://{router_ip}:{port}/rest/ip/firewall/address-list"
|
|
||||||
|
|
||||||
response = await client.get(url, headers=headers)
|
|
||||||
print(f" Status Code: {response.status_code}")
|
|
||||||
|
|
||||||
if response.status_code == 200:
|
|
||||||
data = response.json()
|
|
||||||
print(f" ✅ Address-list leggibile!")
|
|
||||||
print(f" Totale entries: {len(data)}")
|
|
||||||
if data:
|
|
||||||
print(f" Primo entry: {data[0]}")
|
|
||||||
else:
|
|
||||||
print(f" ⚠️ Status: {response.status_code}")
|
|
||||||
print(f" Risposta: {response.text}")
|
|
||||||
|
|
||||||
await client.aclose()
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f" ❌ Errore lettura address-list: {e}")
|
|
||||||
import traceback
|
|
||||||
traceback.print_exc()
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
|
||||||
print("="*60)
|
|
||||||
asyncio.run(test_simple())
|
|
||||||
print("\n" + "="*60)
|
|
||||||
@ -1,422 +0,0 @@
|
|||||||
#!/usr/bin/env python3
|
|
||||||
"""
|
|
||||||
IDS Hybrid ML Training Script
|
|
||||||
Trains Extended Isolation Forest with Feature Selection on CICIDS2017 or synthetic data
|
|
||||||
Validates with production-grade metrics
|
|
||||||
"""
|
|
||||||
|
|
||||||
import argparse
|
|
||||||
import sys
|
|
||||||
from pathlib import Path
|
|
||||||
import pandas as pd
|
|
||||||
import numpy as np
|
|
||||||
from datetime import datetime
|
|
||||||
|
|
||||||
# Import our modules
|
|
||||||
from ml_hybrid_detector import MLHybridDetector
|
|
||||||
from dataset_loader import CICIDS2017Loader
|
|
||||||
from validation_metrics import ValidationMetrics
|
|
||||||
|
|
||||||
|
|
||||||
def train_on_real_traffic(db_config: dict, days: int = 7) -> pd.DataFrame:
|
|
||||||
"""
|
|
||||||
Load real traffic from PostgreSQL database
|
|
||||||
Last N days of network_logs
|
|
||||||
"""
|
|
||||||
import psycopg2
|
|
||||||
from psycopg2.extras import RealDictCursor
|
|
||||||
|
|
||||||
print(f"[TRAIN] Loading last {days} days of real traffic from database...")
|
|
||||||
|
|
||||||
conn = psycopg2.connect(**db_config)
|
|
||||||
cursor = conn.cursor(cursor_factory=RealDictCursor)
|
|
||||||
|
|
||||||
query = """
|
|
||||||
SELECT
|
|
||||||
timestamp,
|
|
||||||
source_ip,
|
|
||||||
destination_ip as dest_ip,
|
|
||||||
destination_port as dest_port,
|
|
||||||
protocol,
|
|
||||||
packet_length,
|
|
||||||
action
|
|
||||||
FROM network_logs
|
|
||||||
WHERE timestamp > NOW() - INTERVAL '1 day' * %s
|
|
||||||
ORDER BY timestamp DESC
|
|
||||||
LIMIT 1000000
|
|
||||||
"""
|
|
||||||
|
|
||||||
cursor.execute(query, (days,))
|
|
||||||
rows = cursor.fetchall()
|
|
||||||
cursor.close()
|
|
||||||
conn.close()
|
|
||||||
|
|
||||||
if not rows:
|
|
||||||
raise ValueError("No data found in database")
|
|
||||||
|
|
||||||
df = pd.DataFrame(rows)
|
|
||||||
print(f"[TRAIN] Loaded {len(df)} logs from database")
|
|
||||||
|
|
||||||
return df
|
|
||||||
|
|
||||||
|
|
||||||
def save_training_history(db_config: dict, result: dict):
|
|
||||||
"""
|
|
||||||
Save training results to database training_history table
|
|
||||||
"""
|
|
||||||
import psycopg2
|
|
||||||
|
|
||||||
MODEL_VERSION = "2.0.0" # Hybrid ML Detector version
|
|
||||||
|
|
||||||
print(f"\n[TRAIN] Saving training history to database...")
|
|
||||||
|
|
||||||
try:
|
|
||||||
conn = psycopg2.connect(**db_config)
|
|
||||||
cursor = conn.cursor()
|
|
||||||
|
|
||||||
cursor.execute("""
|
|
||||||
INSERT INTO training_history
|
|
||||||
(model_version, records_processed, features_count, training_duration, status, notes)
|
|
||||||
VALUES (%s, %s, %s, %s, %s, %s)
|
|
||||||
""", (
|
|
||||||
MODEL_VERSION,
|
|
||||||
result['records_processed'],
|
|
||||||
result['features_selected'], # Use selected features count
|
|
||||||
0, # duration not implemented yet
|
|
||||||
'success',
|
|
||||||
f"Anomalie: {result['anomalies_detected']}/{result['unique_ips']} - {result['model_type']}"
|
|
||||||
))
|
|
||||||
|
|
||||||
conn.commit()
|
|
||||||
cursor.close()
|
|
||||||
conn.close()
|
|
||||||
|
|
||||||
print(f"[TRAIN] ✅ Training history saved (version {MODEL_VERSION})")
|
|
||||||
|
|
||||||
except Exception as e:
|
|
||||||
print(f"[TRAIN] ⚠ Failed to save training history: {e}")
|
|
||||||
# Don't fail the whole training if just logging fails
|
|
||||||
|
|
||||||
|
|
||||||
def train_unsupervised(args):
|
|
||||||
"""
|
|
||||||
Train unsupervised model (no labels needed)
|
|
||||||
Uses real traffic or synthetic data
|
|
||||||
"""
|
|
||||||
print("\n" + "="*70)
|
|
||||||
print(" IDS HYBRID ML TRAINING - UNSUPERVISED MODE")
|
|
||||||
print("="*70)
|
|
||||||
|
|
||||||
detector = MLHybridDetector(model_dir=args.model_dir)
|
|
||||||
|
|
||||||
# Database config for later use
|
|
||||||
db_config = None
|
|
||||||
|
|
||||||
# Load data
|
|
||||||
if args.source == 'synthetic':
|
|
||||||
print("\n[TRAIN] Using synthetic dataset...")
|
|
||||||
loader = CICIDS2017Loader()
|
|
||||||
logs_df = loader.create_sample_dataset(n_samples=args.n_samples)
|
|
||||||
# Remove labels for unsupervised training
|
|
||||||
logs_df = logs_df.drop(['is_attack', 'attack_type'], axis=1, errors='ignore')
|
|
||||||
|
|
||||||
elif args.source == 'database':
|
|
||||||
db_config = {
|
|
||||||
'host': args.db_host,
|
|
||||||
'port': args.db_port,
|
|
||||||
'database': args.db_name,
|
|
||||||
'user': args.db_user,
|
|
||||||
'password': args.db_password,
|
|
||||||
}
|
|
||||||
logs_df = train_on_real_traffic(db_config, days=args.days)
|
|
||||||
|
|
||||||
else:
|
|
||||||
raise ValueError(f"Invalid source: {args.source}")
|
|
||||||
|
|
||||||
# Train
|
|
||||||
print(f"\n[TRAIN] Training on {len(logs_df)} logs...")
|
|
||||||
result = detector.train_unsupervised(logs_df)
|
|
||||||
|
|
||||||
# Print results
|
|
||||||
print("\n" + "="*70)
|
|
||||||
print(" TRAINING RESULTS")
|
|
||||||
print("="*70)
|
|
||||||
print(f" Records processed: {result['records_processed']:,}")
|
|
||||||
print(f" Unique IPs: {result['unique_ips']:,}")
|
|
||||||
print(f" Features (total): {result['features_total']}")
|
|
||||||
print(f" Features (selected): {result['features_selected']}")
|
|
||||||
print(f" Anomalies detected: {result['anomalies_detected']:,} ({result['anomalies_detected']/result['unique_ips']*100:.1f}%)")
|
|
||||||
print(f" Contamination: {result['contamination']*100:.1f}%")
|
|
||||||
print(f" Model type: {result['model_type']}")
|
|
||||||
print("="*70)
|
|
||||||
|
|
||||||
# Save training history to database (if using database source)
|
|
||||||
if db_config and args.source == 'database':
|
|
||||||
save_training_history(db_config, result)
|
|
||||||
|
|
||||||
print(f"\n✅ Training completed! Models saved to: {args.model_dir}")
|
|
||||||
print(f"\nNext steps:")
|
|
||||||
print(f" 1. Test detection: python python_ml/test_detection.py")
|
|
||||||
print(f" 2. Validate with CICIDS2017: python python_ml/train_hybrid.py --validate")
|
|
||||||
|
|
||||||
return detector
|
|
||||||
|
|
||||||
|
|
||||||
def validate_with_cicids(args):
|
|
||||||
"""
|
|
||||||
Validate trained model with CICIDS2017 dataset
|
|
||||||
Calculate Precision, Recall, F1, FPR
|
|
||||||
"""
|
|
||||||
print("\n" + "="*70)
|
|
||||||
print(" IDS HYBRID ML VALIDATION - CICIDS2017")
|
|
||||||
print("="*70)
|
|
||||||
|
|
||||||
# Load dataset
|
|
||||||
loader = CICIDS2017Loader(data_dir=args.cicids_dir)
|
|
||||||
|
|
||||||
# Check if dataset exists
|
|
||||||
exists, missing = loader.check_dataset_exists()
|
|
||||||
if not exists:
|
|
||||||
print("\n❌ CICIDS2017 dataset not found!")
|
|
||||||
print(loader.download_instructions())
|
|
||||||
sys.exit(1)
|
|
||||||
|
|
||||||
print("\n[VALIDATE] Loading CICIDS2017 dataset...")
|
|
||||||
|
|
||||||
# Use sample for faster testing
|
|
||||||
sample_frac = args.sample_frac if args.sample_frac > 0 else 1.0
|
|
||||||
train_df, val_df, test_df = loader.load_and_process_all(
|
|
||||||
sample_frac=sample_frac,
|
|
||||||
train_ratio=0.7,
|
|
||||||
val_ratio=0.15
|
|
||||||
)
|
|
||||||
|
|
||||||
print(f"\n[VALIDATE] Dataset split:")
|
|
||||||
print(f" Train: {len(train_df):,} samples")
|
|
||||||
print(f" Val: {len(val_df):,} samples")
|
|
||||||
print(f" Test: {len(test_df):,} samples")
|
|
||||||
|
|
||||||
# Load or train model
|
|
||||||
detector = MLHybridDetector(model_dir=args.model_dir)
|
|
||||||
|
|
||||||
if args.retrain or not detector.load_models():
|
|
||||||
print("\n[VALIDATE] Training new model on CICIDS2017 training set...")
|
|
||||||
# Use normal traffic only for unsupervised training
|
|
||||||
normal_train = train_df[train_df['is_attack'] == 0].drop(['is_attack', 'attack_type'], axis=1)
|
|
||||||
result = detector.train_unsupervised(normal_train)
|
|
||||||
print(f" Trained on {len(normal_train):,} normal traffic samples")
|
|
||||||
else:
|
|
||||||
print("\n[VALIDATE] Using existing trained model")
|
|
||||||
|
|
||||||
# Validate on test set
|
|
||||||
print("\n[VALIDATE] Running detection on test set...")
|
|
||||||
test_logs = test_df.drop(['is_attack', 'attack_type'], axis=1)
|
|
||||||
detections = detector.detect(test_logs, mode='all')
|
|
||||||
|
|
||||||
# Convert detections to binary predictions
|
|
||||||
# Create set of detected IPs with risk_score >= 60 (configurable threshold)
|
|
||||||
detection_threshold = 60
|
|
||||||
detected_ips = {d['source_ip'] for d in detections if d['risk_score'] >= detection_threshold}
|
|
||||||
|
|
||||||
print(f"[VALIDATE] Detected {len(detected_ips)} unique IPs above threshold {detection_threshold}")
|
|
||||||
|
|
||||||
# Create predictions array by mapping source_ip
|
|
||||||
y_true = test_df['is_attack'].values
|
|
||||||
y_pred = np.zeros(len(test_df), dtype=int)
|
|
||||||
|
|
||||||
# Map detections to test_df rows using source_ip
|
|
||||||
for i, row in test_df.iterrows():
|
|
||||||
if row['source_ip'] in detected_ips:
|
|
||||||
y_pred[i] = 1
|
|
||||||
|
|
||||||
# Calculate metrics
|
|
||||||
print("\n[VALIDATE] Calculating validation metrics...")
|
|
||||||
validator = ValidationMetrics()
|
|
||||||
metrics = validator.calculate(y_true, y_pred)
|
|
||||||
|
|
||||||
# Print summary
|
|
||||||
validator.print_summary(metrics, title="CICIDS2017 Validation Results")
|
|
||||||
|
|
||||||
# Check production criteria
|
|
||||||
print("\n[VALIDATE] Checking production deployment criteria...")
|
|
||||||
passes, issues = validator.meets_production_criteria(
|
|
||||||
metrics,
|
|
||||||
min_precision=0.90,
|
|
||||||
max_fpr=0.05,
|
|
||||||
min_recall=0.80
|
|
||||||
)
|
|
||||||
|
|
||||||
# Save metrics
|
|
||||||
metrics_file = Path(args.model_dir) / f"validation_metrics_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json"
|
|
||||||
validator.save_metrics(metrics, str(metrics_file))
|
|
||||||
|
|
||||||
if passes:
|
|
||||||
print(f"\n🎉 Model ready for production deployment!")
|
|
||||||
else:
|
|
||||||
print(f"\n⚠️ Model needs improvement before production")
|
|
||||||
print(f"\nSuggestions:")
|
|
||||||
print(f" - Adjust contamination parameter (currently {detector.config['eif_contamination']})")
|
|
||||||
print(f" - Increase n_estimators for more stable predictions")
|
|
||||||
print(f" - Review feature selection threshold")
|
|
||||||
|
|
||||||
return detector, metrics
|
|
||||||
|
|
||||||
|
|
||||||
def test_on_synthetic(args):
|
|
||||||
"""
|
|
||||||
Quick test on synthetic data to verify system works
|
|
||||||
"""
|
|
||||||
print("\n" + "="*70)
|
|
||||||
print(" IDS HYBRID ML TEST - SYNTHETIC DATA")
|
|
||||||
print("="*70)
|
|
||||||
|
|
||||||
# Create synthetic dataset
|
|
||||||
loader = CICIDS2017Loader()
|
|
||||||
df = loader.create_sample_dataset(n_samples=args.n_samples)
|
|
||||||
|
|
||||||
print(f"\n[TEST] Created synthetic dataset: {len(df)} samples")
|
|
||||||
print(f" Normal: {(df['is_attack']==0).sum():,} ({(df['is_attack']==0).sum()/len(df)*100:.1f}%)")
|
|
||||||
print(f" Attacks: {(df['is_attack']==1).sum():,} ({(df['is_attack']==1).sum()/len(df)*100:.1f}%)")
|
|
||||||
|
|
||||||
# Split
|
|
||||||
n = len(df)
|
|
||||||
n_train = int(n * 0.7)
|
|
||||||
|
|
||||||
train_df = df.iloc[:n_train]
|
|
||||||
test_df = df.iloc[n_train:]
|
|
||||||
|
|
||||||
# Train on normal traffic only
|
|
||||||
detector = MLHybridDetector(model_dir=args.model_dir)
|
|
||||||
normal_train = train_df[train_df['is_attack'] == 0].drop(['is_attack', 'attack_type'], axis=1)
|
|
||||||
|
|
||||||
print(f"\n[TEST] Training on {len(normal_train):,} normal samples...")
|
|
||||||
detector.train_unsupervised(normal_train)
|
|
||||||
|
|
||||||
# Test detection
|
|
||||||
test_logs = test_df.drop(['is_attack', 'attack_type'], axis=1)
|
|
||||||
detections = detector.detect(test_logs, mode='all')
|
|
||||||
|
|
||||||
print(f"\n[TEST] Detection results:")
|
|
||||||
print(f" Total detections: {len(detections)}")
|
|
||||||
|
|
||||||
# Count by confidence
|
|
||||||
confidence_counts = {'high': 0, 'medium': 0, 'low': 0}
|
|
||||||
for d in detections:
|
|
||||||
confidence_counts[d['confidence_level']] += 1
|
|
||||||
|
|
||||||
print(f" High confidence: {confidence_counts['high']}")
|
|
||||||
print(f" Medium confidence: {confidence_counts['medium']}")
|
|
||||||
print(f" Low confidence: {confidence_counts['low']}")
|
|
||||||
|
|
||||||
# Show top 5 detections
|
|
||||||
print(f"\n[TEST] Top 5 detections:")
|
|
||||||
for i, d in enumerate(detections[:5], 1):
|
|
||||||
print(f" {i}. {d['source_ip']}: risk={d['risk_score']:.1f}, "
|
|
||||||
f"type={d['anomaly_type']}, confidence={d['confidence_level']}")
|
|
||||||
|
|
||||||
# Validation - map detections to test_df rows using source_ip
|
|
||||||
detection_threshold = 60
|
|
||||||
detected_ips = {d['source_ip'] for d in detections if d['risk_score'] >= detection_threshold}
|
|
||||||
|
|
||||||
y_true = test_df['is_attack'].values
|
|
||||||
y_pred = np.zeros(len(test_df), dtype=int)
|
|
||||||
|
|
||||||
# Map detections to test_df rows (use enumerate for correct indexing)
|
|
||||||
for idx, (_, row) in enumerate(test_df.iterrows()):
|
|
||||||
if row['source_ip'] in detected_ips:
|
|
||||||
y_pred[idx] = 1
|
|
||||||
|
|
||||||
validator = ValidationMetrics()
|
|
||||||
metrics = validator.calculate(y_true, y_pred)
|
|
||||||
validator.print_summary(metrics, title="Synthetic Test Results")
|
|
||||||
|
|
||||||
print("\n✅ System test completed!")
|
|
||||||
|
|
||||||
# Check if ensemble was trained
|
|
||||||
if detector.ensemble_classifier is None:
|
|
||||||
print("\n⚠️ WARNING: System running in IF-only mode (no ensemble)")
|
|
||||||
print(" This may occur with very clean datasets")
|
|
||||||
print(" Expected metrics will be lower than hybrid mode")
|
|
||||||
|
|
||||||
return detector, metrics
|
|
||||||
|
|
||||||
|
|
||||||
def main():
|
|
||||||
parser = argparse.ArgumentParser(
|
|
||||||
description="Train and validate IDS Hybrid ML Detector",
|
|
||||||
formatter_class=argparse.RawDescriptionHelpFormatter,
|
|
||||||
epilog="""
|
|
||||||
Examples:
|
|
||||||
# Quick test with synthetic data
|
|
||||||
python train_hybrid.py --test
|
|
||||||
|
|
||||||
# Train on real traffic from database
|
|
||||||
python train_hybrid.py --source database --days 7
|
|
||||||
|
|
||||||
# Validate with CICIDS2017 (full dataset)
|
|
||||||
python train_hybrid.py --validate
|
|
||||||
|
|
||||||
# Validate with CICIDS2017 (10% sample for testing)
|
|
||||||
python train_hybrid.py --validate --sample 0.1
|
|
||||||
"""
|
|
||||||
)
|
|
||||||
|
|
||||||
# Mode selection
|
|
||||||
mode = parser.add_mutually_exclusive_group(required=True)
|
|
||||||
mode.add_argument('--train', action='store_true', help='Train unsupervised model')
|
|
||||||
mode.add_argument('--validate', action='store_true', help='Validate with CICIDS2017')
|
|
||||||
mode.add_argument('--test', action='store_true', help='Quick test with synthetic data')
|
|
||||||
|
|
||||||
# Data source
|
|
||||||
parser.add_argument('--source', choices=['synthetic', 'database'], default='synthetic',
|
|
||||||
help='Data source for training (default: synthetic)')
|
|
||||||
|
|
||||||
# Database options
|
|
||||||
parser.add_argument('--db-host', default='localhost', help='Database host')
|
|
||||||
parser.add_argument('--db-port', type=int, default=5432, help='Database port')
|
|
||||||
parser.add_argument('--db-name', default='ids', help='Database name')
|
|
||||||
parser.add_argument('--db-user', default='postgres', help='Database user')
|
|
||||||
parser.add_argument('--db-password', help='Database password')
|
|
||||||
parser.add_argument('--days', type=int, default=7, help='Days of traffic to load from DB')
|
|
||||||
|
|
||||||
# CICIDS2017 options
|
|
||||||
parser.add_argument('--cicids-dir', default='datasets/cicids2017',
|
|
||||||
help='CICIDS2017 dataset directory')
|
|
||||||
parser.add_argument('--sample', type=float, dest='sample_frac', default=0,
|
|
||||||
help='Sample fraction of CICIDS2017 (0.1 = 10%, 0 = all)')
|
|
||||||
parser.add_argument('--retrain', action='store_true',
|
|
||||||
help='Force retrain even if model exists')
|
|
||||||
|
|
||||||
# General options
|
|
||||||
parser.add_argument('--model-dir', default='models', help='Model save directory')
|
|
||||||
parser.add_argument('--n-samples', type=int, default=10000,
|
|
||||||
help='Number of synthetic samples to generate')
|
|
||||||
|
|
||||||
args = parser.parse_args()
|
|
||||||
|
|
||||||
# Validate database password if needed
|
|
||||||
if args.source == 'database' and not args.db_password:
|
|
||||||
print("Error: --db-password required when using --source database")
|
|
||||||
sys.exit(1)
|
|
||||||
|
|
||||||
# Execute mode
|
|
||||||
try:
|
|
||||||
if args.test:
|
|
||||||
test_on_synthetic(args)
|
|
||||||
elif args.validate:
|
|
||||||
validate_with_cicids(args)
|
|
||||||
elif args.train:
|
|
||||||
train_unsupervised(args)
|
|
||||||
|
|
||||||
except KeyboardInterrupt:
|
|
||||||
print("\n\nInterrupted by user")
|
|
||||||
sys.exit(1)
|
|
||||||
except Exception as e:
|
|
||||||
print(f"\n❌ Error: {e}")
|
|
||||||
import traceback
|
|
||||||
traceback.print_exc()
|
|
||||||
sys.exit(1)
|
|
||||||
|
|
||||||
|
|
||||||
if __name__ == '__main__':
|
|
||||||
main()
|
|
||||||
@ -1,324 +0,0 @@
|
|||||||
"""
|
|
||||||
Validation Metrics for IDS Models
|
|
||||||
Calculates Precision, Recall, F1-Score, False Positive Rate, Accuracy
|
|
||||||
"""
|
|
||||||
|
|
||||||
import numpy as np
|
|
||||||
import pandas as pd
|
|
||||||
from typing import Dict, Tuple, Optional
|
|
||||||
from sklearn.metrics import (
|
|
||||||
precision_score,
|
|
||||||
recall_score,
|
|
||||||
f1_score,
|
|
||||||
accuracy_score,
|
|
||||||
confusion_matrix,
|
|
||||||
roc_auc_score,
|
|
||||||
classification_report
|
|
||||||
)
|
|
||||||
import json
|
|
||||||
|
|
class ValidationMetrics:
    """Calculate and track validation metrics for IDS models"""

    def __init__(self):
        self.history = []

    def calculate(
        self,
        y_true: np.ndarray,
        y_pred: np.ndarray,
        y_prob: Optional[np.ndarray] = None
    ) -> Dict:
        """
        Calculate all metrics

        Args:
            y_true: True labels (0=normal, 1=attack)
            y_pred: Predicted labels (0=normal, 1=attack)
            y_prob: Prediction probabilities (optional, for ROC-AUC)

        Returns:
            Dict with all metrics
        """
        # Confusion matrix
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

        # Core metrics
        precision = precision_score(y_true, y_pred, zero_division=0)
        recall = recall_score(y_true, y_pred, zero_division=0)
        f1 = f1_score(y_true, y_pred, zero_division=0)
        accuracy = accuracy_score(y_true, y_pred)

        # False Positive Rate (critical for IDS!)
        fpr = fp / (fp + tn) if (fp + tn) > 0 else 0

        # True Negative Rate (Specificity)
        tnr = tn / (tn + fp) if (tn + fp) > 0 else 0

        # Matthews Correlation Coefficient (good for imbalanced datasets)
        mcc_num = (tp * tn) - (fp * fn)
        mcc_den = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        mcc = mcc_num / mcc_den if mcc_den > 0 else 0

        metrics = {
            # Primary metrics
            'precision': float(precision),
            'recall': float(recall),
            'f1_score': float(f1),
            'accuracy': float(accuracy),
            'false_positive_rate': float(fpr),

            # Additional metrics
            'true_negative_rate': float(tnr),  # Specificity
            'matthews_corr_coef': float(mcc),

            # Confusion matrix
            'true_positives': int(tp),
            'false_positives': int(fp),
            'true_negatives': int(tn),
            'false_negatives': int(fn),

            # Sample counts
            'total_samples': int(len(y_true)),
            'total_attacks': int(np.sum(y_true == 1)),
            'total_normal': int(np.sum(y_true == 0)),
        }

        # ROC-AUC if probabilities provided
        if y_prob is not None:
            try:
                roc_auc = roc_auc_score(y_true, y_prob)
                metrics['roc_auc'] = float(roc_auc)
            except Exception:
                metrics['roc_auc'] = None

        return metrics

    def calculate_per_class(
        self,
        y_true: np.ndarray,
        y_pred: np.ndarray,
        class_names: Optional[list] = None
    ) -> Dict:
        """
        Calculate metrics per attack type

        Args:
            y_true: True class labels (attack types)
            y_pred: Predicted class labels
            class_names: List of class names

        Returns:
            Dict with per-class metrics
        """
        if class_names is None:
            class_names = sorted(np.unique(np.concatenate([y_true, y_pred])))

        # Get classification report as dict
        report = classification_report(
            y_true,
            y_pred,
            target_names=class_names,
            output_dict=True,
            zero_division=0
        )

        # Format per-class metrics
        per_class = {}
        for class_name in class_names:
            if class_name in report:
                per_class[class_name] = {
                    'precision': report[class_name]['precision'],
                    'recall': report[class_name]['recall'],
                    'f1_score': report[class_name]['f1-score'],
                    'support': report[class_name]['support'],
                }

        # Add macro/weighted averages
        per_class['macro_avg'] = report['macro avg']
        per_class['weighted_avg'] = report['weighted avg']

        return per_class

    def print_summary(self, metrics: Dict, title: str = "Validation Metrics"):
        """Print formatted metrics summary"""
        print(f"\n{'='*60}")
        print(f"{title:^60}")
        print(f"{'='*60}")

        print(f"\n🎯 Primary Metrics:")
        print(f"  Precision: {metrics['precision']*100:6.2f}% (of 100 flagged, how many are real attacks)")
        print(f"  Recall:    {metrics['recall']*100:6.2f}% (of 100 attacks, how many detected)")
        print(f"  F1-Score:  {metrics['f1_score']*100:6.2f}% (harmonic mean of P&R)")
        print(f"  Accuracy:  {metrics['accuracy']*100:6.2f}% (overall correctness)")

        print(f"\n⚠️ False Positive Analysis:")
        print(f"  FP Rate:  {metrics['false_positive_rate']*100:6.2f}% (normal traffic flagged as attack)")
        print(f"  FP Count: {metrics['false_positives']:6d} (actual false positives)")
        print(f"  TN Rate:  {metrics['true_negative_rate']*100:6.2f}% (specificity - correct normal)")

        print(f"\n📊 Confusion Matrix:")
        print(f"                   Predicted Normal   Predicted Attack")
        print(f"  Actual Normal    {metrics['true_negatives']:6d}             {metrics['false_positives']:6d}")
        print(f"  Actual Attack    {metrics['false_negatives']:6d}             {metrics['true_positives']:6d}")

        print(f"\n📈 Dataset Statistics:")
        print(f"  Total Samples: {metrics['total_samples']:6d}")
        print(f"  Total Attacks: {metrics['total_attacks']:6d} ({metrics['total_attacks']/metrics['total_samples']*100:.1f}%)")
        print(f"  Total Normal:  {metrics['total_normal']:6d} ({metrics['total_normal']/metrics['total_samples']*100:.1f}%)")

        if 'roc_auc' in metrics and metrics['roc_auc'] is not None:
            print(f"\n🎲 ROC-AUC: {metrics['roc_auc']:6.4f}")

        if 'matthews_corr_coef' in metrics:
            print(f"   MCC:     {metrics['matthews_corr_coef']:6.4f} (correlation coefficient)")

        print(f"\n{'='*60}\n")

    def compare_models(
        self,
        model_metrics: Dict[str, Dict],
        highlight_best: bool = True
    ) -> pd.DataFrame:
        """
        Compare metrics across multiple models

        Args:
            model_metrics: Dict of {model_name: metrics_dict}
            highlight_best: Print best model

        Returns:
            DataFrame with comparison
        """
        comparison = pd.DataFrame(model_metrics).T

        # Select key columns
        key_cols = ['precision', 'recall', 'f1_score', 'accuracy', 'false_positive_rate']
        comparison = comparison[key_cols]

        # Convert to percentages
        for col in key_cols:
            comparison[col] = comparison[col] * 100

        # Round to 2 decimals
        comparison = comparison.round(2)

        if highlight_best:
            print("\n📊 Model Comparison:")
            print(comparison.to_string())

            # Find best model (highest F1, lowest FPR)
            comparison['score'] = comparison['f1_score'] - comparison['false_positive_rate']
            best_model = comparison['score'].idxmax()

            print(f"\n🏆 Best Model: {best_model}")
            print(f"  - F1-Score: {comparison.loc[best_model, 'f1_score']:.2f}%")
            print(f"  - FPR: {comparison.loc[best_model, 'false_positive_rate']:.2f}%")

        return comparison

    def save_metrics(self, metrics: Dict, filepath: str):
        """Save metrics to JSON file"""
        with open(filepath, 'w') as f:
            json.dump(metrics, f, indent=2)
        print(f"[METRICS] Saved to {filepath}")

    def load_metrics(self, filepath: str) -> Dict:
        """Load metrics from JSON file"""
        with open(filepath) as f:
            metrics = json.load(f)
        return metrics

    def meets_production_criteria(
        self,
        metrics: Dict,
        min_precision: float = 0.90,
        max_fpr: float = 0.05,
        min_recall: float = 0.80
    ) -> Tuple[bool, list]:
        """
        Check if model meets production deployment criteria

        Args:
            metrics: Calculated metrics
            min_precision: Minimum acceptable precision (default 90%)
            max_fpr: Maximum acceptable FPR (default 5%)
            min_recall: Minimum acceptable recall (default 80%)

        Returns:
            (passes: bool, issues: list)
        """
        issues = []

        if metrics['precision'] < min_precision:
            issues.append(
                f"Precision {metrics['precision']*100:.1f}% < {min_precision*100:.0f}% "
                f"(too many false positives)"
            )

        if metrics['false_positive_rate'] > max_fpr:
            issues.append(
                f"FPR {metrics['false_positive_rate']*100:.1f}% > {max_fpr*100:.0f}% "
                f"(flagging too much normal traffic)"
            )

        if metrics['recall'] < min_recall:
            issues.append(
                f"Recall {metrics['recall']*100:.1f}% < {min_recall*100:.0f}% "
                f"(missing too many attacks)"
            )

        passes = len(issues) == 0

        if passes:
            print("✅ Model meets production criteria!")
        else:
            print("❌ Model does NOT meet production criteria:")
            for issue in issues:
                print(f"  - {issue}")

        return passes, issues


def calculate_confidence_metrics(
    detections: list,
    ground_truth: Dict[str, bool]
) -> Dict:
    """
    Calculate metrics for confidence-based detection system

    Args:
        detections: List of detection dicts with 'source_ip' and 'confidence_level'
        ground_truth: Dict of {ip: is_attack (bool)}

    Returns:
        Metrics broken down by confidence level
    """
    confidence_levels = ['high', 'medium', 'low']
    metrics_by_confidence = {}

    for level in confidence_levels:
        level_detections = [d for d in detections if d.get('confidence_level') == level]

        if not level_detections:
            metrics_by_confidence[level] = {
                'count': 0,
                'true_positives': 0,
                'false_positives': 0,
                'precision': 0.0
            }
            continue

        tp = sum(1 for d in level_detections if ground_truth.get(d['source_ip'], False))
        fp = len(level_detections) - tp
        precision = tp / len(level_detections) if level_detections else 0

        metrics_by_confidence[level] = {
            'count': len(level_detections),
            'true_positives': tp,
            'false_positives': fp,
            'precision': precision
        }

    return metrics_by_confidence
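Editor's note: a minimal usage sketch of the class above. The sample arrays and the module location are assumptions for illustration; the method signatures and the default production thresholds (90% precision, 5% FPR, 80% recall) come from the code itself.

```python
import numpy as np

# Assumes this runs in the same module as ValidationMetrics above,
# or that the class is importable from the project's ML package.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0])   # invented sample: 1 = attack, 0 = normal
y_pred = np.array([0, 0, 1, 0, 0, 1, 1, 0])
y_prob = np.array([0.1, 0.2, 0.9, 0.4, 0.3, 0.8, 0.7, 0.2])

vm = ValidationMetrics()
metrics = vm.calculate(y_true, y_pred, y_prob)      # dict built by calculate() above
vm.print_summary(metrics, title="Holdout Validation")

# Gate deployment on the default thresholds of meets_production_criteria()
passes, issues = vm.meets_production_criteria(metrics)
if not passes:
    print("Model not ready for deployment:", issues)
```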
60 replit.md
@@ -20,19 +20,17 @@ This project is a full-stack web application for an Intrusion Detection System (
- Commit message: Italian

## System Architecture
The IDS employs a React-based frontend for real-time monitoring, detection visualization, and whitelist management, built with ShadCN UI and TanStack Query. The backend consists of a Python FastAPI service dedicated to ML analysis and a Node.js (Express) backend handling API requests, PostgreSQL database management, and service coordination.
The IDS employs a React-based frontend for real-time monitoring, detection visualization, and whitelist management, built with ShadCN UI and TanStack Query. The backend consists of a Python FastAPI service dedicated to ML analysis (Isolation Forest with 25 targeted features), MikroTik API management, and a detection engine that scores anomalies from 0-100 across five risk levels. A Node.js (Express) backend handles API requests from the frontend, manages the PostgreSQL database, and coordinates service operations.

**Key Architectural Decisions & Features:**
- **Log Collection & Processing**: MikroTik syslog data (UDP:514) is parsed by `syslog_parser.py` and stored in PostgreSQL with a 3-day retention policy. The parser includes auto-reconnect and error recovery mechanisms.
- **Log Collection & Processing**: MikroTik syslog data (UDP:514) is sent to RSyslog, parsed by `syslog_parser.py`, and stored in PostgreSQL. The parser includes auto-cleanup with a 3-day retention policy.
- **Machine Learning**: An Isolation Forest model (sklearn.IsolationForest) trained on 25 network log features performs real-time anomaly detection, assigning a risk score (0-100 across five risk levels). A hybrid ML detector (Isolation Forest + Ensemble Classifier with weighted voting) reduces false positives. The system supports weekly automatic retraining of models.
- **Machine Learning**: An Isolation Forest model trained on 25 network log features performs real-time anomaly detection, assigning a risk score.
- **Automated Blocking**: Critical IPs (score >= 80) are automatically blocked in parallel across configured MikroTik routers via their REST API. **Auto-unblock on whitelist**: When an IP is added to the whitelist, it is automatically removed from all router blocklists. Manual unblock button available in Detections page.
- **Automated Blocking**: Critical IPs (score >= 80) are automatically blocked in parallel across all configured MikroTik routers via their REST API.
- **Public Lists Integration (v2.0.0 - CIDR Complete)**: Automatic fetcher syncs blacklist/whitelist feeds every 10 minutes (Spamhaus, Talos, AWS, GCP, Cloudflare, IANA, NTP Pool). **Full CIDR support** using PostgreSQL INET/CIDR types with `<<=` containment operators for network range matching (see the sketch after this list). Priority-based merge logic: Manual whitelist > Public whitelist > Blacklist (CIDR-aware). Detections created for blacklisted IPs/ranges (excluding whitelisted ranges). CRUD API + UI for list management. See `deployment/docs/PUBLIC_LISTS_V2_CIDR.md` for implementation details.
- **Service Monitoring & Management**: A dashboard provides real-time status (green/red indicators) for the ML Backend, Database, and Syslog Parser. Service management (start/stop/restart) for Python services is available via API endpoints, secured with API key authentication and Systemd integration for production-grade control and auto-restart capabilities.
- **Automatic Cleanup**: An hourly systemd timer (`cleanup_detections.py`) removes old detections (48h) and auto-unblocks IPs (2h).
- **IP Geolocation**: Integrated `ip-api.com` for enriching detection data with geographical and Autonomous System (AS) information, including intelligent caching.
- **Service Monitoring & Management**: A dashboard provides real-time status (ML Backend, Database, Syslog Parser). API endpoints, secured with API key authentication and Systemd integration, allow for service management (start/stop/restart) of Python services.
- **Database Management**: PostgreSQL is used for all persistent data. An intelligent database versioning system ensures efficient SQL migrations, applying only new scripts. Dual-mode database drivers (`@neondatabase/serverless` for Replit, `pg` for AlmaLinux) ensure environment compatibility.
- **IP Geolocation**: Integration with `ip-api.com` enriches detection data with geographical and AS information, utilizing intelligent caching.
- **Database Management**: PostgreSQL is used for all persistent data. An intelligent database versioning system ensures efficient SQL migrations (v8 with forced INET/CIDR column types for network range matching). Migration 008 unconditionally recreates INET columns to fix type mismatches. Dual-mode database drivers (`@neondatabase/serverless` for Replit, `pg` for AlmaLinux) ensure environment compatibility.
- **Microservices**: Clear separation of concerns between the Python ML backend and the Node.js API backend.
- **UI/UX**: Utilizes ShadCN UI for a modern component library and `react-hook-form` with Zod for robust form validation. Analytics dashboards provide visualizations of normal and attack traffic, including real-time and historical data.
- **UI/UX**: Utilizes ShadCN UI for a modern component library and `react-hook-form` with Zod for robust form validation.
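To make the CIDR-aware priority merge from the Public Lists bullet above concrete, here is a hedged, self-contained sketch (editor's illustration, not project code): the list contents are invented RFC 5737 ranges, and the in-memory check stands in for the PostgreSQL `<<=` containment queries the project uses.

```python
from ipaddress import ip_address, ip_network

# Invented example data; in the project these live in PostgreSQL INET/CIDR columns.
manual_whitelist = [ip_network("192.0.2.0/24")]
public_whitelist = [ip_network("198.51.100.0/25")]
public_blacklist = [ip_network("203.0.113.0/24"), ip_network("198.51.100.200/32")]

def classify(ip: str) -> str:
    """Priority order from the note above: manual whitelist > public whitelist > blacklist."""
    addr = ip_address(ip)
    if any(addr in net for net in manual_whitelist):
        return "allow (manual whitelist)"
    if any(addr in net for net in public_whitelist):
        return "allow (public whitelist)"
    if any(addr in net for net in public_blacklist):
        return "block (public blacklist)"
    return "no match"

print(classify("203.0.113.10"))   # block (public blacklist)
print(classify("192.0.2.50"))     # allow (manual whitelist)
```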
## External Dependencies
- **React**: Frontend framework.
@@ -41,8 +39,7 @@ The IDS employs a React-based frontend for real-time monitoring, detection visua
- **MikroTik API REST**: For router communication and IP blocking.
- **ShadCN UI**: Frontend component library.
- **TanStack Query**: Data fetching for the frontend.
- **Isolation Forest (scikit-learn)**: Machine Learning algorithm for anomaly detection.
- **Isolation Forest**: Machine Learning algorithm for anomaly detection.
- **xgboost, joblib**: ML libraries used in the hybrid detector.
- **RSyslog**: Log collection daemon.
- **Drizzle ORM**: For database schema definition in Node.js.
- **Neon Database**: Cloud-native PostgreSQL service (used in Replit).
@@ -50,3 +47,42 @@ The IDS employs a React-based frontend for real-time monitoring, detection visua
- **psycopg2**: PostgreSQL adapter for Python.
- **ip-api.com**: External API for IP geolocation data.
- **Recharts**: Charting library for analytics visualization.

## Recent Updates (November 2025)

### 🔧 Analytics Aggregator Fix - Data Consistency (24 Nov 2025 - 17:00)
- **CRITICAL BUG FIX**: Resolved a data mismatch on the Live Dashboard
- **Problem**: The traffic distribution showed 262k attacks, while the breakdown showed only 19
- **ROOT CAUSE**: The aggregator counted **occurrences** instead of **packets** in `attacks_by_type` and `attacks_by_country`
- **Solution** (see the sketch after this list):
  1. Moved the counting from the detections loop to the packet loop
  2. `attacks_by_type[tipo] += packets` (not +1!)
  3. `attacks_by_country[paese] += packets` (not +1!)
  4. "unknown"/"Unknown" fallbacks for missing data (type/geo)
  5. Validation logging: verifies breakdown_total == attack_packets
- **Mathematical invariant**: `Σ(attacks_by_type) == Σ(attacks_by_country) == attack_packets`
- **Files changed**: `python_ml/analytics_aggregator.py`
- **Deploy**: Restart the ML backend + run the aggregator manually to test
- **Validation**: Logs show `match: True` and no mismatch warnings
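The sketch referenced above shows how packet-based counting and the validation invariant fit together. It is an editorial illustration: the detection dicts and field names are invented, not the aggregator's actual schema.

```python
from collections import defaultdict

# Invented sample detections; real ones come from the PostgreSQL logs.
detections = [
    {"anomaly_type": "port_scan", "country": "US", "packets": 120},
    {"anomaly_type": "ddos", "country": "CN", "packets": 250_000},
    {"anomaly_type": None, "country": None, "packets": 37},
]

attacks_by_type = defaultdict(int)
attacks_by_country = defaultdict(int)
attack_packets = 0

for det in detections:
    pkts = det.get("packets", 0)
    attack_packets += pkts
    # Count packets, not occurrences, with "unknown"/"Unknown" fallbacks for missing data.
    attacks_by_type[det.get("anomaly_type") or "unknown"] += pkts
    attacks_by_country[det.get("country") or "Unknown"] += pkts

# Validation invariant from the fix: every breakdown must sum to attack_packets.
breakdown_total = sum(attacks_by_type.values())
print("match:", breakdown_total == attack_packets == sum(attacks_by_country.values()))
```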
### 📊 Network Analytics & Dashboard System (24 Nov 2025 - 11:30)
- **Complete Feature**: Analytics system covering normal + attack traffic, advanced chart visualizations, permanent data
- **Components**:
  1. **Database**: `network_analytics` table with permanent hourly/daily aggregations
  2. **Python Aggregator**: `analytics_aggregator.py` classifies traffic every hour
  3. **Systemd Timer**: Automatic execution every hour (at :05)
  4. **API**: `/api/analytics/recent` and `/api/analytics/range`
  5. **Frontend**: Live Dashboard (real-time, 3 days) + Historical Analytics (permanent)
- **Charts**: Area Chart, Pie Chart, Bar Chart, Line Chart, Real-time Stream
- **Flag Emoji**: 🇮🇹🇺🇸🇷🇺🇨🇳 for immediate identification of the country of origin
- **Deploy**: Migration 005 + `./deployment/setup_analytics_timer.sh`
- **Security Fix**: Removed a hardcoded path; implemented the secure wrapper script `run_analytics.sh` for manual runs
- **Production-grade**: Credentials handled via systemd EnvironmentFile (automatic) or the wrapper script (manual)
- **Frontend Fix**: Analytics History now uses hourly data (`hourly: true`) until daily aggregation is scheduled

### 🌍 IP Geolocation Integration (22 Nov 2025 - 13:00)
- **Feature**: Complete geographic information (country, city, organization, AS) for every IP
- **API**: ip-api.com with batched async lookups (100 IPs in ~1.5s instead of 150s!)
- **Performance**: Intelligent caching + robust fallback
- **Display**: Globe/Building/MapPin icons on the Detections page
- **Deploy**: Migration 004 + ML backend restart
318 server/routes.ts
@@ -1,9 +1,9 @@
import type { Express } from "express";
import { createServer, type Server } from "http";
import { storage } from "./storage";
import { insertRouterSchema, insertDetectionSchema, insertWhitelistSchema, insertPublicListSchema, networkAnalytics, routers } from "@shared/schema";
import { insertRouterSchema, insertDetectionSchema, insertWhitelistSchema, networkAnalytics } from "@shared/schema";
import { db } from "./db";
import { desc, eq } from "drizzle-orm";
import { desc } from "drizzle-orm";

export async function registerRoutes(app: Express): Promise<Server> {
  // Routers
@@ -27,20 +27,6 @@ export async function registerRoutes(app: Express): Promise<Server> {
    }
  });

  app.put("/api/routers/:id", async (req, res) => {
    try {
      const validatedData = insertRouterSchema.parse(req.body);
      const router = await storage.updateRouter(req.params.id, validatedData);
      if (!router) {
        return res.status(404).json({ error: "Router not found" });
      }
      res.json(router);
    } catch (error) {
      console.error('[Router UPDATE] Error:', error);
      res.status(400).json({ error: "Invalid router data" });
    }
  });

  app.delete("/api/routers/:id", async (req, res) => {
    try {
      const success = await storage.deleteRouter(req.params.id);
@@ -77,22 +63,9 @@ export async function registerRoutes(app: Express): Promise<Server> {
  // Detections
  app.get("/api/detections", async (req, res) => {
    try {
      const limit = req.query.limit ? parseInt(req.query.limit as string) : 50;
      const limit = parseInt(req.query.limit as string) || 100;
      const offset = req.query.offset ? parseInt(req.query.offset as string) : 0;
      const detections = await storage.getAllDetections(limit);
      const anomalyType = req.query.anomalyType as string | undefined;
      res.json(detections);
      const minScore = req.query.minScore ? parseFloat(req.query.minScore as string) : undefined;
      const maxScore = req.query.maxScore ? parseFloat(req.query.maxScore as string) : undefined;
      const search = req.query.search as string | undefined;

      const result = await storage.getAllDetections({
        limit,
        offset,
        anomalyType,
        minScore,
        maxScore,
        search
      });
      res.json(result);
    } catch (error) {
      console.error('[DB ERROR] Failed to fetch detections:', error);
      res.status(500).json({ error: "Failed to fetch detections" });
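For reference, a hedged sketch of how a client might call the filtered, paginated variant of this route. The query parameter names (`limit`, `offset`, `minScore`, `search`) come from the handler above; the host/port and the `{ detections, total }` response shape follow the code but are otherwise assumptions.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

params = urlencode({
    "limit": 50,
    "offset": 0,
    "minScore": 70,          # parsed server-side with parseFloat
    "search": "203.0.113",   # matches source IP, anomaly type, country or organization
})

# Assumed base URL for a local deployment.
with urlopen(f"http://localhost:5000/api/detections?{params}") as resp:
    payload = json.load(resp)

print(payload["total"], "matching detections")
for det in payload["detections"]:
    print(det.get("sourceIp"), det.get("riskScore"), det.get("anomalyType"))
```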
@@ -134,74 +107,12 @@ export async function registerRoutes(app: Express): Promise<Server> {
    try {
      const validatedData = insertWhitelistSchema.parse(req.body);
      const item = await storage.createWhitelist(validatedData);

      // Auto-unblock from routers when adding to whitelist
      const mlBackendUrl = process.env.ML_BACKEND_URL || 'http://localhost:8000';
      const mlApiKey = process.env.IDS_API_KEY;
      try {
        const headers: Record<string, string> = { 'Content-Type': 'application/json' };
        if (mlApiKey) {
          headers['X-API-Key'] = mlApiKey;
        }
        const unblockResponse = await fetch(`${mlBackendUrl}/unblock-ip`, {
          method: 'POST',
          headers,
          body: JSON.stringify({ ip_address: validatedData.ipAddress })
        });
        if (unblockResponse.ok) {
          const result = await unblockResponse.json();
          console.log(`[WHITELIST] Auto-unblocked ${validatedData.ipAddress} from ${result.unblocked_from} routers`);
        } else {
          console.warn(`[WHITELIST] Failed to auto-unblock ${validatedData.ipAddress}: ${unblockResponse.status}`);
        }
      } catch (unblockError) {
        // Don't fail if ML backend is unavailable
        console.warn(`[WHITELIST] ML backend unavailable for auto-unblock: ${unblockError}`);
      }

      res.json(item);
    } catch (error) {
      res.status(400).json({ error: "Invalid whitelist data" });
    }
  });

  // Unblock IP from all routers (proxy to ML backend)
  app.post("/api/unblock-ip", async (req, res) => {
    try {
      const { ipAddress, listName = "ddos_blocked" } = req.body;

      if (!ipAddress) {
        return res.status(400).json({ error: "IP address is required" });
      }

      const mlBackendUrl = process.env.ML_BACKEND_URL || 'http://localhost:8000';
      const mlApiKey = process.env.IDS_API_KEY;
      const headers: Record<string, string> = { 'Content-Type': 'application/json' };
      if (mlApiKey) {
        headers['X-API-Key'] = mlApiKey;
      }

      const response = await fetch(`${mlBackendUrl}/unblock-ip`, {
        method: 'POST',
        headers,
        body: JSON.stringify({ ip_address: ipAddress, list_name: listName })
      });

      if (!response.ok) {
        const errorText = await response.text();
        console.error(`[UNBLOCK] ML backend error for ${ipAddress}: ${response.status} - ${errorText}`);
        return res.status(response.status).json({ error: errorText || "Failed to unblock IP" });
      }

      const result = await response.json();
      console.log(`[UNBLOCK] Successfully unblocked ${ipAddress} from ${result.unblocked_from || 0} routers`);
      res.json(result);
    } catch (error: any) {
      console.error('[UNBLOCK] Error:', error);
      res.status(500).json({ error: error.message || "Failed to unblock IP from routers" });
    }
  });

  app.delete("/api/whitelist/:id", async (req, res) => {
    try {
      const success = await storage.deleteWhitelist(req.params.id);
@@ -214,214 +125,6 @@ export async function registerRoutes(app: Express): Promise<Server> {
    }
  });

  // Public Lists
  app.get("/api/public-lists", async (req, res) => {
    try {
      const lists = await storage.getAllPublicLists();
      res.json(lists);
    } catch (error) {
      console.error('[DB ERROR] Failed to fetch public lists:', error);
      res.status(500).json({ error: "Failed to fetch public lists" });
    }
  });

  app.get("/api/public-lists/:id", async (req, res) => {
    try {
      const list = await storage.getPublicListById(req.params.id);
      if (!list) {
        return res.status(404).json({ error: "List not found" });
      }
      res.json(list);
    } catch (error) {
      res.status(500).json({ error: "Failed to fetch list" });
    }
  });

  app.post("/api/public-lists", async (req, res) => {
    try {
      const validatedData = insertPublicListSchema.parse(req.body);
      const list = await storage.createPublicList(validatedData);
      res.json(list);
    } catch (error: any) {
      console.error('[API ERROR] Failed to create public list:', error);
      if (error.name === 'ZodError') {
        return res.status(400).json({ error: "Invalid list data", details: error.errors });
      }
      res.status(400).json({ error: "Invalid list data" });
    }
  });

  app.patch("/api/public-lists/:id", async (req, res) => {
    try {
      const validatedData = insertPublicListSchema.partial().parse(req.body);
      const list = await storage.updatePublicList(req.params.id, validatedData);
      if (!list) {
        return res.status(404).json({ error: "List not found" });
      }
      res.json(list);
    } catch (error: any) {
      console.error('[API ERROR] Failed to update public list:', error);
      if (error.name === 'ZodError') {
        return res.status(400).json({ error: "Invalid list data", details: error.errors });
      }
      res.status(400).json({ error: "Invalid list data" });
    }
  });

  app.delete("/api/public-lists/:id", async (req, res) => {
    try {
      const success = await storage.deletePublicList(req.params.id);
      if (!success) {
        return res.status(404).json({ error: "List not found" });
      }
      res.json({ success: true });
    } catch (error) {
      res.status(500).json({ error: "Failed to delete list" });
    }
  });

  app.post("/api/public-lists/:id/sync", async (req, res) => {
    try {
      const list = await storage.getPublicListById(req.params.id);
      if (!list) {
        return res.status(404).json({ error: "List not found" });
      }

      console.log(`[SYNC] Starting sync for list: ${list.name} (${list.url})`);

      // Fetch the list from URL
      const response = await fetch(list.url, {
        headers: {
          'User-Agent': 'IDS-MikroTik-PublicListFetcher/2.0',
          'Accept': 'application/json, text/plain, */*',
        },
        signal: AbortSignal.timeout(30000),
      });

      if (!response.ok) {
        throw new Error(`HTTP ${response.status}: ${response.statusText}`);
      }

      const contentType = response.headers.get('content-type') || '';
      const text = await response.text();

      // Parse IPs based on content type
      let ips: Array<{ip: string, cidr?: string}> = [];

      if (contentType.includes('json') || list.url.endsWith('.json')) {
        // JSON format (Spamhaus DROP v4 JSON)
        try {
          const data = JSON.parse(text);
          if (Array.isArray(data)) {
            for (const entry of data) {
              if (entry.cidr) {
                const [ip] = entry.cidr.split('/');
                ips.push({ ip, cidr: entry.cidr });
              } else if (entry.ip) {
                ips.push({ ip: entry.ip, cidr: null as any });
              }
            }
          }
        } catch (e) {
          console.error('[SYNC] Failed to parse JSON:', e);
          throw new Error('Invalid JSON format');
        }
      } else {
        // Plain text format (one IP/CIDR per line)
        const lines = text.split('\n');
        for (const line of lines) {
          const trimmed = line.trim();
          if (!trimmed || trimmed.startsWith('#') || trimmed.startsWith(';')) continue;

          // Extract IP/CIDR from line
          const match = trimmed.match(/^(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})(\/\d{1,2})?/);
          if (match) {
            const ip = match[1];
            const cidr = match[2] ? `${match[1]}${match[2]}` : null;
            ips.push({ ip, cidr: cidr as any });
          }
        }
      }

      console.log(`[SYNC] Parsed ${ips.length} IPs from ${list.name}`);

      // Save IPs to database
      let added = 0;
      let updated = 0;

      for (const { ip, cidr } of ips) {
        const result = await storage.upsertBlacklistIp(list.id, ip, cidr);
        if (result.created) added++;
        else updated++;
      }

      // Update list stats
      await storage.updatePublicList(list.id, {
        lastFetch: new Date(),
        lastSuccess: new Date(),
        totalIps: ips.length,
        activeIps: ips.length,
        errorCount: 0,
        lastError: null,
      });

      console.log(`[SYNC] Completed: ${added} added, ${updated} updated for ${list.name}`);

      res.json({
        success: true,
        message: `Sync completed: ${ips.length} IPs processed`,
        added,
        updated,
        total: ips.length,
      });
    } catch (error: any) {
      console.error('[API ERROR] Failed to sync:', error);

      // Update error count
      const list = await storage.getPublicListById(req.params.id);
      if (list) {
        await storage.updatePublicList(req.params.id, {
          errorCount: (list.errorCount || 0) + 1,
          lastError: error.message,
          lastFetch: new Date(),
        });
      }

      res.status(500).json({ error: `Sync failed: ${error.message}` });
    }
  });
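The plain-text parsing branch of the sync route above is easy to check in isolation; here is an equivalent Python rendering (editor's sketch, with an invented feed) using the same skip-comments / IPv4-with-optional-prefix logic.

```python
import re

LINE_RE = re.compile(r"^(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})(/\d{1,2})?")

feed = """\
; Spamhaus-style comment
198.51.100.0/24 ; SBL-example
203.0.113.7
# another comment
"""

ips = []
for line in feed.splitlines():
    trimmed = line.strip()
    if not trimmed or trimmed.startswith(("#", ";")):
        continue  # skip blanks and comment lines, as the route does
    m = LINE_RE.match(trimmed)
    if m:
        cidr = f"{m.group(1)}{m.group(2)}" if m.group(2) else None
        ips.append({"ip": m.group(1), "cidr": cidr})

print(ips)
# [{'ip': '198.51.100.0', 'cidr': '198.51.100.0/24'}, {'ip': '203.0.113.7', 'cidr': None}]
```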
  // Public Blacklist IPs
  app.get("/api/public-blacklist", async (req, res) => {
    try {
      const limit = parseInt(req.query.limit as string) || 1000;
      const listId = req.query.listId as string | undefined;
      const ipAddress = req.query.ipAddress as string | undefined;
      const isActive = req.query.isActive === 'true';

      const ips = await storage.getPublicBlacklistIps({
        limit,
        listId,
        ipAddress,
        isActive: req.query.isActive !== undefined ? isActive : undefined,
      });
      res.json(ips);
    } catch (error) {
      console.error('[DB ERROR] Failed to fetch blacklist IPs:', error);
      res.status(500).json({ error: "Failed to fetch blacklist IPs" });
    }
  });

  app.get("/api/public-blacklist/stats", async (req, res) => {
    try {
      const stats = await storage.getPublicBlacklistStats();
      res.json(stats);
    } catch (error) {
      console.error('[DB ERROR] Failed to fetch blacklist stats:', error);
      res.status(500).json({ error: "Failed to fetch stats" });
    }
  });

  // Training History
  app.get("/api/training-history", async (req, res) => {
    try {
@@ -478,15 +181,14 @@ export async function registerRoutes(app: Express): Promise<Server> {
  app.get("/api/stats", async (req, res) => {
    try {
      const routers = await storage.getAllRouters();
      const detectionsResult = await storage.getAllDetections({ limit: 1000 });
      const detections = await storage.getAllDetections(1000);
      const recentLogs = await storage.getRecentLogs(1000);
      const whitelist = await storage.getAllWhitelist();
      const latestTraining = await storage.getLatestTraining();

      const detectionsList = detectionsResult.detections;
      const blockedCount = detections.filter(d => d.blocked).length;
      const blockedCount = detectionsList.filter(d => d.blocked).length;
      const criticalCount = detections.filter(d => parseFloat(d.riskScore) >= 85).length;
      const criticalCount = detectionsList.filter(d => parseFloat(d.riskScore) >= 85).length;
      const highCount = detections.filter(d => parseFloat(d.riskScore) >= 70 && parseFloat(d.riskScore) < 85).length;
      const highCount = detectionsList.filter(d => parseFloat(d.riskScore) >= 70 && parseFloat(d.riskScore) < 85).length;

      res.json({
        routers: {
@@ -494,7 +196,7 @@ export async function registerRoutes(app: Express): Promise<Server> {
          enabled: routers.filter(r => r.enabled).length
        },
        detections: {
          total: detectionsResult.total,
          total: detections.length,
          blocked: blockedCount,
          critical: criticalCount,
          high: highCount
@@ -5,8 +5,6 @@ import {
  whitelist,
  trainingHistory,
  networkAnalytics,
  publicLists,
  publicBlacklistIps,
  type Router,
  type InsertRouter,
  type NetworkLog,
@@ -18,10 +16,6 @@ import {
  type TrainingHistory,
  type InsertTrainingHistory,
  type NetworkAnalytics,
  type PublicList,
  type InsertPublicList,
  type PublicBlacklistIp,
  type InsertPublicBlacklistIp,
} from "@shared/schema";
import { db } from "./db";
import { eq, desc, and, gte, sql, inArray } from "drizzle-orm";
@@ -41,14 +35,7 @@ export interface IStorage {
  getLogsForTraining(limit: number, minTimestamp?: Date): Promise<NetworkLog[]>;

  // Detections
  getAllDetections(options: {
  getAllDetections(limit: number): Promise<Detection[]>;
    limit?: number;
    offset?: number;
    anomalyType?: string;
    minScore?: number;
    maxScore?: number;
    search?: string;
  }): Promise<{ detections: Detection[]; total: number }>;
  getDetectionByIp(sourceIp: string): Promise<Detection | undefined>;
  createDetection(detection: InsertDetection): Promise<Detection>;
  updateDetection(id: string, detection: Partial<InsertDetection>): Promise<Detection | undefined>;
@@ -82,27 +69,6 @@ export interface IStorage {
    recentDetections: Detection[];
  }>;

  // Public Lists
  getAllPublicLists(): Promise<PublicList[]>;
  getPublicListById(id: string): Promise<PublicList | undefined>;
  createPublicList(list: InsertPublicList): Promise<PublicList>;
  updatePublicList(id: string, list: Partial<InsertPublicList>): Promise<PublicList | undefined>;
  deletePublicList(id: string): Promise<boolean>;

  // Public Blacklist IPs
  getPublicBlacklistIps(options: {
    limit?: number;
    listId?: string;
    ipAddress?: string;
    isActive?: boolean;
  }): Promise<PublicBlacklistIp[]>;
  getPublicBlacklistStats(): Promise<{
    totalLists: number;
    totalIps: number;
    overlapWithDetections: number;
  }>;
  upsertBlacklistIp(listId: string, ipAddress: string, cidrRange: string | null): Promise<{created: boolean}>;

  // System
  testConnection(): Promise<boolean>;
}
@@ -174,62 +140,12 @@ export class DatabaseStorage implements IStorage {
  }

  // Detections
  async getAllDetections(options: {
  async getAllDetections(limit: number): Promise<Detection[]> {
    limit?: number;
    return await db
    offset?: number;
    anomalyType?: string;
    minScore?: number;
    maxScore?: number;
    search?: string;
  }): Promise<{ detections: Detection[]; total: number }> {
    const { limit = 50, offset = 0, anomalyType, minScore, maxScore, search } = options;

    // Build WHERE conditions
    const conditions = [];

    if (anomalyType) {
      conditions.push(eq(detections.anomalyType, anomalyType));
    }

    // Cast riskScore to numeric for proper comparison (stored as text in DB)
    if (minScore !== undefined) {
      conditions.push(sql`${detections.riskScore}::numeric >= ${minScore}`);
    }

    if (maxScore !== undefined) {
      conditions.push(sql`${detections.riskScore}::numeric <= ${maxScore}`);
    }

    // Search by IP or anomaly type (case-insensitive)
    if (search && search.trim()) {
      const searchLower = search.trim().toLowerCase();
      conditions.push(sql`(
        LOWER(${detections.sourceIp}) LIKE ${'%' + searchLower + '%'} OR
        LOWER(${detections.anomalyType}) LIKE ${'%' + searchLower + '%'} OR
        LOWER(COALESCE(${detections.country}, '')) LIKE ${'%' + searchLower + '%'} OR
        LOWER(COALESCE(${detections.organization}, '')) LIKE ${'%' + searchLower + '%'}
      )`);
    }

    const whereClause = conditions.length > 0 ? and(...conditions) : undefined;

    // Get total count for pagination
    const countResult = await db
      .select({ count: sql<number>`count(*)::int` })
      .from(detections)
      .where(whereClause);
    const total = countResult[0]?.count || 0;

    // Get paginated results
    const results = await db
      .select()
      .from(detections)
      .where(whereClause)
      .orderBy(desc(detections.detectedAt))
      .limit(limit)
      .limit(limit);
      .offset(offset);

    return { detections: results, total };
  }

  async getDetectionByIp(sourceIp: string): Promise<Detection | undefined> {
@@ -437,150 +353,6 @@ export class DatabaseStorage implements IStorage {
    };
  }

  // Public Lists
  async getAllPublicLists(): Promise<PublicList[]> {
    return await db.select().from(publicLists).orderBy(desc(publicLists.createdAt));
  }

  async getPublicListById(id: string): Promise<PublicList | undefined> {
    const [list] = await db.select().from(publicLists).where(eq(publicLists.id, id));
    return list || undefined;
  }

  async createPublicList(insertList: InsertPublicList): Promise<PublicList> {
    const [list] = await db.insert(publicLists).values(insertList).returning();
    return list;
  }

  async updatePublicList(id: string, updateData: Partial<InsertPublicList>): Promise<PublicList | undefined> {
    const [list] = await db
      .update(publicLists)
      .set(updateData)
      .where(eq(publicLists.id, id))
      .returning();
    return list || undefined;
  }

  async deletePublicList(id: string): Promise<boolean> {
    const result = await db.delete(publicLists).where(eq(publicLists.id, id));
    return result.rowCount !== null && result.rowCount > 0;
  }

  // Public Blacklist IPs
  async getPublicBlacklistIps(options: {
    limit?: number;
    listId?: string;
    ipAddress?: string;
    isActive?: boolean;
  }): Promise<PublicBlacklistIp[]> {
    const { limit = 1000, listId, ipAddress, isActive } = options;

    const conditions = [];

    if (listId) {
      conditions.push(eq(publicBlacklistIps.listId, listId));
    }

    if (ipAddress) {
      conditions.push(eq(publicBlacklistIps.ipAddress, ipAddress));
    }

    if (isActive !== undefined) {
      conditions.push(eq(publicBlacklistIps.isActive, isActive));
    }

    const query = db
      .select()
      .from(publicBlacklistIps)
      .orderBy(desc(publicBlacklistIps.lastSeen))
      .limit(limit);

    if (conditions.length > 0) {
      return await query.where(and(...conditions));
    }

    return await query;
  }

  async getPublicBlacklistStats(): Promise<{
    totalLists: number;
    totalIps: number;
    overlapWithDetections: number;
  }> {
    const lists = await db.select().from(publicLists).where(eq(publicLists.type, 'blacklist'));
    const totalLists = lists.length;

    const [{ count: totalIps }] = await db
      .select({ count: sql<number>`count(*)::int` })
      .from(publicBlacklistIps)
      .where(eq(publicBlacklistIps.isActive, true));

    const [{ count: overlapWithDetections }] = await db
      .select({ count: sql<number>`count(distinct ${detections.sourceIp})::int` })
      .from(detections)
      .innerJoin(publicBlacklistIps, eq(detections.sourceIp, publicBlacklistIps.ipAddress))
      .where(
        and(
          eq(publicBlacklistIps.isActive, true),
          eq(detections.detectionSource, 'public_blacklist'),
          sql`NOT EXISTS (
            SELECT 1 FROM ${whitelist}
            WHERE ${whitelist.ipAddress} = ${detections.sourceIp}
            AND ${whitelist.active} = true
          )`
        )
      );

    return {
      totalLists,
      totalIps: totalIps || 0,
      overlapWithDetections: overlapWithDetections || 0,
    };
  }

  async upsertBlacklistIp(listId: string, ipAddress: string, cidrRange: string | null): Promise<{created: boolean}> {
    try {
      const existing = await db
        .select()
        .from(publicBlacklistIps)
        .where(
          and(
            eq(publicBlacklistIps.listId, listId),
            eq(publicBlacklistIps.ipAddress, ipAddress)
          )
        );

      if (existing.length > 0) {
        await db
          .update(publicBlacklistIps)
          .set({
            lastSeen: new Date(),
            isActive: true,
            cidrRange: cidrRange,
            ipInet: ipAddress,
            cidrInet: cidrRange || `${ipAddress}/32`,
          })
          .where(eq(publicBlacklistIps.id, existing[0].id));
        return { created: false };
      } else {
        await db.insert(publicBlacklistIps).values({
          listId,
          ipAddress,
          cidrRange,
          ipInet: ipAddress,
          cidrInet: cidrRange || `${ipAddress}/32`,
          isActive: true,
          firstSeen: new Date(),
          lastSeen: new Date(),
        });
        return { created: true };
      }
    } catch (error) {
      console.error('[DB ERROR] Failed to upsert blacklist IP:', error);
      throw error;
    }
  }

  async testConnection(): Promise<boolean> {
    try {
      await db.execute(sql`SELECT 1`);
100 shared/schema.ts
@@ -8,7 +8,7 @@ export const routers = pgTable("routers", {
id: varchar("id").primaryKey().default(sql`gen_random_uuid()`),
|
id: varchar("id").primaryKey().default(sql`gen_random_uuid()`),
|
||||||
name: text("name").notNull(),
|
name: text("name").notNull(),
|
||||||
ipAddress: text("ip_address").notNull().unique(),
|
ipAddress: text("ip_address").notNull().unique(),
|
||||||
apiPort: integer("api_port").notNull().default(8729),
|
apiPort: integer("api_port").notNull().default(8728),
|
||||||
username: text("username").notNull(),
|
username: text("username").notNull(),
|
||||||
password: text("password").notNull(),
|
password: text("password").notNull(),
|
||||||
enabled: boolean("enabled").notNull().default(true),
|
enabled: boolean("enabled").notNull().default(true),
|
||||||
@ -58,35 +58,23 @@ export const detections = pgTable("detections", {
|
|||||||
asNumber: text("as_number"),
|
asNumber: text("as_number"),
|
||||||
asName: text("as_name"),
|
asName: text("as_name"),
|
||||||
isp: text("isp"),
|
isp: text("isp"),
|
||||||
// Public lists integration
|
|
||||||
detectionSource: text("detection_source").notNull().default("ml_model"),
|
|
||||||
blacklistId: varchar("blacklist_id").references(() => publicBlacklistIps.id, { onDelete: 'set null' }),
|
|
||||||
}, (table) => ({
|
}, (table) => ({
|
||||||
sourceIpIdx: index("detection_source_ip_idx").on(table.sourceIp),
|
sourceIpIdx: index("detection_source_ip_idx").on(table.sourceIp),
|
||||||
riskScoreIdx: index("risk_score_idx").on(table.riskScore),
|
riskScoreIdx: index("risk_score_idx").on(table.riskScore),
|
||||||
detectedAtIdx: index("detected_at_idx").on(table.detectedAt),
|
detectedAtIdx: index("detected_at_idx").on(table.detectedAt),
|
||||||
countryIdx: index("country_idx").on(table.country),
|
countryIdx: index("country_idx").on(table.country),
|
||||||
detectionSourceIdx: index("detection_source_idx").on(table.detectionSource),
|
|
||||||
}));
|
}));
|
||||||
|
|
||||||
// Whitelist per IP fidati
|
// Whitelist per IP fidati
|
||||||
// NOTE: ip_inet is INET type in production (managed by SQL migrations)
|
|
||||||
// Drizzle lacks native INET support, so we use text() here
|
|
||||||
export const whitelist = pgTable("whitelist", {
|
export const whitelist = pgTable("whitelist", {
|
||||||
id: varchar("id").primaryKey().default(sql`gen_random_uuid()`),
|
id: varchar("id").primaryKey().default(sql`gen_random_uuid()`),
|
||||||
ipAddress: text("ip_address").notNull().unique(),
|
ipAddress: text("ip_address").notNull().unique(),
|
||||||
ipInet: text("ip_inet"), // Actually INET in production - see migration 008
|
|
||||||
comment: text("comment"),
|
comment: text("comment"),
|
||||||
reason: text("reason"),
|
reason: text("reason"),
|
||||||
createdBy: text("created_by"),
|
createdBy: text("created_by"),
|
||||||
active: boolean("active").notNull().default(true),
|
active: boolean("active").notNull().default(true),
|
||||||
createdAt: timestamp("created_at").defaultNow().notNull(),
|
createdAt: timestamp("created_at").defaultNow().notNull(),
|
||||||
// Public lists integration
|
});
|
||||||
source: text("source").notNull().default("manual"),
|
|
||||||
listId: varchar("list_id").references(() => publicLists.id, { onDelete: 'set null' }),
|
|
||||||
}, (table) => ({
|
|
||||||
sourceIdx: index("whitelist_source_idx").on(table.source),
|
|
||||||
}));
|
|
||||||
|
|
||||||
// ML Training history
|
// ML Training history
|
||||||
export const trainingHistory = pgTable("training_history", {
|
export const trainingHistory = pgTable("training_history", {
|
||||||
@ -137,46 +125,6 @@ export const networkAnalytics = pgTable("network_analytics", {
|
|||||||
dateHourUnique: unique("network_analytics_date_hour_key").on(table.date, table.hour),
|
dateHourUnique: unique("network_analytics_date_hour_key").on(table.date, table.hour),
|
||||||
}));
|
}));
|
||||||
|
|
||||||
// Public threat/whitelist sources
|
|
||||||
export const publicLists = pgTable("public_lists", {
|
|
||||||
id: varchar("id").primaryKey().default(sql`gen_random_uuid()`),
|
|
||||||
name: text("name").notNull(),
|
|
||||||
type: text("type").notNull(),
|
|
||||||
url: text("url").notNull(),
|
|
||||||
enabled: boolean("enabled").notNull().default(true),
|
|
||||||
fetchIntervalMinutes: integer("fetch_interval_minutes").notNull().default(10),
|
|
||||||
lastFetch: timestamp("last_fetch"),
|
|
||||||
lastSuccess: timestamp("last_success"),
|
|
||||||
totalIps: integer("total_ips").notNull().default(0),
|
|
||||||
activeIps: integer("active_ips").notNull().default(0),
|
|
||||||
errorCount: integer("error_count").notNull().default(0),
|
|
||||||
lastError: text("last_error"),
|
|
||||||
createdAt: timestamp("created_at").defaultNow().notNull(),
|
|
||||||
}, (table) => ({
|
|
||||||
typeIdx: index("public_lists_type_idx").on(table.type),
|
|
||||||
enabledIdx: index("public_lists_enabled_idx").on(table.enabled),
|
|
||||||
}));
|
|
||||||
|
|
||||||
// Public blacklist IPs from external sources
|
|
||||||
// NOTE: ip_inet/cidr_inet are INET/CIDR types in production (managed by SQL migrations)
|
|
||||||
// Drizzle lacks native INET/CIDR support, so we use text() here
|
|
||||||
export const publicBlacklistIps = pgTable("public_blacklist_ips", {
|
|
||||||
id: varchar("id").primaryKey().default(sql`gen_random_uuid()`),
|
|
||||||
ipAddress: text("ip_address").notNull(),
|
|
||||||
cidrRange: text("cidr_range"),
|
|
||||||
ipInet: text("ip_inet"), // Actually INET in production - see migration 008
|
|
||||||
cidrInet: text("cidr_inet"), // Actually CIDR in production - see migration 008
|
|
||||||
listId: varchar("list_id").notNull().references(() => publicLists.id, { onDelete: 'cascade' }),
|
|
||||||
firstSeen: timestamp("first_seen").defaultNow().notNull(),
|
|
||||||
lastSeen: timestamp("last_seen").defaultNow().notNull(),
|
|
||||||
isActive: boolean("is_active").notNull().default(true),
|
|
||||||
}, (table) => ({
|
|
||||||
ipAddressIdx: index("public_blacklist_ip_idx").on(table.ipAddress),
|
|
||||||
listIdIdx: index("public_blacklist_list_idx").on(table.listId),
|
|
||||||
isActiveIdx: index("public_blacklist_active_idx").on(table.isActive),
|
|
||||||
ipListUnique: unique("public_blacklist_ip_list_key").on(table.ipAddress, table.listId),
|
|
||||||
}));
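Because `ip_inet`/`cidr_inet` are real INET/CIDR columns in production, range membership can be checked directly in SQL with the `<<=` containment operator. Below is a hedged sketch using psycopg2 (a listed project dependency); the connection string and the exact query used by the detection pipeline are assumptions, only the table/column names follow the schema above.

```python
import psycopg2

conn = psycopg2.connect("dbname=ids")  # assumed connection string
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT ip_address, cidr_range
        FROM public_blacklist_ips
        WHERE is_active = true
          AND %s::inet <<= cidr_inet  -- true when the address falls inside the stored range
        """,
        ("203.0.113.9",),
    )
    for ip_addr, cidr_range in cur.fetchall():
        print(ip_addr, cidr_range)
```

This mirrors the `<<=` containment matching described in replit.md; only the client differs.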
|
|
||||||
|
|
||||||
// Schema version tracking for database migrations
|
// Schema version tracking for database migrations
|
||||||
export const schemaVersion = pgTable("schema_version", {
|
export const schemaVersion = pgTable("schema_version", {
|
||||||
id: integer("id").primaryKey().default(1),
|
id: integer("id").primaryKey().default(1),
|
||||||
@ -190,30 +138,7 @@ export const routersRelations = relations(routers, ({ many }) => ({
|
|||||||
logs: many(networkLogs),
|
logs: many(networkLogs),
|
||||||
}));
|
}));
|
||||||
|
|
||||||
export const publicListsRelations = relations(publicLists, ({ many }) => ({
|
// Rimossa relazione router (non più FK)
|
||||||
blacklistIps: many(publicBlacklistIps),
|
|
||||||
}));
|
|
||||||
|
|
||||||
export const publicBlacklistIpsRelations = relations(publicBlacklistIps, ({ one }) => ({
|
|
||||||
list: one(publicLists, {
|
|
||||||
fields: [publicBlacklistIps.listId],
|
|
||||||
references: [publicLists.id],
|
|
||||||
}),
|
|
||||||
}));
|
|
||||||
|
|
||||||
export const whitelistRelations = relations(whitelist, ({ one }) => ({
|
|
||||||
list: one(publicLists, {
|
|
||||||
fields: [whitelist.listId],
|
|
||||||
references: [publicLists.id],
|
|
||||||
}),
|
|
||||||
}));
|
|
||||||
|
|
||||||
export const detectionsRelations = relations(detections, ({ one }) => ({
|
|
||||||
blacklist: one(publicBlacklistIps, {
|
|
||||||
fields: [detections.blacklistId],
|
|
||||||
references: [publicBlacklistIps.id],
|
|
||||||
}),
|
|
||||||
}));
|
|
||||||
|
|
||||||
// Insert schemas
|
// Insert schemas
|
||||||
export const insertRouterSchema = createInsertSchema(routers).omit({
|
export const insertRouterSchema = createInsertSchema(routers).omit({
|
||||||
@ -251,19 +176,6 @@ export const insertNetworkAnalyticsSchema = createInsertSchema(networkAnalytics)
|
|||||||
createdAt: true,
|
createdAt: true,
|
||||||
});
|
});
|
||||||
|
|
||||||
export const insertPublicListSchema = createInsertSchema(publicLists).omit({
|
|
||||||
id: true,
|
|
||||||
createdAt: true,
|
|
||||||
lastFetch: true,
|
|
||||||
lastSuccess: true,
|
|
||||||
});
|
|
||||||
|
|
||||||
export const insertPublicBlacklistIpSchema = createInsertSchema(publicBlacklistIps).omit({
|
|
||||||
id: true,
|
|
||||||
firstSeen: true,
|
|
||||||
lastSeen: true,
|
|
||||||
});
|
|
||||||
|
|
||||||
// Types
|
// Types
|
||||||
export type Router = typeof routers.$inferSelect;
|
export type Router = typeof routers.$inferSelect;
|
||||||
export type InsertRouter = z.infer<typeof insertRouterSchema>;
|
export type InsertRouter = z.infer<typeof insertRouterSchema>;
|
||||||
@ -285,9 +197,3 @@ export type InsertSchemaVersion = z.infer<typeof insertSchemaVersionSchema>;
|
|||||||
|
|
||||||
export type NetworkAnalytics = typeof networkAnalytics.$inferSelect;
|
export type NetworkAnalytics = typeof networkAnalytics.$inferSelect;
|
||||||
export type InsertNetworkAnalytics = z.infer<typeof insertNetworkAnalyticsSchema>;
|
export type InsertNetworkAnalytics = z.infer<typeof insertNetworkAnalyticsSchema>;
|
||||||
|
|
||||||
export type PublicList = typeof publicLists.$inferSelect;
|
|
||||||
export type InsertPublicList = z.infer<typeof insertPublicListSchema>;
|
|
||||||
|
|
||||||
export type PublicBlacklistIp = typeof publicBlacklistIps.$inferSelect;
|
|
||||||
export type InsertPublicBlacklistIp = z.infer<typeof insertPublicBlacklistIpSchema>;
|
|
||||||
|
|||||||
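The insert schemas and `$inferSelect` types above come from drizzle-zod and Drizzle ORM, so they double as runtime validators and compile-time types. A minimal usage sketch (not part of the diff), assuming the file is importable as `./shared/schema` and that `publicBlacklistIps` carries `ipAddress`, `listId`, and `isActive` columns as the indexes and relations above suggest:

```typescript
// Sketch only: import path and exact column set are assumptions based on the diff above.
import {
  insertPublicBlacklistIpSchema,
  type InsertPublicBlacklistIp,
} from "./shared/schema";

// createInsertSchema() from drizzle-zod yields a Zod object schema, so the usual
// Zod methods (safeParse, omit, ...) are available on it.
const candidate: unknown = {
  ipAddress: "203.0.113.10", // documentation range, placeholder value
  listId: 1,                 // assumed integer FK to publicLists.id
  isActive: true,
};

const parsed = insertPublicBlacklistIpSchema.safeParse(candidate);
if (parsed.success) {
  // parsed.data is typed as InsertPublicBlacklistIp (z.infer of the schema)
  const row: InsertPublicBlacklistIp = parsed.data;
  console.log("valid insert payload", row);
} else {
  console.error(parsed.error.flatten());
}
```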
101
uv.lock
@@ -1,101 +0,0 @@
-version = 1
-revision = 3
-requires-python = ">=3.11"
-
-[[package]]
-name = "anyio"
-version = "4.11.0"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
-    { name = "idna" },
-    { name = "sniffio" },
-    { name = "typing-extensions", marker = "python_full_version < '3.13'" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/c6/78/7d432127c41b50bccba979505f272c16cbcadcc33645d5fa3a738110ae75/anyio-4.11.0.tar.gz", hash = "sha256:82a8d0b81e318cc5ce71a5f1f8b5c4e63619620b63141ef8c995fa0db95a57c4", size = 219094, upload-time = "2025-09-23T09:19:12.58Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/15/b3/9b1a8074496371342ec1e796a96f99c82c945a339cd81a8e73de28b4cf9e/anyio-4.11.0-py3-none-any.whl", hash = "sha256:0287e96f4d26d4149305414d4e3bc32f0dcd0862365a4bddea19d7a1ec38c4fc", size = 109097, upload-time = "2025-09-23T09:19:10.601Z" },
-]
-
-[[package]]
-name = "certifi"
-version = "2025.11.12"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/a2/8c/58f469717fa48465e4a50c014a0400602d3c437d7c0c468e17ada824da3a/certifi-2025.11.12.tar.gz", hash = "sha256:d8ab5478f2ecd78af242878415affce761ca6bc54a22a27e026d7c25357c3316", size = 160538, upload-time = "2025-11-12T02:54:51.517Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/70/7d/9bc192684cea499815ff478dfcdc13835ddf401365057044fb721ec6bddb/certifi-2025.11.12-py3-none-any.whl", hash = "sha256:97de8790030bbd5c2d96b7ec782fc2f7820ef8dba6db909ccf95449f2d062d4b", size = 159438, upload-time = "2025-11-12T02:54:49.735Z" },
-]
-
-[[package]]
-name = "h11"
-version = "0.16.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/01/ee/02a2c011bdab74c6fb3c75474d40b3052059d95df7e73351460c8588d963/h11-0.16.0.tar.gz", hash = "sha256:4e35b956cf45792e4caa5885e69fba00bdbc6ffafbfa020300e549b208ee5ff1", size = 101250, upload-time = "2025-04-24T03:35:25.427Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/04/4b/29cac41a4d98d144bf5f6d33995617b185d14b22401f75ca86f384e87ff1/h11-0.16.0-py3-none-any.whl", hash = "sha256:63cf8bbe7522de3bf65932fda1d9c2772064ffb3dae62d55932da54b31cb6c86", size = 37515, upload-time = "2025-04-24T03:35:24.344Z" },
-]
-
-[[package]]
-name = "httpcore"
-version = "1.0.9"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
-    { name = "certifi" },
-    { name = "h11" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/06/94/82699a10bca87a5556c9c59b5963f2d039dbd239f25bc2a63907a05a14cb/httpcore-1.0.9.tar.gz", hash = "sha256:6e34463af53fd2ab5d807f399a9b45ea31c3dfa2276f15a2c3f00afff6e176e8", size = 85484, upload-time = "2025-04-24T22:06:22.219Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/7e/f5/f66802a942d491edb555dd61e3a9961140fd64c90bce1eafd741609d334d/httpcore-1.0.9-py3-none-any.whl", hash = "sha256:2d400746a40668fc9dec9810239072b40b4484b640a8c38fd654a024c7a1bf55", size = 78784, upload-time = "2025-04-24T22:06:20.566Z" },
-]
-
-[[package]]
-name = "httpx"
-version = "0.28.1"
-source = { registry = "https://pypi.org/simple" }
-dependencies = [
-    { name = "anyio" },
-    { name = "certifi" },
-    { name = "httpcore" },
-    { name = "idna" },
-]
-sdist = { url = "https://files.pythonhosted.org/packages/b1/df/48c586a5fe32a0f01324ee087459e112ebb7224f646c0b5023f5e79e9956/httpx-0.28.1.tar.gz", hash = "sha256:75e98c5f16b0f35b567856f597f06ff2270a374470a5c2392242528e3e3e42fc", size = 141406, upload-time = "2024-12-06T15:37:23.222Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/2a/39/e50c7c3a983047577ee07d2a9e53faf5a69493943ec3f6a384bdc792deb2/httpx-0.28.1-py3-none-any.whl", hash = "sha256:d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad", size = 73517, upload-time = "2024-12-06T15:37:21.509Z" },
-]
-
-[[package]]
-name = "idna"
-version = "3.11"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/6f/6d/0703ccc57f3a7233505399edb88de3cbd678da106337b9fcde432b65ed60/idna-3.11.tar.gz", hash = "sha256:795dafcc9c04ed0c1fb032c2aa73654d8e8c5023a7df64a53f39190ada629902", size = 194582, upload-time = "2025-10-12T14:55:20.501Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/0e/61/66938bbb5fc52dbdf84594873d5b51fb1f7c7794e9c0f5bd885f30bc507b/idna-3.11-py3-none-any.whl", hash = "sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea", size = 71008, upload-time = "2025-10-12T14:55:18.883Z" },
-]
-
-[[package]]
-name = "repl-nix-workspace"
-version = "0.1.0"
-source = { virtual = "." }
-dependencies = [
-    { name = "httpx" },
-]
-
-[package.metadata]
-requires-dist = [{ name = "httpx", specifier = ">=0.28.1" }]
-
-[[package]]
-name = "sniffio"
-version = "1.3.1"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/a2/87/a6771e1546d97e7e041b6ae58d80074f81b7d5121207425c964ddf5cfdbd/sniffio-1.3.1.tar.gz", hash = "sha256:f4324edc670a0f49750a81b895f35c3adb843cca46f0530f79fc1babb23789dc", size = 20372, upload-time = "2024-02-25T23:20:04.057Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/e9/44/75a9c9421471a6c4805dbf2356f7c181a29c1879239abab1ea2cc8f38b40/sniffio-1.3.1-py3-none-any.whl", hash = "sha256:2f6da418d1f1e0fddd844478f41680e794e6051915791a034ff65e5f100525a2", size = 10235, upload-time = "2024-02-25T23:20:01.196Z" },
-]
-
-[[package]]
-name = "typing-extensions"
-version = "4.15.0"
-source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/72/94/1a15dd82efb362ac84269196e94cf00f187f7ed21c242792a923cdb1c61f/typing_extensions-4.15.0.tar.gz", hash = "sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466", size = 109391, upload-time = "2025-08-25T13:49:26.313Z" }
-wheels = [
-    { url = "https://files.pythonhosted.org/packages/18/67/36e9267722cc04a6b9f15c7f3441c2363321a3ea07da7ae0c0707beb2a9c/typing_extensions-4.15.0-py3-none-any.whl", hash = "sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548", size = 44614, upload-time = "2025-08-25T13:49:24.86Z" },
-]
592
version.json
@@ -1,306 +1,306 @@
 {
-  "version": "1.0.103",
+  "version": "1.0.54",
-  "lastUpdate": "2026-01-02T16:33:13.545Z",
+  "lastUpdate": "2025-11-24T15:30:39.800Z",
   "changelog": [
-    { "version": "1.0.103", "date": "2026-01-02", "type": "patch", "description": "Deployment automatico v1.0.103" },
-    { "version": "1.0.102", "date": "2026-01-02", "type": "patch", "description": "Deployment automatico v1.0.102" },
-    { "version": "1.0.101", "date": "2026-01-02", "type": "patch", "description": "Deployment automatico v1.0.101" },
-    { "version": "1.0.100", "date": "2026-01-02", "type": "patch", "description": "Deployment automatico v1.0.100" },
-    { "version": "1.0.99", "date": "2026-01-02", "type": "patch", "description": "Deployment automatico v1.0.99" },
-    { "version": "1.0.98", "date": "2026-01-02", "type": "patch", "description": "Deployment automatico v1.0.98" },
-    { "version": "1.0.97", "date": "2026-01-02", "type": "patch", "description": "Deployment automatico v1.0.97" },
-    { "version": "1.0.96", "date": "2026-01-02", "type": "patch", "description": "Deployment automatico v1.0.96" },
-    { "version": "1.0.95", "date": "2025-11-27", "type": "patch", "description": "Deployment automatico v1.0.95" },
-    { "version": "1.0.94", "date": "2025-11-27", "type": "patch", "description": "Deployment automatico v1.0.94" },
-    { "version": "1.0.93", "date": "2025-11-27", "type": "patch", "description": "Deployment automatico v1.0.93" },
-    { "version": "1.0.92", "date": "2025-11-27", "type": "patch", "description": "Deployment automatico v1.0.92" },
-    { "version": "1.0.91", "date": "2025-11-26", "type": "patch", "description": "Deployment automatico v1.0.91" },
-    { "version": "1.0.90", "date": "2025-11-26", "type": "patch", "description": "Deployment automatico v1.0.90" },
-    { "version": "1.0.89", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.89" },
-    { "version": "1.0.88", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.88" },
-    { "version": "1.0.87", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.87" },
-    { "version": "1.0.86", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.86" },
-    { "version": "1.0.85", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.85" },
-    { "version": "1.0.84", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.84" },
-    { "version": "1.0.83", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.83" },
-    { "version": "1.0.82", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.82" },
-    { "version": "1.0.81", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.81" },
-    { "version": "1.0.80", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.80" },
-    { "version": "1.0.79", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.79" },
-    { "version": "1.0.78", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.78" },
-    { "version": "1.0.77", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.77" },
-    { "version": "1.0.76", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.76" },
-    { "version": "1.0.75", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.75" },
-    { "version": "1.0.74", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.74" },
-    { "version": "1.0.73", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.73" },
-    { "version": "1.0.72", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.72" },
-    { "version": "1.0.71", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.71" },
-    { "version": "1.0.70", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.70" },
-    { "version": "1.0.69", "date": "2025-11-25", "type": "patch", "description": "Deployment automatico v1.0.69" },
-    { "version": "1.0.68", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.68" },
-    { "version": "1.0.67", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.67" },
-    { "version": "1.0.66", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.66" },
-    { "version": "1.0.65", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.65" },
-    { "version": "1.0.64", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.64" },
-    { "version": "1.0.63", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.63" },
-    { "version": "1.0.62", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.62" },
-    { "version": "1.0.61", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.61" },
-    { "version": "1.0.60", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.60" },
-    { "version": "1.0.59", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.59" },
-    { "version": "1.0.58", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.58" },
-    { "version": "1.0.57", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.57" },
-    { "version": "1.0.56", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.56" },
-    { "version": "1.0.55", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.55" },
     { "version": "1.0.54", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.54" },
+    { "version": "1.0.53", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.53" },
+    { "version": "1.0.52", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.52" },
+    { "version": "1.0.51", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.51" },
+    { "version": "1.0.50", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.50" },
+    { "version": "1.0.49", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.49" },
+    { "version": "1.0.48", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.48" },
+    { "version": "1.0.47", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.47" },
+    { "version": "1.0.46", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.46" },
+    { "version": "1.0.45", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.45" },
+    { "version": "1.0.44", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.44" },
+    { "version": "1.0.43", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.43" },
+    { "version": "1.0.42", "date": "2025-11-24", "type": "patch", "description": "Deployment automatico v1.0.42" },
+    { "version": "1.0.41", "date": "2025-11-22", "type": "patch", "description": "Deployment automatico v1.0.41" },
+    { "version": "1.0.40", "date": "2025-11-22", "type": "patch", "description": "Deployment automatico v1.0.40" },
+    { "version": "1.0.39", "date": "2025-11-22", "type": "patch", "description": "Deployment automatico v1.0.39" },
+    { "version": "1.0.38", "date": "2025-11-22", "type": "patch", "description": "Deployment automatico v1.0.38" },
+    { "version": "1.0.37", "date": "2025-11-22", "type": "patch", "description": "Deployment automatico v1.0.37" },
+    { "version": "1.0.36", "date": "2025-11-22", "type": "patch", "description": "Deployment automatico v1.0.36" },
+    { "version": "1.0.35", "date": "2025-11-22", "type": "patch", "description": "Deployment automatico v1.0.35" },
+    { "version": "1.0.34", "date": "2025-11-22", "type": "patch", "description": "Deployment automatico v1.0.34" },
+    { "version": "1.0.33", "date": "2025-11-22", "type": "patch", "description": "Deployment automatico v1.0.33" },
+    { "version": "1.0.32", "date": "2025-11-22", "type": "patch", "description": "Deployment automatico v1.0.32" },
+    { "version": "1.0.31", "date": "2025-11-22", "type": "patch", "description": "Deployment automatico v1.0.31" },
+    { "version": "1.0.30", "date": "2025-11-22", "type": "patch", "description": "Deployment automatico v1.0.30" },
+    { "version": "1.0.29", "date": "2025-11-22", "type": "patch", "description": "Deployment automatico v1.0.29" },
+    { "version": "1.0.28", "date": "2025-11-21", "type": "patch", "description": "Deployment automatico v1.0.28" },
+    { "version": "1.0.27", "date": "2025-11-21", "type": "patch", "description": "Deployment automatico v1.0.27" },
+    { "version": "1.0.26", "date": "2025-11-21", "type": "patch", "description": "Deployment automatico v1.0.26" },
+    { "version": "1.0.25", "date": "2025-11-21", "type": "patch", "description": "Deployment automatico v1.0.25" },
+    { "version": "1.0.24", "date": "2025-11-21", "type": "patch", "description": "Deployment automatico v1.0.24" },
+    { "version": "1.0.23", "date": "2025-11-21", "type": "patch", "description": "Deployment automatico v1.0.23" },
+    { "version": "1.0.22", "date": "2025-11-21", "type": "patch", "description": "Deployment automatico v1.0.22" },
+    { "version": "1.0.21", "date": "2025-11-21", "type": "patch", "description": "Deployment automatico v1.0.21" },
+    { "version": "1.0.20", "date": "2025-11-21", "type": "patch", "description": "Deployment automatico v1.0.20" },
+    { "version": "1.0.19", "date": "2025-11-21", "type": "patch", "description": "Deployment automatico v1.0.19" },
+    { "version": "1.0.18", "date": "2025-11-18", "type": "patch", "description": "Deployment automatico v1.0.18" },
+    { "version": "1.0.17", "date": "2025-11-17", "type": "patch", "description": "Deployment automatico v1.0.17" },
+    { "version": "1.0.16", "date": "2025-11-17", "type": "patch", "description": "Deployment automatico v1.0.16" },
+    { "version": "1.0.15", "date": "2025-11-17", "type": "patch", "description": "Deployment automatico v1.0.15" },
+    { "version": "1.0.14", "date": "2025-11-17", "type": "patch", "description": "Deployment automatico v1.0.14" },
+    { "version": "1.0.13", "date": "2025-11-17", "type": "patch", "description": "Deployment automatico v1.0.13" },
+    { "version": "1.0.12", "date": "2025-11-17", "type": "patch", "description": "Deployment automatico v1.0.12" },
+    { "version": "1.0.11", "date": "2025-11-17", "type": "patch", "description": "Deployment automatico v1.0.11" },
+    { "version": "1.0.10", "date": "2025-11-17", "type": "patch", "description": "Deployment automatico v1.0.10" },
+    { "version": "1.0.9", "date": "2025-11-17", "type": "patch", "description": "Deployment automatico v1.0.9" },
+    { "version": "1.0.8", "date": "2025-11-17", "type": "patch", "description": "Deployment automatico v1.0.8" },
+    { "version": "1.0.7", "date": "2025-11-17", "type": "patch", "description": "Deployment automatico v1.0.7" },
+    { "version": "1.0.6", "date": "2025-11-17", "type": "patch", "description": "Deployment automatico v1.0.6" },
+    { "version": "1.0.5", "date": "2025-11-17", "type": "patch", "description": "Deployment automatico v1.0.5" }
   ]
 }
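The version.json diff above shows the file's shape: a top-level `version` and `lastUpdate` plus a `changelog` array ordered newest-first. A minimal TypeScript sketch (not part of the repository) that reads it and prints the most recent entry, assuming it is run with Node from the repo root:

```typescript
// Sketch only: field names mirror the structure shown in the diff above;
// the script path and runtime are assumptions.
import { readFileSync } from "node:fs";

interface ChangelogEntry {
  version: string;
  date: string;        // e.g. "2025-11-24"
  type: string;        // "patch" in every entry shown above
  description: string;
}

interface VersionFile {
  version: string;
  lastUpdate: string;  // ISO timestamp
  changelog: ChangelogEntry[];
}

const info = JSON.parse(readFileSync("version.json", "utf-8")) as VersionFile;
const latest = info.changelog[0]; // entries are ordered newest-first in the file

console.log(`current version: ${info.version} (updated ${info.lastUpdate})`);
if (latest) {
  console.log(`latest change: v${latest.version} on ${latest.date}: ${latest.description}`);
}
```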