OneDrive Backup on Linux with rclone: A Complete Guide to Cloud Synchronization

Enterprise-grade configuration for automated backups, bidirectional synchronization, and advanced monitoring



OneDrive on Linux with rclone: Enterprise-Grade Cloud Synchronization

Integrating Microsoft OneDrive with Linux systems is a common challenge for system administrators and users who need seamless access to Microsoft cloud data from a Linux environment. rclone is the most reliable enterprise solution for this scenario, offering bidirectional synchronization, automated backups, and advanced cloud data management on Linux platforms.

In this article
  • Installing and configuring rclone on all major Linux distributions
  • Secure OAuth2 authentication with Microsoft OneDrive Business and Personal
  • Enterprise-grade automated synchronization with monitoring and error handling
  • Incremental and differential backups with advanced retention policies
  • Encryption and security to protect sensitive data in transit and at rest
  • Full automation with systemd, cron, and advanced scripting
  • Monitoring and alerting for mission-critical production environments
  • Troubleshooting and performance tuning for operational optimization

Guide Index

🚀 Part I - Base Setup and Configuration

  1. rclone and OneDrive Architecture
  2. Multi-Platform Installation
  3. OAuth2 Configuration and Authentication

🔒 Part II - Synchronization and Security

  1. Advanced Synchronization Operations
  2. Encryption and Data Protection
  3. Permissions and Access Control Management

⚡ Part III - Enterprise Automation

  1. Automated Backups and Scheduling
  2. Advanced Monitoring and Logging
  3. Performance Optimization

🛠️ Part IV - Operations and Maintenance

  1. Troubleshooting and Recovery
  2. Scaling and Multi-Account Management
  3. Enterprise Best Practices

rclone and OneDrive Architecture

Technology Overview

rclone is a powerful cloud synchronization tool that implements the “rsync for cloud storage” pattern, offering a unified interface to more than 70 different cloud storage providers. For Microsoft OneDrive, rclone uses the Microsoft Graph API and OAuth2 authentication to provide secure access that meets enterprise standards.

┌─────────────────┐    OAuth2/Graph API    ┌─────────────────────┐
│  Linux System   │ ◄────────────────────► │  Microsoft OneDrive │
│    (rclone)     │                        │ (Business/Personal) │
└─────────────────┘                        └─────────────────────┘
         │                                            │
         ▼                                            ▼
┌─────────────────┐                        ┌─────────────────────┐
│ Local Storage   │                        │   Cloud Storage     │
│ - Filesystem    │                        │ - Files & Metadata  │
│ - Metadata      │                        │ - Version History   │
│ - Timestamps    │                        │ - Sharing Settings  │
└─────────────────┘                        └─────────────────────┘

Enterprise Features of rclone

Core Features

  • Bidirectional synchronization with intelligent conflict resolution
  • Incremental and differential backups to minimize data transfer
  • Client-side encryption (XSalsa20-Poly1305 via NaCl secretbox in the crypt backend)
  • Bandwidth throttling and advanced network resource management
  • Resume capabilities for interrupted transfers
  • Automatic deduplication to reduce sync time and storage
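Several of these features are driven purely by command-line flags. As a minimal sketch (the remote name and paths are placeholders), a helper can assemble a throttled, retry-friendly transfer command; it only builds and prints the command line, so the actual rclone invocation stays explicit:

```shell
#!/bin/bash
# Hypothetical helper: assembles (and prints) an rclone copy command with
# bandwidth throttling and retry flags. Remote name and paths are placeholders.
build_throttled_copy() {
    local src="$1" dst="$2" limit="${3:-8M}"
    printf 'rclone copy %s %s --bwlimit %s --transfers 4 --retries 5 --low-level-retries 10\n' \
        "$src" "$dst" "$limit"
}

# Print the command that would run; pipe the output to bash to execute it
build_throttled_copy /home/user/docs onedrive-personal:/docs 4M
```

Keeping command construction separate from execution makes it easy to log or dry-run exactly what will be launched.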

Enterprise Capabilities

  • Multi-threading for optimal performance on high-speed connections
  • Structured logging compatible with enterprise monitoring systems
  • Health checks and self-healing for critical operations
  • Remote-control REST API for integration with orchestration systems
  • Plugin architecture for custom extensions
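The structured-logging capability pairs well with a thin wrapper on the calling side. A minimal sketch with illustrative field names (this is not an rclone output format) that emits JSON lines most log collectors can ingest:

```shell
#!/bin/bash
# Emit one JSON log line per event; the field names are illustrative.
log_json() {
    local level="$1" msg="$2"
    printf '{"ts":"%s","level":"%s","msg":"%s"}\n' "$(date -Iseconds)" "$level" "$msg"
}

log_json INFO "sync session started"
log_json WARN "remote quota above 90%"
```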

Multi-Platform Installation

Universal Installation (Recommended Method)

# Download and run the official install script
curl -fsSL https://rclone.org/install.sh | sudo bash

# Verify the installation
rclone version
rclone help

Installing via Package Manager

Ubuntu/Debian

# Update repositories
sudo apt update

# Install rclone
sudo apt install rclone -y

# Check the version (may not be the latest)
rclone version

CentOS/RHEL/Rocky Linux

# Enable the EPEL repository
sudo dnf install epel-release -y

# Install rclone
sudo dnf install rclone -y

# Alternatively, for more recent versions
sudo dnf copr enable julioloayzabc/rclone
sudo dnf install rclone

Arch Linux

# Install via pacman
sudo pacman -S rclone

# AUR alternative for beta versions
yay -S rclone-beta

openSUSE

# Install via zypper
sudo zypper install rclone

Building from Source for Customization

# Prerequisite: Go toolchain
sudo apt install golang-go git -y

# Clone the official repository
git clone https://github.com/rclone/rclone.git
cd rclone

# Build
go build

# Install
sudo cp rclone /usr/local/bin/
sudo chown root:root /usr/local/bin/rclone
sudo chmod 755 /usr/local/bin/rclone

# Install the man page
sudo mkdir -p /usr/local/share/man/man1
sudo cp rclone.1 /usr/local/share/man/man1/
sudo mandb

Enterprise Directory Structure Setup

# Create the enterprise directory structure
sudo mkdir -p /opt/rclone/{config,logs,scripts,backups}
sudo mkdir -p /var/log/rclone
sudo mkdir -p /etc/rclone

# Set permissions
sudo chown -R $(whoami):$(whoami) /opt/rclone
sudo chmod -R 755 /opt/rclone
sudo chmod 755 /var/log/rclone

# Main configuration file
sudo touch /etc/rclone/rclone.conf
sudo chown $(whoami):$(whoami) /etc/rclone/rclone.conf
sudo chmod 600 /etc/rclone/rclone.conf

OAuth2 Configuration and Authentication

Interactive Configuration for OneDrive Personal

# Start the guided configuration
rclone config

# Option selection
# n) New remote
# Name: onedrive-personal
# Storage: onedrive (Microsoft OneDrive; the menu number varies between rclone versions)
# Client ID: <press Enter for the default>
# Client Secret: <press Enter for the default>
# Region: 1 (Microsoft Cloud Global)
# Edit advanced config: n
# Use auto config: y

OneDrive Business/Enterprise Configuration

14
# Per ambienti enterprise con Azure AD
rclone config

# Configurazione con parametri avanzati:
# Nome: onedrive-business
# Storage: 26 (Microsoft OneDrive)
# Client ID: <custom se registrato in Azure AD>
# Client Secret: <custom se registrato in Azure AD>
# Region: 1 (Microsoft Cloud Global)
# Advanced config: y
# - drive_type: business
# - expose_onenote_files: true
# - server_side_across_configs: false
# Use auto config: y

Headless Configuration for Servers

On servers without a graphical interface, authentication must be completed on a machine with a browser:

# On the headless server
rclone config

# When asked "Use auto config? (y/n)" answer: n
# Copy the URL provided and open it on a machine with a browser
# Complete the OAuth2 authentication
# Paste the resulting token back on the server

Advanced Configuration with a Custom Client ID

For enterprise environments it is recommended to register a dedicated Azure AD application:

# Advanced configuration file
cat > /etc/rclone/onedrive-enterprise.conf << 'EOF'
[onedrive-enterprise]
type = onedrive
client_id = YOUR_AZURE_APP_CLIENT_ID
client_secret = YOUR_AZURE_APP_CLIENT_SECRET
token = {"access_token":"xxx","token_type":"Bearer","refresh_token":"xxx","expiry":"2024-12-31T23:59:59Z"}
drive_type = business
region = global
chunk_size = 320M
drive_id = YOUR_DRIVE_ID
expose_onenote_files = true
EOF

Verifying the Configuration

# Test OneDrive connectivity
rclone lsd onedrive-personal:
rclone lsd onedrive-business:

# Detailed account information
rclone about onedrive-personal:

# Simple transfer test
echo "Test file $(date)" > test.txt
rclone copy test.txt onedrive-personal:/test/
rclone ls onedrive-personal:/test/

Advanced Synchronization Operations

Basic Synchronization Commands

Copy vs Sync vs Move

# COPY: copies files without deleting anything at the destination
rclone copy /home/user/documents onedrive-personal:/backup/documents

# SYNC: makes the destination identical to the source (deletes extras)
rclone sync /home/user/documents onedrive-personal:/backup/documents

# MOVE: moves files (deletes from the source after transfer)
rclone move /home/user/temp onedrive-personal:/archive/temp

Bidirectional Operations

# Bisync: bidirectional synchronization (experimental feature)
# First run, for initialization
rclone bisync /home/user/sync onedrive-personal:/sync --resync

# Subsequent synchronizations
rclone bisync /home/user/sync onedrive-personal:/sync
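Because bisync propagates deletions in both directions, it is worth wrapping it with safety guards. A sketch under stated assumptions: `--check-access` aborts unless `RCLONE_TEST` marker files exist on both sides, `--max-delete` aborts when more than the given percentage of files would be deleted, and `--resilient` (newer rclone releases) lets transient failures retry on the next run. The helper only builds and prints the command:

```shell
#!/bin/bash
# Hypothetical wrapper: builds a guarded bisync command. Paths and the
# remote name are placeholders; rclone must already be configured.
build_safe_bisync() {
    local local_path="$1" remote_path="$2"
    printf 'rclone bisync %s %s --check-access --max-delete 25 --resilient\n' \
        "$local_path" "$remote_path"
}

build_safe_bisync /home/user/sync onedrive-personal:/sync
```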

Advanced Include/Exclude Filtering

Filter Configuration File

# Create an enterprise filter file
cat > /opt/rclone/config/enterprise-filters.txt << 'EOF'
# Exclude temporary files
- *.tmp
- *.temp
- .DS_Store
- Thumbs.db
- *.swp
- *.swo
- *~

# Exclude system and build directories
- .git/**
- node_modules/**
- __pycache__/**
- .cache/**
- *.log

# Include only specific document types...
+ *.pdf
+ *.docx
+ *.xlsx
+ *.pptx
+ *.txt
+ *.md

# ...and exclude everything else
- *
EOF

# Apply the filters; size and age limits are flags, not filter-file rules
rclone sync /home/user/documents onedrive-personal:/documents \
  --filter-from /opt/rclone/config/enterprise-filters.txt \
  --max-size 100M \
  --max-age 2y

Dynamic Filters for Compliance

# Script for dynamic, compliance-driven filters
cat > /opt/rclone/scripts/compliance-filter.sh << 'EOF'
#!/bin/bash

# Filters for GDPR compliance
create_gdpr_filters() {
    cat > /tmp/gdpr-filters.txt << 'GDPR_EOF'
# Exclude potential PII
- *social*security*
- *tax*id*
- *ssn*
- *credit*card*
- *personal*data*
- *gdpr*sensitive*

# Include public business documents
+ *contract*
+ *invoice*
+ *report*
+ *presentation*
GDPR_EOF
}

# Filters for healthcare environments (HIPAA)
create_hipaa_filters() {
    cat > /tmp/hipaa-filters.txt << 'HIPAA_EOF'
# Exclude health information
- *patient*
- *medical*record*
- *health*data*
- *phi*
- *hipaa*

# Include non-PHI documents
+ *policy*
+ *procedure*
+ *training*
HIPAA_EOF
}

# Pick the filter set based on the first argument
case "${1:-gdpr}" in
    gdpr)
        create_gdpr_filters
        FILTER_FILE="/tmp/gdpr-filters.txt"
        ;;
    hipaa)
        create_hipaa_filters
        FILTER_FILE="/tmp/hipaa-filters.txt"
        ;;
esac

echo "Generated filter file: $FILTER_FILE"
EOF

chmod +x /opt/rclone/scripts/compliance-filter.sh

Enterprise Incremental Synchronization

# Advanced incremental sync script
cat > /opt/rclone/scripts/incremental-sync.sh << 'EOF'
#!/bin/bash

# Configuration
RCLONE_CONFIG="/etc/rclone/rclone.conf"
LOG_FILE="/var/log/rclone/incremental-$(date +%Y%m%d).log"
REMOTE_NAME="${1:-onedrive-business}"
LOCAL_PATH="${2:-/home/data}"
REMOTE_PATH="${3:-/backup}"

# Structured logging
log_message() {
    local level="$1"
    local message="$2"
    echo "[$(date -Iseconds)] [$level] $message" | tee -a "$LOG_FILE"
}

# Pre-flight checks
preflight_checks() {
    log_message "INFO" "Starting preflight checks..."

    # Check remote connectivity
    if ! rclone lsd "$REMOTE_NAME:" --config="$RCLONE_CONFIG" > /dev/null 2>&1; then
        log_message "ERROR" "Cannot connect to remote $REMOTE_NAME"
        exit 1
    fi

    # Check local disk space
    local available_space=$(df -BG "$LOCAL_PATH" | awk 'NR==2 {print $4}' | sed 's/G//')
    if [ "$available_space" -lt 10 ]; then
        log_message "WARN" "Low disk space: ${available_space}GB available"
    fi

    log_message "INFO" "Preflight checks completed successfully"
}

# Incremental sync with retry logic
incremental_sync() {
    local max_retries=3
    local retry_count=0

    while [ $retry_count -lt $max_retries ]; do
        log_message "INFO" "Starting incremental sync (attempt $((retry_count + 1)))"

        if rclone sync "$LOCAL_PATH" "$REMOTE_NAME:$REMOTE_PATH" \
           --config="$RCLONE_CONFIG" \
           --log-level INFO \
           --log-file="$LOG_FILE" \
           --stats 30s \
           --stats-file-name-length 0 \
           --exclude ".tmp/**" \
           --exclude "*.log" \
           --max-age 30d \
           --transfers 8 \
           --checkers 16 \
           --retries 3 \
           --low-level-retries 10; then

            log_message "INFO" "Incremental sync completed successfully"
            return 0
        else
            retry_count=$((retry_count + 1))
            log_message "WARN" "Sync attempt $retry_count failed, retrying..."
            sleep $((retry_count * 30))
        fi
    done

    log_message "ERROR" "Incremental sync failed after $max_retries attempts"
    return 1
}

# Post-sync verification
post_sync_verification() {
    log_message "INFO" "Starting post-sync verification..."

    # Compare local and remote file counts
    local local_count=$(find "$LOCAL_PATH" -type f | wc -l)
    local remote_count=$(rclone lsf "$REMOTE_NAME:$REMOTE_PATH" --recursive --config="$RCLONE_CONFIG" | wc -l)

    log_message "INFO" "Local files: $local_count, Remote files: $remote_count"

    # Size-only consistency check (add --checksum for hash verification)
    rclone check "$LOCAL_PATH" "$REMOTE_NAME:$REMOTE_PATH" \
      --config="$RCLONE_CONFIG" \
      --one-way \
      --size-only \
      >> "$LOG_FILE" 2>&1

    if [ $? -eq 0 ]; then
        log_message "INFO" "Post-sync verification successful"
    else
        log_message "WARN" "Post-sync verification found differences"
    fi
}

# Main execution
main() {
    log_message "INFO" "=== STARTING INCREMENTAL SYNC SESSION ==="

    preflight_checks
    incremental_sync
    post_sync_verification

    log_message "INFO" "=== INCREMENTAL SYNC SESSION COMPLETED ==="
}

# Signal handling for graceful shutdown
trap 'log_message "WARN" "Received signal, shutting down gracefully..."; exit 0' SIGTERM SIGINT

# Run
main "$@"
EOF

chmod +x /opt/rclone/scripts/incremental-sync.sh

Encryption and Data Protection

Client-Side Encryption Setup

rclone supports transparent client-side encryption, ensuring data is encrypted before it ever leaves the local system:

# Configure an encrypted remote
rclone config

# Configuration:
# Name: onedrive-encrypted
# Storage: crypt (Encrypt/Decrypt a remote; the menu number varies between rclone versions)
# Remote: onedrive-business:/encrypted-data
# Password: <strong-password>
# Salt: <optional-salt-for-extra-security>

Enterprise Encryption with an Automated Script

# Enterprise encryption setup script
cat > /opt/rclone/scripts/setup-encryption.sh << 'EOF'
#!/bin/bash

# Enterprise encryption configuration
setup_enterprise_encryption() {
    local remote_name="$1"
    local base_remote="$2"
    local encryption_password="$3"

    echo "Setting up enterprise encryption for $remote_name..."

    # Generate a secure random salt
    local salt=$(openssl rand -base64 32)

    # Create the rclone crypt remote
    rclone config create "$remote_name" \
        crypt \
        remote="$base_remote" \
        password="$(echo "$encryption_password" | rclone obscure -)" \
        password2="$(echo "$salt" | rclone obscure -)" \
        filename_encryption="standard" \
        directory_name_encryption=true

    echo "Encryption setup completed for $remote_name"
}

# Key rotation for compliance
rotate_encryption_keys() {
    local remote_name="$1"
    local new_password="$2"

    echo "Rotating encryption keys for $remote_name..."

    # Back up the existing configuration
    cp ~/.config/rclone/rclone.conf ~/.config/rclone/rclone.conf.backup

    # Set up a new remote with new keys
    local temp_remote="${remote_name}-new"
    setup_enterprise_encryption "$temp_remote" "$(get_base_remote "$remote_name")" "$new_password"

    # Re-encrypt the data under the new keys
    rclone sync "$remote_name:" "$temp_remote:" --progress

    echo "Key rotation completed"
}

# Utility to read the underlying base remote
get_base_remote() {
    local remote_name="$1"
    rclone config dump | jq -r ".[\"$remote_name\"].remote"
}

# Verify encryption integrity
verify_encryption_integrity() {
    local encrypted_remote="$1"
    local test_file="/tmp/encryption-test-$(date +%s).txt"

    # Create a test file
    echo "Encryption test $(date)" > "$test_file"

    # Encrypted upload
    rclone copy "$test_file" "$encrypted_remote:/test/"

    # Download and compare (copyto, since the destination is a file path)
    local downloaded_file="/tmp/downloaded-test.txt"
    rclone copyto "$encrypted_remote:/test/$(basename "$test_file")" "$downloaded_file"

    if cmp -s "$test_file" "$downloaded_file"; then
        echo "Encryption integrity verification: PASSED"
        rm -f "$test_file" "$downloaded_file"
        rclone delete "$encrypted_remote:/test/$(basename "$test_file")"
        return 0
    else
        echo "Encryption integrity verification: FAILED"
        return 1
    fi
}

# Main execution
case "${1:-setup}" in
    setup)
        setup_enterprise_encryption "${2:-onedrive-encrypted}" "${3:-onedrive-business:/encrypted}" "${4:-$(openssl rand -base64 32)}"
        ;;
    rotate)
        rotate_encryption_keys "${2:-onedrive-encrypted}" "${3:-$(openssl rand -base64 32)}"
        ;;
    verify)
        verify_encryption_integrity "${2:-onedrive-encrypted}"
        ;;
    *)
        echo "Usage: $0 {setup|rotate|verify} [remote-name] [base-remote] [password]"
        exit 1
        ;;
esac
EOF

chmod +x /opt/rclone/scripts/setup-encryption.sh

Secure Credential Management

# Secrets management setup with systemd
cat > /etc/systemd/system/rclone-secrets.service << 'EOF'
[Unit]
Description=rclone Secrets Management
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
Environment=RCLONE_CONFIG_PASS_FILE=/etc/rclone/master.key
ExecStart=/bin/bash -c 'test -f /etc/rclone/master.key || openssl rand -base64 32 > /etc/rclone/master.key'
ExecStart=/bin/chmod 600 /etc/rclone/master.key

[Install]
WantedBy=multi-user.target
EOF

# Enable the service
sudo systemctl enable rclone-secrets.service
sudo systemctl start rclone-secrets.service

# Script for secure password handling
cat > /opt/rclone/scripts/secure-config.sh << 'EOF'
#!/bin/bash

# Use the master key to protect configurations
export RCLONE_CONFIG_PASS_FILE="/etc/rclone/master.key"

# Run configuration against the password-protected config file
rclone config --config /etc/rclone/rclone.conf.encrypted "$@"
EOF

chmod +x /opt/rclone/scripts/secure-config.sh

Automated Backups and Scheduling

Systemd Timer Configuration for Enterprise Backups

# systemd backup service
cat > /etc/systemd/system/rclone-backup.service << 'EOF'
[Unit]
Description=rclone OneDrive Backup Service
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
User=backup
Group=backup
Environment=RCLONE_CONFIG=/etc/rclone/rclone.conf
Environment=RCLONE_LOG_LEVEL=INFO
ExecStart=/opt/rclone/scripts/enterprise-backup.sh
TimeoutSec=3600
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
EOF

# Timer for scheduled runs
cat > /etc/systemd/system/rclone-backup.timer << 'EOF'
[Unit]
Description=Run rclone backup every 6 hours
Requires=rclone-backup.service

[Timer]
OnCalendar=*-*-* 00,06,12,18:00:00
Persistent=true
RandomizedDelaySec=300

[Install]
WantedBy=timers.target
EOF

# Enable the timer
sudo systemctl enable rclone-backup.timer
sudo systemctl start rclone-backup.timer
sudo systemctl status rclone-backup.timer

Enterprise-Grade Backup Script

# Complete enterprise backup script
cat > /opt/rclone/scripts/enterprise-backup.sh << 'EOF'
#!/bin/bash

# Enterprise rclone Backup Script
# Version: 2.0
# Compatible with: Ubuntu 20.04+, CentOS 8+, RHEL 8+

set -euo pipefail

# Configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CONFIG_FILE="/etc/rclone/backup-config.conf"
LOG_DIR="/var/log/rclone"
LOCK_FILE="/var/run/rclone-backup.lock"

# Load configuration
if [ -f "$CONFIG_FILE" ]; then
    source "$CONFIG_FILE"
else
    # Default configuration
    REMOTE_NAME="onedrive-business"
    BACKUP_SOURCES=("/home" "/etc" "/opt" "/var/log")
    REMOTE_BASE_PATH="/enterprise-backup"
    RETENTION_DAYS="30"
    MAX_PARALLEL_TRANSFERS="8"
    BANDWIDTH_LIMIT="10M"
    NOTIFICATION_EMAIL=""
fi

# Logging functions
setup_logging() {
    local timestamp=$(date +%Y%m%d_%H%M%S)
    LOG_FILE="$LOG_DIR/backup-$timestamp.log"

    exec 1> >(tee -a "$LOG_FILE")
    exec 2> >(tee -a "$LOG_FILE" >&2)

    log_info "=== ENTERPRISE BACKUP SESSION STARTED ==="
    log_info "Timestamp: $(date -Iseconds)"
    log_info "Log file: $LOG_FILE"
}

log_info() {
    echo "[$(date -Iseconds)] [INFO] $1"
}

log_warn() {
    echo "[$(date -Iseconds)] [WARN] $1" >&2
}

log_error() {
    echo "[$(date -Iseconds)] [ERROR] $1" >&2
}

# Lock management to prevent concurrent runs
acquire_lock() {
    if [ -f "$LOCK_FILE" ]; then
        local pid=$(cat "$LOCK_FILE")
        if kill -0 "$pid" 2>/dev/null; then
            log_error "Backup already running with PID: $pid"
            exit 1
        else
            log_warn "Stale lock file found, removing..."
            rm -f "$LOCK_FILE"
        fi
    fi

    echo $$ > "$LOCK_FILE"
    trap 'rm -f "$LOCK_FILE"' EXIT
}

# Pre-backup health check
health_check() {
    log_info "Performing health checks..."

    # Check OneDrive connectivity
    if ! rclone lsd "$REMOTE_NAME:" >/dev/null 2>&1; then
        log_error "Cannot connect to $REMOTE_NAME"
        return 1
    fi

    # Check available quota
    local quota_info=$(rclone about "$REMOTE_NAME:" --json)
    local used_bytes=$(echo "$quota_info" | jq -r '.used // 0')
    local total_bytes=$(echo "$quota_info" | jq -r '.total // 0')

    if [ "$total_bytes" -gt 0 ]; then
        local usage_percent=$(( used_bytes * 100 / total_bytes ))
        log_info "OneDrive usage: ${usage_percent}%"

        if [ "$usage_percent" -gt 90 ]; then
            log_warn "OneDrive usage above 90%, consider cleanup"
        fi
    fi

    # Check local disk space
    for source in "${BACKUP_SOURCES[@]}"; do
        if [ ! -d "$source" ]; then
            log_warn "Source directory not found: $source"
            continue
        fi

        local available_space=$(df -BG "$source" | awk 'NR==2 {print $4}' | sed 's/G//')
        if [ "$available_space" -lt 5 ]; then
            log_warn "Low disk space on $source: ${available_space}GB"
        fi
    done

    log_info "Health checks completed"
}

# Back up a single directory with retry logic
backup_directory() {
    local source_dir="$1"
    local remote_path="$2"
    local max_retries=3
    local retry_count=0

    log_info "Backing up: $source_dir -> $remote_path"

    while [ $retry_count -lt $max_retries ]; do
        if rclone sync "$source_dir" "$remote_path" \
           --config /etc/rclone/rclone.conf \
           --log-level INFO \
           --stats 30s \
           --stats-one-line \
           --transfers "$MAX_PARALLEL_TRANSFERS" \
           --checkers 16 \
           --retries 3 \
           --low-level-retries 10 \
           --bwlimit "$BANDWIDTH_LIMIT" \
           --exclude ".cache/**" \
           --exclude "*.tmp" \
           --exclude "*.log" \
           --exclude ".git/**"; then

            log_info "Successfully backed up: $source_dir"
            return 0
        else
            retry_count=$((retry_count + 1))
            log_warn "Backup attempt $retry_count failed for $source_dir, retrying..."
            sleep $((retry_count * 60))
        fi
    done

    log_error "Failed to backup $source_dir after $max_retries attempts"
    return 1
}

# Clean up obsolete backups
cleanup_old_backups() {
    log_info "Cleaning up backups older than $RETENTION_DAYS days..."

    # List existing dated backup directories (format: YYYYMMDD, as created above)
    local backup_dirs=$(rclone lsf "$REMOTE_NAME:$REMOTE_BASE_PATH" --dirs-only)

    echo "$backup_dirs" | while read -r backup_dir; do
        if [ -n "$backup_dir" ]; then
            local dir_name="${backup_dir%/}"

            if echo "$dir_name" | grep -qE '^[0-9]{8}$'; then
                local backup_date=$(date -d "$dir_name" +%s 2>/dev/null || echo "0")
                local cutoff_date=$(date -d "$RETENTION_DAYS days ago" +%s)

                if [ "$backup_date" -lt "$cutoff_date" ] && [ "$backup_date" -gt 0 ]; then
                    log_info "Removing old backup: $dir_name"
                    rclone purge "$REMOTE_NAME:$REMOTE_BASE_PATH/$dir_name"
                fi
            fi
        fi
    done
}

# Verify backup integrity
verify_backup_integrity() {
    log_info "Performing backup integrity verification..."

    local verification_errors=0

    for source in "${BACKUP_SOURCES[@]}"; do
        if [ ! -d "$source" ]; then
            continue
        fi

        local backup_name=$(basename "$source")
        local remote_path="$REMOTE_NAME:$REMOTE_BASE_PATH/$(date +%Y%m%d)/$backup_name"

        log_info "Verifying: $source vs $remote_path"

        if ! rclone check "$source" "$remote_path" --one-way --size-only >/dev/null 2>&1; then
            log_warn "Integrity check failed for $backup_name"
            verification_errors=$((verification_errors + 1))
        fi
    done

    if [ $verification_errors -eq 0 ]; then
        log_info "All integrity checks passed"
        return 0
    else
        log_warn "$verification_errors integrity check(s) failed"
        return 1
    fi
}

# Email notifications
send_notification() {
    local status="$1"
    local details="$2"

    if [ -n "$NOTIFICATION_EMAIL" ]; then
        local subject="rclone Backup $status - $(hostname)"
        local body="Backup session completed with status: $status\n\nDetails:\n$details\n\nLog file: $LOG_FILE"

        echo -e "$body" | mail -s "$subject" "$NOTIFICATION_EMAIL" 2>/dev/null || true
    fi
}

# Generate the backup report
generate_backup_report() {
    local status="$1"
    local start_time="$2"
    local end_time="$3"

    local report_file="$LOG_DIR/backup-report-$(date +%Y%m%d_%H%M%S).json"

    cat > "$report_file" << EOF
{
    "backup_session": {
        "status": "$status",
        "start_time": "$start_time",
        "end_time": "$end_time",
        "duration_seconds": $((end_time - start_time)),
        "hostname": "$(hostname)",
        "remote_name": "$REMOTE_NAME",
        "sources_backed_up": $(printf '%s\n' "${BACKUP_SOURCES[@]}" | jq -R . | jq -s .),
        "log_file": "$LOG_FILE",
        "configuration": {
            "retention_days": "$RETENTION_DAYS",
            "max_parallel_transfers": "$MAX_PARALLEL_TRANSFERS",
            "bandwidth_limit": "$BANDWIDTH_LIMIT"
        }
    }
}
EOF

    log_info "Backup report generated: $report_file"
}

# Main execution
main() {
    local start_time=$(date +%s)
    local backup_status="SUCCESS"
    local failed_backups=0

    # Setup
    setup_logging
    acquire_lock

    # Pre-backup checks
    if ! health_check; then
        backup_status="HEALTH_CHECK_FAILED"
        log_error "Health checks failed, aborting backup"
        exit 1
    fi

    # Run the backup for each source directory
    for source in "${BACKUP_SOURCES[@]}"; do
        if [ ! -d "$source" ]; then
            log_warn "Source directory not found: $source, skipping..."
            continue
        fi

        local backup_name=$(basename "$source")
        local remote_path="$REMOTE_NAME:$REMOTE_BASE_PATH/$(date +%Y%m%d)/$backup_name"

        if ! backup_directory "$source" "$remote_path"; then
            failed_backups=$((failed_backups + 1))
        fi
    done

    # Post-backup operations
    cleanup_old_backups

    if ! verify_backup_integrity; then
        backup_status="VERIFICATION_FAILED"
    fi

    if [ $failed_backups -gt 0 ]; then
        backup_status="PARTIAL_FAILURE"
        log_warn "$failed_backups backup(s) failed"
    fi

    # Finalize
    local end_time=$(date +%s)
    generate_backup_report "$backup_status" "$start_time" "$end_time"

    log_info "=== ENTERPRISE BACKUP SESSION COMPLETED ==="
    log_info "Status: $backup_status"
    log_info "Duration: $((end_time - start_time)) seconds"

    # Send notification
    local details="Failed backups: $failed_backups\nTotal sources: ${#BACKUP_SOURCES[@]}\nDuration: $((end_time - start_time))s"
    send_notification "$backup_status" "$details"

    # Appropriate exit code
    case "$backup_status" in
        "SUCCESS") exit 0 ;;
        "PARTIAL_FAILURE") exit 2 ;;
        *) exit 1 ;;
    esac
}

# Graceful signal handling
trap 'log_warn "Received termination signal, shutting down gracefully..."; exit 130' SIGTERM SIGINT

# Esecuzione principale
main "$@"
EOF

chmod +x /opt/rclone/scripts/enterprise-backup.sh
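As an alternative to cron, the script installed above can be driven by a systemd service/timer pair, which the automation part of this guide also covers. A minimal sketch — the unit names, paths and the 02:00 schedule are illustrative assumptions, not fixed conventions:

```ini
# /etc/systemd/system/rclone-backup.service (illustrative)
[Unit]
Description=rclone enterprise backup
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/opt/rclone/scripts/enterprise-backup.sh

# /etc/systemd/system/rclone-backup.timer (illustrative)
[Unit]
Description=Nightly rclone enterprise backup

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with `systemctl daemon-reload && systemctl enable --now rclone-backup.timer`; `Persistent=true` makes systemd run a missed backup at the next boot.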

Multi-Remote Backup Configuration

# Redundant backup configuration across multiple destinations
cat > /etc/rclone/backup-config.conf << 'EOF'
# Enterprise Backup Configuration

# Primary remote
REMOTE_NAME="onedrive-business"

# Secondary remotes for redundancy
SECONDARY_REMOTES=("onedrive-personal" "gdrive-backup")

# Directories to back up
BACKUP_SOURCES=("/home/users" "/opt/applications" "/etc" "/var/log")

# Advanced settings
REMOTE_BASE_PATH="/enterprise-backup"
RETENTION_DAYS="90"
MAX_PARALLEL_TRANSFERS="16"
BANDWIDTH_LIMIT="50M"

# Notifications
NOTIFICATION_EMAIL="admin@company.com"
SLACK_WEBHOOK=""

# Encryption
ENCRYPTION_ENABLED="true"
ENCRYPTION_REMOTE_SUFFIX="-encrypted"
EOF
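A quick sketch of how a backup script might consume this file and derive the dated remote paths used by the main loop above. The configuration values are inlined here instead of sourcing /etc/rclone/backup-config.conf, and the date is pinned for illustration:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Normally: source /etc/rclone/backup-config.conf
REMOTE_NAME="onedrive-business"
REMOTE_BASE_PATH="/enterprise-backup"
BACKUP_SOURCES=("/home/users" "/etc")

# Build the dated remote path for each source, as the main backup loop does
snapshot_date="20250101"   # normally: $(date +%Y%m%d)
for source in "${BACKUP_SOURCES[@]}"; do
    backup_name=$(basename "$source")
    remote_path="$REMOTE_NAME:$REMOTE_BASE_PATH/$snapshot_date/$backup_name"
    echo "$remote_path"
done
```

Each source directory ends up under a per-day folder on the remote, which is what makes the retention cleanup by date possible.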

Advanced Monitoring and Logging

Centralized Logging Setup

# rsyslog configuration for rclone
cat > /etc/rsyslog.d/50-rclone.conf << 'EOF'
# rclone logging configuration
:programname,isequal,"rclone" /var/log/rclone/rclone.log
:programname,isequal,"rclone" stop
EOF

# Restart rsyslog
sudo systemctl restart rsyslog

# logrotate configuration
cat > /etc/logrotate.d/rclone << 'EOF'
/var/log/rclone/*.log {
    daily
    missingok
    rotate 30
    compress
    delaycompress
    notifempty
    create 640 root root
    postrotate
        systemctl reload rsyslog > /dev/null 2>&1 || true
    endscript
}
EOF

Complete Monitoring System

# Enterprise rclone monitoring script
cat > /opt/rclone/scripts/monitor-rclone.sh << 'EOF'
#!/bin/bash

# Enterprise rclone Monitoring Script
set -euo pipefail

MONITOR_LOG="/var/log/rclone/monitoring.log"
METRICS_FILE="/var/log/rclone/metrics.json"
ALERT_THRESHOLD_ERRORS=5
ALERT_THRESHOLD_LATENCY_MS=5000

# Monitoring functions
check_connectivity() {
    local remote="$1"
    local start_time=$(date +%s%3N)

    if rclone lsd "$remote:" >/dev/null 2>&1; then
        local end_time=$(date +%s%3N)
        local latency=$((end_time - start_time))

        echo "{\"remote\":\"$remote\",\"status\":\"UP\",\"latency_ms\":$latency,\"timestamp\":\"$(date -Iseconds)\"}"

        if [ "$latency" -gt "$ALERT_THRESHOLD_LATENCY_MS" ]; then
            log_alert "HIGH_LATENCY" "Remote $remote latency: ${latency}ms"
        fi

        return 0
    else
        echo "{\"remote\":\"$remote\",\"status\":\"DOWN\",\"latency_ms\":-1,\"timestamp\":\"$(date -Iseconds)\"}"
        log_alert "CONNECTIVITY" "Remote $remote is unreachable"
        return 1
    fi
}

check_quota_usage() {
    local remote="$1"

    local quota_json=$(rclone about "$remote:" --json 2>/dev/null || echo '{}')
    local used=$(echo "$quota_json" | jq -r '.used // 0')
    local total=$(echo "$quota_json" | jq -r '.total // 0')

    if [ "$total" -gt 0 ]; then
        local usage_percent=$(( used * 100 / total ))
        echo "{\"remote\":\"$remote\",\"used_bytes\":$used,\"total_bytes\":$total,\"usage_percent\":$usage_percent,\"timestamp\":\"$(date -Iseconds)\"}"

        # Check the stricter threshold first, otherwise the critical branch can never fire
        if [ "$usage_percent" -gt 95 ]; then
            log_alert "QUOTA_CRITICAL" "Remote $remote usage critical: ${usage_percent}%"
        elif [ "$usage_percent" -gt 90 ]; then
            log_alert "QUOTA_HIGH" "Remote $remote usage: ${usage_percent}%"
        fi
    else
        echo "{\"remote\":\"$remote\",\"used_bytes\":$used,\"total_bytes\":null,\"usage_percent\":null,\"timestamp\":\"$(date -Iseconds)\"}"
    fi
}

check_recent_operations() {
    local log_file="$1"
    local hours_back="${2:-1}"

    if [ ! -f "$log_file" ]; then
        return 0
    fi

    # Count errors and warnings in the log (the whole file, not only the last
    # $hours_back hours). Note: grep -c already prints 0 on no match and exits 1,
    # so "|| echo 0" would emit a duplicate "0"; "|| true" avoids that.
    local recent_errors=$(grep -c "ERROR" "$log_file" 2>/dev/null || true)
    local recent_warnings=$(grep -c "WARN" "$log_file" 2>/dev/null || true)

    echo "{\"log_file\":\"$log_file\",\"recent_errors\":$recent_errors,\"recent_warnings\":$recent_warnings,\"hours_analyzed\":$hours_back,\"timestamp\":\"$(date -Iseconds)\"}"

    if [ "$recent_errors" -gt "$ALERT_THRESHOLD_ERRORS" ]; then
        log_alert "HIGH_ERROR_RATE" "High error rate: $recent_errors errors in last ${hours_back}h"
    fi
}

log_alert() {
    local alert_type="$1"
    local message="$2"

    echo "[$(date -Iseconds)] [ALERT] [$alert_type] $message" >> "$MONITOR_LOG"

    # Send alert via webhook if configured
    if [ -n "${SLACK_WEBHOOK:-}" ]; then
        curl -X POST -H 'Content-type: application/json' \
             --data "{\"text\":\"🚨 rclone Alert: $message\"}" \
             "$SLACK_WEBHOOK" >/dev/null 2>&1 || true
    fi
}

generate_health_dashboard() {
    local dashboard_file="/var/www/html/rclone-dashboard.html"

    # Use a distinct delimiter: a bare EOF here would terminate the outer heredoc early
    cat > "$dashboard_file" << 'HTML_EOF'
<!DOCTYPE html>
<html>
<head>
    <title>rclone Enterprise Dashboard</title>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <style>
        body { font-family: Arial, sans-serif; margin: 20px; }
        .status-up { color: #28a745; }
        .status-down { color: #dc3545; }
        .status-warning { color: #ffc107; }
        .metric-card { border: 1px solid #ddd; padding: 15px; margin: 10px 0; border-radius: 5px; }
        .timestamp { color: #666; font-size: 0.9em; }
    </style>
</head>
<body>
    <h1>🌐 rclone Enterprise Dashboard</h1>
    <div id="dashboard-content">
        <p>Loading dashboard data...</p>
    </div>

    <script>
        function loadDashboard() {
            fetch('/rclone-metrics.json')
                .then(response => response.json())
                .then(data => updateDashboard(data))
                .catch(error => console.error('Error loading dashboard:', error));
        }

        function updateDashboard(metrics) {
            const content = document.getElementById('dashboard-content');
            let html = '';

            if (metrics.connectivity) {
                html += '<h2>📡 Connectivity Status</h2>';
                metrics.connectivity.forEach(conn => {
                    const statusClass = conn.status === 'UP' ? 'status-up' : 'status-down';
                    html += `<div class="metric-card">
                        <strong>${conn.remote}</strong>:
                        <span class="${statusClass}">${conn.status}</span>
                        ${conn.latency_ms > 0 ? ` (${conn.latency_ms}ms)` : ''}
                        <div class="timestamp">${conn.timestamp}</div>
                    </div>`;
                });
            }

            content.innerHTML = html;
        }

        // Refresh every 30 seconds
        setInterval(loadDashboard, 30000);
        loadDashboard();
    </script>
</body>
</html>
HTML_EOF

    echo "Dashboard generated: $dashboard_file"
}

# Main monitoring execution
main() {
    local remotes=("onedrive-business" "onedrive-personal")
    local metrics="{\"timestamp\":\"$(date -Iseconds)\",\"hostname\":\"$(hostname)\"}"

    # Connectivity checks
    local connectivity_results="["
    local first=true

    for remote in "${remotes[@]}"; do
        if [ "$first" = true ]; then
            first=false
        else
            connectivity_results+=","
        fi

        local result=$(check_connectivity "$remote")
        connectivity_results+="$result"
    done
    connectivity_results+="]"

    # Quota checks
    local quota_results="["
    first=true

    for remote in "${remotes[@]}"; do
        if [ "$first" = true ]; then
            first=false
        else
            quota_results+=","
        fi

        local result=$(check_quota_usage "$remote")
        quota_results+="$result"
    done
    quota_results+="]"

    # Log analysis
    local log_analysis=$(check_recent_operations "/var/log/rclone/rclone.log" 1)

    # Compose the complete metrics object
    metrics=$(echo "$metrics" | jq --argjson conn "$connectivity_results" --argjson quota "$quota_results" --argjson logs "$log_analysis" '. + {connectivity: $conn, quota_usage: $quota, log_analysis: $logs}')

    # Save metrics
    echo "$metrics" > "$METRICS_FILE"
    cp "$METRICS_FILE" /var/www/html/rclone-metrics.json 2>/dev/null || true

    # Generate dashboard
    generate_health_dashboard

    echo "Monitoring cycle completed at $(date -Iseconds)"
}

# Run
main "$@"
EOF

chmod +x /opt/rclone/scripts/monitor-rclone.sh

Cron Job for Continuous Monitoring

# Set up the monitoring cron job
cat > /etc/cron.d/rclone-monitoring << 'EOF'
# rclone Enterprise Monitoring - every 5 minutes
*/5 * * * * root /opt/rclone/scripts/monitor-rclone.sh >> /var/log/rclone/monitoring.log 2>&1

# Daily health report - every day at 08:00
0 8 * * * root /opt/rclone/scripts/daily-health-report.sh
EOF
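The ALERT lines written by monitor-rclone.sh follow a fixed `[timestamp] [ALERT] [TYPE] message` format, so a daily report can summarize them with plain grep. A minimal sketch over an inline sample (in practice you would read /var/log/rclone/monitoring.log):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sample lines in the format log_alert() writes to monitoring.log
log_sample='[2025-01-01T00:00:00+00:00] [ALERT] [HIGH_LATENCY] Remote onedrive-business latency: 6200ms
[2025-01-01T00:05:00+00:00] [ALERT] [CONNECTIVITY] Remote onedrive-personal is unreachable
[2025-01-01T00:10:00+00:00] [ALERT] [HIGH_LATENCY] Remote onedrive-business latency: 7100ms'

# Count alerts per type (grep -F treats the brackets literally)
high_latency=$(printf '%s\n' "$log_sample" | grep -cF '[HIGH_LATENCY]')
connectivity=$(printf '%s\n' "$log_sample" | grep -cF '[CONNECTIVITY]')
echo "HIGH_LATENCY=$high_latency CONNECTIVITY=$connectivity"
```

The same counts could feed the daily-health-report job scheduled above.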

Performance Optimization

Enterprise Performance Configuration

# Optimized performance configuration file
cat > /opt/rclone/config/performance.conf << 'EOF'
# Enterprise Performance Configuration for rclone

# Transfer settings
--transfers=32
--checkers=16
--retries=3
--low-level-retries=10

# Bandwidth and throttling
--bwlimit=100M
--tpslimit=10
--tpslimit-burst=0

# Buffer and chunk sizes
--buffer-size=256M
--multi-thread-cutoff=250M
--multi-thread-streams=8

# Connection settings
--timeout=5m
--contimeout=60s
--expect-continue-timeout=1s

# Performance flags
--fast-list
--no-traverse
--no-check-certificate=false
--use-mmap

# Progress and stats
--progress
--stats=30s
--stats-one-line
EOF
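rclone can also read every flag from an `RCLONE_`-prefixed environment variable (for example `--transfers` becomes `RCLONE_TRANSFERS`), which avoids the `eval` used by the wrapper script. A sketch of the name mapping; the treatment of bare boolean flags as `true` is an assumption of this sketch:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Map an rclone command-line flag to its environment-variable form:
# strip "--", uppercase, "-" -> "_", prefix RCLONE_; bare flags become "true"
flag_to_env() {
    local flag="${1#--}"
    local name="${flag%%=*}"
    local value="${flag#*=}"
    [ "$value" = "$flag" ] && value="true"   # flag had no "=value" part
    name="${name//-/_}"
    printf 'RCLONE_%s=%s\n' "${name^^}" "$value"
}

flag_to_env "--transfers=32"      # RCLONE_TRANSFERS=32
flag_to_env "--buffer-size=256M"  # RCLONE_BUFFER_SIZE=256M
flag_to_env "--fast-list"         # RCLONE_FAST_LIST=true
```

Exporting these variables before invoking rclone applies the settings to every command in the session, without rewriting each command line.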

# Script to apply performance settings
cat > /opt/rclone/scripts/apply-performance-config.sh << 'EOF'
#!/bin/bash

PERFORMANCE_CONFIG="/opt/rclone/config/performance.conf"

# Apply performance settings to an rclone command
apply_performance_settings() {
    local base_command="$1"
    shift

    if [ -f "$PERFORMANCE_CONFIG" ]; then
        # Load settings from file, skipping comments and blank lines
        local performance_flags=$(grep -v '^#' "$PERFORMANCE_CONFIG" | grep -v '^$' | tr '\n' ' ')

        # Run the command with the performance flags appended
        eval "$base_command $performance_flags $*"
    else
        echo "Performance config not found: $PERFORMANCE_CONFIG"
        eval "$base_command $*"
    fi
}

# Usage examples
case "${1:-help}" in
    sync)
        apply_performance_settings "rclone sync" "${@:2}"
        ;;
    copy)
        apply_performance_settings "rclone copy" "${@:2}"
        ;;
    help)
        echo "Usage: $0 {sync|copy} [rclone-arguments]"
        echo "Example: $0 sync /home/user onedrive:/backup"
        ;;
    *)
        apply_performance_settings "rclone $1" "${@:2}"
        ;;
esac
EOF

chmod +x /opt/rclone/scripts/apply-performance-config.sh

Performance Benchmarking and Testing

# rclone performance benchmark script
cat > /opt/rclone/scripts/performance-benchmark.sh << 'EOF'
#!/bin/bash

set -euo pipefail

BENCHMARK_DIR="/tmp/rclone-benchmark"
RESULTS_FILE="/var/log/rclone/benchmark-$(date +%Y%m%d_%H%M%S).json"
REMOTE_NAME="${1:-onedrive-business}"

# Cleanup function
cleanup() {
    rm -rf "$BENCHMARK_DIR"
    rclone purge "$REMOTE_NAME:/benchmark-test" >/dev/null 2>&1 || true
}

trap cleanup EXIT

# Setup benchmark environment
setup_benchmark() {
    mkdir -p "$BENCHMARK_DIR"

    echo "Setting up benchmark files..."

    # Create test files of different sizes
    dd if=/dev/zero of="$BENCHMARK_DIR/small_1MB.dat" bs=1M count=1 2>/dev/null
    dd if=/dev/zero of="$BENCHMARK_DIR/medium_10MB.dat" bs=1M count=10 2>/dev/null
    dd if=/dev/zero of="$BENCHMARK_DIR/large_100MB.dat" bs=1M count=100 2>/dev/null

    # Create many small files
    mkdir -p "$BENCHMARK_DIR/small_files"
    for i in {1..100}; do
        echo "Test file $i content $(date)" > "$BENCHMARK_DIR/small_files/file_$i.txt"
    done

    echo "Benchmark setup completed"
}

# Benchmark upload performance
benchmark_upload() {
    local test_name="$1"
    local source_path="$2"
    local remote_path="$REMOTE_NAME:/benchmark-test/$test_name"

    echo "Benchmarking upload: $test_name"

    local start_time=$(date +%s%3N)

    rclone copy "$source_path" "$remote_path" \
        --transfers 8 \
        --checkers 16 \
        --stats 0 \
        --progress=false \
        >/dev/null 2>&1

    local end_time=$(date +%s%3N)
    local duration_ms=$((end_time - start_time))

    # Calculate file size
    local file_size_bytes=$(du -sb "$source_path" | cut -f1)
    local throughput_mbps=$(echo "scale=2; $file_size_bytes * 8 / $duration_ms / 1000" | bc -l)

    echo "{\"test\":\"$test_name\",\"operation\":\"upload\",\"duration_ms\":$duration_ms,\"file_size_bytes\":$file_size_bytes,\"throughput_mbps\":$throughput_mbps}"
}

# Benchmark download performance
benchmark_download() {
    local test_name="$1"
    local remote_path="$REMOTE_NAME:/benchmark-test/$test_name"
    local download_dir="/tmp/download-$test_name"

    echo "Benchmarking download: $test_name"

    mkdir -p "$download_dir"

    local start_time=$(date +%s%3N)

    rclone copy "$remote_path" "$download_dir" \
        --transfers 8 \
        --checkers 16 \
        --stats 0 \
        --progress=false \
        >/dev/null 2>&1

    local end_time=$(date +%s%3N)
    local duration_ms=$((end_time - start_time))

    # Calculate downloaded size
    local file_size_bytes=$(du -sb "$download_dir" | cut -f1)
    local throughput_mbps=$(echo "scale=2; $file_size_bytes * 8 / $duration_ms / 1000" | bc -l)

    rm -rf "$download_dir"

    echo "{\"test\":\"$test_name\",\"operation\":\"download\",\"duration_ms\":$duration_ms,\"file_size_bytes\":$file_size_bytes,\"throughput_mbps\":$throughput_mbps}"
}

# Benchmark list operations
benchmark_list() {
    echo "Benchmarking list operations..."

    local start_time=$(date +%s%3N)

    local file_count=$(rclone lsf "$REMOTE_NAME:/benchmark-test" --recursive | wc -l)

    local end_time=$(date +%s%3N)
    local duration_ms=$((end_time - start_time))

    echo "{\"test\":\"list_operations\",\"operation\":\"list\",\"duration_ms\":$duration_ms,\"file_count\":$file_count}"
}

# Run comprehensive benchmark
run_benchmark() {
    echo "Starting comprehensive rclone benchmark for $REMOTE_NAME"

    setup_benchmark

    local results="["
    local first=true

    # Upload benchmarks
    for test in "small_1MB.dat" "medium_10MB.dat" "large_100MB.dat" "small_files"; do
        if [ "$first" = true ]; then
            first=false
        else
            results+=","
        fi

        local result=$(benchmark_upload "$test" "$BENCHMARK_DIR/$test")
        results+="$result"
    done

    # Download benchmarks
    for test in "small_1MB.dat" "medium_10MB.dat" "large_100MB.dat" "small_files"; do
        results+=","
        local result=$(benchmark_download "$test")
        results+="$result"
    done

    # List benchmark
    results+=","
    local list_result=$(benchmark_list)
    results+="$list_result"

    results+="]"

    # Create comprehensive report
    local report="{\"benchmark_session\":{\"timestamp\":\"$(date -Iseconds)\",\"remote\":\"$REMOTE_NAME\",\"hostname\":\"$(hostname)\",\"results\":$results}}"

    echo "$report" | jq '.' > "$RESULTS_FILE"

    echo "Benchmark completed. Results saved to: $RESULTS_FILE"

    # Display summary
    echo ""
    echo "=== BENCHMARK SUMMARY ==="
    jq -r '.benchmark_session.results[] | select(.operation=="upload") | "Upload \(.test): \(.throughput_mbps) Mbps"' "$RESULTS_FILE"
    jq -r '.benchmark_session.results[] | select(.operation=="download") | "Download \(.test): \(.throughput_mbps) Mbps"' "$RESULTS_FILE"
}

# Main execution
run_benchmark
EOF

chmod +x /opt/rclone/scripts/performance-benchmark.sh
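The benchmark derives megabits per second as bytes × 8 / duration_ms / 1000 (bits per millisecond, scaled to Mbps). A quick sanity check of that arithmetic with awk, which avoids the script's dependency on bc:

```shell
#!/usr/bin/env bash
set -euo pipefail

# 100 MiB transferred in 8000 ms -> bits / ms / 1000 = megabits per second
file_size_bytes=104857600
duration_ms=8000
throughput_mbps=$(awk -v b="$file_size_bytes" -v ms="$duration_ms" \
    'BEGIN { printf "%.2f", b * 8 / ms / 1000 }')
echo "$throughput_mbps"   # 104.86
```

If the reported numbers look off by a factor of 8 or 1000, the usual culprit is mixing up bytes with bits or milliseconds with seconds.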

Troubleshooting and Recovery

Advanced Diagnostic Tools

# rclone diagnostic tool suite
cat > /opt/rclone/scripts/diagnostic-suite.sh << 'EOF'
#!/bin/bash

# rclone Enterprise Diagnostic Suite
set -euo pipefail

DIAGNOSTIC_LOG="/var/log/rclone/diagnostic-$(date +%Y%m%d_%H%M%S).log"
REMOTE_NAME="onedrive-business"   # default; a positional argument parsed below overrides it

# Logging functions
log_info() {
    echo "[$(date -Iseconds)] [INFO] $1" | tee -a "$DIAGNOSTIC_LOG"
}

log_warn() {
    echo "[$(date -Iseconds)] [WARN] $1" | tee -a "$DIAGNOSTIC_LOG"
}

log_error() {
    echo "[$(date -Iseconds)] [ERROR] $1" | tee -a "$DIAGNOSTIC_LOG"
}

# System checks
check_system_resources() {
    log_info "=== SYSTEM RESOURCES CHECK ==="

    # Memory usage
    local mem_info=$(free -h | grep Mem)
    log_info "Memory: $mem_info"

    # Disk space
    local disk_info=$(df -h | grep -E "/$|/home|/tmp")
    echo "$disk_info" | while read -r line; do
        log_info "Disk: $line"
    done

    # CPU load
    local load_avg=$(uptime | awk -F'load average:' '{print $2}')
    log_info "Load average:$load_avg"

    # Network connectivity
    if ping -c 3 8.8.8.8 >/dev/null 2>&1; then
        log_info "Internet connectivity: OK"
    else
        log_error "Internet connectivity: FAILED"
    fi
}

# rclone configuration validation
validate_rclone_config() {
    log_info "=== RCLONE CONFIGURATION VALIDATION ==="

    # Check rclone version
    local rclone_version=$(rclone version | head -n1)
    log_info "rclone version: $rclone_version"

    # Check configuration file
    local config_file=$(rclone config file | grep -o '/.*')
    if [ -f "$config_file" ]; then
        log_info "Config file found: $config_file"
        local config_size=$(stat -c%s "$config_file")
        log_info "Config file size: $config_size bytes"
    else
        log_error "Config file not found: $config_file"
        return 1
    fi

    # List configured remotes
    log_info "Configured remotes:"
    rclone listremotes | while read -r remote; do
        log_info "  - $remote"
    done

    # Check specific remote
    if rclone config show "$REMOTE_NAME" >/dev/null 2>&1; then
        log_info "Remote '$REMOTE_NAME' configuration found"
    else
        log_error "Remote '$REMOTE_NAME' not configured"
        return 1
    fi
}

# Connectivity diagnostic
diagnose_connectivity() {
    log_info "=== CONNECTIVITY DIAGNOSTIC ==="

    # Basic connectivity test
    log_info "Testing basic connectivity to $REMOTE_NAME..."

    local start_time=$(date +%s%3N)
    if rclone lsd "$REMOTE_NAME:" >/dev/null 2>&1; then
        local end_time=$(date +%s%3N)
        local latency=$((end_time - start_time))
        log_info "Connectivity test: PASSED (${latency}ms)"
    else
        log_error "Connectivity test: FAILED"

        # Detailed error analysis
        log_info "Attempting detailed error analysis..."
        rclone lsd "$REMOTE_NAME:" --log-level DEBUG 2>&1 | tail -20 >> "$DIAGNOSTIC_LOG"
        return 1
    fi

    # Authentication test
    log_info "Testing authentication..."
    if rclone about "$REMOTE_NAME:" >/dev/null 2>&1; then
        log_info "Authentication test: PASSED"
    else
        log_error "Authentication test: FAILED"
        log_warn "This might indicate expired OAuth tokens"
        return 1
    fi

    # Performance test
    log_info "Running quick performance test..."
    local test_file="/tmp/rclone-test-$(date +%s).txt"
    echo "Test data $(date)" > "$test_file"

    local upload_start=$(date +%s%3N)
    if rclone copy "$test_file" "$REMOTE_NAME:/diagnostic-test/" >/dev/null 2>&1; then
        local upload_end=$(date +%s%3N)
        local upload_time=$((upload_end - upload_start))
        log_info "Upload test: PASSED (${upload_time}ms)"

        # Cleanup
        rclone delete "$REMOTE_NAME:/diagnostic-test/$(basename "$test_file")" >/dev/null 2>&1 || true
        rm -f "$test_file"
    else
        log_error "Upload test: FAILED"
        rm -f "$test_file"
        return 1
    fi
}

# Permission diagnostic
diagnose_permissions() {
    log_info "=== PERMISSIONS DIAGNOSTIC ==="

    # Test read permissions
    log_info "Testing read permissions..."
    if rclone ls "$REMOTE_NAME:" --max-depth 1 >/dev/null 2>&1; then
        log_info "Read permissions: OK"
    else
        log_error "Read permissions: FAILED"
    fi

    # Test write permissions
    log_info "Testing write permissions..."
    local test_dir="$REMOTE_NAME:/diagnostic-test-$(date +%s)"
    if rclone mkdir "$test_dir" >/dev/null 2>&1; then
        log_info "Write permissions: OK"
        rclone rmdir "$test_dir" >/dev/null 2>&1 || true
    else
        log_error "Write permissions: FAILED"
    fi

    # Test delete permissions
    log_info "Testing delete permissions..."
    local test_file="/tmp/delete-test.txt"
    echo "Delete test" > "$test_file"

    if rclone copy "$test_file" "$REMOTE_NAME:/diagnostic-test/" >/dev/null 2>&1; then
        if rclone delete "$REMOTE_NAME:/diagnostic-test/delete-test.txt" >/dev/null 2>&1; then
            log_info "Delete permissions: OK"
        else
            log_error "Delete permissions: FAILED"
        fi
    else
        log_warn "Cannot test delete permissions (upload failed)"
    fi

    rm -f "$test_file"
}

# OAuth token diagnostic
diagnose_oauth_token() {
    log_info "=== OAUTH TOKEN DIAGNOSTIC ==="

    # Extract token info from config
    local config_dump=$(rclone config dump)

    if echo "$config_dump" | jq -e ".\"$REMOTE_NAME\".token" >/dev/null 2>&1; then
        log_info "OAuth token found in configuration"

        # Try to extract expiry
        local token_info=$(echo "$config_dump" | jq -r ".\"$REMOTE_NAME\".token")
        local expiry=$(echo "$token_info" | jq -r '.expiry // "unknown"')

        if [ "$expiry" != "unknown" ] && [ "$expiry" != "null" ]; then
            local expiry_epoch=$(date -d "$expiry" +%s 2>/dev/null || echo "0")
            local current_epoch=$(date +%s)

            if [ "$expiry_epoch" -gt "$current_epoch" ]; then
                local remaining_hours=$(( (expiry_epoch - current_epoch) / 3600 ))
                log_info "Token expires in: $remaining_hours hours"

                if [ "$remaining_hours" -lt 24 ]; then
                    log_warn "Token will expire soon, consider refreshing"
                fi
            else
                log_error "Token appears to be expired"
            fi
        else
            log_info "Token expiry information not available"
        fi
    else
        log_error "No OAuth token found in configuration"
    fi
}

# Generate comprehensive diagnostic report
generate_diagnostic_report() {
    log_info "=== GENERATING DIAGNOSTIC REPORT ==="

    local report_file="/var/log/rclone/diagnostic-report-$(date +%Y%m%d_%H%M%S).json"

    # System information
    local system_info=$(cat << JSON_EOF  # distinct delimiter: a bare EOF would terminate the outer heredoc
{
    "hostname": "$(hostname)",
    "timestamp": "$(date -Iseconds)",
    "rclone_version": "$(rclone version --check=false | head -n1)",
    "os_info": "$(uname -a)",
    "config_file": "$(rclone config file | grep -o '/.*' || echo 'unknown')",
    "remote_tested": "$REMOTE_NAME"
}
JSON_EOF
    )

    # Test results (simplified)
    local test_results='"test_results": ["System check completed", "Configuration validated", "Connectivity tested"]'

    # Create final report
    echo "{\"diagnostic_session\": $system_info, $test_results}" | jq '.' > "$report_file"

    log_info "Diagnostic report saved to: $report_file"
}

# Auto-fix common issues
auto_fix_issues() {
    log_info "=== AUTO-FIX COMMON ISSUES ==="

    # Fix 1: Clear rclone cache
    log_info "Clearing rclone cache..."
    if [ -d "$HOME/.cache/rclone" ]; then
        rm -rf "$HOME/.cache/rclone"
        log_info "rclone cache cleared"
    fi

    # Fix 2: Reset OAuth token if expired
    log_info "Checking OAuth token status..."
    if ! rclone about "$REMOTE_NAME:" >/dev/null 2>&1; then
        log_warn "OAuth token might be expired"
        log_info "You may need to run: rclone config reconnect $REMOTE_NAME:"
    fi

    # Fix 3: Verify and fix permissions
    log_info "Checking file permissions..."
    local config_file=$(rclone config file | grep -o '/.*')
    if [ -f "$config_file" ]; then
        chmod 600 "$config_file"
        log_info "Config file permissions fixed"
    fi
}

# Main diagnostic execution
main() {
    log_info "Starting rclone diagnostic suite for remote: $REMOTE_NAME"
    log_info "Diagnostic log: $DIAGNOSTIC_LOG"

    local exit_code=0

    # Run all diagnostic checks
    check_system_resources || exit_code=1
    validate_rclone_config || exit_code=1
    diagnose_connectivity || exit_code=1
    diagnose_permissions || exit_code=1
    diagnose_oauth_token || exit_code=1

    # Auto-fix if requested
    if [ "${AUTO_FIX:-false}" = "true" ]; then
        auto_fix_issues
    fi

    # Generate report
    generate_diagnostic_report

    if [ $exit_code -eq 0 ]; then
        log_info "=== ALL DIAGNOSTIC CHECKS PASSED ==="
    else
        log_warn "=== SOME DIAGNOSTIC CHECKS FAILED ==="
        log_info "Review the log file for detailed error information: $DIAGNOSTIC_LOG"
    fi

    return $exit_code
}

# Help function
show_help() {
    cat << HELP_EOF  # distinct delimiter: a bare EOF would terminate the outer heredoc
rclone Enterprise Diagnostic Suite

Usage: $0 [remote-name] [options]

Options:
    -h, --help      Show this help message
    --auto-fix      Attempt to automatically fix common issues

Examples:
    $0                           # Diagnose default remote (onedrive-business)
    $0 onedrive-personal         # Diagnose specific remote
    $0 onedrive-business --auto-fix  # Diagnose and attempt fixes

Environment Variables:
    AUTO_FIX=true              # Enable automatic fixes
HELP_EOF
}

# Parse arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        -h|--help)
            show_help
            exit 0
            ;;
        --auto-fix)
            AUTO_FIX=true
            shift
            ;;
        -*)
            echo "Unknown option $1"
            show_help
            exit 1
            ;;
        *)
            REMOTE_NAME="$1"
            shift
            ;;
    esac
done

# Run main diagnostic
main
EOF

chmod +x /opt/rclone/scripts/diagnostic-suite.sh

Recovery Procedures

# Automated recovery script
cat > /opt/rclone/scripts/disaster-recovery.sh << 'EOF'
#!/bin/bash

# rclone Disaster Recovery Script
set -euo pipefail

RECOVERY_LOG="/var/log/rclone/recovery-$(date +%Y%m%d_%H%M%S).log"
BACKUP_CONFIG_DIR="/opt/rclone/config/backup"
REMOTE_NAME="${1:-onedrive-business}"

log_info() {
    echo "[$(date -Iseconds)] [RECOVERY] [INFO] $1" | tee -a "$RECOVERY_LOG"
}

log_error() {
    echo "[$(date -Iseconds)] [RECOVERY] [ERROR] $1" | tee -a "$RECOVERY_LOG"
}

# Backup current configuration
backup_current_config() {
    log_info "Backing up current configuration..."

    mkdir -p "$BACKUP_CONFIG_DIR"

    local config_file=$(rclone config file | grep -o '/.*')
    if [ -f "$config_file" ]; then
        cp "$config_file" "$BACKUP_CONFIG_DIR/rclone.conf.$(date +%Y%m%d_%H%M%S)"
        log_info "Configuration backed up to $BACKUP_CONFIG_DIR"
    else
        log_error "Configuration file not found: $config_file"
    fi
}

# Restore from backup
restore_from_backup() {
    local backup_file="$1"

    log_info "Restoring configuration from: $backup_file"

    if [ -f "$backup_file" ]; then
        local config_file=$(rclone config file | grep -o '/.*')
        cp "$backup_file" "$config_file"
        log_info "Configuration restored successfully"

        # Test restored configuration
        if rclone lsd "$REMOTE_NAME:" >/dev/null 2>&1; then
            log_info "Restored configuration test: PASSED"
            return 0
        else
            log_error "Restored configuration test: FAILED"
            return 1
        fi
    else
        log_error "Backup file not found: $backup_file"
        return 1
    fi
}

# OAuth token refresh
refresh_oauth_token() {
    log_info "Attempting OAuth token refresh for: $REMOTE_NAME"

    # This requires manual intervention in most cases
    log_info "Manual OAuth refresh required. Run:"
    log_info "  rclone config reconnect $REMOTE_NAME:"

    # For headless systems, provide instructions
    log_info "For headless systems:"
    log_info "  1. Run rclone config on a machine with browser"
    log_info "  2. Copy the token section to this system's config"
}

# Data integrity check and repair
check_and_repair_data() {
    local local_path="$1"
    local remote_path="$2"

    log_info "Checking data integrity between $local_path and $remote_path"

    # Check for differences
    if rclone check "$local_path" "$remote_path" --one-way; then
        log_info "Data integrity check: PASSED"
        return 0
    else
        log_error "Data integrity check: FAILED"

        # Attempt repair
        log_info "Attempting data repair..."

        if rclone sync "$local_path" "$remote_path" --dry-run; then
            log_info "Repair simulation completed. Run without --dry-run to apply changes"

            # Ask for confirmation in interactive mode
            if [ -t 0 ]; then
                read -p "Apply repair changes? (y/N): " -n 1 -r
                echo
                if [[ $REPLY =~ ^[Yy]$ ]]; then
                    rclone sync "$local_path" "$remote_path"
                    log_info "Data repair completed"
                else
                    log_info "Data repair cancelled by user"
                fi
            else
                log_info "Non-interactive mode: repair changes not applied"
            fi
        else
            log_error "Data repair simulation failed"
            return 1
        fi
    fi
}

# Complete disaster recovery procedure
full_disaster_recovery() {
    log_info "Starting full disaster recovery procedure..."

    # Step 1: Backup current config
    backup_current_config

    # Step 2: Attempt automatic fixes
    log_info "Attempting automatic recovery fixes..."

    # Clear cache
    if [ -d "$HOME/.cache/rclone" ]; then
        rm -rf "$HOME/.cache/rclone"
        log_info "rclone cache cleared"
    fi

    # Step 3: Test connectivity
    if rclone lsd "$REMOTE_NAME:" >/dev/null 2>&1; then
        log_info "Basic connectivity test: PASSED"
    else
        log_error "Basic connectivity test: FAILED"

        # Step 4: Try OAuth refresh
        refresh_oauth_token
        return 1
    fi

    # Step 5: Test authentication
    if rclone about "$REMOTE_NAME:" >/dev/null 2>&1; then
        log_info "Authentication test: PASSED"
    else
        log_error "Authentication test: FAILED"
        refresh_oauth_token
        return 1
    fi

    log_info "Disaster recovery completed successfully"
}

# Interactive recovery menu
interactive_recovery() {
    echo "=== rclone Disaster Recovery Menu ==="
    echo "1. Full disaster recovery"
    echo "2. OAuth token refresh"
    echo "3. Restore from backup"
    echo "4. Data integrity check"
    echo "5. Exit"

    read -p "Select option (1-5): " choice

    case $choice in
        1) full_disaster_recovery ;;
        2) refresh_oauth_token ;;
        3)
            echo "Available backups:"
            ls -la "$BACKUP_CONFIG_DIR"/*.conf.* 2>/dev/null || echo "No backups found"
            read -p "Enter backup file path: " backup_file
            restore_from_backup "$backup_file"
            ;;
        4)
            read -p "Local path: " local_path
            read -p "Remote path (e.g., $REMOTE_NAME:/backup): " remote_path
            check_and_repair_data "$local_path" "$remote_path"
            ;;
        5) exit 0 ;;
        *) echo "Invalid option" ;;
    esac
}

# Main execution
main() {
    log_info "rclone Disaster Recovery started for: $REMOTE_NAME"
    log_info "Recovery log: $RECOVERY_LOG"

    case "${1:-interactive}" in
        full)
            full_disaster_recovery
            ;;
        oauth)
            refresh_oauth_token
            ;;
        backup)
            backup_current_config
            ;;
        restore)
            restore_from_backup "$2"
            ;;
        check)
            check_and_repair_data "$2" "$3"
            ;;
        interactive)
            interactive_recovery
            ;;
        *)
            echo "Usage: $0 {full|oauth|backup|restore|check|interactive} [args...]"
            exit 1
            ;;
    esac
}

# Execute main with all arguments
main "$@"
EOF

chmod +x /opt/rclone/scripts/disaster-recovery.sh
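The script's `backup` mode lends itself to scheduling, so a recent configuration snapshot always exists before disaster strikes. A minimal cron entry (schedule is illustrative; path as installed above):

```shell
# Nightly snapshot of the rclone configuration via the script's backup mode
mkdir -p /etc/cron.d
echo "0 2 * * * root /opt/rclone/scripts/disaster-recovery.sh backup" \
    > /etc/cron.d/rclone-config-backup
```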

Enterprise Best Practices

Security Best Practices

# rclone Enterprise Security Checklist

## 🔐 Authentication and Authorization
- [ ] Use a custom OAuth2 app registration in Azure AD
- [ ] Enforce MFA (Multi-Factor Authentication) on OneDrive accounts
- [ ] Configure an appropriate token lifetime (no longer than 90 days)
- [ ] Use a Service Principal for production automation
- [ ] Implement automatic credential rotation

## 🛡️ Data Protection
- [ ] Enable client-side encryption for sensitive data
- [ ] Use strong encryption passwords (32+ characters)
- [ ] Implement secure key management with hardware security modules
- [ ] Configure appropriate data classification and handling
- [ ] Maintain a complete audit trail for compliance

## 🌐 Network Security
- [ ] Use VPN connections for critical transfers
- [ ] Configure firewall rules to restrict rclone traffic
- [ ] Apply bandwidth limiting to avoid saturating the link
- [ ] Monitor network traffic for anomalies
- [ ] Route through the corporate proxy where policy requires it

## 📋 Configuration Management
- [ ] Protect the configuration file with appropriate permissions (600)
- [ ] Use configuration encryption on shared environments
- [ ] Keep configurations under version control
- [ ] Separate configurations per environment (dev/staging/prod)
- [ ] Document every custom configuration

## 🔄 Operational Security
- [ ] Schedule jobs securely with systemd
- [ ] Run rclone services under dedicated users
- [ ] Configure appropriate log rotation and retention
- [ ] Implement monitoring and alerting for security events
- [ ] Run regular penetration tests
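Some of the checklist items can be enforced mechanically. A minimal sketch for the 600-permission item (the default rclone config path is assumed; adjust for dedicated service users):

```shell
#!/bin/bash
# Sketch: enforce owner-only permissions on the rclone configuration file.
set -euo pipefail

harden_rclone_config() {
    local conf="${1:-$HOME/.config/rclone/rclone.conf}"
    chmod 600 "$conf"                        # owner read/write only
    # Fail loudly if group/other still have any access
    [ "$(stat -c '%a' "$conf")" = "600" ]
}
```

On shared systems, `rclone config` can additionally encrypt the configuration file itself; rclone will then prompt for the password or read it from `RCLONE_CONFIG_PASS`.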

Performance Best Practices

# Optimized enterprise performance configuration
cat > /opt/rclone/config/enterprise-performance.conf << 'EOF'
# Enterprise Performance Configuration
# Optimized for high-throughput OneDrive operations

# Transfer Optimization
--transfers=32                    # Concurrent file transfers
--checkers=16                     # File existence checkers
--retries=5                       # Retry failed operations
--low-level-retries=20           # Low-level retry attempts

# Bandwidth Management
--bwlimit=100M                   # Bandwidth limit (adjust based on connection)
--tpslimit=50                    # Transactions per second limit
--tpslimit-burst=100             # Burst allowance

# Buffer and Memory
--buffer-size=256M               # File buffer size
--use-mmap                       # Memory-mapped file I/O
--multi-thread-cutoff=250M       # Multi-threading threshold
--multi-thread-streams=16        # Concurrent streams per file

# Connection Tuning
--timeout=10m                    # Overall operation timeout
--contimeout=60s                 # Connection timeout
--expect-continue-timeout=2s     # HTTP expect/continue timeout

# Advanced Options
--fast-list                      # Use recursive list operations
--no-traverse                    # Don't traverse destination
--check-first                    # Check before transferring
--no-update-modtime             # Don't update modification times

# Logging and Progress
--log-level=INFO                 # Appropriate logging level
--stats=60s                      # Progress statistics interval
--stats-one-line                 # Compact progress display
EOF
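Note that rclone does not consume a flags file like this directly. However, every rclone flag can also be set through a matching `RCLONE_*` environment variable (`--transfers` becomes `RCLONE_TRANSFERS`), so a small, hypothetical loader can turn the file above into exported variables:

```shell
#!/bin/bash
# Sketch: export each "--flag=value" line of the flags file as the
# corresponding RCLONE_* environment variable (bare flags become "true").
set -euo pipefail

load_rclone_flags() {
    local file="$1" line flag name value
    while read -r line; do
        flag="${line%%[# ]*}"                    # strip trailing comment/whitespace
        flag="${flag#--}"                        # drop the leading --
        [ -n "$flag" ] || continue
        name="${flag%%=*}"
        value="${flag#*=}"
        [ "$value" = "$flag" ] && value="true"   # flag given without =value
        export "RCLONE_$(echo "$name" | tr 'a-z-' 'A-Z_')=$value"
    done < <(grep -E '^--' "$file")
}
```

After `load_rclone_flags /opt/rclone/config/enterprise-performance.conf`, a plain `rclone sync ...` picks the settings up from the environment without a long command line.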

Monitoring and Alerting Best Practices

# Advanced alerting system for rclone
cat > /opt/rclone/scripts/advanced-alerting.sh << 'EOF'
#!/bin/bash

# Advanced Alerting System for rclone Enterprise
set -euo pipefail

ALERT_CONFIG="/etc/rclone/alerting.conf"
LOG_FILE="/var/log/rclone/alerting.log"

# Load configuration
if [ -f "$ALERT_CONFIG" ]; then
    source "$ALERT_CONFIG"
else
    # Default configuration
    SLACK_WEBHOOK=""
    EMAIL_RECIPIENTS="admin@company.com"
    ALERT_THRESHOLDS_ERROR_RATE="10"
    ALERT_THRESHOLDS_LATENCY_MS="5000"
    ALERT_THRESHOLDS_QUOTA_PERCENT="90"
fi

# Alert levels
declare -A ALERT_LEVELS=(
    [INFO]="💡"
    [WARNING]="⚠️"
    [ERROR]="❌"
    [CRITICAL]="🚨"
)

# Send alert function
send_alert() {
    local level="$1"
    local title="$2"
    local message="$3"
    local details="$4"

    local timestamp=$(date -Iseconds)
    local hostname=$(hostname)
    local icon="${ALERT_LEVELS[$level]}"

    # Log alert
    echo "[$timestamp] [$level] $title: $message" >> "$LOG_FILE"

    # Slack notification
    if [ -n "$SLACK_WEBHOOK" ]; then
        # Map the alert level to a Slack attachment color before building the payload
        local color
        case "$level" in
            INFO)    color="good" ;;
            WARNING) color="warning" ;;
            *)       color="danger" ;;
        esac

        # The inner heredoc uses its own delimiter so it cannot terminate the
        # outer 'EOF' heredoc that wraps this whole script
        local slack_payload=$(cat << SLACK
{
    "username": "rclone-monitor",
    "icon_emoji": ":cloud:",
    "text": "$icon *$level Alert*: $title",
    "attachments": [
        {
            "color": "$color",
            "fields": [
                {
                    "title": "Message",
                    "value": "$message",
                    "short": false
                },
                {
                    "title": "Details",
                    "value": "$details",
                    "short": false
                },
                {
                    "title": "Hostname",
                    "value": "$hostname",
                    "short": true
                },
                {
                    "title": "Timestamp",
                    "value": "$timestamp",
                    "short": true
                }
            ]
        }
    ]
}
SLACK
        )

        curl -X POST -H 'Content-type: application/json' \
             --data "$slack_payload" \
             "$SLACK_WEBHOOK" >/dev/null 2>&1 || true
    fi

    # Email notification for ERROR and CRITICAL
    if [ "$level" = "ERROR" ] || [ "$level" = "CRITICAL" ]; then
        local email_subject="$icon rclone $level Alert: $title"
        local email_body="rclone Alert Details:

Level: $level
Title: $title
Message: $message
Details: $details
Hostname: $hostname
Timestamp: $timestamp

Please investigate immediately.

--
rclone Enterprise Monitoring System"

        echo "$email_body" | mail -s "$email_subject" "$EMAIL_RECIPIENTS" 2>/dev/null || true
    fi
}

# Performance monitoring
monitor_performance() {
    local log_file="$1"
    local time_window_hours="${2:-1}"

    if [ ! -f "$log_file" ]; then
        return 0
    fi

    # Analyze recent performance metrics
    local recent_log=$(tail -n 1000 "$log_file" | grep -E "\[(ERROR|WARN)\]" | head -20)

    if [ -n "$recent_log" ]; then
        # grep -c already prints 0 on no match; "|| true" only swallows its
        # nonzero exit status (the original "|| echo 0" printed a second line)
        local error_count=$(echo "$recent_log" | grep -c "ERROR" || true)
        local warning_count=$(echo "$recent_log" | grep -c "WARN" || true)

        if [ "$error_count" -gt "$ALERT_THRESHOLDS_ERROR_RATE" ]; then
            send_alert "CRITICAL" "High Error Rate" \
                      "Detected $error_count errors in recent logs" \
                      "Recent errors: $(echo "$recent_log" | grep "ERROR" | tail -3)"
        elif [ "$warning_count" -gt 20 ]; then
            send_alert "WARNING" "High Warning Rate" \
                      "Detected $warning_count warnings in recent logs" \
                      "Recent warnings: $(echo "$recent_log" | grep "WARN" | tail -3)"
        fi
    fi
}

# Quota monitoring
monitor_quota() {
    local remote="$1"

    local quota_info=$(rclone about "$remote:" --json 2>/dev/null || echo '{}')
    local used=$(echo "$quota_info" | jq -r '.used // 0')
    local total=$(echo "$quota_info" | jq -r '.total // 0')

    if [ "$total" -gt 0 ]; then
        local usage_percent=$(( used * 100 / total ))

        if [ "$usage_percent" -gt "$ALERT_THRESHOLDS_QUOTA_PERCENT" ]; then
            local used_gb=$(( used / 1024 / 1024 / 1024 ))
            local total_gb=$(( total / 1024 / 1024 / 1024 ))

            send_alert "WARNING" "High Quota Usage" \
                      "OneDrive usage is at ${usage_percent}%" \
                      "Used: ${used_gb}GB / Total: ${total_gb}GB"
        fi
    fi
}

# Service health monitoring
monitor_service_health() {
    local services=("rclone-backup.service" "rclone-backup.timer")

    for service in "${services[@]}"; do
        if systemctl is-active --quiet "$service"; then
            # Service is active
            continue
        else
            send_alert "ERROR" "Service Down" \
                      "Service $service is not active" \
                      "Status: $(systemctl is-active "$service" 2>&1)"
        fi
    done
}

# Main monitoring execution
main() {
    local remotes=("onedrive-business" "onedrive-personal")

    echo "Starting advanced monitoring cycle at $(date -Iseconds)"

    # Monitor each remote
    for remote in "${remotes[@]}"; do
        # Connectivity check
        if ! rclone lsd "$remote:" >/dev/null 2>&1; then
            send_alert "CRITICAL" "Remote Connectivity Failed" \
                      "Cannot connect to remote: $remote" \
                      "Check network connectivity and OAuth tokens"
        else
            # Monitor quota if connected
            monitor_quota "$remote"
        fi
    done

    # Monitor system services
    monitor_service_health

    # Monitor log files for errors
    monitor_performance "/var/log/rclone/rclone.log" 1

    echo "Monitoring cycle completed at $(date -Iseconds)"
}

# Execute monitoring
main "$@"
EOF

chmod +x /opt/rclone/scripts/advanced-alerting.sh

# Set up cron for continuous alerting
echo "*/10 * * * * root /opt/rclone/scripts/advanced-alerting.sh >/dev/null 2>&1" > /etc/cron.d/rclone-alerting
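Where systemd timers are preferred over cron (this guide already uses systemd for the backup units), an equivalent schedule could look like the following sketch; the unit names are illustrative:

```shell
#!/bin/bash
# Sketch: systemd timer equivalent of the cron entry above.
set -euo pipefail
UNIT_DIR="${UNIT_DIR:-/etc/systemd/system}"
mkdir -p "$UNIT_DIR"

cat > "$UNIT_DIR/rclone-alerting.service" << 'UNIT'
[Unit]
Description=rclone advanced alerting run

[Service]
Type=oneshot
ExecStart=/opt/rclone/scripts/advanced-alerting.sh
UNIT

cat > "$UNIT_DIR/rclone-alerting.timer" << 'UNIT'
[Unit]
Description=Run rclone alerting every 10 minutes

[Timer]
OnCalendar=*:0/10
Persistent=true

[Install]
WantedBy=timers.target
UNIT

# Then: systemctl daemon-reload && systemctl enable --now rclone-alerting.timer
```

Unlike cron, `Persistent=true` makes systemd run a missed cycle at boot if the machine was down when the timer fired.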

Conclusions and Next Steps

Enterprise Implementation Recap

This guide has provided an enterprise-grade framework for running OneDrive on Linux with rclone, covering the aspects that matter most in mission-critical production environments:

  1. Multi-Platform Setup: installation on all major Linux distributions with optimized configurations
  2. Secure Authentication: enterprise OAuth2 with Azure AD integration and advanced token management
  3. Robust Synchronization: incremental, differential, and bidirectional backup patterns with error handling
  4. Security Hardening: client-side encryption, audit trails, GDPR/HIPAA compliance, and data protection
  5. Enterprise Automation: systemd integration, 24/7 monitoring, alerting, and automatic recovery

Enterprise Benefits Delivered

Reliability and Resilience

  • Multi-retry Logic: automatic handling of network failures and API rate limiting
  • Circuit Breaker Pattern: protection against cascading failures of external services
  • Automatic Health Checks: continuous monitoring with self-healing capabilities
  • Disaster Recovery: automated procedures for full recovery after a critical failure
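The circuit-breaker idea from the list above can be illustrated with a small, hypothetical wrapper (not an rclone feature): after a threshold of consecutive failures, the wrapped command is skipped instead of hammering the remote.

```shell
#!/bin/bash
# Sketch: minimal circuit breaker around any command. After MAX_FAILURES
# consecutive failures the call is skipped until a success - or deleting
# the state file - closes the circuit again.
set -euo pipefail

BREAKER_FILE="${BREAKER_FILE:-/tmp/rclone-breaker}"
MAX_FAILURES="${MAX_FAILURES:-3}"

run_with_breaker() {
    local failures=0
    [ -f "$BREAKER_FILE" ] && failures=$(cat "$BREAKER_FILE")
    if [ "$failures" -ge "$MAX_FAILURES" ]; then
        echo "circuit open: skipping '$*'" >&2
        return 75                            # EX_TEMPFAIL
    fi
    if "$@"; then
        rm -f "$BREAKER_FILE"                # success closes the circuit
    else
        echo $((failures + 1)) > "$BREAKER_FILE"
        return 1
    fi
}
```

Inside a cron or systemd job this would wrap the real transfer, e.g. `run_with_breaker rclone sync /data onedrive-business:/backup`.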

Security and Compliance

  • Zero-Trust Architecture: full validation of every input and configuration
  • Complete Audit Trail: structured logging to satisfy enterprise compliance requirements
  • Data Encryption: end-to-end protection for sensitive data in transit and at rest
  • Granular Access Control: enterprise permission management with role-based access control

Performance and Scalability

  • Optimized Multi-threading: configurations that maximize throughput on enterprise connections
  • Bandwidth Management: intelligent bandwidth control that avoids impacting other workloads
  • Intelligent Caching: memory and disk optimizations that reduce operation latency
  • Load Balancing: load distribution across multiple connections for optimal performance

Roadmap for Future Extensions

The enterprise rclone setup implemented here is designed to be easily extended:

  • Container Orchestration: Kubernetes/Docker Swarm deployment for automatic scaling
  • Multi-Cloud Strategy: integration with AWS S3, Google Drive, and Azure Blob for cloud redundancy
  • AI-Powered Optimization: machine learning to tune performance automatically from usage patterns
  • Advanced Analytics: real-time dashboards for operational intelligence and predictive maintenance

Enterprise Support and Maintenance

To guarantee operational continuity in enterprise environments:

Scheduled Maintenance

  • Weekly Health Checks: automatic weekly verification of system integrity
  • Monthly Performance Reviews: metric analysis and performance tuning
  • Quarterly Security Audits: full penetration testing and security assessments
  • Annual Disaster Recovery Tests: simulated complete failures to validate recovery procedures

Professional Services

  • 24/7 Monitoring: enterprise alerting with automatic escalation
  • Managed Updates: rclone upgrades managed with pre-production testing
  • Custom Integration: development of custom connectors for legacy systems
  • Training & Certification: IT team training for operational excellence

Transform how you manage your cloud data with enterprise-grade rclone! 🚀☁️