Linux Server Automation with Bash Scripts and Systemd Services
Automating server tasks with Bash scripts and Systemd services is crucial for reliable infrastructure. I've automated deployments, backups, monitoring, and maintenance tasks across multiple Linux servers using these tools.
🎯 Why Automate?
Manual server management leads to:
- Human error in repetitive tasks
- Inconsistent execution
- Time waste on routine operations
- Missed maintenance windows
🛠 Bash Scripting Basics
Essential Script Structure
```bash
#!/bin/bash
# Script: deploy.sh
# Description: Automated deployment script

set -e  # Exit on error
set -u  # Exit on undefined variable

# Configuration
APP_DIR="/var/www/app"
BACKUP_DIR="/backups"
LOG_FILE="/var/log/deploy.log"

# Logging function
log() {
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}

# Error handling
error_exit() {
    log "ERROR: $1"
    exit 1
}

# Main execution
main() {
    log "Starting deployment..."
    # Your code here
    log "Deployment completed successfully"
}

main "$@"
```
Best Practices
1. Always use `set -e`: Exit on errors
2. Quote variables: Prevent word splitting
3. Use functions: Organize code
4. Add logging: Track execution
5. Validate inputs: Check parameters
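Points 2 and 5 can be combined in one small sketch; the `deploy_to` function and the target name here are invented for illustration:

```bash
#!/bin/bash
set -eu

# A hypothetical wrapper: validate inputs first, then quote every expansion.
deploy_to() {
    # Require exactly one argument
    if [ "$#" -ne 1 ]; then
        echo "Usage: deploy_to <target>" >&2
        return 1
    fi
    local target="$1"          # always expand as "$target", never $target
    echo "Deploying to: $target"
}

deploy_to "staging server"     # the space survives because of quoting
```

Without the quotes, `staging server` would split into two words and the function would see two arguments instead of one.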
📦 Deployment Automation
Example: Laravel Deployment Script
```bash
#!/bin/bash
# deploy-laravel.sh

set -e

APP_DIR="/var/www/ameylokare.com"
REPO_URL="https://github.com/username/repo.git"
BRANCH="main"

log() {
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1"
}

error_exit() {
    log "ERROR: $1"
    exit 1
}

deploy() {
    log "Starting deployment..."

    cd "$APP_DIR" || error_exit "Cannot access app directory"

    # Backup current version
    log "Creating backup..."
    tar -czf "/backups/app-$(date +%Y%m%d-%H%M%S).tar.gz" .

    # Pull latest code
    log "Pulling latest code..."
    git fetch origin
    git reset --hard "origin/$BRANCH"

    # Install dependencies
    log "Installing dependencies..."
    composer install --no-dev --optimize-autoloader
    npm ci --production

    # Build assets
    log "Building assets..."
    npm run build

    # Run migrations
    log "Running migrations..."
    php artisan migrate --force

    # Clear caches
    log "Clearing caches..."
    php artisan config:cache
    php artisan route:cache
    php artisan view:cache

    # Set permissions
    log "Setting permissions..."
    chown -R www-data:www-data storage bootstrap/cache
    chmod -R 755 storage bootstrap/cache

    # Reload PHP-FPM
    log "Reloading PHP-FPM..."
    systemctl reload php8.2-fpm

    log "Deployment completed successfully!"
}

deploy "$@"
```
🔄 Systemd Services
Systemd manages long-running processes and scheduled tasks.
Create a Service
```ini
# /etc/systemd/system/my-service.service
[Unit]
Description=My Custom Service
After=network.target

[Service]
Type=simple
User=www-data
WorkingDirectory=/var/www/app
ExecStart=/usr/bin/php /var/www/app/artisan queue:work
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```
Manage Service
```bash
# Reload systemd
sudo systemctl daemon-reload

# Enable service (start on boot)
sudo systemctl enable my-service

# Start service
sudo systemctl start my-service

# Check status
sudo systemctl status my-service

# View logs
sudo journalctl -u my-service -f
```
⏰ Systemd Timers (Cron Alternative)
Systemd timers are a more capable alternative to cron: they log to the journal, can depend on other units, and with `Persistent=true` catch up on runs missed while the machine was down.
Create Timer
```ini
# /etc/systemd/system/backup.timer
[Unit]
Description=Daily Backup Timer
Requires=backup.service

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```
Create Service
```ini
# /etc/systemd/system/backup.service
[Unit]
Description=Database Backup Service

[Service]
Type=oneshot
User=backup
ExecStart=/usr/local/bin/backup-db.sh
```
Enable Timer
```bash
sudo systemctl enable backup.timer
sudo systemctl start backup.timer

# List all timers
systemctl list-timers
```
💾 Backup Automation
Database Backup Script
```bash
#!/bin/bash
# backup-db.sh

set -e
set -o pipefail  # fail the pipeline if mysqldump fails, not just gzip

DB_NAME="ameylokare_com"
DB_USER="backup_user"
BACKUP_DIR="/backups/database"
RETENTION_DAYS=30

log() {
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] $1"
}

backup_database() {
    log "Starting database backup..."

    # Create backup directory
    mkdir -p "$BACKUP_DIR"

    # Backup filename
    BACKUP_FILE="$BACKUP_DIR/db-$(date +%Y%m%d-%H%M%S).sql.gz"

    # Perform backup. DB_PASS is expected in the environment;
    # a ~/.my.cnf credentials file is safer than a command-line password.
    if mysqldump -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" | gzip > "$BACKUP_FILE"; then
        log "Backup created: $BACKUP_FILE"

        # Clean old backups
        find "$BACKUP_DIR" -name "db-*.sql.gz" -mtime +"$RETENTION_DAYS" -delete
        log "Old backups cleaned (kept last $RETENTION_DAYS days)"
    else
        log "ERROR: Backup failed!"
        exit 1
    fi
}

backup_database
```
📊 Monitoring Scripts
Server Health Check
```bash
#!/bin/bash
# health-check.sh

check_disk() {
    USAGE=$(df -h / | awk 'NR==2 {print $5}' | sed 's/%//')
    if [ "$USAGE" -gt 80 ]; then
        echo "WARNING: Disk usage at ${USAGE}%"
        # Send alert
    fi
}

check_memory() {
    MEM_USAGE=$(free | awk 'NR==2{printf "%.0f", $3*100/$2}')
    if [ "$MEM_USAGE" -gt 90 ]; then
        echo "WARNING: Memory usage at ${MEM_USAGE}%"
    fi
}

check_services() {
    SERVICES=("php8.2-fpm" "nginx" "mysql")
    for service in "${SERVICES[@]}"; do
        if ! systemctl is-active --quiet "$service"; then
            echo "ERROR: $service is not running!"
            systemctl restart "$service"
        fi
    done
}

check_disk
check_memory
check_services
```
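To run this health check on a schedule, it can be paired with a timer in the same way as the backup example above; the unit names and five-minute interval here are illustrative:

```ini
# /etc/systemd/system/health-check.timer
[Unit]
Description=Server Health Check Timer

[Timer]
OnCalendar=*:0/5
Persistent=true

[Install]
WantedBy=timers.target
```

A matching `health-check.service` with `Type=oneshot` and `ExecStart=/usr/local/bin/health-check.sh` completes the pair.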
🔐 Security Automation
Automated Security Updates
```bash
#!/bin/bash
# security-updates.sh

set -e

log() {
    logger -t security-updates "$1"
}

update_system() {
    log "Starting security updates..."

    apt update
    apt list --upgradable | grep -i security || true

    # Install security updates only
    # (for production, consider the unattended-upgrades package instead)
    if apt upgrade -y -o Dpkg::Options::="--force-confold" \
        $(apt list --upgradable 2>/dev/null | grep -i security | cut -d'/' -f1); then
        log "Security updates installed successfully"
    else
        log "ERROR: Security updates failed"
        exit 1
    fi
}

update_system
```
🚀 Advanced: Multi-Server Automation
SSH-Based Remote Execution
```bash
#!/bin/bash
# deploy-all-servers.sh

SERVERS=("server1.example.com" "server2.example.com" "server3.example.com")
SCRIPT="/usr/local/bin/deploy-laravel.sh"

for server in "${SERVERS[@]}"; do
    echo "Deploying to $server..."
    ssh "$server" "bash $SCRIPT" || echo "Failed to deploy to $server"
done
```
Parallel Execution
```bash
# Using GNU parallel
parallel -j 5 ssh {} "bash deploy.sh" ::: "${SERVERS[@]}"
```
📝 Logging & Error Handling
Comprehensive Logging
```bash
log() {
    local level=$1
    shift
    local message="$*"
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] [$level] $message" | \
        tee -a "$LOG_FILE"
}

# Usage
log "INFO" "Starting deployment"
log "ERROR" "Deployment failed"
log "WARN" "Disk space low"
```
Error Notifications
```bash
send_alert() {
    local subject="$1"
    local body="$2"

    # Email
    echo "$body" | mail -s "$subject" admin@example.com

    # Or webhook
    curl -X POST "https://hooks.slack.com/services/..." \
        -d "{\"text\":\"$subject: $body\"}"
}

# On error
trap 'send_alert "Script Failed" "Error in $0 at line $LINENO"' ERR
```
🎓 Best Practices
1. Use version control for scripts
2. Test scripts in staging first
3. Add logging to all operations
4. Handle errors gracefully
5. Document script purpose and usage
6. Use systemd for long-running processes
7. Monitor script execution
8. Set proper permissions (chmod +x)
💡 Real-World Example
I automated a multi-server deployment pipeline:
1. Git hook triggers deployment script
2. Script backs up current version
3. Pulls latest code from repository
4. Runs migrations and updates
5. Clears caches and optimizes
6. Restarts services if needed
7. Sends notification on completion
8. Rolls back automatically on failure
Result: Zero-downtime deployments, consistent across all servers.
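The automatic rollback in step 8 boils down to "snapshot before, restore on failure". A minimal sketch, using invented `/tmp` paths and version strings in place of the real app directory:

```bash
#!/bin/bash
# Illustrative rollback sketch; the paths and "versions" are invented
# for demonstration, not the actual production layout.

BACKUP="/tmp/demo-backup.txt"
CURRENT="/tmp/demo-current.txt"

restore_backup() {
    echo "Rolling back..."
    cp "$BACKUP" "$CURRENT"
}

deploy() {
    cp "$CURRENT" "$BACKUP"        # step 2: back up the current version
    echo "v2" > "$CURRENT"         # steps 3-6: install the new version
    return 1                       # simulate a failed step
}

echo "v1" > "$CURRENT"             # the currently deployed version
deploy || restore_backup           # step 8: roll back on any failure
cat "$CURRENT"
```

After the simulated failure, `cat "$CURRENT"` shows `v1` again. A real script would combine this with the `trap ... ERR` pattern shown earlier so the rollback fires no matter which step fails.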
Conclusion
Bash scripts and Systemd services are powerful tools for Linux server automation. They enable reliable, consistent operations that scale across multiple servers. Start with simple scripts, add error handling and logging, then move to Systemd for production services.
The key is automation, monitoring, and iteration.