Linux powers over 96% of the world's top web servers, and mastering the command line is a non-negotiable skill for every developer, DevOps engineer, and sysadmin. This comprehensive guide covers 100+ essential Linux commands organized into 13 categories with real-world examples you can copy and use immediately.
- Master file navigation (ls, cd, find) and operations (cp, mv, chmod) for daily workflow efficiency
- Learn text processing tools (grep, sed, awk) to manipulate data streams like a pro
- Understand process management (ps, systemctl) and networking (curl, ssh, rsync) for server administration
- Use shell scripting to automate repetitive tasks with variables, loops, and functions
- Secure your systems with iptables/ufw, fail2ban, and SSH key-based authentication
1. File System Navigation
Navigating the Linux file system is the most fundamental skill. These commands help you move around directories, list contents, and find files efficiently.
Listing Files and Directories — ls
# Basic listing
ls
# Long format with permissions, size, date
ls -la
# Sort by modification time (newest first)
ls -lt
# Sort by file size (largest first)
ls -lS
# Human-readable file sizes
ls -lh
# List only directories
ls -d */
# Recursive listing
ls -R /var/log/
Changing Directories — cd
# Go to home directory
cd ~
cd
# Go to previous directory
cd -
# Go up one level
cd ..
# Go up two levels
cd ../..
# Absolute path
cd /var/log/nginx
# Relative path
cd projects/myapp
Finding Files — find and locate
# Find files by name
find /home -name "*.log"
# Case-insensitive search
find / -iname "readme.md"
# Find files modified in last 24 hours
find /var/log -mtime -1
# Find files larger than 100MB
find / -size +100M
# Find and delete empty directories
find /tmp -type d -empty -delete
# Find files by permission
find /home -perm 777
# Find and execute a command
find . -name "*.tmp" -exec rm {} \;
# Fast locate (uses database, run updatedb first)
sudo updatedb
locate nginx.conf
Directory Tree — tree and pwd
# Print current working directory
pwd
# Display directory tree
tree
# Limit depth to 2 levels
tree -L 2
# Show only directories
tree -d
# Show tree with file sizes
tree -sh
# Exclude node_modules and .git
tree -I "node_modules|.git"
Important Linux Directory Structure
| Path | Description |
|---|---|
| / | Root directory — top of the filesystem |
| /home | User home directories |
| /etc | System configuration files |
| /var | Variable data (logs, cache, mail) |
| /var/log | System and application logs |
| /tmp | Temporary files (cleared on reboot) |
| /usr | User programs and data |
| /usr/local | Locally installed software |
| /opt | Optional application software |
| /proc | Virtual filesystem — process and kernel info |
| /dev | Device files |
| /mnt | Temporary mount points |
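The table above pairs naturally with du: the sketch below prints the size of each key directory that exists on the current system (the directory list is illustrative, not exhaustive).

```shell
#!/bin/bash
# Print the total size of a few key directories from the table,
# skipping any that do not exist on this distribution.
for dir in /etc /var/log /tmp /usr/local /opt; do
    if [ -d "$dir" ]; then
        du -sh "$dir" 2>/dev/null || true   # ignore permission errors
    fi
done
```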
2. File Operations
File manipulation commands are the bread and butter of daily Linux work. From copying files to changing permissions, these commands handle all file operations.
Copy, Move, and Delete
# Copy a file
cp source.txt destination.txt
# Copy directory recursively
cp -r /src/project /backup/project
# Copy preserving attributes (timestamps, permissions)
cp -a /src /dest
# Move / rename a file
mv oldname.txt newname.txt
# Move multiple files to a directory
mv *.log /var/archive/
# Remove a file
rm unwanted.txt
# Remove with confirmation prompt
rm -i important.txt
# Remove directory recursively (DANGEROUS — double check!)
rm -rf /tmp/build-cache
# Create directories (including parents)
mkdir -p /var/www/mysite/assets/images
Permissions & Ownership — chmod and chown
# Set file to readable/writable by owner only
chmod 600 ~/.ssh/id_rsa
# Standard permissions: owner=rwx, group/others=rx
chmod 755 deploy.sh
# Standard permissions for regular files
chmod 644 index.html
# Add execute permission for owner
chmod u+x script.sh
# Remove write permission for group and others
chmod go-w config.yml
# Recursive permission change
chmod -R 755 /var/www/html
# Change file owner
chown www-data:www-data /var/www/html
# Change owner recursively
chown -R deploy:deploy /opt/app
Symbolic Links — ln
# Create a symbolic (soft) link
ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/mysite
# Create a hard link
ln original.txt hardlink.txt
# Verify a symlink target
readlink -f /usr/bin/python3
# Find broken symlinks
find /usr/local/bin -type l ! -exec test -e {} \; -print
Viewing File Contents
# Display entire file
cat file.txt
# Display with line numbers
cat -n file.txt
# View first 20 lines
head -20 file.txt
# View last 20 lines
tail -20 file.txt
# Follow file in real time (great for logs)
tail -f /var/log/syslog
# Follow with line count
tail -f -n 100 /var/log/nginx/access.log
# Page through a file (scrollable)
less /var/log/syslog
# Press / to search, q to quit, G to go to end
# Display file with line numbers and non-printing chars
cat -An file.txt
# Concatenate multiple files
cat part1.txt part2.txt part3.txt > combined.txt
# Create file from stdin (Ctrl+D to end)
cat > notes.txt
File Comparison and Checksums
# Compare two files
diff file1.txt file2.txt
# Side-by-side comparison
diff -y file1.txt file2.txt
# Unified diff format (like git)
diff -u old.conf new.conf
# Compare directories
diff -r dir1/ dir2/
# Generate MD5 checksum
md5sum file.tar.gz
# Generate SHA256 checksum
sha256sum file.tar.gz
# Verify checksum file
sha256sum -c checksums.sha256
# Get file type information
file unknown-file
file /usr/bin/python3
# Show file statistics
stat file.txt
3. Text Processing
Linux excels at text processing. These powerful commands let you search, transform, filter, and analyze text data — essential for log analysis, data extraction, and automation.
Searching Text — grep
# Search for a pattern in a file
grep "error" /var/log/syslog
# Case-insensitive search
grep -i "warning" app.log
# Recursive search in directories
grep -r "TODO" ./src
# Show line numbers
grep -n "function" app.js
# Show 3 lines of context around matches
grep -C 3 "Exception" error.log
# Invert match (show lines NOT matching)
grep -v "DEBUG" app.log
# Count matching lines
grep -c "error" /var/log/syslog
# Search with regex
grep -E "^[0-9]{4}-[0-9]{2}" access.log
# Show only matching filenames
grep -rl "API_KEY" /etc/
Stream Editor — sed
# Find and replace (first occurrence per line)
sed 's/old/new/' file.txt
# Global replace (all occurrences)
sed 's/old/new/g' file.txt
# Edit file in place
sed -i 's/http:/https:/g' config.yml
# Delete lines matching a pattern
sed '/^#/d' config.conf
# Delete empty lines
sed '/^$/d' file.txt
# Print only lines 10-20
sed -n '10,20p' file.txt
# Insert text before line 5
sed '5i\New line inserted here' file.txt
# Replace only on lines containing "server"
sed '/server/s/80/443/g' nginx.conf
Text Processing — awk
# Print specific columns (space-delimited)
awk '{print $1, $3}' access.log
# Use custom field separator
awk -F: '{print $1, $7}' /etc/passwd
# Filter rows by condition
awk '$3 > 500 {print $1, $3}' data.txt
# Sum a column
awk '{sum += $5} END {print "Total:", sum}' sales.csv
# Count unique values
awk -F, '{count[$2]++} END {for (k in count) print k, count[k]}' data.csv
# Format output
awk '{printf "%-20s %10d\n", $1, $3}' report.txt
Advanced sed Patterns
# Multiple replacements in one pass
sed -e 's/foo/bar/g' -e 's/baz/qux/g' file.txt
# Replace only on line range
sed '10,20s/old/new/g' file.txt
# Add line after match
sed '/^server {/a\ include /etc/nginx/security.conf;' nginx.conf
# Delete lines between two patterns
sed '/START_BLOCK/,/END_BLOCK/d' config.txt
# Extract text between patterns
sed -n '/BEGIN/,/END/p' file.txt
# Remove trailing whitespace
sed 's/[[:space:]]*$//' file.txt
# Convert Windows line endings to Unix
sed -i 's/\r$//' script.sh
# Number all non-empty lines
sed '/./=' file.txt | sed 'N; s/\n/\t/'
Practical awk Examples
# Print lines longer than 80 characters
awk 'length > 80' file.txt
# Calculate average of a column
awk '{sum += $3; count++} END {print "Average:", sum/count}' data.txt
# Print between two patterns
awk '/START/,/END/' file.txt
# CSV processing with header
awk -F, 'NR==1 {for(i=1;i<=NF;i++) header[i]=$i; next}
{print header[1] "=" $1, header[3] "=" $3}' data.csv
# Group by and count
awk '{status[$NF]++} END {for (s in status) print s, status[s]}' access.log
# Multi-file processing
awk 'FNR==1 {print "--- " FILENAME " ---"} {print}' file1.txt file2.txt
# Replace column value conditionally
awk -F, 'BEGIN{OFS=","} $3 > 100 {$3 = "HIGH"} {print}' data.csv
Additional Text Tools
| Command | Description |
|---|---|
| cut -d: -f1 /etc/passwd | Extract specific fields by delimiter |
| sort -k2 -n data.txt | Sort by column 2 numerically |
| sort -u file.txt | Sort and remove duplicate lines |
| uniq -c | Count consecutive duplicate lines |
| wc -l file.txt | Count number of lines |
| wc -w file.txt | Count number of words |
| tr '[:lower:]' '[:upper:]' | Convert lowercase to uppercase |
| tr -d '\n' | Delete all newline characters |
# Practical pipeline: find top 10 IP addresses in access log
awk '{print $1}' access.log | sort | uniq -c | sort -rn | head -10
# Count lines in all Python files
find . -name "*.py" -exec wc -l {} + | sort -n
# Extract unique error messages
grep "ERROR" app.log | awk -F'] ' '{print $2}' | sort -u
4. Process Management
Managing running processes is critical for server administration. These commands help you monitor, control, and manage system services.
Process Management Overview
| Command | Description |
|---|---|
| ps aux | List all running processes |
| top / htop | Interactive process monitor |
| kill PID | Send termination signal to process |
| kill -9 PID | Force kill a process |
| killall name | Kill all processes by name |
| nohup cmd & | Run in background, immune to hangup |
| systemctl status | Check systemd service status |
| journalctl -u svc | View service logs |
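The kill then kill -9 escalation from the table is worth wrapping in a helper. A minimal sketch, assuming a numeric PID (the function name stop_pid and the default grace period are my own):

```shell
#!/bin/bash
# Send SIGTERM, wait up to a grace period for the process to exit,
# then escalate to SIGKILL only if it is still alive.
stop_pid() {
    local pid="$1" grace="${2:-5}"
    kill "$pid" 2>/dev/null || return 0         # no such process: done
    for _ in $(seq "$grace"); do
        kill -0 "$pid" 2>/dev/null || return 0  # exited after SIGTERM
        sleep 1
    done
    kill -9 "$pid" 2>/dev/null || true          # last resort
}
# Example: stop_pid 12345 10
```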
Viewing Processes — ps and top
# List all processes (full format)
ps aux
# Filter by process name
ps aux | grep nginx
# Show process tree
ps auxf
# Show processes for a specific user
ps -u www-data
# Interactive process viewer
top
# Better interactive viewer (install: apt install htop)
htop
# Sort by memory usage in top
# Press M inside top
# Sort by CPU usage in top
# Press P inside top
Killing and Controlling Processes
# Graceful terminate (SIGTERM)
kill 12345
# Force kill (SIGKILL) — use as last resort
kill -9 12345
# Kill by process name
killall nginx
pkill -f "python app.py"
# Run process in background
./long-task.sh &
# Run process immune to hangups (survives logout)
nohup ./server.sh &
nohup ./server.sh > output.log 2>&1 &
# List background jobs
jobs
# Bring job to foreground
fg %1
# Send running process to background
# Press Ctrl+Z first, then:
bg %1
Resource Limits and Priority
# Run command with lower CPU priority
nice -n 19 ./heavy-computation.sh
# Change priority of running process
renice -n 10 -p 12345
# Limit CPU usage (install: apt install cpulimit)
cpulimit -l 50 -p 12345
# Run command with memory limit
systemd-run --scope -p MemoryMax=512M ./my-app
# Show resource limits
ulimit -a
# Set max open files for current session
ulimit -n 65535
# Show process resource usage
/usr/bin/time -v ./my-script.sh
# View per-process memory map
pmap -x 12345
System Services — systemctl and journalctl
# Start / stop / restart a service
sudo systemctl start nginx
sudo systemctl stop nginx
sudo systemctl restart nginx
# Reload config without restart
sudo systemctl reload nginx
# Check service status
systemctl status nginx
# Enable service on boot
sudo systemctl enable nginx
# Disable service on boot
sudo systemctl disable nginx
# List all active services
systemctl list-units --type=service --state=active
# View logs for a service
journalctl -u nginx
# Follow logs in real time
journalctl -u nginx -f
# View logs since last boot
journalctl -b
# View logs from the last hour
journalctl --since "1 hour ago"
5. Networking
Networking commands are essential for server management, API testing, file transfer, and debugging connectivity issues.
HTTP Requests — curl and wget
# Simple GET request
curl https://api.example.com/data
# POST with JSON body
curl -X POST -H "Content-Type: application/json" \
-d '{"name":"test"}' https://api.example.com/users
# Download file with progress
curl -O https://example.com/file.tar.gz
# Follow redirects
curl -L https://short.url/abc
# Show response headers
curl -I https://example.com
# Download file with wget
wget https://example.com/file.tar.gz
# Resume interrupted download
wget -c https://example.com/large-file.iso
# Mirror a website
wget --mirror --convert-links https://example.com
Networking Commands Quick Reference
| Command | Description |
|---|---|
| curl URL | Send HTTP request |
| wget URL | Download a file |
| ssh user@host | Secure remote login |
| scp src user@host:dst | Securely copy file to remote |
| rsync -avz src dst | Incremental file synchronization |
| ss -tlnp | Show TCP listening ports with processes |
| dig domain | DNS lookup |
| ip addr show | Show network interfaces and IPs |
| ping host | Test network connectivity |
| traceroute host | Trace network route path |
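Several of these commands combine into a tiny reachability probe. The sketch below uses bash's /dev/tcp pseudo-device together with timeout; the host and port arguments are placeholders:

```shell
#!/bin/bash
# Report whether a TCP port accepts connections, with a 3s timeout.
check_port() {
    local host="$1" port="$2"
    if timeout 3 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
        echo "$host:$port open"
    else
        echo "$host:$port closed"
    fi
}
check_port localhost 22
```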
Remote Access — ssh and File Transfer
# SSH into a remote server
ssh user@192.168.1.100
# SSH with specific key
ssh -i ~/.ssh/mykey user@server.com
# SSH with port forwarding (local)
ssh -L 8080:localhost:3000 user@server.com
# Copy file to remote server
scp file.txt user@server:/home/user/
# Copy directory from remote
scp -r user@server:/var/log/app ./logs
# Sync files with rsync (fast, incremental)
rsync -avz ./deploy/ user@server:/var/www/html/
# Rsync with delete (mirror source to destination)
rsync -avz --delete ./src/ user@server:/opt/app/
# Rsync over SSH with specific port
rsync -avz -e "ssh -p 2222" ./data/ user@server:/backup/
Network Diagnostics
# Show listening ports
ss -tlnp
# Show all connections
ss -tunap
# Display network interfaces and IPs
ip addr show
# Show routing table
ip route show
# DNS lookup
dig example.com
dig example.com MX
nslookup example.com
# Test network connectivity
ping -c 4 google.com
# Trace route to host
traceroute google.com
# Check if a port is open
nc -zv server.com 443
# Monitor network traffic (requires root)
sudo tcpdump -i eth0 port 80
Network Configuration Quick Checks
# Show all IP addresses
hostname -I
# Show public IP address
curl -s ifconfig.me
curl -s ipinfo.io/ip
# Test HTTP response code
curl -o /dev/null -s -w "%{http_code}" https://example.com
# Check DNS resolution time
dig +stats example.com | grep "Query time"
# Show active connections by state
ss -s
# Monitor bandwidth in real time (install: apt install iftop)
sudo iftop -i eth0
# Test port connectivity with timeout
timeout 5 bash -c 'echo > /dev/tcp/server.com/443' && echo "Open" || echo "Closed"
# Download with speed limit
curl --limit-rate 1M -O https://example.com/large-file.zip
# Send email via command line
echo "Server is down" | mail -s "ALERT" admin@example.com
6. Disk & Storage
Monitoring and managing disk space prevents outages and ensures your servers run smoothly. These commands help you track usage and manage storage devices.
# Show filesystem disk usage (human-readable)
df -h
# Show inode usage
df -i
# Show directory size
du -sh /var/log
# Show top-level directory sizes
du -h --max-depth=1 /var
# Find largest files and directories
du -ah / | sort -rh | head -20
# List block devices
lsblk
# Show detailed disk information
sudo fdisk -l
# Mount a filesystem
sudo mount /dev/sdb1 /mnt/data
# Unmount a filesystem
sudo umount /mnt/data
# Show mounted filesystems
mount | column -t
# Create ext4 filesystem
sudo mkfs.ext4 /dev/sdb1
# Check and repair filesystem
sudo fsck /dev/sdb1
# Add permanent mount to fstab
# Edit /etc/fstab:
# /dev/sdb1 /mnt/data ext4 defaults 0 2
| Command | Description |
|---|---|
| df -h | Show filesystem disk space usage |
| du -sh /path | Show total directory size |
| lsblk | List all block devices (disks, partitions) |
| mount / umount | Mount or unmount a filesystem |
| mkfs.ext4 | Create ext4 filesystem on a partition |
| fsck | Check and repair a filesystem |
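df and awk combine naturally into a disk-usage alert. A minimal sketch, with an arbitrary 90% threshold:

```shell
#!/bin/bash
# Print a warning for every filesystem above a usage threshold.
THRESHOLD=90
df -P | awk -v limit="$THRESHOLD" 'NR > 1 {
    use = $5
    sub(/%/, "", use)                  # "95%" -> "95"
    if (use + 0 >= limit)
        printf "WARNING: %s is at %s%%\n", $6, use
}'
```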
Disk Performance and SMART Monitoring
# Test disk write speed
dd if=/dev/zero of=/tmp/testfile bs=1M count=1024 conv=fdatasync
# Test disk read speed
dd if=/tmp/testfile of=/dev/null bs=1M
# Show disk SMART status (install: apt install smartmontools)
sudo smartctl -a /dev/sda
# Check disk health
sudo smartctl -H /dev/sda
# Show partition UUID
blkid
# Resize a partition (with LVM)
sudo lvextend -L +10G /dev/mapper/vg0-root
sudo resize2fs /dev/mapper/vg0-root
# Check disk usage by file type
find / -type f -name "*.log" -exec du -ch {} + 2>/dev/null | tail -1
# Clean package manager cache
sudo apt clean # Debian/Ubuntu
sudo dnf clean all # RHEL/Fedora
7. User Management
User and permission management is fundamental to Linux security. These commands control who can access what on your system.
# Add a new user
sudo useradd -m -s /bin/bash johndoe
# Add user with supplementary groups (sudo, docker)
sudo useradd -m -s /bin/bash -G sudo,docker devuser
# Set password for user
sudo passwd johndoe
# Modify user (add to docker group)
sudo usermod -aG docker johndoe
# Delete user and home directory
sudo userdel -r johndoe
# View user groups
groups johndoe
id johndoe
# Switch to another user
su - johndoe
# Run command as another user
sudo -u www-data whoami
# Edit sudoers file safely
sudo visudo
# Add user to sudoers (append to /etc/sudoers)
# johndoe ALL=(ALL:ALL) NOPASSWD: ALL
# List currently logged in users
who
w
# Show last login history
last
8. Package Management
Different Linux distributions use different package managers. Here is a comparison of the most common ones with equivalent commands.
| Action | apt (Debian/Ubuntu) | dnf/yum (RHEL/Fedora) | pacman (Arch) |
|---|---|---|---|
| Update package list | apt update | dnf check-update | pacman -Sy |
| Upgrade all packages | apt upgrade | dnf upgrade | pacman -Syu |
| Install package | apt install nginx | dnf install nginx | pacman -S nginx |
| Remove package | apt remove nginx | dnf remove nginx | pacman -R nginx |
| Search packages | apt search nginx | dnf search nginx | pacman -Ss nginx |
| Show package info | apt show nginx | dnf info nginx | pacman -Si nginx |
| List installed | apt list --installed | dnf list installed | pacman -Q |
| Clean cache | apt clean | dnf clean all | pacman -Sc |
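A cross-distribution script can use the table above by first detecting which manager is on PATH. A minimal sketch (the function name is my own):

```shell
#!/bin/bash
# Pick the first package manager found on PATH.
detect_pkg_manager() {
    local mgr
    for mgr in apt-get dnf yum pacman; do
        if command -v "$mgr" >/dev/null 2>&1; then
            echo "$mgr"
            return 0
        fi
    done
    echo "unknown"
}
echo "Detected: $(detect_pkg_manager)"
```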
apt Detailed Usage (Debian/Ubuntu)
# Update and upgrade in one step
sudo apt update && sudo apt upgrade -y
# Install specific version
sudo apt install nginx=1.24.0-1
# Hold package (prevent upgrades)
sudo apt-mark hold nginx
sudo apt-mark unhold nginx
# Show installed package version
apt list --installed | grep nginx
# Show package dependencies
apt depends nginx
# Show reverse dependencies
apt rdepends nginx
# Remove package and its config files
sudo apt purge nginx
# Remove unused dependencies
sudo apt autoremove -y
# List upgradable packages
apt list --upgradable
# Download .deb without installing
apt download nginx
# Install local .deb file
sudo dpkg -i package.deb
sudo apt install -f # Fix broken dependencies
# Add a PPA repository
sudo add-apt-repository ppa:ondrej/php
sudo apt update
dnf Detailed Usage (RHEL/Fedora)
# Check for updates
sudo dnf check-update
# Install package group
sudo dnf groupinstall "Development Tools"
# Show package history
sudo dnf history
sudo dnf history info 15
# Undo last transaction
sudo dnf history undo last
# List enabled repositories
dnf repolist
# Add a repository
sudo dnf config-manager --add-repo https://repo.example.com/repo.repo
# Install from specific repo
sudo dnf install --repo=epel nginx
# List all files in a package
rpm -ql nginx
# Which package provides a file
dnf provides /usr/bin/curl
Snap Universal Packages
# Install a snap package
sudo snap install code --classic
# List installed snaps
snap list
# Update all snaps
sudo snap refresh
# Remove a snap
sudo snap remove code
# Find snaps
snap find "text editor"
9. Shell Scripting Essentials
Shell scripting automates repetitive tasks and creates powerful workflows. Master these fundamentals to write effective Bash scripts.
Bash Special Variables
| Variable | Description |
|---|---|
| $0 | Script name |
| $1, $2, ... | Positional parameters (arguments passed) |
| $# | Number of arguments |
| $@ | All arguments (as separate words) |
| $* | All arguments (as single string) |
| $? | Exit code of last command |
| $$ | PID of current shell |
| $! | PID of last background process |
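A short demo shows the table in action. Note that inside a function, $1, $#, and $@ refer to the function's own arguments, while $0 still names the script:

```shell
#!/bin/bash
# Demonstrate the special variables on a function call.
show_args() {
    echo "First arg: $1"
    echo "Arg count: $#"
    echo "All args:  $@"
}
show_args one two three
echo "Last exit code: $?"
sleep 1 &
echo "Background PID: $!"
echo "Shell PID:      $$"
```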
Variables and Strings
#!/bin/bash
# Variable assignment (no spaces around =)
NAME="Linux"
VERSION=6
DATE=$(date +%Y-%m-%d)
# Using variables
echo "Welcome to $NAME version $VERSION"
echo "Today is $DATE"
# String operations
STR="Hello World"
echo "Length: ${#STR}" # 11
echo "Substr: ${STR:0:5}" # Hello
echo "Replace: ${STR/World/Linux}" # Hello Linux
# Default values
echo "${UNSET_VAR:-default_value}" # uses default if unset
echo "${UNSET_VAR:=default_value}" # sets and uses default if unset
Conditionals
#!/bin/bash
# If-else
if [ -f "/etc/nginx/nginx.conf" ]; then
echo "Nginx config exists"
elif [ -f "/etc/apache2/apache2.conf" ]; then
echo "Apache config exists"
else
echo "No web server config found"
fi
# Numeric comparison
COUNT=$(wc -l < access.log)
if [ "$COUNT" -gt 1000 ]; then
echo "High traffic: $COUNT requests"
fi
# String comparison
ENV="production"
if [[ "$ENV" == "production" ]]; then
echo "Running in production mode"
fi
# File test operators
# -f file exists and is regular file
# -d directory exists
# -r file is readable
# -w file is writable
# -x file is executable
# -s file exists and is not empty
Loops
#!/bin/bash
# For loop — iterate over list
for SERVER in web1 web2 web3 db1; do
echo "Checking $SERVER..."
ping -c 1 "$SERVER" > /dev/null 2>&1 && echo " UP" || echo " DOWN"
done
# For loop — C-style
for ((i=1; i<=10; i++)); do
echo "Iteration $i"
done
# For loop — iterate over files
for FILE in /var/log/*.log; do
echo "Processing $FILE ($(wc -l < "$FILE") lines)"
done
# While loop
COUNTER=0
while [ "$COUNTER" -lt 5 ]; do
echo "Count: $COUNTER"
COUNTER=$((COUNTER + 1))
done
# Read file line by line
while IFS= read -r LINE; do
echo "Processing: $LINE"
done < input.txt
Functions and Arrays
#!/bin/bash
# Define a function
check_service() {
local SERVICE_NAME="$1"
if systemctl is-active --quiet "$SERVICE_NAME"; then
echo "$SERVICE_NAME is running"
return 0
else
echo "$SERVICE_NAME is NOT running"
return 1
fi
}
# Call function
check_service nginx
check_service postgresql
# Arrays
SERVERS=("web1" "web2" "web3" "db1")
# Array length
echo "Total servers: ${#SERVERS[@]}"
# Iterate array
for SERVER in "${SERVERS[@]}"; do
echo "Server: $SERVER"
done
# Access by index
echo "First: ${SERVERS[0]}"
echo "Last: ${SERVERS[-1]}"
# Append to array
SERVERS+=("cache1")
# Associative array (Bash 4+)
declare -A PORTS
PORTS[nginx]=80
PORTS[ssh]=22
PORTS[postgres]=5432
for SERVICE in "${!PORTS[@]}"; do
echo "$SERVICE -> port ${PORTS[$SERVICE]}"
done
Error Handling and Debugging
#!/bin/bash
# Exit on first error
set -e
# Exit on undefined variable
set -u
# Fail on pipe errors
set -o pipefail
# Combined (recommended for all scripts)
set -euo pipefail
# Trap errors and run cleanup
cleanup() {
echo "Cleaning up temp files..."
rm -f /tmp/myapp_*
}
trap cleanup EXIT ERR
# Debug mode — print each command before executing
set -x
# Debug specific section only
set -x
# ... commands to debug ...
set +x
# Validate required arguments
if [ $# -lt 2 ]; then
echo "Usage: $0 <source> <destination>"
exit 1
fi
# Check if command exists
if ! command -v docker &> /dev/null; then
echo "Docker is not installed"
exit 1
fi
Practical Script Template
#!/bin/bash
set -euo pipefail
# ---- Configuration ----
LOG_DIR="/var/log/myapp"
BACKUP_DIR="/backup/db"
RETENTION_DAYS=7
DATE=$(date +%Y%m%d_%H%M%S)
# ---- Functions ----
log() {
echo "[$(date +"%Y-%m-%d %H:%M:%S")] $1" | tee -a "$LOG_DIR/backup.log"
}
die() {
log "ERROR: $1"
exit 1
}
# ---- Main ----
log "Starting database backup..."
# Create backup directory if missing
mkdir -p "$BACKUP_DIR" || die "Cannot create backup dir"
# Perform backup
pg_dump mydb > "$BACKUP_DIR/mydb_$DATE.sql" || die "pg_dump failed"
# Compress backup
gzip "$BACKUP_DIR/mydb_$DATE.sql" || die "Compression failed"
log "Backup created: mydb_$DATE.sql.gz"
# Remove old backups
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
log "Removed backups older than $RETENTION_DAYS days"
log "Backup completed successfully"
10. I/O Redirection & Pipes
Redirection and pipes are what make the Linux command line truly powerful. They let you chain commands together and control where data flows.
Standard Streams and Redirection
# Redirect stdout to file (overwrite)
echo "Hello" > output.txt
# Redirect stdout to file (append)
echo "World" >> output.txt
# Redirect stderr to file
command_that_fails 2> errors.log
# Redirect both stdout and stderr
command > output.log 2>&1
# Modern syntax (Bash 4+)
command &> output.log
# Discard all output
command > /dev/null 2>&1
# Redirect stdin from file
sort < unsorted.txt
# Here document
cat <<EOF
Server: production
Date: $(date)
Status: running
EOF
# Here string
grep "error" <<< "This is an error message"
Pipes and Command Chaining
# Basic pipe
ls -la | grep ".log"
# Multiple pipes
cat access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head -10
# tee: write to file AND stdout simultaneously
ping google.com | tee ping-results.txt
# tee with append
echo "new log entry" | tee -a app.log
# xargs: convert stdin to command arguments
find . -name "*.tmp" | xargs rm
# xargs with placeholder
find . -name "*.js" | xargs -I {} cp {} /backup/
# Parallel execution with xargs
find . -name "*.png" | xargs -P 4 -I {} convert {} -resize 50% {}
# Command substitution
KILL_PIDS=$(pgrep -f "old-process")
echo "Killing PIDs: $KILL_PIDS"
# Process substitution
diff <(sort file1.txt) <(sort file2.txt)
Named Pipes and File Descriptors
# Create a named pipe (FIFO)
mkfifo /tmp/mypipe
# Writer (in terminal 1)
echo "Hello from writer" > /tmp/mypipe
# Reader (in terminal 2)
cat < /tmp/mypipe
# Custom file descriptors
exec 3> /tmp/custom-output.txt # Open fd 3 for writing
echo "Written to fd 3" >&3
exec 3>&- # Close fd 3
# Read from fd
exec 4< /etc/hostname
read HOSTNAME <&4
exec 4<&-
echo "Hostname: $HOSTNAME"
# Swap stdout and stderr
command 3>&1 1>&2 2>&3 3>&-
# Log stdout and stderr separately
command > stdout.log 2> stderr.log
# Append both to same file with timestamps
command 2>&1 | while IFS= read -r line; do
echo "$(date +"%H:%M:%S") $line"
done >> app.log
11. Compression & Archives
Compressing files saves disk space and speeds up file transfers. These are the most commonly used archiving and compression tools.
tar — Archive Tool
# Create tar.gz archive
tar -czf archive.tar.gz /path/to/directory
# Create tar.bz2 archive (better compression)
tar -cjf archive.tar.bz2 /path/to/directory
# Extract tar.gz archive
tar -xzf archive.tar.gz
# Extract to specific directory
tar -xzf archive.tar.gz -C /opt/
# List contents without extracting
tar -tzf archive.tar.gz
# Extract specific file from archive
tar -xzf archive.tar.gz path/to/file.txt
# Create archive excluding patterns
tar -czf backup.tar.gz --exclude="*.log" --exclude="node_modules" /opt/app
Other Compression Tools
| Tool | Compress | Decompress | Notes |
|---|---|---|---|
| gzip | gzip file.txt | gunzip file.txt.gz | Most common, fast |
| bzip2 | bzip2 file.txt | bunzip2 file.txt.bz2 | Better compression, slower |
| xz | xz file.txt | unxz file.txt.xz | Best compression, slowest |
| zip | zip -r archive.zip dir/ | unzip archive.zip | Cross-platform compatible |
| 7z | 7z a archive.7z dir/ | 7z x archive.7z | High compression, multi-format |
# Compress keeping original file
gzip -k large-file.log
# Set compression level (1=fast, 9=best)
gzip -9 data.csv
# Zip with password protection
zip -e -r secure.zip /sensitive/data/
# List zip contents
unzip -l archive.zip
Cross-Server Compressed Transfers
# Compress and transfer in one step (no temp file)
tar -czf - /var/www/html | ssh user@server "cat > /backup/site.tar.gz"
# Transfer and extract on remote in one step
tar -czf - /opt/app | ssh user@server "cd /opt && tar -xzf -"
# Split large archives into parts
tar -czf - /large/data | split -b 100M - backup_part_
# Rejoin split archive
cat backup_part_* | tar -xzf -
# Compress with parallel processing (install: apt install pigz)
tar -cf - /data | pigz > data.tar.gz
# Decompress with pigz
pigz -d data.tar.gz
12. System Monitoring
Proactive system monitoring helps you identify performance bottlenecks, memory leaks, and hardware issues before they cause outages.
# System uptime and load averages
uptime
# Memory usage (human-readable)
free -h
# Virtual memory statistics (every 2 seconds)
vmstat 2 5
# I/O statistics
iostat -x 2 5
# System activity report (CPU, memory, disk, network)
sar -u 2 5 # CPU usage
sar -r 2 5 # Memory usage
sar -d 2 5 # Disk activity
# Kernel ring buffer messages
dmesg | tail -20
dmesg -T | grep -i error
# System information
uname -a
hostnamectl
# CPU information
lscpu
cat /proc/cpuinfo | grep "model name" | head -1
# Memory information
cat /proc/meminfo | head -5
Real-Time Monitoring Commands
| Command | Description |
|---|---|
top / htop | Interactive process and resource monitor |
vmstat 1 | Virtual memory, CPU, and I/O stats every second |
iostat -x 1 | Extended disk I/O stats every second |
sar -n DEV 1 | Network interface stats every second |
watch -n 1 "df -h" | Refresh disk usage display every second |
dstat | Versatile all-in-one resource statistics tool |
nmon | Performance monitoring and analysis tool |
# Watch a command output in real time (updates every 2s)
watch "ss -tlnp"
# Monitor log file in real time
tail -f /var/log/syslog
# Monitor multiple log files
tail -f /var/log/nginx/access.log /var/log/nginx/error.log
# Quick system health check script
echo "=== Uptime ===" && uptime
echo "=== Memory ===" && free -h
echo "=== Disk ===" && df -h /
echo "=== Load ===" && cat /proc/loadavg
echo "=== Top Processes ===" && ps aux --sort=-%cpu | head -5
Performance Analysis and Troubleshooting
# Show top CPU-consuming processes
ps aux --sort=-%cpu | head -10
# Show top memory-consuming processes
ps aux --sort=-%mem | head -10
# Trace system calls of a process
strace -p 12345 -e trace=network
strace -c ./my-program # Summary of syscalls
# Trace library calls
ltrace ./my-program
# Show open files for a process
lsof -p 12345
# Show files opened by a user
lsof -u www-data
# Find which process is using a file
lsof /var/log/syslog
# Find processes using deleted files (disk not freed)
lsof +L1
# Check OOM (Out of Memory) kills
dmesg | grep -i "out of memory"
grep -i "killed process" /var/log/syslog
# Network connections by process
sudo ss -tnp | awk '{print $6}' | sort | uniq -c | sort -rn
Log Management
# View system log
journalctl -xe
# View boot messages
journalctl -b -1 # Previous boot
# Check disk usage of logs
journalctl --disk-usage
# Vacuum old logs (keep last 500MB)
sudo journalctl --vacuum-size=500M
# Vacuum old logs (keep last 7 days)
sudo journalctl --vacuum-time=7d
# Rotate logs manually
sudo logrotate -f /etc/logrotate.conf
# Monitor multiple logs simultaneously
multitail /var/log/nginx/access.log /var/log/nginx/error.log
13. Security Commands
Security is not optional. These commands help you configure firewalls, manage SSH keys, and protect your systems from unauthorized access.
Firewall Management
# UFW (Uncomplicated Firewall) — Ubuntu/Debian
sudo ufw enable
sudo ufw status verbose
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw deny 3306/tcp
sudo ufw allow from 10.0.0.0/8 to any port 22
sudo ufw delete allow 80/tcp
# iptables — traditional firewall
# List current rules
sudo iptables -L -n -v
# Allow incoming SSH
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# Allow incoming HTTP/HTTPS
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT
# Allow established connections and loopback, then drop the rest
sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A INPUT -j DROP
# Save iptables rules
sudo sh -c 'iptables-save > /etc/iptables.rules'  # redirection must also run as root
# nftables — modern replacement for iptables
sudo nft list ruleset
sudo nft add rule inet filter input tcp dport 22 accept
SSH Key Management
# Generate Ed25519 key (recommended)
ssh-keygen -t ed25519 -C "your@email.com"
# Generate RSA key (4096-bit)
ssh-keygen -t rsa -b 4096 -C "your@email.com"
# Copy public key to server
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@server
# Manual key copy (if ssh-copy-id unavailable)
cat ~/.ssh/id_ed25519.pub | ssh user@server "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
# Set correct permissions
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
chmod 600 ~/.ssh/id_ed25519
chmod 644 ~/.ssh/id_ed25519.pub
# Disable password authentication (edit sshd_config)
# PasswordAuthentication no
# PubkeyAuthentication yes
# Then restart: sudo systemctl restart sshd
fail2ban Intrusion Prevention
# Install fail2ban
sudo apt install fail2ban
# Start and enable
sudo systemctl start fail2ban
sudo systemctl enable fail2ban
# Check status
sudo fail2ban-client status
sudo fail2ban-client status sshd
# Unban an IP
sudo fail2ban-client set sshd unbanip 192.168.1.100
# Custom jail configuration (/etc/fail2ban/jail.local)
# [sshd]
# enabled = true
# port = 22
# filter = sshd
# logpath = /var/log/auth.log
# maxretry = 3
# bantime = 3600
# findtime = 600
GPG Encryption
# Generate GPG key pair
gpg --full-generate-key
# List keys
gpg --list-keys
gpg --list-secret-keys
# Export public key
gpg --armor --export your@email.com > public.key
# Import someone else's public key
gpg --import their-key.pub
# Encrypt a file for a recipient
gpg --encrypt --recipient their@email.com secret.txt
# Decrypt a file
gpg --decrypt secret.txt.gpg > secret.txt
# Sign a file
gpg --sign document.pdf
# Verify a signature
gpg --verify document.pdf.gpg
System Hardening Checklist
| Action | Command |
|---|---|
| Disable root SSH login | PermitRootLogin no in sshd_config |
| Change default SSH port | Port 2222 in sshd_config |
| Enable firewall | sudo ufw enable |
| Install fail2ban | sudo apt install fail2ban |
| Enable automatic security updates | sudo apt install unattended-upgrades |
| Disable password auth | PasswordAuthentication no |
| Set login timeout | ClientAliveInterval 300 |
| Limit sudo users | sudo visudo |
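The checklist above can be spot-checked with a small script that greps the SSH daemon config for the expected directives. A minimal sketch: on a real host you would point CONFIG at /etc/ssh/sshd_config, but to stay self-contained it writes a sample config to a temp file first.

```shell
#!/bin/sh
# Audit sketch for the hardening checklist. The sample config written
# below is a stand-in; replace CONFIG with /etc/ssh/sshd_config in practice.
CONFIG=$(mktemp)
printf 'PermitRootLogin no\nPort 2222\n' > "$CONFIG"

for directive in 'PermitRootLogin no' 'PasswordAuthentication no'; do
    if grep -qi "^$directive" "$CONFIG"; then
        echo "OK:   $directive"
    else
        echo "WARN: $directive not found"
    fi
done
rm -f "$CONFIG"
```

With the sample config this prints OK for PermitRootLogin and a WARN for PasswordAuthentication, since only the first directive is present.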
Security Auditing and Checks
# Find files with SUID/SGID permissions
find / -type f \( -perm -4000 -o -perm -2000 \) -ls 2>/dev/null
# Find world-writable files
find / -type f -perm -o+w -ls 2>/dev/null
# Find files with no owner
find / -nouser -o -nogroup 2>/dev/null
# Check for empty passwords
sudo awk -F: '($2 == "" ) {print $1}' /etc/shadow
# List users with UID 0 (root-equivalent)
awk -F: '($3 == 0) {print $1}' /etc/passwd
# Check open ports
sudo ss -tlnp
sudo lsof -i -P -n | grep LISTEN
# View failed login attempts
sudo lastb | head -20
sudo grep "Failed password" /var/log/auth.log | tail -20
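Beyond tailing the last few failures, a useful follow-up is counting failed attempts per source IP to spot brute-force sources. A sketch assuming the standard OpenSSH log wording "Failed password ... from <ip> port ...", where the IP is the fourth field from the end; it is demonstrated on sample lines so it runs anywhere, but on a real host you would pipe auth.log through the same awk.

```shell
# Top brute-force source IPs. Real usage:
#   sudo grep "Failed password" /var/log/auth.log \
#     | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head
# Demonstrated here on sample log lines:
printf '%s\n' \
  'Jan 10 12:00:01 host sshd[123]: Failed password for root from 203.0.113.5 port 51234 ssh2' \
  'Jan 10 12:00:02 host sshd[124]: Failed password for admin from 203.0.113.5 port 51235 ssh2' \
  'Jan 10 12:00:03 host sshd[125]: Failed password for root from 198.51.100.7 port 40000 ssh2' \
  | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head -10
```

The output groups the three sample failures into two IPs, with 203.0.113.5 listed first (two attempts).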
# Check active SSH sessions
who
w
sudo ss -tnp | grep :22
# Scan for rootkits (install: apt install rkhunter)
sudo rkhunter --check
# Check file integrity (install: apt install aide)
sudo aide --check
SSL/TLS Certificate Management
# Check SSL certificate of a website
openssl s_client -connect example.com:443 -servername example.com < /dev/null 2>/dev/null | openssl x509 -noout -dates
# View certificate details
openssl x509 -in cert.pem -text -noout
# Generate self-signed certificate
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes
# Check certificate expiry with curl
curl -vI https://example.com 2>&1 | grep "expire date"
# Let's Encrypt with certbot
sudo certbot --nginx -d example.com -d www.example.com
sudo certbot renew --dry-run
# Check certificate chain
openssl s_client -connect example.com:443 -showcerts
Bonus: Essential One-Liners
These practical one-liners cover the most common scenarios in daily system administration.
# Find and replace in multiple files
find . -name "*.conf" -exec sed -i 's/old-domain/new-domain/g' {} +
# Kill all processes matching a pattern
pkill -f "pattern"
# Show directory sizes sorted by size
du -h --max-depth=1 | sort -rh
# Monitor file changes in real time
inotifywait -m -r /etc/
# Quick HTTP server (Python 3)
python3 -m http.server 8080
# Generate random password
openssl rand -base64 32
tr -dc 'A-Za-z0-9!@#$%' < /dev/urandom | head -c 24
# Show calendar
cal
cal 2026
# Convert epoch timestamp to date
date -d @1709251200
# Count files in directory
find /var/log -type f | wc -l
# List top 10 largest files
find / -type f -exec du -h {} + 2>/dev/null | sort -rh | head -10
# Watch disk I/O in real time
sudo iotop
# Show all cron jobs for all users
for user in $(cut -f1 -d: /etc/passwd); do sudo crontab -u "$user" -l 2>/dev/null; done
# Batch rename files (rename .txt to .md)
for f in *.txt; do mv "$f" "${f%.txt}.md"; done
# Create backup with timestamp
tar -czf "backup_$(date +%Y%m%d_%H%M%S).tar.gz" /var/www/
Environment Variables
# View all environment variables
env
printenv
# Set variable for current session
export MY_VAR="hello"
# Set variable permanently (add to ~/.bashrc or ~/.profile)
echo 'export MY_VAR="hello"' >> ~/.bashrc
source ~/.bashrc
# Unset a variable
unset MY_VAR
# Common environment variables
echo $HOME   # User home directory
echo $USER   # Current username
echo $PATH   # Executable search path
echo $SHELL  # Current shell
echo $PWD    # Current directory
echo $EDITOR # Default text editor
# Add to PATH
export PATH="$HOME/.local/bin:$PATH"
Cron Scheduled Tasks
# Edit crontab for current user
crontab -e
# List crontab entries
crontab -l
# Cron expression format:
# MIN   HOUR   DAY   MONTH   WEEKDAY   COMMAND
# 0-59  0-23   1-31  1-12    0-7 (0 and 7 are both Sunday)
# Every day at 2:30 AM
# 30 2 * * * /opt/scripts/backup.sh
# Every 15 minutes
# */15 * * * * /opt/scripts/health-check.sh
# Every Monday at 9 AM
# 0 9 * * 1 /opt/scripts/weekly-report.sh
# First day of every month at midnight
# 0 0 1 * * /opt/scripts/monthly-cleanup.sh
# Log cron output
# 0 3 * * * /opt/scripts/backup.sh >> /var/log/backup.log 2>&1
tmux Terminal Multiplexer
# Start new session
tmux new -s mysession
# Detach from session: Ctrl+B then D
# List sessions
tmux ls
# Attach to session
tmux attach -t mysession
# Kill session
tmux kill-session -t mysession
# Split pane horizontally: Ctrl+B then "
# Split pane vertically: Ctrl+B then %
# Switch pane: Ctrl+B then arrow key
# Resize pane: Ctrl+B then hold arrow key
# Close pane: exit or Ctrl+D
# Create new window: Ctrl+B then C
# Switch window: Ctrl+B then window number
# Rename window: Ctrl+B then ,
# Scroll mode: Ctrl+B then [ (use arrows, q to exit)
Quick Reference Table
Most frequently used commands grouped by category.
| Category | Command | Description |
|---|---|---|
| Navigation | ls -la | List all files with details including hidden |
| Navigation | find . -name "*.log" | Recursively search for matching files |
| Files | chmod 755 script.sh | Set file permissions |
| Files | chown user:group file | Change file ownership |
| Text | grep -rn "pattern" . | Recursive search with line numbers |
| Text | awk '{print $1}' | Extract first column |
| Process | ps aux \| grep name | Find a process by name |
| Process | systemctl restart svc | Restart a system service |
| Network | ss -tlnp | Show listening ports |
| Network | rsync -avz src/ dst/ | Incrementally sync files |
| Disk | df -h | Show disk usage |
| Security | ufw allow 22/tcp | Allow SSH through firewall |
Conclusion
This guide covers the essential Linux commands every developer needs. Bookmark it as a reference and practice these commands regularly. The more you use the command line, the more efficient you become. For hands-on practice, set up a virtual machine or use our online tools to experiment safely.