
Linux Command Guide: Filesystem, Text Processing, Networking, Shell Scripting, and Security

22 min read · By DevToolBox Team

Linux powers more than 96% of the world's top web servers, and command-line mastery is an indispensable skill for every developer, DevOps engineer, and system administrator. This comprehensive guide covers 100+ essential Linux commands in 13 categories, with practical examples you can copy and use right away.

TL;DR: This guide covers 100+ Linux commands in 13 categories: file navigation, file operations, text processing, process management, networking, disk management, user management, package management, shell scripting, I/O redirection, compression, system monitoring, and security. Each section includes practical examples with explanations.
Key takeaways
  • Master file navigation (ls, cd, find) and file operations (cp, mv, chmod) for day-to-day efficiency
  • Learn the text-processing tools (grep, sed, awk) to handle data streams like an expert
  • Understand process management (ps, systemctl) and networking (curl, ssh, rsync) for server administration
  • Automate repetitive tasks with shell scripts using variables, loops, and functions
  • Secure systems with iptables/ufw, fail2ban, and SSH key authentication

1. Filesystem Navigation

Navigating the Linux filesystem is the most fundamental skill. These commands help you move between directories, list contents, and find files efficiently.

Listing Files and Directories — ls

# Basic listing
ls

# Long format with permissions, size, date
ls -la

# Sort by modification time (newest first)
ls -lt

# Sort by file size (largest first)
ls -lS

# Human-readable file sizes
ls -lh

# List only directories
ls -d */

# Recursive listing
ls -R /var/log/

Changing Directories — cd

# Go to home directory
cd ~
cd

# Go to previous directory
cd -

# Go up one level
cd ..

# Go up two levels
cd ../..

# Absolute path
cd /var/log/nginx

# Relative path
cd projects/myapp

Finding Files — find and locate

# Find files by name
find /home -name "*.log"

# Case-insensitive search
find / -iname "readme.md"

# Find files modified in last 24 hours
find /var/log -mtime -1

# Find files larger than 100MB
find / -size +100M

# Find and delete empty directories
find /tmp -type d -empty -delete

# Find files by permission
find /home -perm 777

# Find and execute a command
find . -name "*.tmp" -exec rm {} \;

# Fast locate (uses database, run updatedb first)
sudo updatedb
locate nginx.conf
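
One detail worth knowing about find: `-exec cmd {} \;` runs the command once per matched file, while `-exec cmd {} +` batches many files into a single invocation (similar to xargs), which is much faster on large trees. A minimal sketch, using hypothetical scratch files:

```shell
# Sketch: \; runs wc once per file; + runs wc once for the whole batch.
cd "$(mktemp -d)"
printf 'a\n'    > one.txt
printf 'b\nc\n' > two.txt

find . -name "*.txt" -exec wc -l {} \;   # wc invoked twice, one file each
find . -name "*.txt" -exec wc -l {} +    # wc invoked once; adds a "total" line
```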

Directory Trees — tree and pwd

# Print current working directory
pwd

# Display directory tree
tree

# Limit depth to 2 levels
tree -L 2

# Show only directories
tree -d

# Show tree with file sizes
tree -sh

# Exclude node_modules and .git
tree -I "node_modules|.git"

Important Linux Directory Structure

Path          Description
/             Root directory — top of the filesystem
/home         User home directories
/etc          System configuration files
/var          Variable data (logs, caches, mail)
/var/log      System and application logs
/tmp          Temporary files (cleared on reboot)
/usr          User programs and data
/usr/local    Locally installed software
/opt          Optional application packages
/proc         Virtual filesystem — process and kernel information
/dev          Device files
/mnt          Temporary mount points

2. File Operations

File-operation commands are the foundation of everyday Linux work. From copying files to changing permissions, these commands handle all file manipulation.

Copying, Moving, and Deleting

# Copy a file
cp source.txt destination.txt

# Copy directory recursively
cp -r /src/project /backup/project

# Copy preserving attributes (timestamps, permissions)
cp -a /src /dest

# Move / rename a file
mv oldname.txt newname.txt

# Move multiple files to a directory
mv *.log /var/archive/

# Remove a file
rm unwanted.txt

# Remove with confirmation prompt
rm -i important.txt

# Remove directory recursively (DANGEROUS — double check!)
rm -rf /tmp/build-cache

# Create directories (including parents)
mkdir -p /var/www/mysite/assets/images

Permissions and Ownership — chmod and chown

# Set file to readable/writable by owner only
chmod 600 ~/.ssh/id_rsa

# Standard permissions: owner=rwx, group/others=rx
chmod 755 deploy.sh

# Standard permissions for regular files
chmod 644 index.html

# Add execute permission for owner
chmod u+x script.sh

# Remove write permission for group and others
chmod go-w config.yml

# Recursive permission change
chmod -R 755 /var/www/html

# Change file owner
chown www-data:www-data /var/www/html

# Change owner recursively
chown -R deploy:deploy /opt/app
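
The numeric modes above (755, 644, 600) follow a simple rule: each octal digit is the sum r=4 + w=2 + x=1, one digit each for owner, group, and others. A minimal sketch, using a hypothetical scratch file:

```shell
# Sketch: octal permission digits are r=4 + w=2 + x=1.
touch /tmp/perm-demo.txt
chmod 640 /tmp/perm-demo.txt         # owner rw- (4+2), group r-- (4), others --- (0)
stat -c '%a %A' /tmp/perm-demo.txt   # prints: 640 -rw-r-----
```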

Symbolic Links — ln

# Create a symbolic (soft) link
ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/mysite

# Create a hard link
ln original.txt hardlink.txt

# Verify a symlink target
readlink -f /usr/bin/python3

# Find broken symlinks
find /usr/local/bin -type l ! -exec test -e {} \; -print
Tip: Use ln -sf to force-overwrite an existing symlink, avoiding the two-step delete-then-recreate dance.
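
The tip above in action, with hypothetical scratch files:

```shell
# Sketch: repoint a symlink in one step with ln -sf.
cd "$(mktemp -d)"
echo "v1" > release-v1.txt
echo "v2" > release-v2.txt
ln -s release-v1.txt current     # current -> release-v1.txt
ln -sf release-v2.txt current    # overwrite the existing link in one step
readlink current                 # prints: release-v2.txt
```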

Viewing File Contents

# Display entire file
cat file.txt

# Display with line numbers
cat -n file.txt

# View first 20 lines
head -20 file.txt

# View last 20 lines
tail -20 file.txt

# Follow file in real time (great for logs)
tail -f /var/log/syslog

# Follow with line count
tail -f -n 100 /var/log/nginx/access.log

# Page through a file (scrollable)
less /var/log/syslog
# Press / to search, q to quit, G to go to end

# Display file with line numbers and non-printing chars
cat -An file.txt

# Concatenate multiple files
cat part1.txt part2.txt part3.txt > combined.txt

# Create file from stdin (Ctrl+D to end)
cat > notes.txt

File Comparison and Checksums

# Compare two files
diff file1.txt file2.txt

# Side-by-side comparison
diff -y file1.txt file2.txt

# Unified diff format (like git)
diff -u old.conf new.conf

# Compare directories
diff -r dir1/ dir2/

# Generate MD5 checksum
md5sum file.tar.gz

# Generate SHA256 checksum
sha256sum file.tar.gz

# Verify checksum file
sha256sum -c checksums.sha256

# Get file type information
file unknown-file
file /usr/bin/python3

# Show file statistics
stat file.txt

3. Text Processing

Linux excels at text processing. These powerful commands let you search, transform, filter, and analyze text data, which is essential for log analysis, data extraction, and automation.

Searching Text — grep

# Search for a pattern in a file
grep "error" /var/log/syslog

# Case-insensitive search
grep -i "warning" app.log

# Recursive search in directories
grep -r "TODO" ./src

# Show line numbers
grep -n "function" app.js

# Show 3 lines of context around matches
grep -C 3 "Exception" error.log

# Invert match (show lines NOT matching)
grep -v "DEBUG" app.log

# Count matching lines
grep -c "error" /var/log/syslog

# Search with regex
grep -E "^[0-9]{4}-[0-9]{2}" access.log

# Show only matching filenames
grep -rl "API_KEY" /etc/

The Stream Editor — sed

# Find and replace (first occurrence per line)
sed 's/old/new/' file.txt

# Global replace (all occurrences)
sed 's/old/new/g' file.txt

# Edit file in place
sed -i 's/http:/https:/g' config.yml

# Delete lines matching a pattern
sed '/^#/d' config.conf

# Delete empty lines
sed '/^$/d' file.txt

# Print only lines 10-20
sed -n '10,20p' file.txt

# Insert text before line 5
sed '5i\New line inserted here' file.txt

# Replace only on lines containing "server"
sed '/server/s/80/443/g' nginx.conf

Text Processing — awk

# Print specific columns (space-delimited)
awk '{print $1, $3}' access.log

# Use custom field separator
awk -F: '{print $1, $7}' /etc/passwd

# Filter rows by condition
awk '$3 > 500 {print $1, $3}' data.txt

# Sum a column
awk '{sum += $5} END {print "Total:", sum}' sales.csv

# Count unique values
awk -F, '{count[$2]++} END {for (k in count) print k, count[k]}' data.csv

# Format output
awk '{printf "%-20s %10d\n", $1, $3}' report.txt

Advanced sed Usage

# Multiple replacements in one pass
sed -e 's/foo/bar/g' -e 's/baz/qux/g' file.txt

# Replace only on line range
sed '10,20s/old/new/g' file.txt

# Add line after match
sed '/^server {/a\    include /etc/nginx/security.conf;' nginx.conf

# Delete lines between two patterns
sed '/START_BLOCK/,/END_BLOCK/d' config.txt

# Extract text between patterns
sed -n '/BEGIN/,/END/p' file.txt

# Remove trailing whitespace
sed 's/[[:space:]]*$//' file.txt

# Convert Windows line endings to Unix
sed -i 's/\r$//' script.sh

# Number all non-empty lines
sed '/./=' file.txt | sed 'N; s/\n/\t/'

Practical awk Examples

# Print lines longer than 80 characters
awk 'length > 80' file.txt

# Calculate average of a column
awk '{sum += $3; count++} END {print "Average:", sum/count}' data.txt

# Print between two patterns
awk '/START/,/END/' file.txt

# CSV processing with header
awk -F, 'NR==1 {for(i=1;i<=NF;i++) header[i]=$i; next}
{print header[1] "=" $1, header[3] "=" $3}' data.csv

# Group by and count
awk '{status[$NF]++} END {for (s in status) print s, status[s]}' access.log

# Multi-file processing
awk 'FNR==1 {print "--- " FILENAME " ---"} {print}' file1.txt file2.txt

# Replace column value conditionally
awk -F, 'BEGIN{OFS=","} $3 > 100 {$3 = "HIGH"} {print}' data.csv

Other Text Tools

Command                       Description
cut -d: -f1 /etc/passwd       Extract specific fields by delimiter
sort -k2 -n data.txt          Sort numerically by column 2
sort -u file.txt              Sort and remove duplicate lines
uniq -c                       Count consecutive duplicate lines
wc -l file.txt                Count lines
wc -w file.txt                Count words
tr '[:lower:]' '[:upper:]'    Convert lowercase to uppercase
tr -d '\n'                    Delete all newline characters

# Practical pipeline: find top 10 IP addresses in access log
awk '{print $1}' access.log | sort | uniq -c | sort -rn | head -10

# Count lines in all Python files
find . -name "*.py" -exec wc -l {} + | sort -n

# Extract unique error messages
grep "ERROR" app.log | awk -F'] ' '{print $2}' | sort -u

4. Process Management

Managing running processes is essential for server administration. These commands help you monitor, control, and manage system services.

Process Management Overview

Command               Description
ps aux                List all running processes
top / htop            Interactive process monitoring
kill PID              Send a terminate signal to a process
kill -9 PID           Force-kill a process
killall name          Kill all processes matching a name
nohup cmd &           Run in the background, immune to terminal close
systemctl status      Check a systemd service's status
journalctl -u svc     View a service's logs

Viewing Processes — ps and top

# List all processes (full format)
ps aux

# Filter by process name
ps aux | grep nginx

# Show process tree
ps auxf

# Show processes for a specific user
ps -u www-data

# Interactive process viewer
top

# Better interactive viewer (install: apt install htop)
htop

# Sort by memory usage in top
# Press M inside top

# Sort by CPU usage in top
# Press P inside top

Terminating and Controlling Processes

# Graceful terminate (SIGTERM)
kill 12345

# Force kill (SIGKILL) — use as last resort
kill -9 12345

# Kill by process name
killall nginx
pkill -f "python app.py"

# Run process in background
./long-task.sh &

# Run process immune to hangups (survives logout)
nohup ./server.sh &
nohup ./server.sh > output.log 2>&1 &

# List background jobs
jobs

# Bring job to foreground
fg %1

# Send running process to background
# Press Ctrl+Z first, then:
bg %1

Resource Limits and Priorities

# Run command with lower CPU priority
nice -n 19 ./heavy-computation.sh

# Change priority of running process
renice -n 10 -p 12345

# Limit CPU usage (install: apt install cpulimit)
cpulimit -l 50 -p 12345

# Run command with memory limit
systemd-run --scope -p MemoryMax=512M ./my-app

# Show resource limits
ulimit -a

# Set max open files for current session
ulimit -n 65535

# Show process resource usage
/usr/bin/time -v ./my-script.sh

# View per-process memory map
pmap -x 12345

System Services — systemctl and journalctl

# Start / stop / restart a service
sudo systemctl start nginx
sudo systemctl stop nginx
sudo systemctl restart nginx

# Reload config without restart
sudo systemctl reload nginx

# Check service status
systemctl status nginx

# Enable service on boot
sudo systemctl enable nginx

# Disable service on boot
sudo systemctl disable nginx

# List all active services
systemctl list-units --type=service --state=active

# View logs for a service
journalctl -u nginx

# Follow logs in real time
journalctl -u nginx -f

# View logs since last boot
journalctl -b

# View logs from the last hour
journalctl --since "1 hour ago"

5. Networking

Networking commands are indispensable for server administration, API testing, file transfers, and debugging connectivity issues.

HTTP Requests — curl and wget

# Simple GET request
curl https://api.example.com/data

# POST with JSON body
curl -X POST -H "Content-Type: application/json" \
  -d '{"name":"test"}' https://api.example.com/users

# Download file with progress
curl -O https://example.com/file.tar.gz

# Follow redirects
curl -L https://short.url/abc

# Show response headers
curl -I https://example.com

# Download file with wget
wget https://example.com/file.tar.gz

# Resume interrupted download
wget -c https://example.com/large-file.iso

# Mirror a website
wget --mirror --convert-links https://example.com

Network Command Cheat Sheet

Command                  Description
curl URL                 Send an HTTP request
wget URL                 Download a file
ssh user@host            Secure remote login
scp src user@host:dst    Securely copy files to a remote host
rsync -avz src dst       Incremental file synchronization
ss -tlnp                 Show listening TCP ports and their processes
dig domain               DNS lookup
ip addr show             Show network interfaces and IPs
ping host                Test network connectivity
traceroute host          Trace the network route to a host

Remote Connections — ssh and File Transfers

# SSH into a remote server
ssh user@192.168.1.100

# SSH with specific key
ssh -i ~/.ssh/mykey user@server.com

# SSH with port forwarding (local)
ssh -L 8080:localhost:3000 user@server.com

# Copy file to remote server
scp file.txt user@server:/home/user/

# Copy directory from remote
scp -r user@server:/var/log/app ./logs

# Sync files with rsync (fast, incremental)
rsync -avz ./deploy/ user@server:/var/www/html/

# Rsync with delete (mirror source to destination)
rsync -avz --delete ./src/ user@server:/opt/app/

# Rsync over SSH with specific port
rsync -avz -e "ssh -p 2222" ./data/ user@server:/backup/

Network Diagnostics

# Show listening ports
ss -tlnp

# Show all connections
ss -tunap

# Display network interfaces and IPs
ip addr show

# Show routing table
ip route show

# DNS lookup
dig example.com
dig example.com MX
nslookup example.com

# Test network connectivity
ping -c 4 google.com

# Trace route to host
traceroute google.com

# Check if a port is open
nc -zv server.com 443

# Monitor network traffic (requires root)
sudo tcpdump -i eth0 port 80
Tip: The ss command is the modern replacement for netstat: it is faster and its output is cleaner. Prefer ss on newer systems.

Quick Checks: Network Configuration and Firewall

# Show all IP addresses
hostname -I

# Show public IP address
curl -s ifconfig.me
curl -s ipinfo.io/ip

# Test HTTP response code
curl -o /dev/null -s -w "%{http_code}" https://example.com

# Check DNS resolution time
dig +stats example.com | grep "Query time"

# Show active connections by state
ss -s

# Monitor bandwidth in real time (install: apt install iftop)
sudo iftop -i eth0

# Test port connectivity with timeout
timeout 5 bash -c 'echo > /dev/tcp/server.com/443' && echo "Open" || echo "Closed"

# Download with speed limit
curl --limit-rate 1M -O https://example.com/large-file.zip

# Send email via command line
echo "Server is down" | mail -s "ALERT" admin@example.com

6. Disk and Storage

Monitoring and managing disk space prevents outages and keeps servers running smoothly. These commands help you track usage and manage storage devices.

# Show filesystem disk usage (human-readable)
df -h

# Show inode usage
df -i

# Show directory size
du -sh /var/log

# Show top-level directory sizes
du -h --max-depth=1 /var

# Find largest files and directories
du -ah / | sort -rh | head -20

# List block devices
lsblk

# Show detailed disk information
sudo fdisk -l

# Mount a filesystem
sudo mount /dev/sdb1 /mnt/data

# Unmount a filesystem
sudo umount /mnt/data

# Show mounted filesystems
mount | column -t

# Create ext4 filesystem
sudo mkfs.ext4 /dev/sdb1

# Check and repair filesystem
sudo fsck /dev/sdb1

# Add permanent mount to fstab
# Edit /etc/fstab:
# /dev/sdb1  /mnt/data  ext4  defaults  0  2

Command           Description
df -h             Show filesystem disk-space usage
du -sh /path      Show a directory's total size
lsblk             List all block devices (disks, partitions)
mount / umount    Mount or unmount a filesystem
mkfs.ext4         Create an ext4 filesystem
fsck              Check and repair a filesystem

Disk Performance and SMART Monitoring

# Test disk write speed
dd if=/dev/zero of=/tmp/testfile bs=1M count=1024 conv=fdatasync

# Test disk read speed
dd if=/tmp/testfile of=/dev/null bs=1M

# Show disk SMART status (install: apt install smartmontools)
sudo smartctl -a /dev/sda

# Check disk health
sudo smartctl -H /dev/sda

# Show partition UUID
blkid

# Resize a partition (with LVM)
sudo lvextend -L +10G /dev/mapper/vg0-root
sudo resize2fs /dev/mapper/vg0-root

# Check disk usage by file type
find / -type f -name "*.log" -exec du -ch {} + 2>/dev/null | tail -1

# Clean package manager cache
sudo apt clean           # Debian/Ubuntu
sudo dnf clean all       # RHEL/Fedora

7. User Management

User and permission management is the foundation of Linux security. These commands control who can access which resources on a system.

# Add a new user
sudo useradd -m -s /bin/bash johndoe

# Add user with home dir and default shell
sudo useradd -m -s /bin/bash -G sudo,docker devuser

# Set password for user
sudo passwd johndoe

# Modify user (add to docker group)
sudo usermod -aG docker johndoe

# Delete user and home directory
sudo userdel -r johndoe

# View user groups
groups johndoe
id johndoe

# Switch to another user
su - johndoe

# Run command as another user
sudo -u www-data whoami

# Edit sudoers file safely
sudo visudo

# Add user to sudoers (append to /etc/sudoers)
# johndoe ALL=(ALL:ALL) NOPASSWD: ALL

# List currently logged in users
who
w

# Show last login history
last
Security tip: Always use visudo instead of editing /etc/sudoers directly. visudo validates the syntax before saving, preventing a configuration error from locking you out of sudo.

8. Package Management

Different Linux distributions use different package managers. Here is a command comparison for the most common ones.

Action                  apt (Debian/Ubuntu)     dnf/yum (RHEL/Fedora)    pacman (Arch)
Update package lists    apt update              dnf check-update         pacman -Sy
Upgrade all packages    apt upgrade             dnf upgrade              pacman -Syu
Install a package       apt install nginx       dnf install nginx        pacman -S nginx
Remove a package        apt remove nginx        dnf remove nginx         pacman -R nginx
Search for a package    apt search nginx        dnf search nginx         pacman -Ss nginx
Show package info       apt show nginx          dnf info nginx           pacman -Si nginx
List installed          apt list --installed    dnf list installed       pacman -Q
Clean up                apt autoremove          dnf autoremove           pacman -Sc

apt in Detail (Debian/Ubuntu)

# Update and upgrade in one step
sudo apt update && sudo apt upgrade -y

# Install specific version
sudo apt install nginx=1.24.0-1

# Hold package (prevent upgrades)
sudo apt-mark hold nginx
sudo apt-mark unhold nginx

# Show installed package version
apt list --installed | grep nginx

# Show package dependencies
apt depends nginx

# Show reverse dependencies
apt rdepends nginx

# Remove package and its config files
sudo apt purge nginx

# Remove unused dependencies
sudo apt autoremove -y

# List upgradable packages
apt list --upgradable

# Download .deb without installing
apt download nginx

# Install local .deb file
sudo dpkg -i package.deb
sudo apt install -f   # Fix broken dependencies

# Add a PPA repository
sudo add-apt-repository ppa:ondrej/php
sudo apt update

dnf in Detail (RHEL/Fedora)

# Check for updates
sudo dnf check-update

# Install package group
sudo dnf groupinstall "Development Tools"

# Show package history
sudo dnf history
sudo dnf history info 15

# Undo last transaction
sudo dnf history undo last

# List enabled repositories
dnf repolist

# Add a repository
sudo dnf config-manager --add-repo https://repo.example.com/repo.repo

# Install from specific repo
sudo dnf install --repo=epel nginx

# List all files in a package
rpm -ql nginx

# Which package provides a file
dnf provides /usr/bin/curl

Universal Package Management with Snap

# Install a snap package
sudo snap install code --classic

# List installed snaps
snap list

# Update all snaps
sudo snap refresh

# Remove a snap
sudo snap remove code

# Find snaps
snap find "text editor"

9. Shell Scripting Basics

Shell scripts automate repetitive tasks and create powerful workflows. Master these basics to write effective Bash scripts.

Bash Special Variables

Variable       Description
$0             Script name
$1, $2, ...    Positional parameters (the arguments passed in)
$#             Number of arguments
$@             All arguments (as separate words)
$*             All arguments (as a single string)
$?             Exit code of the last command
$$             PID of the current shell
$!             PID of the most recent background process
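
These special variables are easiest to see in a tiny script. A minimal sketch (args-demo.sh is a hypothetical filename):

```shell
#!/bin/bash
# Sketch: save as args-demo.sh and run: ./args-demo.sh alpha beta
echo "Script name : $0"    # ./args-demo.sh
echo "First arg   : $1"    # alpha
echo "Arg count   : $#"    # 2
echo "All args    : $@"    # alpha beta
true
echo "Exit of true: $?"    # 0
```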

Variables and Strings

#!/bin/bash

# Variable assignment (no spaces around =)
NAME="Linux"
VERSION=6
DATE=$(date +%Y-%m-%d)

# Using variables
echo "Welcome to $NAME version $VERSION"
echo "Today is $DATE"

# String operations
STR="Hello World"
echo "Length: ${#STR}"        # 11
echo "Substr: ${STR:0:5}"     # Hello
echo "Replace: ${STR/World/Linux}"  # Hello Linux

# Default values
echo "${UNSET_VAR:-default_value}"   # uses default if unset
echo "${UNSET_VAR:=default_value}"   # sets and uses default if unset

Conditionals

#!/bin/bash

# If-else
if [ -f "/etc/nginx/nginx.conf" ]; then
    echo "Nginx config exists"
elif [ -f "/etc/apache2/apache2.conf" ]; then
    echo "Apache config exists"
else
    echo "No web server config found"
fi

# Numeric comparison
COUNT=$(wc -l < access.log)
if [ "$COUNT" -gt 1000 ]; then
    echo "High traffic: $COUNT requests"
fi

# String comparison
ENV="production"
if [[ "$ENV" == "production" ]]; then
    echo "Running in production mode"
fi

# File test operators
# -f  file exists and is regular file
# -d  directory exists
# -r  file is readable
# -w  file is writable
# -x  file is executable
# -s  file exists and is not empty

Loops

#!/bin/bash

# For loop — iterate over list
for SERVER in web1 web2 web3 db1; do
    echo "Checking $SERVER..."
    ping -c 1 "$SERVER" > /dev/null 2>&1 && echo "  UP" || echo "  DOWN"
done

# For loop — C-style
for ((i=1; i<=10; i++)); do
    echo "Iteration $i"
done

# For loop — iterate over files
for FILE in /var/log/*.log; do
    echo "Processing $FILE ($(wc -l < "$FILE") lines)"
done

# While loop
COUNTER=0
while [ "$COUNTER" -lt 5 ]; do
    echo "Count: $COUNTER"
    COUNTER=$((COUNTER + 1))
done

# Read file line by line
while IFS= read -r LINE; do
    echo "Processing: $LINE"
done < input.txt

Functions and Arrays

#!/bin/bash

# Define a function
check_service() {
    local SERVICE_NAME="$1"
    if systemctl is-active --quiet "$SERVICE_NAME"; then
        echo "$SERVICE_NAME is running"
        return 0
    else
        echo "$SERVICE_NAME is NOT running"
        return 1
    fi
}

# Call function
check_service nginx
check_service postgresql

# Arrays
SERVERS=("web1" "web2" "web3" "db1")

# Array length
echo "Total servers: ${#SERVERS[@]}"

# Iterate array
for SERVER in "${SERVERS[@]}"; do
    echo "Server: $SERVER"
done

# Access by index
echo "First: ${SERVERS[0]}"
echo "Last: ${SERVERS[-1]}"

# Append to array
SERVERS+=("cache1")

# Associative array (Bash 4+)
declare -A PORTS
PORTS[nginx]=80
PORTS[ssh]=22
PORTS[postgres]=5432

for SERVICE in "${!PORTS[@]}"; do
    echo "$SERVICE -> port ${PORTS[$SERVICE]}"
done

Error Handling and Debugging

#!/bin/bash

# Exit on first error
set -e

# Exit on undefined variable
set -u

# Fail on pipe errors
set -o pipefail

# Combined (recommended for all scripts)
set -euo pipefail

# Trap errors and run cleanup
cleanup() {
    echo "Cleaning up temp files..."
    rm -f /tmp/myapp_*
}
trap cleanup EXIT ERR

# Debug mode — print each command before executing
set -x

# Debug specific section only
set -x
# ... commands to debug ...
set +x

# Validate required arguments
if [ $# -lt 2 ]; then
    echo "Usage: $0 <source> <destination>"
    exit 1
fi

# Check if command exists
if ! command -v docker &> /dev/null; then
    echo "Docker is not installed"
    exit 1
fi

A Practical Script Template

#!/bin/bash
set -euo pipefail

# ---- Configuration ----
LOG_DIR="/var/log/myapp"
BACKUP_DIR="/backup/db"
RETENTION_DAYS=7
DATE=$(date +%Y%m%d_%H%M%S)

# ---- Functions ----
log() {
    echo "[$(date +"%Y-%m-%d %H:%M:%S")] $1" | tee -a "$LOG_DIR/backup.log"
}

die() {
    log "ERROR: $1"
    exit 1
}

# ---- Main ----
log "Starting database backup..."

# Create backup directory if missing
mkdir -p "$BACKUP_DIR" || die "Cannot create backup dir"

# Perform backup
pg_dump mydb > "$BACKUP_DIR/mydb_$DATE.sql" || die "pg_dump failed"

# Compress backup
gzip "$BACKUP_DIR/mydb_$DATE.sql" || die "Compression failed"
log "Backup created: mydb_$DATE.sql.gz"

# Remove old backups
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
log "Removed backups older than $RETENTION_DAYS days"

log "Backup completed successfully"

10. I/O Redirection and Pipes

Redirection and pipes are what make the Linux command line truly powerful. They let you chain commands together and control where data flows.

Standard Streams and Redirection

# Redirect stdout to file (overwrite)
echo "Hello" > output.txt

# Redirect stdout to file (append)
echo "World" >> output.txt

# Redirect stderr to file
command_that_fails 2> errors.log

# Redirect both stdout and stderr
command > output.log 2>&1

# Modern syntax (Bash 4+)
command &> output.log

# Discard all output
command > /dev/null 2>&1

# Redirect stdin from file
sort < unsorted.txt

# Here document
cat <<EOF
Server: production
Date: $(date)
Status: running
EOF

# Here string
grep "error" <<< "This is an error message"

Pipes and Command Chaining

# Basic pipe
ls -la | grep ".log"

# Multiple pipes
cat access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head -10

# tee: write to file AND stdout simultaneously
ping google.com | tee ping-results.txt

# tee with append
echo "new log entry" | tee -a app.log

# xargs: convert stdin to command arguments
find . -name "*.tmp" | xargs rm

# xargs with placeholder
find . -name "*.js" | xargs -I {} cp {} /backup/

# Parallel execution with xargs
find . -name "*.png" | xargs -P 4 -I {} convert {} -resize 50% {}

# Command substitution
KILL_PIDS=$(pgrep -f "old-process")
echo "Killing PIDs: $KILL_PIDS"

# Process substitution
diff <(sort file1.txt) <(sort file2.txt)
Tip: Use set -o pipefail in scripts so that a failure anywhere in a pipeline fails the whole pipeline, not just the last command.
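
A quick demonstration of this behavior:

```shell
# Sketch: pipefail makes a pipeline report the first failure,
# not just the exit code of its last command.
false | true
echo "default : $?"       # prints: default : 0

set -o pipefail
false | true
echo "pipefail: $?"       # prints: pipefail: 1
```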

Named Pipes and File Descriptors

# Create a named pipe (FIFO)
mkfifo /tmp/mypipe

# Writer (in terminal 1)
echo "Hello from writer" > /tmp/mypipe

# Reader (in terminal 2)
cat < /tmp/mypipe

# Custom file descriptors
exec 3> /tmp/custom-output.txt   # Open fd 3 for writing
echo "Written to fd 3" >&3
exec 3>&-                         # Close fd 3

# Read from fd
exec 4< /etc/hostname
read HOSTNAME <&4
exec 4<&-
echo "Hostname: $HOSTNAME"

# Swap stdout and stderr
command 3>&1 1>&2 2>&3 3>&-

# Log stdout and stderr separately
command > stdout.log 2> stderr.log

# Append both to same file with timestamps
command 2>&1 | while IFS= read -r line; do
    echo "$(date +"%H:%M:%S") $line"
done >> app.log

11. Compression and Archiving

Compressing files saves disk space and speeds up file transfers. These are the most commonly used archiving and compression tools.

tar — the Archiving Tool

# Create tar.gz archive
tar -czf archive.tar.gz /path/to/directory

# Create tar.bz2 archive (better compression)
tar -cjf archive.tar.bz2 /path/to/directory

# Extract tar.gz archive
tar -xzf archive.tar.gz

# Extract to specific directory
tar -xzf archive.tar.gz -C /opt/

# List contents without extracting
tar -tzf archive.tar.gz

# Extract specific file from archive
tar -xzf archive.tar.gz path/to/file.txt

# Create archive excluding patterns
tar -czf backup.tar.gz --exclude="*.log" --exclude="node_modules" /opt/app

Other Compression Tools

Tool     Compress                   Decompress               Notes
gzip     gzip file.txt              gunzip file.txt.gz       Most common, fast
bzip2    bzip2 file.txt             bunzip2 file.txt.bz2     Better ratio, slower
xz       xz file.txt                unxz file.txt.xz         Best ratio, slowest
zip      zip -r archive.zip dir/    unzip archive.zip        Cross-platform
7z       7z a archive.7z dir/       7z x archive.7z          High ratio, many formats

# Compress keeping original file
gzip -k large-file.log

# Set compression level (1=fast, 9=best)
gzip -9 data.csv

# Zip with password protection
zip -e -r secure.zip /sensitive/data/

# List zip contents
unzip -l archive.zip
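
A complete round trip ties these flags together. A minimal sketch with a hypothetical scratch file:

```shell
# Sketch: gzip round trip; -k keeps the original, -f overwrites an old .gz.
printf 'hello\n' > /tmp/gz-demo.txt
gzip -kf /tmp/gz-demo.txt        # creates gz-demo.txt.gz, keeps gz-demo.txt
gunzip -c /tmp/gz-demo.txt.gz    # prints: hello
```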

Compressed File Transfers Between Servers

# Compress and transfer in one step (no temp file)
tar -czf - /var/www/html | ssh user@server "cat > /backup/site.tar.gz"

# Transfer and extract on remote in one step
tar -czf - /opt/app | ssh user@server "cd /opt && tar -xzf -"

# Split large archives into parts
tar -czf - /large/data | split -b 100M - backup_part_

# Rejoin split archive
cat backup_part_* | tar -xzf -

# Compress with parallel processing (install: apt install pigz)
tar -cf - /data | pigz > data.tar.gz

# Decompress with pigz
pigz -d data.tar.gz

12. System Monitoring

Proactive system monitoring helps you spot performance bottlenecks, memory leaks, and hardware problems before they cause outages.

# System uptime and load averages
uptime

# Memory usage (human-readable)
free -h

# Virtual memory statistics (every 2 seconds)
vmstat 2 5

# I/O statistics
iostat -x 2 5

# System activity report (CPU, memory, disk, network)
sar -u 2 5     # CPU usage
sar -r 2 5     # Memory usage
sar -d 2 5     # Disk activity

# Kernel ring buffer messages
dmesg | tail -20
dmesg -T | grep -i error

# System information
uname -a
hostnamectl

# CPU information
lscpu
cat /proc/cpuinfo | grep "model name" | head -1

# Memory information
cat /proc/meminfo | head -5

Real-Time Monitoring Commands

Command               Description
top / htop            Interactive process and resource monitoring
vmstat 1              Virtual memory, CPU, and I/O stats every second
iostat -x 1           Extended disk I/O stats every second
sar -n DEV 1          Network interface stats every second
watch -n 1 "df -h"    Refresh and display disk usage every second
dstat                 All-in-one system resource statistics
nmon                  Performance monitoring and analysis tool

# Watch a command output in real time (updates every 2s)
watch "ss -tlnp"

# Monitor log file in real time
tail -f /var/log/syslog

# Monitor multiple log files
tail -f /var/log/nginx/access.log /var/log/nginx/error.log

# Quick system health check script
echo "=== Uptime ===" && uptime
echo "=== Memory ===" && free -h
echo "=== Disk ===" && df -h /
echo "=== Load ===" && cat /proc/loadavg
echo "=== Top Processes ===" && ps aux --sort=-%cpu | head -5

Performance Analysis and Troubleshooting

# Show top CPU-consuming processes
ps aux --sort=-%cpu | head -10

# Show top memory-consuming processes
ps aux --sort=-%mem | head -10

# Trace system calls of a process
strace -p 12345 -e trace=network
strace -c ./my-program      # Summary of syscalls

# Trace library calls
ltrace ./my-program

# Show open files for a process
lsof -p 12345

# Show files opened by a user
lsof -u www-data

# Find which process is using a file
lsof /var/log/syslog

# Find processes using deleted files (disk not freed)
lsof +L1

# Check OOM (Out of Memory) kills
dmesg | grep -i "out of memory"
grep -i "killed process" /var/log/syslog

# Network connections by process
sudo ss -tnp | awk '{print $6}' | sort | uniq -c | sort -rn

Log Management

# View system log
journalctl -xe

# View boot messages
journalctl -b -1    # Previous boot

# Check disk usage of logs
journalctl --disk-usage

# Vacuum old logs (keep last 500MB)
sudo journalctl --vacuum-size=500M

# Vacuum old logs (keep last 7 days)
sudo journalctl --vacuum-time=7d

# Rotate logs manually
sudo logrotate -f /etc/logrotate.conf

# Monitor multiple logs simultaneously
multitail /var/log/nginx/access.log /var/log/nginx/error.log

13. Security Commands

Security is not optional. These commands help you configure firewalls, manage SSH keys, and protect systems from unauthorized access.

Firewall Management

# UFW (Uncomplicated Firewall) — Ubuntu/Debian
sudo ufw enable
sudo ufw status verbose
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw deny 3306/tcp
sudo ufw allow from 10.0.0.0/8 to any port 22
sudo ufw delete allow 80/tcp

# iptables — traditional firewall
# List current rules
sudo iptables -L -n -v

# Allow incoming SSH
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# Allow incoming HTTP/HTTPS
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT

# Drop all other incoming traffic
sudo iptables -A INPUT -j DROP

# Save iptables rules
sudo iptables-save > /etc/iptables.rules

# nftables — modern replacement for iptables
sudo nft list ruleset
sudo nft add rule inet filter input tcp dport 22 accept

SSH Key Management

# Generate Ed25519 key (recommended)
ssh-keygen -t ed25519 -C "your@email.com"

# Generate RSA key (4096-bit)
ssh-keygen -t rsa -b 4096 -C "your@email.com"

# Copy public key to server
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@server

# Manual key copy (if ssh-copy-id unavailable)
cat ~/.ssh/id_ed25519.pub | ssh user@server "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"

# Set correct permissions
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
chmod 600 ~/.ssh/id_ed25519
chmod 644 ~/.ssh/id_ed25519.pub

# Disable password authentication (edit sshd_config)
# PasswordAuthentication no
# PubkeyAuthentication yes
# Then restart: sudo systemctl restart sshd

Intrusion Prevention with fail2ban

# Install fail2ban
sudo apt install fail2ban

# Start and enable
sudo systemctl start fail2ban
sudo systemctl enable fail2ban

# Check status
sudo fail2ban-client status
sudo fail2ban-client status sshd

# Unban an IP
sudo fail2ban-client set sshd unbanip 192.168.1.100

# Custom jail configuration (/etc/fail2ban/jail.local)
# [sshd]
# enabled = true
# port = 22
# filter = sshd
# logpath = /var/log/auth.log
# maxretry = 3
# bantime = 3600
# findtime = 600

GPG Encryption

# Generate GPG key pair
gpg --full-generate-key

# List keys
gpg --list-keys
gpg --list-secret-keys

# Export public key
gpg --armor --export your@email.com > public.key

# Import someone else's public key
gpg --import their-key.pub

# Encrypt a file for a recipient
gpg --encrypt --recipient their@email.com secret.txt

# Decrypt a file
gpg --decrypt secret.txt.gpg > secret.txt

# Sign a file
gpg --sign document.pdf

# Verify a signature
gpg --verify document.pdf.gpg
Security best practices: (1) Always use SSH keys instead of passwords. (2) Configure fail2ban to block brute-force attempts. (3) Open only the necessary ports with ufw/iptables. (4) Update system packages regularly. (5) Disable root SSH login.

System Hardening Checklist

Action                               Command / Setting
Disable root SSH login               PermitRootLogin no in sshd_config
Change the default SSH port          Port 2222 in sshd_config
Enable the firewall                  sudo ufw enable
Install fail2ban                     sudo apt install fail2ban
Enable automatic security updates    sudo apt install unattended-upgrades
Disable password authentication      PasswordAuthentication no
Set a login timeout                  ClientAliveInterval 300
Restrict sudo users                  sudo visudo

Security Audits and Checks

# Find files with SUID/SGID permissions
find / -type f \( -perm -4000 -o -perm -2000 \) -ls 2>/dev/null

# Find world-writable files
find / -type f -perm -o+w -ls 2>/dev/null

# Find files with no owner
find / -nouser -o -nogroup 2>/dev/null

# Check for empty passwords
sudo awk -F: '($2 == "" ) {print $1}' /etc/shadow

# List users with UID 0 (root-equivalent)
awk -F: '($3 == 0) {print $1}' /etc/passwd

# Check open ports
sudo ss -tlnp
sudo lsof -i -P -n | grep LISTEN

# View failed login attempts
sudo lastb | head -20
sudo grep "Failed password" /var/log/auth.log | tail -20

# Check active SSH sessions
who
w
sudo ss -tnp | grep :22

# Scan for rootkits (install: apt install rkhunter)
sudo rkhunter --check

# Check file integrity (install: apt install aide)
sudo aide --check

SSL/TLS Certificate Management

# Check SSL certificate of a website
openssl s_client -connect example.com:443 -servername example.com < /dev/null 2>/dev/null | openssl x509 -noout -dates

# View certificate details
openssl x509 -in cert.pem -text -noout

# Generate self-signed certificate
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes

# Check certificate expiry with curl
curl -vI https://example.com 2>&1 | grep "expire date"

# Let's Encrypt with certbot
sudo certbot --nginx -d example.com -d www.example.com
sudo certbot renew --dry-run

# Check certificate chain
openssl s_client -connect example.com:443 -showcerts

Bonus: Essential One-Liners

These handy one-liners cover the most common tasks in day-to-day administration.

# Find and replace in multiple files
find . -name "*.conf" -exec sed -i 's/old-domain/new-domain/g' {} +

# Kill all processes matching a pattern
pkill -f "pattern"

# Show directory sizes sorted by size
du -h --max-depth=1 | sort -rh

# Monitor file changes in real time (install: apt install inotify-tools)
inotifywait -m -r /etc/

# Quick HTTP server (Python 3)
python3 -m http.server 8080

# Generate random password
openssl rand -base64 32
tr -dc 'A-Za-z0-9!@#$%' < /dev/urandom | head -c 24

# Show calendar
cal
cal 2026

# Convert epoch timestamp to date
date -d @1709251200

# Count files in directory
find /var/log -type f | wc -l

# List top 10 largest files
find / -type f -exec du -h {} + 2>/dev/null | sort -rh | head -10

# Watch disk I/O in real time
sudo iotop

# Show all cron jobs for all users (requires root)
for user in $(cut -f1 -d: /etc/passwd); do sudo crontab -u "$user" -l 2>/dev/null; done

# Batch rename files (rename .txt to .md)
for f in *.txt; do mv "$f" "${f%.txt}.md"; done

# Create backup with timestamp
tar -czf "backup_$(date +%Y%m%d_%H%M%S).tar.gz" /var/www/
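
The timestamped backup one-liner accumulates archives forever. A sketch that adds simple retention; the function name and the default of 7 kept archives are illustrative:

```shell
#!/usr/bin/env bash
# Sketch: timestamped tar backup of $1 into $2, keeping only the newest $3 archives.
backup_with_retention() {
  local src="$1" dest="$2" keep="${3:-7}"
  tar -czf "$dest/backup_$(date +%Y%m%d_%H%M%S).tar.gz" \
      -C "$(dirname "$src")" "$(basename "$src")"
  # Archive names sort chronologically, so "sort -r" puts the newest first;
  # everything after the first $keep entries is deleted.
  ls -1 "$dest"/backup_*.tar.gz 2>/dev/null | sort -r | tail -n +$((keep + 1)) | xargs -r rm -f
}
```

Example: `backup_with_retention /var/www /backups 7` keeps a rolling week of archives.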

Environment Variables

# View all environment variables
env
printenv

# Set variable for current session
export MY_VAR="hello"

# Set variable permanently (add to ~/.bashrc or ~/.profile)
echo 'export MY_VAR="hello"' >> ~/.bashrc
source ~/.bashrc

# Unset a variable
unset MY_VAR

# Common environment variables
# Common environment variables
echo $HOME        # User home directory
echo $USER        # Current username
echo $PATH        # Executable search path
echo $SHELL       # Current shell
echo $PWD         # Current directory
echo $EDITOR      # Default text editor

# Add to PATH
export PATH="$HOME/.local/bin:$PATH"
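
Re-sourcing ~/.bashrc can stack duplicate PATH entries. A small sketch that prepends a directory only when it is missing (`path_add` is an illustrative name):

```shell
# Sketch: prepend a directory to PATH only if it is not already present.
path_add() {
  case ":$PATH:" in
    *":$1:"*) ;;              # already in PATH, do nothing
    *) PATH="$1:$PATH" ;;     # otherwise prepend
  esac
}
```

Put the function in ~/.bashrc and call it as `path_add "$HOME/.local/bin"`; re-sourcing the file is then harmless.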

Cron Job Scheduling

# Edit crontab for current user
crontab -e

# List crontab entries
crontab -l

# Cron expression format:
# MIN  HOUR  DAY  MONTH  WEEKDAY  COMMAND
# 0-59 0-23  1-31 1-12   0-7 (0 and 7 both mean Sunday)

# Every day at 2:30 AM
# 30 2 * * * /opt/scripts/backup.sh

# Every 15 minutes
# */15 * * * * /opt/scripts/health-check.sh

# Every Monday at 9 AM
# 0 9 * * 1 /opt/scripts/weekly-report.sh

# First day of every month at midnight
# 0 0 1 * * /opt/scripts/monthly-cleanup.sh

# Log cron output
# 0 3 * * * /opt/scripts/backup.sh >> /var/log/backup.log 2>&1
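
Rather than repeating the `>> logfile 2>&1` redirection on every crontab line, a common pattern is a logging wrapper; a hedged sketch (`run_logged` and the paths in the comments are illustrative):

```shell
#!/usr/bin/env bash
# Sketch: wrapper for cron jobs that timestamps each run and captures all output.
# Illustrative crontab line:
#   0 3 * * * /opt/scripts/cron-wrap.sh  (which calls run_logged /var/log/backup.log /opt/scripts/backup.sh)
run_logged() {
  local log="$1"; shift
  {
    echo "=== $(date '+%Y-%m-%d %H:%M:%S') running: $*"
    "$@"
    echo "=== exit status: $?"
  } >> "$log" 2>&1
}
```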

tmux Terminal Multiplexer

# Start new session
tmux new -s mysession

# Detach from session: Ctrl+B then D

# List sessions
tmux ls

# Attach to session
tmux attach -t mysession

# Kill session
tmux kill-session -t mysession

# Split pane horizontally: Ctrl+B then "
# Split pane vertically: Ctrl+B then %
# Switch pane: Ctrl+B then arrow key
# Resize pane: Ctrl+B then hold arrow key
# Close pane: exit or Ctrl+D

# Create new window: Ctrl+B then C
# Switch window: Ctrl+B then window number
# Rename window: Ctrl+B then ,

# Scroll mode: Ctrl+B then [ (use arrows, q to exit)

Quick Reference Table

A cheat sheet of the most frequently used commands, grouped by category.

Category    Command                  Description
Navigation  ls -la                   List all files (including hidden) with details
Navigation  find . -name "*.log"     Recursively search for matching files
Files       chmod 755 script.sh      Set file permissions
Files       chown user:group file    Change file ownership
Text        grep -rn "pattern" .     Recursive search with line numbers
Text        awk '{print $1}'         Extract the first column
Processes   ps aux | grep name       Find a process by name
Processes   systemctl restart svc    Restart a system service
Network     ss -tlnp                 Show listening ports
Network     rsync -avz src/ dst/     Incremental file sync
Disk        df -h                    Show disk usage
Security    ufw allow 22/tcp         Allow the SSH port through the firewall

Summary

This guide covered the core Linux commands every developer needs. Bookmark it as a reference and practice these commands regularly: the more you use the command line, the more efficient you become. For hands-on practice, set up a virtual machine or use our online tools to experiment safely.

Frequently Asked Questions

Which Linux commands should I learn first?
Start with navigation commands: ls, cd, and pwd. Then learn file operations: cp, mv, rm, and mkdir. These form the foundation for everything else. Once comfortable, learn grep and pipes (|), which will dramatically improve your productivity.
What is the difference between apt and yum?
apt is the package manager for Debian-based distributions (Ubuntu, Debian, Linux Mint), while yum (and its successor dnf) serves Red Hat-based distributions (RHEL, CentOS, Fedora). They do the same job with different syntax; for example, "apt install nginx" corresponds to "yum install nginx".
How do I find which process is using a specific port?
Use "ss -tlnp | grep :PORT" or "lsof -i :PORT" to find the process listening on a given port. For example, "ss -tlnp | grep :80" shows the process using port 80. On older systems you can also use "netstat -tlnp | grep :PORT".
What is the difference between grep, sed, and awk?
grep searches for patterns and prints matching lines. sed is a stream editor for transforming text (find and replace, deleting lines, inserting text). awk is a complete programming language for text processing that excels at columnar data. Use grep to find, sed to replace, and awk to extract and compute.
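
That division of labour is easy to see on a tiny sample (the data below is made up for illustration):

```shell
# Sample data: three name/score lines
data=$'alice 42\nbob 7\ncarol 99'

echo "$data" | grep 'a'                            # grep: print lines matching a pattern
echo "$data" | sed 's/bob/BOB/'                    # sed: transform the stream
echo "$data" | awk '{sum += $2} END {print sum}'   # awk: sum column 2 (prints 148)
```
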
How do I keep a command running in the background after logging out?
Use nohup with &: "nohup ./script.sh &". This redirects output to nohup.out and detaches the process from the terminal. For more control, use a screen or tmux session, or create a systemd service for production workloads.
What is the safest way to delete files in Linux?
Always use "rm -i" for interactive deletion, which asks for confirmation. Never run "rm -rf /" or "rm -rf *" without first confirming your current directory with pwd. Consider trash-cli instead of rm for recoverable deletion, and use absolute paths in scripts to avoid accidents.
How do I check disk space usage on Linux?
Use "df -h" for filesystem-level usage in human-readable format, and "du -sh /path" for the size of a specific directory. To find the largest files, use "du -ah /path | sort -rh | head -20". The ncdu tool provides an interactive disk usage analyzer.
How do I set up SSH key authentication?
Generate a key pair with "ssh-keygen -t ed25519". Copy the public key to the server with "ssh-copy-id user@server". Then set "PasswordAuthentication no" in /etc/ssh/sshd_config to disable password authentication and restart sshd. This is far more secure than password-based logins.
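
The key-generation step above can be wrapped in a small helper; a hedged sketch in which `make_ssh_key` and the comment text are illustrative, not a standard tool:

```shell
# Sketch: generate an ed25519 key pair non-interactively; the key path is a parameter.
make_ssh_key() {
  local key="$1"
  # Skip if a key already exists at that path (empty passphrase for illustration only)
  [ -f "$key" ] || ssh-keygen -t ed25519 -f "$key" -N "" -C "example-key" -q
}

# Typical flow (host name is a placeholder):
#   make_ssh_key ~/.ssh/id_ed25519
#   ssh-copy-id -i ~/.ssh/id_ed25519.pub user@server
# Then set "PasswordAuthentication no" in /etc/ssh/sshd_config and restart sshd.
```
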
