
Linux Commands Guide: Essential Command Line Reference for Developers

14 min read · by DevToolBox


A comprehensive reference for Linux command line tools. Covers file system navigation, text processing, process management, networking, permissions, SSH, and shell productivity tips with real examples.

TL;DR

This guide covers the essential Linux commands every developer needs — from navigating the filesystem and managing files, to processing text, managing processes, configuring permissions, and working over SSH. Each section includes real command examples with flags explained and developer-focused tips.

Key Takeaways

  • find and grep are the most versatile search tools — master their flags early
  • Use pipes (|) to chain commands and build powerful one-liners
  • chmod, chown control access — understand octal vs symbolic modes
  • ps aux, top, and kill are your process management trio
  • SSH key-based auth + ~/.ssh/config makes remote work seamless
  • alias and shell functions save hours of repetitive typing
  • rsync is superior to scp for large or incremental file transfers
  • Always test destructive commands with --dry-run or echo first

Introduction

The Linux command line is one of the most powerful tools in a developer's arsenal. Whether you're deploying applications to production servers, automating build pipelines, debugging issues, or managing files efficiently, mastery of the terminal separates productive developers from those who struggle with their tools.

This guide serves as a comprehensive reference covering the commands you'll use daily. Each command is shown with practical flags and real-world examples. Rather than simply listing syntax, we explain when and why to use each command, with a focus on developer workflows.

1. File System Navigation

Understanding how to move around the Linux filesystem efficiently is the foundation of everything else. These commands help you orient yourself and locate files.

ls — List Directory Contents

ls is the most frequently used command. It lists files and directories in the current or specified path.

# Basic listing
ls
# Long format with permissions, owner, size, date
ls -l
# Include hidden files (dotfiles)
ls -la
# Human-readable file sizes
ls -lh
# Sort by modification time, newest first
ls -lt
# Reverse order (oldest first)
ls -ltr
# List specific directory
ls -la /var/log
# Recursive listing
ls -R /etc/nginx

cd — Change Directory

# Go to home directory
cd
cd ~
# Go to previous directory
cd -
# Go up one level
cd ..
# Go up two levels
cd ../..
# Absolute path
cd /var/www/html
# Relative path
cd ../projects/myapp

pwd — Print Working Directory

# Print current directory
pwd
# Resolve symlinks (show physical path)
pwd -P

find — Search for Files

find is extremely powerful for locating files based on name, type, size, date, permissions, and more.

# Find by name (exact)
find /home -name "config.json"
# Find by name (case-insensitive)
find . -iname "readme*"
# Find all .log files
find /var/log -name "*.log"
# Find files modified in the last 7 days
find . -mtime -7
# Find files larger than 100MB
find / -size +100M
# Find empty files
find . -empty -type f
# Find and execute: delete all .tmp files
find . -name "*.tmp" -exec rm {} \;
# Find directories only
find . -type d -name "node_modules"
# Find and print with details
find . -name "*.py" -ls
# Exclude a directory from search
find . -path ./node_modules -prune -o -name "*.js" -print

locate — Fast File Lookup

# Locate a file (uses prebuilt index)
locate nginx.conf
# Case-insensitive
locate -i myfile
# Limit results
locate -n 10 "*.conf"
# Update the database (run as root)
sudo updatedb

Command   Speed      Real-time        Best For
find      Slower     Yes              Complex criteria, new files, scripts
locate    Very fast  No (uses index)  Quick system-wide name search
which     Instant    Yes              Finding executable path
whereis   Fast       Yes              Binary + man page locations

2. File Operations

cp — Copy Files and Directories

# Copy a file
cp source.txt destination.txt
# Copy to directory
cp file.txt /tmp/
# Copy directory recursively
cp -r mydir/ /backup/mydir/
# Preserve timestamps and permissions
cp -p file.txt /backup/
# Interactive (prompt before overwrite)
cp -i source.txt dest.txt
# Verbose output
cp -v *.conf /etc/nginx/conf.d/
# Copy multiple files
cp file1.txt file2.txt /destination/

mv — Move or Rename

# Rename a file
mv oldname.txt newname.txt
# Move to directory
mv file.txt /tmp/
# Move directory
mv mydir/ /var/www/
# Interactive (prompt before overwrite)
mv -i source.txt dest.txt
# Move multiple files
mv *.log /var/log/archive/

rm — Remove Files and Directories

# Remove a file
rm file.txt
# Interactive (prompt for each file)
rm -i file.txt
# Remove directory recursively (CAREFUL!)
rm -rf mydir/
# Remove with verbose output
rm -v *.tmp
# Force removal (ignore nonexistent files)
rm -f file.txt
# Tip: always dry-run with echo first
echo rm -rf /some/path

mkdir — Create Directories

# Create a directory
mkdir myproject
# Create with parents (no error if exists)
mkdir -p /var/log/myapp/2024
# Create with specific permissions
mkdir -m 755 public_dir
# Create multiple directories
mkdir src tests docs

touch — Create Files / Update Timestamps

# Create empty file
touch newfile.txt
# Create multiple files
touch file1.txt file2.txt file3.txt
# Update access and modification time
touch existingfile.txt
# Set specific timestamp
touch -t 202401011200 file.txt

ln — Create Links

# Hard link (same inode, same filesystem)
ln original.txt hardlink.txt
# Symbolic (soft) link
ln -s /path/to/original symlink
# Symbolic link to directory
ln -s /var/www/html /home/user/www
# Overwrite existing symlink
ln -sf /new/target existing_symlink
# List symlinks
ls -la | grep "->"
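To see the hard-link/symlink distinction in practice, compare inode numbers with ls -i. A small sketch (file names are illustrative, and it works in a throwaway directory):

```shell
# Work in a scratch directory so nothing real is touched
cd "$(mktemp -d)"

echo "hello" > original.txt
ln original.txt hardlink.txt     # hard link: another name for the same inode
ln -s original.txt symlink.txt   # symlink: a small file storing the target path

# The hard link shares the original's inode number
orig_inode=$(ls -i original.txt | awk '{print $1}')
hard_inode=$(ls -i hardlink.txt | awk '{print $1}')
[ "$orig_inode" = "$hard_inode" ] && echo "hard link shares inode $orig_inode"

# Removing the original leaves the hard link intact but dangles the symlink
rm original.txt
cat hardlink.txt         # still prints "hello"
readlink symlink.txt     # prints "original.txt", a name that no longer exists
```

This is why hard links survive deletion of the original name (the data lives as long as any link to the inode exists), while symlinks break.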

3. Text Processing

Linux text processing tools are among the most powerful in the ecosystem. Chained together with pipes, they form a complete data transformation pipeline.

cat — Concatenate and Display Files

# Display file content
cat file.txt
# Display with line numbers
cat -n file.txt
# Concatenate multiple files
cat file1.txt file2.txt > combined.txt
# Create file with content (end with Ctrl+D)
cat > newfile.txt
# Append to file
cat >> file.txt

grep — Search Text Patterns

# Search for pattern
grep "error" logfile.txt
# Case-insensitive search
grep -i "Error" logfile.txt
# Recursive search in directory
grep -r "TODO" ./src/
# Show line numbers
grep -n "function" app.js
# Show only filenames with matches
grep -l "import React" src/**/*.tsx
# Invert match (lines NOT containing pattern)
grep -v "debug" logfile.txt
# Count matching lines
grep -c "ERROR" logfile.txt
# Extended regex
grep -E "error|warning|critical" app.log
# Context: show 3 lines before and after match
grep -C 3 "NullPointerException" error.log
# Match whole words only
grep -w "cat" file.txt
# Highlight matches
grep --color=auto "pattern" file.txt

sed — Stream Editor

# Replace first occurrence per line
sed 's/old/new/' file.txt
# Replace all occurrences (global)
sed 's/old/new/g' file.txt
# Edit file in-place
sed -i 's/old/new/g' file.txt
# Create backup before editing
sed -i.bak 's/localhost/production.db/g' config.env
# Delete lines matching pattern
sed '/^#/d' config.txt
# Delete blank lines
sed '/^$/d' file.txt
# Print specific line numbers
sed -n '10,20p' file.txt
# Insert line after pattern
sed '/^server/a listen 443 ssl;' nginx.conf

awk — Pattern Processing

# Print second column of CSV
awk -F',' '{print $2}' data.csv
# Print lines where 3rd column > 100
awk '$3 > 100' data.txt
# Sum column values
awk '{sum += $3} END {print sum}' data.txt
# Print filename and line count
awk 'END {print FILENAME, NR}' file.txt
# Process /etc/passwd: print username and shell
awk -F: '{print $1, $7}' /etc/passwd
# Multiple operations
awk '{gsub(/foo/, "bar"); print}' file.txt
# Print lines between two patterns
awk '/START/,/END/' file.txt

cut, sort, uniq, wc

# cut: extract fields
cut -d',' -f1,3 data.csv   # fields 1 and 3
cut -c1-10 file.txt        # first 10 characters

# sort: sort lines
sort file.txt         # alphabetical
sort -n numbers.txt   # numeric
sort -r file.txt      # reverse
sort -k2 data.txt     # sort by column 2
sort -u file.txt      # unique (remove dupes)

# uniq: filter duplicate lines (requires sorted input)
sort file.txt | uniq      # remove duplicates
sort file.txt | uniq -c   # count occurrences
sort file.txt | uniq -d   # show only duplicates

# wc: word/line/character count
wc -l file.txt   # line count
wc -w file.txt   # word count
wc -c file.txt   # byte count
wc file.txt      # all: lines words bytes
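Chained with pipes, these four tools already form a useful analysis pipeline. A small sketch (the input text is made up) that builds a word-frequency table, most frequent word first:

```shell
printf 'apple banana apple\ncherry banana apple\n' |
  tr ' ' '\n' |   # one word per line
  sort |          # uniq only collapses adjacent duplicates, so sort first
  uniq -c |       # prefix each distinct word with its count
  sort -rn        # numeric sort, descending: most frequent on top
# First line of output: "3 apple" (uniq -c left-pads the count)
```

The same sort | uniq -c | sort -rn tail shows up constantly in log analysis, for example counting requests per IP address.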

4. Process Management

ps — Process Status

# All processes with full info
ps aux
# Show processes for current user
ps -u $USER
# Tree view showing parent/child
ps -ejH
pstree
# Find process by name
ps aux | grep nginx
# Sort by CPU usage
ps aux --sort=-%cpu | head -10
# Sort by memory usage
ps aux --sort=-%mem | head -10

top and htop — Real-Time Monitor

# Launch top (interactive)
top
# top keyboard shortcuts:
#   q = quit
#   k = kill process (enter PID)
#   M = sort by memory
#   P = sort by CPU
#   u = filter by user

# htop (enhanced, more user-friendly)
htop

# Top for specific user
top -u www-data
# Non-interactive (batch mode), 1 iteration
top -bn1 | grep "Cpu(s)"

kill — Terminate Processes

# Graceful terminate (SIGTERM, default)
kill PID
# Force kill (SIGKILL — cannot be caught)
kill -9 PID
# Kill by name
pkill nginx
killall node
# Force kill by name
pkill -9 python
# Send signal to all matching processes
pkill -HUP nginx   # reload nginx config
# Find PID first
pgrep -l node
# Kill all processes of a user
pkill -u username
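The SIGTERM/SIGKILL difference is observable: a process may trap or ignore SIGTERM, but SIGKILL is enforced by the kernel and cannot be caught. A minimal sketch (the sleep durations are arbitrary slack for process startup):

```shell
# Start a child that ignores SIGTERM
# (an ignored disposition survives exec, so this works even if sh execs sleep)
sh -c 'trap "" TERM; sleep 30' &
pid=$!
sleep 1                         # give the child time to start

kill "$pid"                     # SIGTERM: ignored by the child
sleep 1
kill -0 "$pid" 2>/dev/null && echo "still alive after SIGTERM"

kill -9 "$pid"                  # SIGKILL: cannot be trapped or ignored
wait "$pid" 2>/dev/null         # reap the child so the PID is truly gone
kill -0 "$pid" 2>/dev/null || echo "gone after SIGKILL"
```

This is also why kill -9 should be a last resort: the process gets no chance to flush buffers, remove lock files, or otherwise clean up.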

jobs, bg, fg, nohup

# List background jobs
jobs
# Suspend current foreground process
Ctrl+Z
# Resume in background
bg %1
# Bring to foreground
fg %1
# Run in background from start
command &
# Run immune to hangup signal (survives logout)
nohup ./server.sh &
# Disown a running job (detach from shell)
disown %1
# Run with output to file
nohup python app.py > app.log 2>&1 &

5. Permissions and Ownership

chmod — Change File Mode

Linux permissions use a 3-tier system: owner, group, and others. Each tier can be granted read (4), write (2), and execute (1) permissions, and the octal digit for a tier is simply their sum (e.g. read + execute = 5).

# Symbolic mode
chmod +x script.sh       # add execute for all
chmod -w file.txt        # remove write for all
chmod u+x,g-w file.txt   # add exec for user, remove write for group
chmod a+r file.txt       # add read for all

# Octal mode (most common)
chmod 755 script.sh       # rwxr-xr-x (owner rwx, group rx, others rx)
chmod 644 config.txt      # rw-r--r-- (owner rw, group r, others r)
chmod 600 ~/.ssh/id_rsa   # rw------- (owner only)
chmod 777 public_dir      # rwxrwxrwx (AVOID in production)

# Recursive
chmod -R 755 /var/www/html

# Common patterns:
#   755 — directories, executables
#   644 — regular files
#   600 — private keys, sensitive config
#   700 — private directories

Octal  Symbolic    Meaning                       Common Use
777    rwxrwxrwx   All can read/write/exec       Avoid in production
755    rwxr-xr-x   Owner full, others read+exec  Directories, scripts
644    rw-r--r--   Owner rw, others read-only    Config files, web assets
600    rw-------   Owner only                    SSH keys, secrets
400    r--------   Owner read-only               Certificates
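After a chmod, you can confirm the resulting mode without decoding ls -l output by asking stat for the octal value directly. This uses GNU coreutils syntax; on macOS/BSD the equivalent is stat -f '%Lp':

```shell
f=$(mktemp)                # scratch file for the demonstration
chmod 640 "$f"
stat -c '%a %A' "$f"       # prints: 640 -rw-r-----
rm -f "$f"
```

This is handy in deploy scripts that want to assert a key file is 600 before using it.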

chown and chgrp — Change Ownership

# Change owner
chown newuser file.txt
# Change owner and group
chown newuser:newgroup file.txt
# Change group only
chgrp www-data /var/www/html
# Recursive ownership change
chown -R www-data:www-data /var/www/html
# Preserve root (prevent accidental root recursion)
chown --preserve-root -R user:group /dir
# Check current ownership
ls -la file.txt
stat file.txt

6. Network Commands

curl — Transfer Data with URLs

# GET request
curl https://api.example.com/users
# POST with JSON body
curl -X POST https://api.example.com/users \
  -H "Content-Type: application/json" \
  -d '{"name":"Alice","email":"alice@example.com"}'
# Send Authorization header
curl -H "Authorization: Bearer TOKEN" https://api.example.com/me
# Download file
curl -O https://example.com/file.tar.gz
# Download with custom filename
curl -o output.tar.gz https://example.com/archive.tar.gz
# Follow redirects
curl -L https://example.com
# Show response headers
curl -I https://example.com
# Verbose output (for debugging)
curl -v https://example.com
# Upload file
curl -F "file=@/path/to/file.txt" https://upload.example.com
# Resume download
curl -C - -O https://example.com/bigfile.iso

wget — Download Files

# Download file
wget https://example.com/file.tar.gz
# Download in background
wget -b https://example.com/bigfile.iso
# Resume interrupted download
wget -c https://example.com/file.tar.gz
# Mirror website
wget -r -np https://example.com/docs/
# Limit download speed
wget --limit-rate=1m https://example.com/file.iso
# Download multiple URLs from file
wget -i urls.txt

ping, netstat, ss, ip

# ping: test connectivity
ping google.com
ping -c 4 8.8.8.8   # send 4 packets
ping -i 0.5 host    # 0.5s interval

# netstat: network connections (older)
netstat -tuln             # listening ports
netstat -anp | grep :80   # connections on port 80

# ss: modern replacement for netstat
ss -tuln                    # listening TCP/UDP
ss -tp                      # show process names
ss -s                       # summary statistics
ss -tnp state established   # established connections

# ip: modern network interface tool
ip addr show          # show all interfaces
ip addr show eth0     # specific interface
ip route show         # routing table
ip link set eth0 up   # bring interface up (use "down" to disable)
ip -s link            # interface statistics

# ifconfig (legacy but still common)
ifconfig
ifconfig eth0

7. Disk and Memory Management

df — Disk Filesystem Usage

# Human-readable sizes
df -h
# Specific filesystem
df -h /var
# Include filesystem type
df -Th
# Show inode usage
df -i
# Show only local filesystems
df -hl

du — Disk Usage of Files/Directories

# Summary of current directory
du -sh .
# Summary of specific directory
du -sh /var/log
# All items in current directory, sorted
du -sh * | sort -h
# Top 10 largest directories
du -h /var | sort -rh | head -10
# Exclude directory
du -sh --exclude=node_modules /myproject
# Show at depth 1
du -h --max-depth=1 /home

free — Memory Usage

# Human-readable memory info
free -h
# In megabytes
free -m
# Show total line
free -ht
# Watch memory continuously
watch -n 1 free -h
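free's fixed columns parse cleanly with awk, which makes it easy to script simple health checks. A hedged sketch that warns when available memory is low — the "available" column (field 7 of the Mem: row) exists in reasonably recent procps-ng, and the 512 MB threshold is an arbitrary example:

```shell
# "available" column of the Mem: row, in MB
avail_mb=$(free -m | awk '/^Mem:/ {print $7}')

threshold=512   # arbitrary example threshold, in MB
if [ "$avail_mb" -lt "$threshold" ]; then
  echo "WARNING: only ${avail_mb}MB available"
else
  echo "OK: ${avail_mb}MB available"
fi
```

Dropped into a cron job, a check like this is a lightweight alternative to a full monitoring agent on small servers.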

lsblk — List Block Devices

# List all block devices
lsblk
# Show filesystem type and mount points
lsblk -f
# Show device topology (I/O sizes, scheduler)
lsblk -t

8. Package Management

apt — Debian / Ubuntu

# Update package list
sudo apt update
# Upgrade all packages
sudo apt upgrade
# Install a package
sudo apt install nginx
# Remove a package
sudo apt remove nginx
# Remove with config files
sudo apt purge nginx
# Search for packages
apt search keyword
# Show package info
apt show nginx
# List installed packages
apt list --installed
# Clean package cache
sudo apt autoremove && sudo apt clean

yum / dnf — RHEL / CentOS / Fedora

# Update all packages
sudo yum update
# Install package
sudo yum install httpd
# Remove package
sudo yum remove httpd
# Search packages
yum search nginx
# Show package info
yum info nginx

# dnf (modern replacement for yum)
sudo dnf install nginx
sudo dnf update
sudo dnf remove nginx

brew — macOS Homebrew

# Update Homebrew
brew update
# Install package
brew install node
# Upgrade package
brew upgrade node
# Upgrade all
brew upgrade
# Remove package
brew uninstall node
# Search packages
brew search redis
# List installed
brew list
# Show info
brew info nginx

9. SSH and Remote Access

ssh — Secure Shell

# Basic connection
ssh user@hostname
ssh user@192.168.1.100
# Specify port
ssh -p 2222 user@host
# Use identity file (private key)
ssh -i ~/.ssh/id_rsa user@host
# Run a command remotely
ssh user@host "ls -la /var/www"
# X11 forwarding (GUI apps)
ssh -X user@host
# SSH tunneling (local port forward)
ssh -L 8080:localhost:80 user@host
# Reverse tunnel
ssh -R 9090:localhost:3000 user@host
# Keep connection alive
ssh -o ServerAliveInterval=60 user@host

SSH Config File

Create ~/.ssh/config for convenient aliases and settings:

# ~/.ssh/config
Host myserver
    HostName 192.168.1.100
    User deploy
    Port 2222
    IdentityFile ~/.ssh/deploy_key
    ServerAliveInterval 60

Host staging
    HostName staging.example.com
    User ubuntu
    ForwardAgent yes

# Now connect with:
ssh myserver
ssh staging

scp — Secure Copy

# Copy file to remote
scp local_file.txt user@host:/remote/path/
# Copy file from remote
scp user@host:/remote/file.txt ./local/
# Copy directory recursively
scp -r ./mydir user@host:/home/user/
# Specify port
scp -P 2222 file.txt user@host:/tmp/

rsync — Efficient File Sync

rsync only transfers changed files, making it far more efficient than scp for large or repeated transfers.

# Sync local to remote
rsync -avz ./src/ user@host:/var/www/app/
# Sync remote to local
rsync -avz user@host:/var/www/ ./backup/
# Delete files on destination not in source
rsync -avz --delete ./src/ user@host:/var/www/
# Dry run (show what would be transferred)
rsync -avzn ./src/ user@host:/var/www/
# Exclude directory
rsync -avz --exclude='node_modules' ./src/ user@host:/app/
# Use custom SSH port
rsync -avz -e "ssh -p 2222" ./src/ user@host:/var/www/

# Flags explained:
#   -a  archive mode (recursive + preserve permissions)
#   -v  verbose
#   -z  compress during transfer
#   -n  dry run

10. Shell Productivity

alias — Command Shortcuts

# Create temporary alias
alias ll='ls -la'
alias gs='git status'
alias gp='git pull'
alias dc='docker-compose'

# Make permanent: add to ~/.bashrc or ~/.zshrc
echo "alias ll='ls -la'" >> ~/.bashrc
source ~/.bashrc

# Remove alias
unalias ll
# List all aliases
alias

# Common useful aliases
alias ..='cd ..'
alias ...='cd ../..'
alias grep='grep --color=auto'
alias mkdir='mkdir -p'
alias ports='netstat -tulanp'

history — Command History

# Show history
history
# Show last 20 commands
history 20
# Search history interactively
Ctrl+R   # then type to search
# Run command by number
!42
# Run last command
!!
# Run last command starting with string
!git
# Clear history
history -c
# Search history with grep
history | grep "docker"

Pipes and Redirects

# Pipe: send stdout to next command
ls -la | grep ".txt"
ps aux | grep nginx | grep -v grep
cat access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head -10

# Redirect stdout to file (overwrite)
ls -la > filelist.txt
# Append stdout to file
echo "new line" >> notes.txt
# Redirect stderr to file
command 2> errors.log
# Redirect both stdout and stderr
command > output.log 2>&1
# Discard output
command > /dev/null 2>&1
# Read from file
sort < unsorted.txt

# Here document
cat << EOF > config.txt
server=localhost
port=5432
EOF

# Tee: write to file AND stdout
command | tee output.txt
command | tee -a output.txt   # append

xargs — Build and Execute Commands

# Delete all .log files found
find /var/log -name "*.log" | xargs rm -f
# Run command for each line
cat urls.txt | xargs curl -O
# Parallel execution (-P)
cat files.txt | xargs -P 4 -I{} process_file {}
# Prompt before execution
echo "file.txt" | xargs -p rm
# Handle filenames with spaces (-0 with find -print0)
find . -name "*.txt" -print0 | xargs -0 wc -l

Useful One-Liners for Developers

# Find all TODO comments in codebase
grep -rn "TODO" ./src --include="*.ts"
# Count lines of code by extension
find . -name "*.js" | xargs wc -l | tail -1
# Watch a log file in real-time
tail -f /var/log/nginx/access.log
# Monitor file for changes
watch -n 2 ls -la /var/www/uploads/
# Show top 10 most-used commands
history | awk '{print $2}' | sort | uniq -c | sort -rn | head -10
# Kill all node processes
pkill -f node
# Find which process is using port 3000
lsof -i :3000
ss -tlnp | grep 3000
# Generate random password
openssl rand -base64 32
# Base64 encode/decode
echo "hello" | base64
echo "aGVsbG8=" | base64 -d
# Download and execute install script
curl -fsSL https://get.docker.com | sh
# Check if command exists
command -v docker && echo "Docker installed" || echo "Not found"

Frequently Asked Questions

How do I find a file by name in Linux?

Use the find command: find /path -name "filename.txt". For case-insensitive search use -iname. To search only in the current directory and its subdirectories: find . -name "*.log". The locate command is faster for system-wide searches after running updatedb.

What is the difference between cp and mv?

cp copies a file, leaving the original in place. mv moves or renames a file, removing it from the original location. Use cp -r for directories and mv when you need to rename without duplicating.

How do I check disk usage in Linux?

Use df -h to see filesystem-level usage. Use du -sh /path for a specific directory, or du -sh * to list all items in the current directory sorted by size.

How do I kill a process by name?

Use pkill processname or killall processname. To find the PID first: pgrep processname or ps aux | grep processname, then kill PID. Force kill: kill -9 PID or pkill -9 processname.

What does chmod 755 mean?

It sets permissions: the owner can read/write/execute (7=4+2+1), the group can read/execute (5=4+1), and others can read/execute. This is the standard permission for directories and executable scripts. Use chmod +x file to simply add execute permission without changing anything else.

How do I search for text inside files?

Use grep "pattern" filename. For recursive directory search: grep -r "pattern" /path. Add -n for line numbers, -i for case-insensitive, -l to list only filenames. Use grep -E for extended regular expressions.

How do I connect to a remote server using SSH?

Use ssh username@hostname or ssh username@IP. Specify a port with -p. For key-based auth: ssh -i ~/.ssh/id_rsa user@host. Create ~/.ssh/config with named host aliases to avoid repeating flags.

How do I use pipes and redirects?

Pipes (|) send the output of one command as input to the next. Redirect to file with > (overwrite) or >> (append). Redirect stderr with 2> and both stdout+stderr with > file 2>&1. Use tee to write to a file and continue the pipeline simultaneously.
