Linux Reference
🛠️ Operations Tools
🖥️ Command Generator Basics, Customization, Hotkeys & Vim
Built-In Guides - Man, Help, Apropos, Info
Basic command-line commands are used to navigate the file system, manage files and directories, and view and manipulate file contents. Commands like ls, cd, pwd, and echo list files, change directories, display the current working directory, and print system variables. Commands like mkdir, rm, cp, and mv create directories, remove files and directories, copy files, and move or rename them. File contents can be viewed and searched with cat, less, head, tail, and grep.
man command # Show manual for a command, e.g. man ls
apropos keyword # Search for commands by keyword, e.g. apropos nano
apropos "network configuration" # Search for commands by phrase
info command # Show information about a command
help command # Show help for a built-in command
ls # List files and directories
ls -l # List files with details
ls -a # List all files, including hidden files
ls -lh # List files with human-readable sizes
pwd # Show the current directory
cd directory # Change to a specific directory
cd .. # Move up one directory
cd ~ # Change to the home directory
cd - # Change to the previous directory
File Operations
File operations are the everyday tasks performed on files and directories: creating and editing files, viewing and changing permissions and ownership, and finding files by name, type, or size. Commands like touch, nano, vi, ls, chmod, chown, and find cover these operations on a Linux system.
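The operations above can be sketched end to end. This is a minimal, safe example: all filenames are throwaway names created inside a temporary directory, not real paths.

```shell
# Work in a throwaway directory so nothing real is touched
tmpdir=$(mktemp -d)

touch "$tmpdir/notes.txt"                        # Create an empty file
mkdir -p "$tmpdir/project/docs"                  # Create nested directories
cp "$tmpdir/notes.txt" "$tmpdir/project/docs/"   # Copy the file into the new directory
mv "$tmpdir/notes.txt" "$tmpdir/todo.txt"        # Rename the file
chmod 640 "$tmpdir/todo.txt"                     # Owner read/write, group read, others none
ls -l "$tmpdir/todo.txt"                         # Verify the new permissions
find "$tmpdir" -name "*.txt"                     # Locate the .txt files we just created
```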
Text Operations
Text operations involve working with text files: displaying and concatenating text, searching for patterns, counting lines, words, and characters, sorting lines, and removing duplicates. Commands like echo, cat, less, grep, wc, sort, and uniq handle these tasks and are essential for analyzing and processing text data on a Linux system.
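A quick runnable walk-through of the text operations above, using a small sample file built on the spot (the contents are arbitrary):

```shell
# Build a small sample file, then inspect it
tmpfile=$(mktemp)
printf 'apple\nbanana\napple\ncherry\n' > "$tmpfile"

cat "$tmpfile"                  # Show the contents
grep -c 'apple' "$tmpfile"      # Count lines containing "apple"
wc -l < "$tmpfile"              # Count lines in the file
sort "$tmpfile" | uniq          # Sort, then drop duplicate lines
```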
Terminal Customization and Aliases
Terminal customization lets you personalize the appearance and behavior of the command-line interface. Common techniques include customizing the prompt with PS1, creating aliases for frequently used commands, viewing and removing aliases, and saving aliases permanently in configuration files like .bashrc. These customizations improve productivity and efficiency when working in a terminal.
echo $PATH # Show the system path
echo $HOME # Show the home directory
echo $USER # Show the current user
printenv # Display all environment variables
printenv variable # Display a specific environment variable, e.g. printenv HOME
getent passwd username # Display user information
getent group groupname # Display group information
alias ll="ls -l" # Create an alias for ls -l
alias ..="cd .." # Create an alias for cd ..
alias c="clear" # Create an alias for clear
export VAR="value" # Set an environment variable (export creates variables, not aliases)
export EDITOR=nano # Set the default text editor
export PATH=$PATH:/path/to/directory # Add a directory to the system path
history # Show command history
history n # Show last n commands
history -c # Clear command history
history -d n # Delete command at position n
history -a # Append history to history file
history -r # Read history from history file
vim ~/.bashrc # To edit .bashrc file
# Custom prompt
PS1="custom_prompt" # Set a custom prompt e.g. PS1="DOCKER_SERVER $ " # displays DOCKER_SERVER $ as the prompt
PS1="\[\e[1;32m\]\u@\h \w\[\e[m\] $ " # will display username@hostname current_directory $
# Aliases
alias ll="ls -l" # List files with details via ll
alias ..="cd .." # Move up one directory via ..
alias c="clear" # Clear the terminal via c
# History settings - HIST Variables
HISTFILE=~/.bash_history # Set history file location
HISTSIZE=1000 # Set history size limit
HISTFILESIZE=1000 # Set history file size limit
HISTCONTROL=ignoredups # Ignore duplicate commands
HISTIGNORE="ls:cd:exit" # Ignore specific commands
HISTTIMEFORMAT="%F %T " # Show timestamp in history
echo $HISTCMD # Show the history number of the current command
# Custom functions
function greet() { echo "Hello, $1!"; } # Define a custom function, use greet name to execute
# Save and exit .bashrc
source ~/.bashrc # Reload .bashrc and apply changes to the current session
Vim Cheat Sheet - Commands, Hotkeys, Navigation, Editing, Search and Replace
Vim is a powerful text editor with distinct modes: insert mode for entering text, normal mode for navigation and editing, and command mode for executing commands. Navigation commands like h, j, k, l, w, b, e, 0, $, gg, and G move the cursor within a file. Editing commands like x, dd, dw, u, Ctrl + r, and p delete, undo, redo, and paste text. Search and replace commands like /pattern, n, N, and :%s/old/new/g find patterns and replace text in files.
# Vim Navigation
h # Move left
j # Move down
k # Move up
l # Move right
w # Move to the beginning of the next word
b # Move to the beginning of the previous word
e # Move to the end of the word
0 # Move to the beginning of the line
$ # Move to the end of the line
gg # Move to the beginning of the file
G # Move to the end of the file
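The editing and search commands mentioned in the introduction above, in the same quick-reference style:

```
# Vim Editing
x # Delete the character under the cursor
dd # Delete the current line
dw # Delete from the cursor to the end of the word
yy # Copy (yank) the current line
p # Paste after the cursor
u # Undo the last change
Ctrl + r # Redo the last undone change
# Vim Search and Replace
/pattern # Search forward for a pattern
n # Repeat the search in the same direction
N # Repeat the search in the opposite direction
:%s/old/new/g # Replace all occurrences of old with new in the file
```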
Vim Advanced Commands
Advanced Vim commands like :set, :r, :g, :v, and :%s configure settings, insert file contents or command output, delete lines based on patterns, and replace text in files. Advanced navigation commands like Ctrl + f, Ctrl + b, Ctrl + d, Ctrl + u, Ctrl + e, Ctrl + y, H, M, and L scroll the view and move the cursor within the screen. The . command repeats the last change.
# Vim Advanced
>> # Shift text right one level
<< # Shift text left one level
:set number # Show line numbers
:set nonumber # Hide line numbers
:set list # Show hidden characters
:set nolist # Hide hidden characters
:set linebreak # Wrap long lines at word boundaries
:set wrap # Enable line wrapping
:set nowrap # Disable line wrapping
:set syntax=python # Set syntax highlighting for Python
:set autoindent # Enable auto-indent
:set noautoindent # Disable auto-indent
:set tabstop=4 # Set tab size to 4 spaces
:set shiftwidth=4 # Set indentation width to 4 spaces
:set expandtab # Convert tabs to spaces
:set noexpandtab # Use tabs instead of spaces
:set hlsearch # Highlight search results
:set nohlsearch # Disable search result highlighting
:set incsearch # Incremental search
:set noincsearch # Disable incremental search
:set ignorecase # Ignore case in search
:set noignorecase # Case-sensitive search
:set mouse=a # Enable mouse support
:set nomouse # Disable mouse support
:set ruler # Show cursor position
:set noruler # Hide cursor position
:set background=dark # Set dark background
:set background=light # Set light background
# Vim Advanced Navigation
Ctrl + f # Page down
Ctrl + b # Page up
Ctrl + d # Half page down
Ctrl + u # Half page up
Ctrl + e # Scroll the view down one line
Ctrl + y # Scroll the view up one line
H # Move to top of screen
M # Move to middle of screen
L # Move to bottom of screen
# Vim Advanced Editing
. # Repeat last change
:r filename # Insert file contents
:r !ls # Insert command output
:r !date # Insert date
:r !echo "text" # Insert text
:r !echo "text" | sort # Insert sorted text
:r !echo "text" | sort -r # Insert reverse sorted text
# Vim Advanced Search and Replace
:g/pattern/d # Delete lines with pattern
:v/pattern/d # Delete lines without pattern
:%s/old/new/gc # Replace all occurrences with confirmation
:%s/^/text/ # Add text at the beginning of each line
:%s/$/text/ # Add text at the end of each line
:1s/^/text/ # Add text at the beginning of the first line
:$s/$/text/ # Add text at the end of the last line
Terminal Hotkeys, Navigation, and History
Terminal hotkeys are keyboard shortcuts for common operations. Ctrl + a, Ctrl + e, Ctrl + u, Ctrl + k, Ctrl + w, Ctrl + y, Ctrl + l, Ctrl + r, Ctrl + c, Ctrl + z, and Ctrl + d move the cursor, delete and paste text, clear the screen, search command history, terminate or suspend processes, and exit the shell. Navigation hotkeys like Alt + f, Alt + b, Alt + d, Alt + u, Alt + l, Alt + c, and Alt + t move the cursor by words, delete words, change word case, and swap words. History shortcuts like !n, !!, !string, !?string, and !-n repeat commands by number, repeat the last command, repeat the last command starting with or containing a string, and repeat the nth last command.
Ctrl + a # Move to the beginning of the line
Ctrl + e # Move to the end of the line
Ctrl + u # Delete from the cursor to the beginning of the line
Ctrl + k # Delete from the cursor to the end of the line
Ctrl + w # Delete the word before the cursor
Ctrl + y # Paste the last deleted text
Ctrl + l # Clear the screen
Ctrl + r # Search command history
Ctrl + c # Terminate the current process
Ctrl + z # Suspend the current process
Ctrl + d # Exit the shell
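The Alt navigation keys and history-expansion shortcuts described above, in the same quick-reference style:

```
# Terminal Navigation (Alt keys)
Alt + f # Move forward one word
Alt + b # Move backward one word
Alt + d # Delete the word after the cursor
Alt + u # Uppercase the word after the cursor
Alt + l # Lowercase the word after the cursor
Alt + c # Capitalize the word after the cursor
Alt + t # Swap the current word with the previous one
# Terminal History Expansion
!n # Repeat command number n
!! # Repeat the last command
!string # Repeat the last command starting with string
!?string # Repeat the last command containing string
!-n # Repeat the nth last command
```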
Advanced Search Operations
Advanced search operations locate files based on criteria such as name, type, size, permissions, owner, group, and modified time. The find command is a powerful tool for this; by combining its options and arguments, you can search for files that meet very specific requirements.
find /path -name filename # Find files by name
find / -iname "filename" # Find files by name (case-insensitive), starting from root dir
find . -name "*.txt" # Find all .txt files starting from current dir
find / -type f -iname "*.txt" # Find all .txt files
find /path -type f -name ".*" # Find hidden files
find /path -type f # Find regular files
find /path -type d # Find directories
find / -type l # Find symbolic links, e.g. shortcuts
find / -type s # Find sockets, such as network connections
find / -type p # Find named pipes, such as FIFOs
find / -type c # Find character devices, such as terminals
find / -type b # Find block devices, such as hard drives
find / -type f -name "*.log" # Find files with a specific extension
find /path -size +10M # Find files larger than 10MB
find /path -size -1G # Find files smaller than 1GB
find /path -mtime -1 # Find files modified in the last day
find /path -mtime +7 # Find files modified more than 7 days ago
find /path -name "*.txt" -type f -size +1M -perm 644 -user username -group groupname -mtime -30 # Find .txt files larger than 1MB with specific permissions, owner, group, and modified time
find / -iname "file*.txt" -type f -size +10M -perm 644 -user username -group groupname -mtime -365 # Find case-insensitive .txt files larger than 10MB with specific permissions, owner, group, and modified time
find /path -type f -name "*.log" -exec rm {} \; # Find and remove .log files
find . -type f -name "*.txt" -exec cp {} /path/to/destination \; # Find and copy .txt files to a destination
find /path -type f -name "*.log" -exec grep "pattern" {} \; # Find files with a specific pattern
find / -type f -iname "*.sh" | grep -v "backup" # Find .sh files excluding those with "backup" in the name, -v for invert match
find /path -type f -name "*.log" -delete # Delete files with a specific extension
find / -type f -name "*.log" -print0 | xargs -0 rm # Delete files with a specific extension using xargs, xargs is used to build and execute command lines from standard input
Searching Text with Grep
The grep command searches for text patterns in files. Options like -i for case-insensitive search, -w for whole-word search, -n for line numbers, -v for inverted match, and -c for counting matching lines extend its behavior. grep can also search specific file types, search directories recursively, and display context around matching lines.
grep pattern file # Search for text in a file
grep -r pattern /path # Search for text in files in a directory
grep -i pattern file # Search for text in a file (case-insensitive)
grep -w pattern file # Search for whole words in a file
grep -n pattern file # Show line numbers with matching text
grep -v pattern file # Invert match, show lines that don't match
grep -c pattern file # Count lines with matching text
grep -A 3 pattern file # Show 3 lines after the matching line, -A for after
grep -B 2 pattern file # Show 2 lines before the matching line, -B for before
grep -C 1 pattern file # Show 1 line before and after the matching line, -C for context
grep pattern *.txt # Search for text in .txt files
grep pattern *.log # Search for text in .log files
grep pattern *.sh # Search for text in .sh files, will search within the current directory
grep -r pattern /path # Search for text in files in a directory and its subdirectories
grep -ri pattern /path # Search for text in files in a directory and its subdirectories (case-insensitive)
grep -ri "pattern" /path/*.txt # Search for text in .txt files in a directory (case-insensitive)
grep -ri "pattern" /*.txt # Search .txt files directly under the root directory (the /*.txt glob does not match subdirectories)
Combining Find and Grep
Combining find and grep lets you search for text patterns in files across directories: find locates files by specific criteria, and grep searches for patterns inside them. Together with xargs or -exec, they form powerful text-processing pipelines.
find /path -type f -name "*.txt" | xargs grep "pattern" # Find .txt files and search for a pattern in them
find /path -type f -name "*.log" -exec grep "pattern" {} \; # Find .log files and search for a pattern in them
find /path -type f -name "*.txt" | xargs grep -c "pattern" # Find .txt files and count lines with a pattern, -c for count
find /path -type f -name "*.log" -exec grep -c "pattern" {} \; # Find .log files and count lines with a pattern
find /path -type f -name "*.txt" | xargs grep -A 3 "pattern" # Find .txt files and show context after the matching line
find /path -type f -name "*.log" -exec grep -B 2 "pattern" {} \; # Find .log files and show context before the matching line
find /path -type f -name "*.txt" | xargs grep -n "pattern" # Find .txt files and show line numbers with matching text
find /path -type f -name "*.log" -exec grep -n "pattern" {} \; # Find .log files and show line numbers with matching text
# Find .txt files, search for a pattern, sort lines, and remove duplicates
find /path -type f -name "*.txt" | xargs grep "pattern" | sort | uniq
# Find .log files, search for a pattern, extract filenames, sort lines, and remove duplicates
find /path -type f -name "*.log" -exec grep "pattern" {} \; | cut -d : -f 1 | sort | uniq
# Find .txt files, count lines with a pattern, and calculate the total count
find /path -type f -name "*.txt" | xargs grep -c "pattern" | awk '{sum += $1} END {print sum}'
# Find .log files, search for a pattern, show line numbers, and exclude specific lines
find . -type f -name "*.log" -exec grep -n "pattern" {} \; | grep -v "exclude"
# Find .txt files, search for a pattern, exclude specific lines, and count lines
find /path -type f -name "*.txt" -exec grep "pattern" {} \; | grep -v "exclude" | wc -l
# Find .log files, search for a pattern, exclude specific lines, extract filenames, sort lines, and remove duplicates
find /path -type f -name "*.log" -exec grep "pattern" {} \; | grep -v "exclude" | cut -d : -f 1 | sort | uniq
# Find .txt files, search for a pattern, exclude specific lines, extract filenames, sort lines, remove duplicates, and copy files to a destination
find /path -type f -name "*.txt" -exec grep "pattern" {} \; | grep -v "exclude" | cut -d : -f 1 | sort | uniq | xargs -I {} cp {} /path/to/destination
# Find .log files, search for a pattern, exclude specific lines, extract filenames, sort lines, remove duplicates, and remove files
find /path -type f -name "*.log" -exec grep "pattern" {} \; | grep -v "exclude" | cut -d : -f 1 | sort | uniq | xargs -I {} rm {}
Common and Advanced Search Pipelines
These examples demonstrate the power of combining find, grep, awk, and sed for file and text manipulation. They are essential tools for searching, analyzing, and processing data efficiently.
find / -type f -name "*.log" # Find all .log files, starting from the root directory
grep -ri "error" /var/log/ # Recursively search for 'error' in /var/log, case insensitive
find /home/user -type f -iname "config*" | xargs grep -i "setting" # Find files starting with 'config' and search for 'setting'
grep -rl "pattern" /path/ # Search recursively in /path for files containing 'pattern', list filenames only
find /etc -type f -exec grep -H 'httpd' {} \; # Find files in /etc and grep 'httpd' in them, show filenames
grep -v "exclude" file # Show lines that do not contain 'exclude'
find / -type f -mmin -60 # Find files modified within the last hour
find /var/log -type f -name "*.log" | xargs grep "error" | awk '{print $4, $5}' # Find log files and print 4th and 5th words of lines containing 'error'
find . -type f -exec grep -qi "pattern" {} \; -print # Quietly check for 'pattern' and print filenames
find / -type f -name "*.php" | xargs grep -i "mysqli_connect" # Find PHP files and search for 'mysqli_connect'
grep "pattern" file | sed 's/pattern/replacement/g' # Search for 'pattern' and replace it in the output
grep -r "pattern" /path | awk -F: '{print $1}' | uniq # Search for 'pattern', get unique filenames
find /var/log -type f | xargs grep -i "error" | sort | uniq -c # Find log files, grep 'error', sort, and count unique lines
grep -ri "pattern" /path | awk '{print $1}' | sort | uniq # Recursively search for 'pattern', print first field, sort, and remove duplicates
find / -type f -perm 0777 | xargs grep "confidential" # Find world-writable files and search for 'confidential'
cat file | grep "pattern" | awk '{print $1, $2}' | sed 's/pattern/replacement/' # Search for a pattern, print specific columns, and substitute text
find /path -type f -name "*.txt" | xargs grep "pattern" | awk '{print $1, $2}' | sed 's/pattern/replacement/' # Find .txt files, search for a pattern, print specific columns, and substitute text
find /path -type f -exec grep -l "todo" {} \; | xargs sed -i 's/todo/TODO/g' # Find files with 'todo', mark them as 'TODO'
Combining Find and Awk
Combining awk with find lets you locate files by specific criteria and then process their text: extracting fields, calculating totals and averages, and formatting output.
find /path -type f -name "*.txt" -exec awk '{print $1}' {} \; # Find .txt files and extract the first field
find /path -type f -name "*.log" -exec awk '{print $2}' {} \; # Find .log files and extract the second field
find /path -type f -name "*.txt" -exec awk '{sum += $1} END {print sum}' {} \; # Sum the first field (prints one total per file)
find /path -type f -name "*.log" -exec awk '{sum += $2} END {print sum}' {} \; # Sum the second field (prints one total per file)
Combining Grep and Awk
Combining awk with grep lets you search for text patterns and then process the matching lines: extracting fields, calculating totals and averages, and formatting output.
grep "pattern" file | awk '{print $1}' # Search for text in a file and extract the first field
grep "pattern" file | awk '{print $2}' # Search for text in a file and extract the second field
grep "pattern" file | awk '{sum += $1} END {print sum}' # Search for text in a file and calculate the sum of the first field
grep "pattern" file | awk '{sum += $2} END {print sum}' # Search for text in a file and calculate the sum of the second field
Bonus
Other commands assess files by name, type, content, size, permissions, owner, group, and modified time: locate finds files by name, file determines file types, strings displays printable text in binary files, du shows sizes, and stat shows permissions, ownership, and timestamps. These are useful for detailed file analysis on a Linux system.
locate filename # Search for files by name
locate -i filename # Search for files by name (case-insensitive)
file filename # Determine file type
file -b filename # Show only the file type
strings filename # Display printable strings in a file
strings -n 10 filename # Display strings longer than 10 characters
du -h filename # Show file size
du -sh directory # Show the total size of a directory as one summary line
stat filename # Show full file status: permissions, owner, size, and timestamps
stat -c "%a %n" filename # Show file permissions in octal format
stat -c "%U %n" filename # Show file owner
stat -c "%G %n" filename # Show file group
stat -c "%y %n" filename # Show file modification time
Text Processing and Transformation
Awk
The awk command is a powerful text-processing tool for manipulating and analyzing text data. By specifying patterns, field values, and calculations, you can extract columns, print matching lines, compute totals and averages, and format output. awk is commonly combined with commands like grep and sort for complex processing tasks.
awk '{print $1}' file # Print the first column
awk '{print $2}' file # Print the second column
awk '{print $1, $3}' file # Print the first and third columns
awk '/pattern/' file # Print lines with a specific pattern
awk '!/pattern/' file # Print lines without a specific pattern
awk '$1 == "value"' file # Print lines where the first field equals a specific value
awk '$2 > 10' file # Print lines where the second field is greater than 10
awk '{sum += $1} END {print sum}' file # Calculate the sum of the first column
awk '{sum += $1} END {print sum/NR}' file # Calculate the average of the first column
awk '{print $1, $2, $3, $4}' file # Print the first four columns
awk '/pattern/ {print $1, $2}' file # Print specific columns in lines with a pattern
awk '$3 ~ /pattern/' file # Print lines where the third column matches a pattern
awk '$2 ~ /^[0-9]+$/' file # Print lines where the second column is a number
awk '{print $NF}' file # Print the last column
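A quick runnable check of the column-extraction and sum idioms above, using inline sample data (the file path and contents are arbitrary examples):

```shell
# Two-column sample: name and count
printf 'alpha 3\nbeta 5\nalpha 2\n' > /tmp/awk_demo.txt

awk '{print $1}' /tmp/awk_demo.txt                    # First column of each line
awk '$2 > 2' /tmp/awk_demo.txt                        # Lines where the count exceeds 2
awk '{sum += $2} END {print sum}' /tmp/awk_demo.txt   # Total of the second column
```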
Sed
The sed command is a stream editor for transforming text. By specifying patterns, replacements, and line ranges, you can substitute text, delete lines, append text, and perform other edits. sed is commonly combined with commands like grep and awk to process text data in files.
sed 's/old/new/' file # Substitution, replace first occurrence of 'old' with 'new' in each line
sed 's/old/new/g' file # Global substitution, replace all occurrences of 'old' with 'new'
sed 's/old/new/Ig' file # Substitute with case insensitivity
sed -i.bak 's/old/new/g' file # Substitute and back up original file with a .bak extension
sed -i 's/old/new/g' file # Substitute and overwrite original file
sed 's/pattern/replacement/' file # Substitute the first occurrence of a pattern with a replacement
sed 's/pattern/replacement/g' file # Substitute all occurrences of a pattern with a replacement
sed '/pattern/s/old/new/g' file # Substitute 'old' with 'new' in lines with a specific pattern
sed '2s/pattern/replacement/' file # Substitute the first occurrence of a pattern in the second line
sed '2,4s/pattern/replacement/' file # Substitute the first occurrence of a pattern in lines 2 to 4
sed '3,5s/old/new/' file # Substitute 'old' with 'new' for lines 3 to 5
sed '3,10d' file # Delete lines from the 3rd to the 10th
sed 's/pattern/replacement/2' file # Substitute the second occurrence of a pattern on each line
sed '/pattern/d' file # Delete lines with a specific pattern
sed '/start/,/end/d' file # Delete lines between start and end of patterns
sed '2a\text' file # Append text after the second line
sed '1,3a\text' file # Append 'text' after each of lines 1 to 3
sed 's/pattern/&\nnew line/' file # Append a new line after a pattern
sed ':a;N;$!ba;s/\n/, /g' file # Replace newlines, turning multiple lines into a single line
sed 's/[^0-9]*//g' file # Remove non-numeric characters
sed 's/[^a-zA-Z]*//g' file # Remove non-alphabetic characters
sed 's/[^0-9]*//g' file | awk '{sum += $1} END {print sum}' # Remove non-numeric characters and calculate the sum
sed 's/[^a-zA-Z]*//g' file | awk '{print tolower($1)}' # Remove non-alphabetic characters and convert to lowercase
sed -n '10,20p' file | sort # Print lines 10 to 20 and sort them
sed '$!N; /^\(.*\)\n\1$/!P; D' file # Remove duplicate consecutive lines
sed 's/$/\r/' file > output # Convert Unix (LF) line endings to Windows (CRLF) line endings
sed -n '/pattern/p' file | grep 'something' # Search for a pattern and filter the output
sed '/baz/s/foo/bar/g' file | awk '{print $1}' # Substitute 'foo' with 'bar' in lines with 'baz' and print the first column
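A runnable demonstration of the basic substitution and deletion forms above, on inline sample data (file path and contents are arbitrary):

```shell
printf 'foo one\nfoo two\nbar foo\n' > /tmp/sed_demo.txt

sed 's/foo/FOO/' /tmp/sed_demo.txt       # Replace the first match on each line
sed '/bar/s/foo/FOO/' /tmp/sed_demo.txt  # Replace only on lines containing "bar"
sed '2d' /tmp/sed_demo.txt               # Delete the second line
```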
Sort
The sort command orders the lines of a file alphabetically, numerically, or by field. Options like -r for reverse order, -n for numeric sort, and -k for sorting by field customize its behavior. sort is commonly combined with commands like uniq and awk.
sort file # Sort lines in a file
sort -r file # Sort lines in reverse order
sort -n file # Sort lines numerically
sort -k 2 file # Sort lines based on the second column
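The field and numeric options above, demonstrated on sample data; the -t flag (not shown above) sets the field delimiter, and the file path and contents are arbitrary:

```shell
printf 'bob:30\nalice:25\ncarol:4\n' > /tmp/sort_demo.txt

sort /tmp/sort_demo.txt                # Alphabetical sort on the whole line
sort -t : -k 2 -n /tmp/sort_demo.txt   # Numeric sort on the second :-separated field
```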
Uniq
The uniq command removes adjacent duplicate lines, so its input is usually sorted first. Options like -c for counting duplicates, -d for showing only duplicates, -f for skipping fields, and -s for skipping characters customize its behavior. uniq is commonly combined with commands like sort and awk.
uniq file # Remove adjacent duplicate lines in a file
uniq -c file # Count duplicate lines in a file
uniq -d file # Show only duplicate lines
sort file | uniq # Sort and remove duplicate lines
sort file | uniq -c # Sort and count duplicate lines
sort file | uniq -d # Sort and show only duplicate lines
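The field-skipping option mentioned above (-f compares lines while ignoring leading fields; -s does the same for leading characters), demonstrated on arbitrary sample data:

```shell
# Lines that share the second field but differ in the first
printf 'a x\nb x\nc y\n' > /tmp/uniq_demo.txt

uniq -f 1 /tmp/uniq_demo.txt   # Compare ignoring the first field, so "a x" and "b x" collapse
```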
Tr
The tr command translates or deletes characters in its input. Character sets and ranges, together with options like -d for deleting characters and -s for squeezing repeats, control its behavior. tr is commonly used for character transformations on text data.
tr 'a-z' 'A-Z' < file # Translate lowercase to uppercase
tr -d '0-9' < file # Delete digits
tr -s ' ' < file # Squeeze spaces
echo "hello" | tr 'a-z' 'A-Z' # Translate lowercase to uppercase
echo "12345" | tr -d '0-9' # Delete digits
echo "hello world" | tr -s ' ' # Squeeze spaces
Cut
The cut command extracts fields or characters from each line of a file or string. Options include -f for fields, -c for characters, and -d for the field delimiter (a tab by default). cut is commonly used to pull specific data out of text files.
cut -f 1 file # Extract the first field
cut -f 2,3 file # Extract the second and third fields
cut -f 1-3 file # Extract the first to third fields
cut -c 1 file # Extract the first character
cut -c 2-4 file # Extract the second to fourth characters
cut -c -4 file # Extract the first to fourth characters
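The delimiter option described above (-d), which the field examples rely on; without it, cut splits on tabs. The input lines below are arbitrary examples:

```shell
echo 'root:x:0:0:root:/root:/bin/bash' | cut -d : -f 1   # First :-separated field
echo 'root:x:0:0:root:/root:/bin/bash' | cut -d : -f 7   # Seventh field (the shell)
echo 'a,b,c' | cut -d , -f 2                             # Second comma-separated field
```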
Xargs
The xargs command builds and executes command lines from standard input. Combined with find, grep, and other commands, it can search for text in files, copy or move files to destinations, remove files, and run commands over large file lists.
echo "file1 file2 file3" | xargs ls # List files
echo "file1 file2 file3" | xargs -n 1 ls # List files one by one
find /path -type f -name "*.txt" | xargs -I {} cp {} /path/to/destination # Copy .txt files to a destination
find /path -type f -name "*.txt" | xargs grep "pattern" # Search for a pattern in .txt files
find /path -type f -name "*.log" | xargs grep -v "pattern" # Search for lines without a pattern in .log files
find /path -type f -name "*.txt" | xargs -I {} mv {} /path/to/destination # Move .txt files to a destination
Merge lines from multiple files: Merge lines from multiple files vertically: Merge lines from multiple files with line numbers: Merge lines from multiple files with specific formatting: The paste command is used to merge lines from multiple files horizontally or vertically. By specifying options like -d for delimiters and -s for merging lines from a single file, you can customize the behavior of the paste command. The paste command is commonly used to combine and format text data from multiple files.
paste file1 file2 # Merge lines from two files
paste -d : file1 file2 # Merge lines with a specific delimiter
paste -s file # Merge lines from a single file
Join lines from multiple files based on common fields: Join lines based on specific field numbers: Join lines with specific formatting: The join command is used to merge lines from multiple files based on common fields. By specifying options like -t for delimiters, -1 and -2 for field numbers, and -o for output fields, you can customize the behavior of the join command. Note that join expects both input files to be sorted on the join field. The join command is commonly used to combine and format text data from multiple files.
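A sketch of common join invocations for the options described above (file names are placeholders; both inputs must be sorted on the join field):

```shell
join file1 file2            # Join lines based on a common first field
join -t : file1 file2       # Join lines using : as the field delimiter
join -1 2 -2 1 file1 file2  # Join on field 2 of file1 and field 1 of file2
join -o 1.1,2.2 file1 file2 # Output only field 1 of file1 and field 2 of file2
```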
Putting Them Together
Combine commands with pipelines: Use pipelines with redirection: Use pipelines with conditional operators: Pipelines allow you to combine multiple commands and utilities to perform complex operations on text data. By using the pipe symbol (|) to connect commands, you can pass the output of one command as input to another command. Pipelines are commonly used to process, filter, and analyze text data in files and directories. By combining commands like cat, grep, awk, sort, uniq, and xargs with pipelines, you can create powerful text processing workflows.
cat file | grep pattern | wc -l # Count lines with a specific pattern in a file
ls -l | grep "file" | awk '{print $9}' # List files with a specific pattern and print filenames
find /path -type f | xargs grep "pattern" # Find files and search for a pattern in them
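The redirection and conditional operators mentioned above can be sketched as follows (file and pattern names are placeholders):

```shell
grep pattern file | sort > sorted.txt  # Redirect pipeline output to a file (overwrite)
grep pattern file | sort >> sorted.txt # Append pipeline output to a file
command 2> errors.log                  # Redirect error output to a file
grep -q pattern file && echo "found"   # Run the second command only if the first succeeds
grep -q pattern file || echo "missing" # Run the second command only if the first fails
```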
Use awk and sed for text processing: Use awk and sed with pipelines: Use awk and sed with find and grep: The awk and sed commands are powerful text processing tools that allow you to manipulate and analyze text data in files. By specifying patterns, field values, calculations, and substitutions, you can extract specific columns, print lines with patterns, calculate totals, format output, substitute text, delete lines, and append text. The awk and sed commands are commonly used in combination with other commands like grep, sort, and uniq to perform complex text processing tasks.
awk '{print $1, $2}' file # Print the first two columns
awk '/pattern/ {print $1, $2}' file # Print specific columns in lines with a pattern
awk '$2 ~ /^[0-9]+$/' file # Print lines where the second column is a number
awk '{sum += $1} END {print sum}' file # Calculate the sum of the first column
awk '{printf "%-10s %-10s\n", $1, $2}' file # Format output with specific spacing
sed 's/pattern/replacement/' file # Substitute the first occurrence of a pattern with a replacement
sed 's/pattern/replacement/g' file # Substitute all occurrences of a pattern with a replacement
sed '/pattern/d' file # Delete lines with a specific pattern
sed '2a\text' file # Append text after the second line
cat file | awk '{print $1, $2}' | sed 's/pattern/replacement/' # Print specific columns and substitute text
grep pattern file | awk '{print $1, $2}' | sed 's/pattern/replacement/' # Search for a pattern, print specific columns, and substitute text
find /path -type f -name "*.txt" | xargs grep "pattern" | awk '{print $1, $2}' | sed 's/pattern/replacement/' # Find .txt files, search for a pattern, print specific columns, and substitute text
find /path -type f -name "*.log" -exec grep "pattern" {} \; | awk '{print $1, $2}' | sed 's/pattern/replacement/' # Find .log files, search for a pattern, print specific columns, and substitute text
Use sort and uniq for text processing: Use sort and uniq with pipelines: Use sort and uniq with find and grep: The sort and uniq commands are used for sorting lines and removing duplicate lines in text files. By combining sort and uniq with pipelines, you can process and analyze text data by sorting lines, counting duplicate lines, and removing duplicates. The sort and uniq commands are commonly used in combination with other commands like grep, awk, and sed to perform advanced text processing tasks.
sort file | uniq # Sort lines and remove duplicate lines
sort -r file | uniq -c # Sort lines in reverse order and count duplicate lines
sort -n file | uniq -d # Sort lines numerically and show only duplicate lines
Use tr and cut for text processing: Use tr and cut with pipelines: Use tr and cut with find and grep: The tr and cut commands are used for translating characters and extracting fields or characters from text files. By specifying sets, ranges, delimiters, and options, you can customize the behavior of the tr and cut commands to perform character transformations and data extraction. The tr and cut commands are commonly used in combination with other commands like grep, awk, and sed to process and analyze text data in files.
tr 'a-z' 'A-Z' < file # Translate lowercase to uppercase
tr -d '0-9' < file # Delete digits
tr -s ' ' < file # Squeeze spaces
cut -f 1 file # Extract the first field
cut -c 1 file # Extract the first character
cut -d : -f 1 file # Extract the first field based on :
cat file | tr 'a-z' 'A-Z' | cut -d : -f 1 # Translate lowercase to uppercase and extract the first field
grep pattern file | tr -d '0-9' | cut -c 1 # Search for a pattern, delete digits, and extract the first character
find /path -type f -name "*.txt" | xargs grep "pattern" | tr 'a-z' 'A-Z' | cut -d : -f 1 # Find .txt files, search for a pattern, translate lowercase to uppercase, and extract the first field
find /path -type f -name "*.log" -exec grep "pattern" {} \; | tr -d '0-9' | cut -c 1 # Find .log files, search for a pattern, delete digits, and extract the first character
Use paste and join for text processing: Use paste and join with pipelines: Use paste and join with find and grep: The paste and join commands are used to merge lines from multiple files based on common fields or delimiters. By specifying options like -d for delimiters, -s for merging lines from a single file, and -1 and -2 for field numbers, you can customize the behavior of the paste and join commands. The paste and join commands are commonly used to combine and format text data from multiple files.
paste file1 file2 # Merge lines from two files
paste -d : file1 file2 # Merge lines with a specific delimiter
paste -s file # Merge lines from a single file
join file1 file2 # Join lines based on common fields
join -t : file1 file2 # Join lines with a specific delimiter
join -1 2 -2 1 file1 file2 # Join lines based on the second field of the first file and the first field of the second file
cat file1 file2 | paste -d : - - # Combine files and merge lines with a specific delimiter
grep pattern file | paste -s - | join - file2 # Search for a pattern, merge lines from a single file, and join lines with another file
find /path -type f -name "*.txt" | xargs grep "pattern" | paste -s - | join - file2 # Find .txt files, search for a pattern, merge lines from a single file, and join lines with another file
find /path -type f -name "*.log" -exec grep "pattern" {} \; | paste -d : - - | join - file2 # Find .log files, search for a pattern, merge lines with a specific delimiter, and join lines with another file
Combine awk, sed, and sort for text processing: Use awk, sed, and sort with pipelines: Use awk, sed, and sort with find and grep: Combining awk, sed, and sort commands allows you to manipulate and analyze text data in files. By specifying patterns, field values, substitutions, and sorting criteria, you can extract specific columns, print lines with patterns, substitute text, and sort lines. By combining awk, sed, and sort with pipelines, you can create powerful text processing workflows to process and analyze text data in files and directories.
awk '{print $1, $2}' file | sed 's/pattern/replacement/' | sort # Print specific columns, substitute text, and sort lines
awk '/pattern/ {print $1, $2}' file | sed 's/pattern/replacement/g' | sort -r # Print specific columns in lines with a pattern, substitute all occurrences of a pattern, and sort lines in reverse order
cat file | awk '{print $1, $2}' | sed 's/pattern/replacement/' | sort # Print specific columns, substitute text, and sort lines
grep pattern file | awk '{print $1, $2}' | sed 's/pattern/replacement/g' | sort -r # Search for a pattern, print specific columns, substitute all occurrences of a pattern, and sort lines in reverse order
find /path -type f -name "*.txt" | xargs grep "pattern" | awk '{print $1, $2}' | sed 's/pattern/replacement/' | sort # Find .txt files, search for a pattern, print specific columns, substitute text, and sort lines
find /path -type f -name "*.log" -exec grep "pattern" {} \; | awk '{print $1, $2}' | sed 's/pattern/replacement/g' | sort -r # Find .log files, search for a pattern, print specific columns, substitute all occurrences of a pattern, and sort lines in reverse order
Combine tr, cut, and paste for text processing: Use tr, cut, and paste with pipelines: Use tr, cut, and paste with find and grep: Combining tr, cut, and paste commands allows you to translate characters, extract fields or characters, and merge lines from files. By specifying sets, ranges, delimiters, and options, you can customize the behavior of the tr, cut, and paste commands to perform character transformations, data extraction, and line merging operations. By combining tr, cut, and paste with pipelines, you can create powerful text processing workflows to process and analyze text data in files and directories.
tr 'a-z' 'A-Z' < file | cut -d : -f 1 | paste -s - # Translate lowercase to uppercase, extract the first field, and merge lines from a single file
tr -d '0-9' < file | cut -c 1 | paste -d : - - # Delete digits, extract the first character, and merge lines with a specific delimiter
cat file | tr 'a-z' 'A-Z' | cut -d : -f 1 | paste -s - # Translate lowercase to uppercase, extract the first field, and merge lines from a single file
grep pattern file | tr -d '0-9' | cut -c 1 | paste -d : - - # Search for a pattern, delete digits, extract the first character, and merge lines with a specific delimiter
find /path -type f -name "*.txt" | xargs grep "pattern" | tr 'a-z' 'A-Z' | cut -d : -f 1 | paste -s - # Find .txt files, search for a pattern, translate lowercase to uppercase, extract the first field, and merge lines from a single file
find /path -type f -name "*.log" -exec grep "pattern" {} \; | tr -d '0-9' | cut -c 1 | paste -d : - - # Find .log files, search for a pattern, delete digits, extract the first character, and merge lines with a specific delimiter
Common uses of grep, awk, and sed: Common uses of grep, awk, and sed with pipelines: Grep, awk, and sed are powerful text processing tools that are commonly used in combination to search for patterns, extract specific data, and manipulate text in files. By combining grep, awk, and sed with pipelines, you can create advanced text processing workflows to process and analyze text data in files and directories.
grep -r "pattern" /path # Search for a pattern recursively in a directory
grep -r "pattern" /path | awk '{print $1, $2}' # Search for a pattern and print specific columns
grep -r "pattern" /path | sed 's/pattern/replacement/' # Search for a pattern and substitute text
awk '{print $1, $2}' file # Print the first two columns
awk '/pattern/ {print $1, $2}' file # Print specific columns in lines with a pattern
awk '$2 ~ /^[0-9]+$/' file # Print lines where the second column is a number
sed 's/pattern/replacement/' file # Substitute the first occurrence of a pattern with a replacement
sed -i 's/pattern/replacement/g' file # Substitute all occurrences of a pattern with a replacement and overwrite the original file
sed '/pattern/d' file # Delete lines with a specific pattern
cat file | grep "pattern" | awk '{print $1, $2}' | sed 's/pattern/replacement/' # Search for a pattern, print specific columns, and substitute text
find /path -type f -name "*.txt" | xargs grep "pattern" | awk '{print $1, $2}' | sed 's/pattern/replacement/' # Find .txt files, search for a pattern, print specific columns, and substitute text
File Transfer, Mounts, and Compression (Zip/Tar)
Copy file to remote host: Copy file from remote host: Copy directory, -r for recursive: Other Examples:
scp -r /path/to/local/directory user@host:/path/to/destination
scp -r user@host:/path/to/remote/directory /path/to/local/destination
scp -P 2222 /path/to/file user@host:/path/to/destination # Use a specific port
scp -i /path/to/key.pem /path/to/file user@host:/path/to/destination # Use a specific key, -i for identity file
scp -v /path/to/file user@host:/path/to/destination # Verbose output
scp -C /path/to/file user@host:/path/to/destination # Compress during transfer
scp -l 1000 /path/to/file user@host:/path/to/destination # Limit bandwidth to 1000 Kbit/s
scp -B /path/to/file user@host:/path/to/destination # Batch mode - never prompt for passwords or passphrases
scp -p /path/to/file user@host:/path/to/destination # Preserve file attributes
scp -q /path/to/file user@host:/path/to/destination # Quiet mode
Connect via GUI: Mount a remote SMB share: Unmount a remote SMB share: Connect via GUI, other methods: Connecting via the GUI allows you to interact with files on a remote server as if they were part of your local file system, but it doesn’t integrate the remote filesystem into your system's file hierarchy. When you use a GUI to connect to a server, the file manager may use a virtual filesystem (VFS) layer that abstracts the actual file operations to the server, allowing you to manage files without permanently altering your system’s file structure. The connection typically lasts only for the duration of the session. Once the file manager is closed, the connection to the remote filesystem is also closed.
Mount a remote SMB share - Server Message Block (SMB), a protocol for sharing files, printers, and serial ports over a network. Commonly used in Windows environments: Unmount a remote SMB share: Other examples - SMB: NFS Mounts - Network File System Protocol. Allows you to access files over a network as if they were on your local machine. Commonly used in sharing files between Unix/Linux systems: Unmount NFS: Other examples - NFS: SSHFS - file system client based on the SSH File Transfer Protocol. Allows you to mount a remote directory over an SSH connection. Secure and easy to use: Mounting physically integrates a remote filesystem into your system’s directory tree, making it appear as if the remote files are part of the local file hierarchy. This is used for more permanent or semi-permanent needs where regular file operations on the remote files are necessary. Mounting involves setting up a filesystem on your system that forwards operations like reading, writing, and listing to another server over a network. This is often done using specific commands in the terminal with root privileges. Once mounted, the filesystem remains integrated with your local file system until it is unmounted. This means files can be accessed by any application on your system without needing to open a specific file manager.
# sudo apt-get install cifs-utils # Install CIFS client package
# -t to specify filesystem type, -o for options, uid and gid for user and group IDs
# file_mode and dir_mode for permissions, ro for read-only, rw for read-write
# cifs for Common Internet File System
sudo mount -t cifs -o username=user,password=pass,ro //server/share /mnt # Mount a remote SMB share with read-only permissions
sudo mount -t cifs -o username=user,password=pass,rw //server/share /mnt # Mount a remote SMB share with read-write permissions
sudo mount -t cifs -o username=user,password=pass,uid=1000,gid=1000 //server/share /mnt # Mount a remote SMB share with specific UID and GID
sudo mount -t cifs -o username=user,password=pass,file_mode=0755,dir_mode=0755 //server/share /mnt # Mount a remote SMB share with specific file permissions
# sudo apt-get install nfs-common # Install NFS client package
sudo mount -t nfs server:/share /mnt # Mount a remote NFS share (NFS uses host-based access control, not username/password options)
sudo mount -t nfs -o ro server:/share /mnt # Mount a remote NFS share with read-only permissions
sudo mount -t nfs -o rw server:/share /mnt # Mount a remote NFS share with read-write permissions
sudo mount -t nfs -o vers=4 server:/share /mnt # Mount a remote NFS share using NFS version 4
sudo mount -t nfs -o vers=3,udp server:/share /mnt # Mount a remote NFS share using NFS version 3 and UDP protocol
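SSHFS is described above but has no examples; a minimal sketch (host, paths, and port are placeholders):

```shell
# sudo apt-get install sshfs # Install the SSHFS client package
sshfs user@host:/remote/path /mnt/point         # Mount a remote directory over SSH
sshfs -p 2222 user@host:/remote/path /mnt/point # Use a specific SSH port
fusermount -u /mnt/point                        # Unmount the SSHFS mount
```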
Zip a file or directory: Unzip a file: Other Examples: Tar a file or directory: Other Tar Examples: Other Compression Formats:
# -r for recursive, -j to exclude directory structure, -q for quiet
zip -r archive.zip /path/to/directory
zip -r archive.zip /path/to/directory -x "*.log" # Exclude files with a specific extension
unzip -d /path/to/destination archive.zip # Unzip to a specific directory
unzip -l archive.zip # List contents of a zip file
zip -e archive.zip /path/to/directory # Compress and encrypt with a password (without -r, only the top-level entry is included)
zip -r -e archive.zip /path/to/directory # Compress and encrypt a directory with a password, recursively
unzip -P password archive.zip # Extract an encrypted archive; -P supplies the password (omitting it prompts interactively)
# -c to create, -x to extract, -v for verbose, -f to specify file
tar -cvf archive.tar /path/of/directory/to/archive # Create a tar file from a directory
tar -xvf archive.tar # Extract to current directory
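The heading above also promises other compression formats; common gzip, bzip2, and xz variants (archive and path names are placeholders):

```shell
tar -czvf archive.tar.gz /path/to/directory  # Create a gzip-compressed tar archive, -z for gzip
tar -cjvf archive.tar.bz2 /path/to/directory # Create a bzip2-compressed tar archive, -j for bzip2
tar -cJvf archive.tar.xz /path/to/directory  # Create an xz-compressed tar archive, -J for xz
tar -xzvf archive.tar.gz                     # Extract a gzip-compressed tar archive
tar -tvf archive.tar                         # List the contents of an archive without extracting
gzip file                                    # Compress a file to file.gz (replaces the original)
gunzip file.gz                               # Decompress file.gz back to file
```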
Managing Users, Groups, Permissions, and Access
Add a user: Add a user to a group: Change user password: Delete a user: List all users: List all groups: List groups a user belongs to: Change user's primary group: Change user's secondary groups: Other examples and arguments: Users and groups are used to manage file permissions and access to resources on a Linux system. Each user has a unique username and user ID (UID) that is used to identify them. Users can belong to one or more groups, which can be used to assign permissions to files and directories. Groups have a group name and group ID (GID) that is used to identify them. Users can be added to groups to grant them access to resources that are only available to members of that group. Users can also be assigned a primary group, which is the group that is used by default when creating new files and directories.
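A sketch of the basic user and group management commands named above (username and groupname are placeholders):

```shell
sudo useradd -m username              # Add a user, -m to create a home directory
sudo useradd -m -s /bin/bash username # Add a user with a specific default shell
sudo passwd username                  # Set or change a user's password
sudo usermod -aG groupname username   # Add a user to a group, -a to append (don't drop other groups)
sudo userdel -r username              # Delete a user, -r to remove their home directory
cut -d : -f 1 /etc/passwd             # List all users
cut -d : -f 1 /etc/group              # List all groups
groups username                       # List the groups a user belongs to
id username                           # Show a user's UID, GID, and group memberships
```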
sudo usermod -l newname oldname # Change username
sudo usermod -u 1001 username # Change UID
sudo usermod -g groupname username # Change primary group
sudo usermod -G group1,group2 username # Change secondary groups
sudo usermod -s /bin/bash username # Change default shell
sudo usermod -d /path/to/directory username # Change home directory
sudo usermod -e YYYY-MM-DD username # Set account expiration date
sudo usermod -L username # Lock account
sudo usermod -U username # Unlock account
Change file permissions: Change directory permissions: Change file ownership: Change group ownership: Change both owner and group: Other examples and arguments: Umask (user file creation mask) - Set default file permissions for new files and directories created by a user. Used to control the permissions that are automatically assigned to new files and directories: Other Examples: File permissions control access to files and directories on a Linux system. Each file has three sets of permissions: one for the owner of the file, one for the group that the file belongs to, and one for all other users. Permissions can be set to allow or deny read, write, and execute access for each of these groups. The owner of a file can change its permissions, as well as the owner and group of the file. Permissions can be set using symbolic notation (e.g., u+rwx) or octal notation (e.g., 755).
# u for user, g for group, o for others, a for all
# + to add permission, - to remove permission, = to set permission
# r for read (permission value of 4), w for write (permission value of 2), x for execute (permission value of 1)
chmod u+r file # Add read permission for user
chmod a+x file # Add execute permission for all
chmod g-w file # Remove write permission for group
chmod o=x file # Set execute permission for others
chmod a=rwx file # Set read, write, and execute permission for all, rwx is equivalent to chmod 777
chmod 755 file # Set read, write, and execute for user, read and execute for group and others
chmod 644 file # Set read and write for user, read for group and others
chmod 600 file # Set read and write for user, no permissions for group and others
chmod 777 file # Set read, write, and execute for all
chmod -R 755 directory # Recursively set permissions for all files in directory
chown -R username:groupname directory # Recursively change owner and group for all files in directory
Run a command as root: Edit sudoers file: Open a root shell using sudo - various methods: Sudo (superuser do) is a command that allows users to run programs with the security privileges of another user, by default the root user. It is used to perform administrative tasks without logging in as the root user. Sudo requires users to authenticate with their own password before running a command with elevated privileges. The sudoers file contains a list of users and groups that are allowed to use the sudo command, as well as the commands they are allowed to run.
# Open a root shell using sudo:
sudo -i
# Explanation: Launches a root shell similar to 'su -' but using sudo, ensuring the environment is clean as a root login session.
# Check current user's sudo permissions and allowed commands:
sudo -l
# Explanation: Lists the commands the current user can run with sudo, providing an audit of permissions.
# Run a specific command as another user:
sudo -u username command
# Explanation: Executes 'command' as the user 'username'. Useful for running scripts or commands under another user's privileges without switching to their account.
# Open a root login shell using su via sudo:
sudo su -
# Explanation: Starts a root login shell; like 'sudo -i', the environment is reset to root's rather than preserving the original user's.
# Open a login shell as another user:
sudo su - username
# Explanation: Starts a login shell as 'username', loading that user's environment as if they had logged in directly.
# Run a command in a shell as root (or another user):
sudo su -c "command"
# Explanation: Executes 'command' in a shell running as root; append a username (sudo su -c "command" username) to run it as that user instead.
# Simulate a full login as the root user:
sudo su -l
# Explanation: Invokes a login shell as root, resetting the environment to what root would see upon a normal login.
# Simulate a full login as another user:
sudo su -l username
# Explanation: Starts a login shell as 'username', mimicking what the user would experience during a standard login, with all startup scripts executed.
# Open a shell as another user, preserving the original user's environment:
sudo -u username -s
# Explanation: Opens a shell as 'username' while maintaining the current user's environment settings.
# Open a login shell as another user:
sudo -i -u username
# Explanation: Initiates a login shell as 'username', similar to 'sudo su - username' but using sudo directly.
# List commands another user can run with sudo:
sudo -l -U username
# Explanation: Displays the commands 'username' can run with sudo (listing another user's privileges requires root), providing an overview of their permissions.
# Check if a command may be run as another user with sudo:
sudo -l -u username command
# Explanation: Verifies whether the invoking user may run 'command' as 'username' via sudo, useful for troubleshooting access issues.
# Invalidate the sudo timestamp:
sudo -k
# Explanation: Invalidates the current sudo timestamp, requiring re-authentication for the next sudo command.
# Remove the sudo timestamp, useful for logout scripts:
sudo -K
# Explanation: Removes the sudo timestamp, useful for scripts that require a clean logout process.
Generate an SSH key pair: Copy the public key to a remote host: Manually copy the public key to a remote host: Disable password authentication - this will require SSH keys for authentication: Other examples and arguments: SSH keys are used to authenticate users and servers when connecting to remote systems over SSH. They provide a secure way to log in to a server without entering a password each time. An SSH key pair consists of a public key and a private key. The public key is placed on the server, while the private key is kept on the client machine. When a user attempts to connect to the server, the server verifies the user's identity by checking the public key against the private key stored on the client machine. If the keys match, the user is granted access to the server.
# -t to specify key type, -b to specify key length, -C to add a comment
# RSA at 4096 bits is a common, widely compatible choice; recent OpenSSH versions default to Ed25519
ssh-keygen -t rsa -b 4096 -C "comment"
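Copying the public key to a remote host, as promised in the heading above (user, host, and key file names are placeholders):

```shell
ssh-copy-id user@host                       # Copy the default public key to a remote host
ssh-copy-id -i ~/.ssh/keyfile.pub user@host # Copy a specific public key
# Manually append the public key to the remote authorized_keys file
cat ~/.ssh/id_rsa.pub | ssh user@host "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
```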
# Edit the SSH configuration file
sudo nano /etc/ssh/sshd_config
# Set PasswordAuthentication to no
PasswordAuthentication no
# Restart the SSH service
sudo systemctl restart sshd
ssh-keygen -t ed25519 # Generate an Ed25519 key
ssh-keygen -t rsa -b 2048 -C "comment" # Generate an RSA key with 2048 bits
ssh-keygen -t rsa -b 4096 -f ~/.ssh/keyfile -C "comment" # Generate an RSA key with 4096 bits and a custom filename
ssh-keygen -p -f ~/.ssh/keyfile # Change the passphrase for a key
ssh-keygen -y -f ~/.ssh/keyfile # Output the public key for a private key
Connect to a remote host: Connect to a remote host on a specific port: Connect to a remote host with a specific identity file: Run a command on a remote host: Copy a file from a remote host: Other examples and arguments: SSH (Secure Shell) is a cryptographic network protocol used to securely connect to remote systems over an unsecured network. It provides a secure channel for data exchange between two devices, allowing users to log in to remote systems, execute commands, and transfer files securely. SSH uses public-key cryptography to authenticate users and encrypt data during transmission. It is widely used in system administration, software development, and network security.
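A sketch of the basic connection commands promised above (user, host, and paths are placeholders):

```shell
ssh user@host                           # Connect to a remote host
ssh -p 2222 user@host                   # Connect on a specific port
ssh -i ~/.ssh/keyfile user@host         # Connect with a specific identity file
ssh user@host "command"                 # Run a command on the remote host
scp user@host:/path/to/file /local/path # Copy a file from a remote host
```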
ssh -l username host # Connect using a specific username, -l for login name
ssh -X user@host # Enable X11 forwarding, useful for GUI applications
ssh -v user@host # Verbose output
ssh -o "ProxyCommand ssh -W %h:%p gateway" user@host # Use a proxy command, e.g. to connect through a gateway
ssh -L 8080:localhost:80 user@host # Local port forwarding, e.g. to access a web server
ssh -R 8080:localhost:80 user@host # Remote port forwarding, e.g. to expose a local service
ssh -D 8080 user@host # Dynamic port forwarding, e.g. to create a SOCKS proxy
Analyzing Host Logs, Processes, and Performance
View system logs: View log entries with timestamps: Filter log entries by priority: Filter log entries by unit (service): Other examples and arguments: Other commands related to logging: Logs are records of events that occur on a system, providing valuable information for troubleshooting, monitoring, and security analysis. System logs contain messages from various components of the operating system, including the kernel, services, and applications. Logs are stored in files located in the /var/log directory and can be viewed using tools like cat, less, or journalctl. The journalctl command is used to query and display logs managed by the systemd journal service, which provides advanced filtering and querying capabilities.
cat /var/log/syslog # View system log - stores messages from system services
cat /var/log/auth.log # View authentication log - records user logins and authentication attempts
cat /var/log/kern.log # View kernel log - contains kernel messages
cat /var/log/dmesg # View kernel ring buffer - displays kernel messages during boot
cat /var/log/messages # View general system messages
cat /var/log/secure # View security log - contains security-related messages
cat /var/log/maillog # View mail log - records mail server activity
cat /var/log/cron # View cron log - logs cron job activity
cat /var/log/boot.log # View boot log - records system boot messages
lastlog # View last login times (/var/log/lastlog is binary; read it with lastlog, not cat)
last -f /var/log/wtmp # View login log (binary file recording login and logout times)
lastb -f /var/log/btmp # View failed login log (binary file recording failed login attempts)
who # View current login sessions (/run/utmp is binary; read it with who)
cat /var/log/audit/audit.log # View audit log - records security events
cat /var/log/yum.log # View YUM log - logs package installation and updates
cat /var/log/httpd/access_log # View Apache access log - records HTTP requests
cat /var/log/httpd/error_log # View Apache error log - records Apache errors
cat /var/log/nginx/access.log # View Nginx access log - records HTTP requests
cat /var/log/nginx/error.log # View Nginx error log - records Nginx errors
cat /var/log/mysql/error.log # View MySQL error log - records MySQL errors
# etc...
journalctl -b # Show logs from the current boot
journalctl -k # Show kernel messages
journalctl -f # Follow log output
journalctl --since "2022-01-01" # Show logs since a specific date
journalctl --until "2022-01-01" # Show logs until a specific date
journalctl --disk-usage # Show disk usage of journal files
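Priority and unit filtering, promised in the heading above (unit names are placeholders):

```shell
journalctl -p err           # Show entries at priority err and above
journalctl -p warning..err  # Show entries within a priority range
journalctl -u nginx.service # Show entries for a specific unit (service)
journalctl -u ssh -f        # Follow entries for a unit in real time
```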
dmesg # Display kernel ring buffer messages
tail -f /var/log/syslog # Follow system log in real-time
tail -f /var/log/auth.log # Follow authentication log in real-time
last # Show last logins
lastb # Show failed login attempts
lastlog # Show last login times
lastcomm # Show last executed commands (requires process accounting to be enabled)
cat ~/.bash_history # View user command history
List running processes: Show process tree: Kill a process by PID: Kill a process by name: Other examples and arguments: Processes are running instances of programs on a system. Each process has a unique process ID (PID) that identifies it and allows it to be managed. The ps command is used to list running processes, while tools like top and htop provide interactive views of system processes. Processes will consume system resources such as CPU, memory, and disk I/O, and can be managed using commands like kill and pkill to stop or terminate them.
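A sketch of the process commands named in the heading above (PID and name are placeholders; pstree may need the psmisc package):

```shell
ps aux             # List all running processes with details
ps -ef             # List all processes in full format
ps aux | grep name # Filter processes by name
pstree             # Show processes as a tree
pgrep name         # Find PIDs by process name
kill PID           # Kill a process by PID (sends SIGTERM)
pkill name         # Kill processes matching a name
```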
View system information: Check system performance: Get entire system information: Check disk usage: List hardware configuration: Monitor system performance: Check network performance: Other examples and arguments: System performance monitoring is essential for maintaining the health and stability of a system. Monitoring tools provide insights into resource usage, bottlenecks, and potential issues that may impact system performance. Commands like top, vmstat, iostat, and iftop provide real-time data on CPU, memory, disk, and network usage. Monitoring disk space, memory usage, and network performance helps identify potential problems and optimize system performance.
uname -a # Display all system information
hostname # Display the system's network name
date # Show the current date and time
cal # Show the calendar
lscpu # List CPU information
lsblk # List block devices
lsmem # List memory information
free -h # Show memory usage
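Monitoring commands for the performance checks described in the heading above (paths are placeholders; iostat and iftop may need to be installed):

```shell
top             # Interactive view of processes and resource usage (press q to quit)
vmstat 1 5      # Report virtual memory statistics every second, 5 times
iostat          # Report CPU and disk I/O statistics (sysstat package)
df -h           # Show disk usage by filesystem, human-readable
du -sh /path    # Show the total size of a directory
iftop           # Monitor network bandwidth by connection
```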
Start a service: Stop a service: Restart a service: Enable a service to start on boot: Disable a service from starting on boot: Check the status of a service: Other examples and arguments: Services are background processes that run on a system to perform specific tasks or provide functionality. Services are managed by the init system, such as systemd on modern Linux distributions. The systemctl command is used to start, stop, restart, enable, disable, and manage services on a Linux system. Monitoring the status of services helps ensure that critical processes are running correctly and that the system is functioning as expected.
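A sketch of the basic service commands named above (service is a placeholder unit name):

```shell
sudo systemctl start service   # Start a service
sudo systemctl stop service    # Stop a service
sudo systemctl restart service # Restart a service
sudo systemctl enable service  # Enable a service to start on boot
sudo systemctl disable service # Disable a service from starting on boot
systemctl status service       # Check the status of a service
```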
sudo systemctl reload service # Reload configuration changes
sudo systemctl mask service # Prevent a service from starting
sudo systemctl unmask service # Allow a masked service to start
sudo systemctl is-active service # Check if a service is active
sudo systemctl is-enabled service # Check if a service is enabled
sudo systemctl is-failed service # Check if a service has failed
Check system uptime: Show system load averages: Display the last system reboot time: Show the current time and date: Other examples and arguments: Uptime refers to the amount of time a system has been running without a reboot. Monitoring uptime provides insights into system stability, performance, and availability. The uptime command displays the current time, system uptime, number of users logged in, and system load averages. Checking system uptime helps identify issues related to system crashes, reboots, and performance degradation.
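A minimal runnable sketch of the uptime-related commands; `uptime -p` and /proc/uptime assume a Linux system with procps installed:

```shell
uptime            # Current time, time up, logged-in users, load averages
uptime -p         # "Pretty" format, e.g. "up 2 weeks, 3 days" (procps-ng)
who -b            # Time of the last system boot
cat /proc/uptime  # Seconds up and seconds idle (Linux)
date              # Current date and time
```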
Kill a process by PID: Kill a process by name: Forcefully kill a process by PID: Other examples and arguments: The kill command is used to terminate processes on a Linux system. Each process has a unique process ID (PID) that can be used to identify and kill it. The kill command sends a signal to a process, instructing it to exit. The default signal sent by kill is SIGTERM, which allows the process to perform cleanup tasks before exiting. The -9 option sends a SIGKILL signal, which forcefully terminates the process without allowing it to clean up.
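A self-contained demonstration of kill using a disposable sleep process, so nothing important gets terminated:

```shell
sleep 300 &                      # Start a disposable background process
pid=$!                           # $! holds the PID of the last background job
kill "$pid"                      # Send SIGTERM (signal 15), the default
wait "$pid" 2>/dev/null || true  # Reap it; the exit status reflects the signal
kill -9 "$pid" 2>/dev/null || echo "process already terminated"
kill -l | head -1                # List available signal names
```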
Interactive process viewer: Sort processes by CPU usage: Sort processes by memory usage: Filter processes by name: Other examples and arguments: htop is an interactive process viewer that provides a real-time overview of system processes and resource usage. htop displays a color-coded list of processes, CPU and memory usage, and system load averages. It allows users to interactively manage processes, sort and filter them by various criteria, and monitor system performance in a user-friendly interface. htop is a popular alternative to the top command for monitoring system processes.
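As a sketch of common htop usage (assuming htop is installed; it is not part of a default install on many distributions):

```shell
htop                 # Launch the interactive viewer (q to quit)
htop -u username     # Show only processes owned by a given user
htop -p pid1,pid2    # Show only the given PIDs
# Inside htop: P sorts by CPU, M sorts by memory, F6 chooses the sort column,
# F4 filters by name, F9 sends a signal to the selected process
```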
Testing Network Connectivity - use with caution/troubleshooting
Testing Network Connectivity: Using Netcat: Using Telnet: Using Nmap: Using MTR (My Traceroute): Testing network connectivity involves checking if a port is open on a host. You can use tools like Netcat, Telnet, MTR, and Nmap to verify port availability. Netcat is a versatile networking utility that can be used to read and write data across network connections. Telnet is a command-line tool that allows you to communicate with a remote host using the Telnet protocol. Nmap is a network scanning tool that can be used to discover hosts and services on a network. MTR combines the capabilities of traceroute and ping by continuously sending packets to a target host and reporting real-time delays and route paths, making it invaluable for diagnosing transient network issues and visualizing network performance.
ping host # Send ICMP echo requests to a host
ping -c 4 host # Send 4 ICMP echo requests to a host
traceroute host # Trace the route to a host
mtr host # Network diagnostic tool that combines ping and traceroute
# -z stands for zero-I/O mode (used for scanning), -v for verbose output
nc -zv 10.11.12.13 80 # Check if a specific port is open on a host
nc -zv example.com 2222 # Check if a specific port is open on a host
nc -zv 10.0.0.1 22-80 # Check if a range of ports is open on a host
telnet 192.168.1.1 5000 # Connect to a host on a specific port
telnet example.com 443 # Connect to a host on a specific port
nmap -p 80 example.com # Check if a port is open on a host
nmap -p 1-1000 10.11.12.13 # Check if a range of ports is open on a host
nmap -p 1-1000 -sT host # Perform a TCP connect scan on a range of ports
nmap -p 1-1000 -sU host # Perform a UDP scan on a range of ports
nmap -p 1-1000 -sS host # Perform a SYN scan on a range of ports, SYN scan is stealthier than a TCP connect scan
# Displays Host (IP address or hostname of each router or link along the path to the destination),
# Loss% (percentage of lost packets at each hop), Snt(# of packets sent to each hop), last (latency of last packet),
# Avg (avg latency of all packets sent to that hop), Best (the best/lowest latency observed for a packet to this hop)
# Wrst (worst/highest latency observed), StDev (Std deviation of the latencies, e.g. variability in response times)
# 1. Basic usage to run mtr to a specific host (e.g., google.com)
mtr google.com
# 2. Use mtr with the IP address instead of the domain name
mtr 8.8.8.8
# 3. Run mtr with report mode which provides a summary after a set number of pings
mtr --report google.com
# 4. Set the number of pings in report mode to 10
mtr --report --report-cycles 10 google.com
# 5. Use mtr in verbose mode to get more detailed output
mtr --verbose google.com
# 6. Display numeric IP addresses instead of hostnames
mtr --no-dns google.com
# 7. Specify the size of the probing packets (e.g., 1200 bytes)
mtr --packet-size 1200 google.com
# 8. Change the interval between pings to 2 seconds (default is 1 second)
mtr --interval 2 google.com
# 9. Use mtr to generate a split report showing both hostnames and IP addresses
mtr --split google.com
# 10. Show the TCP mode of mtr using a specific port (e.g., 80 for HTTP)
mtr --tcp --port 80 google.com
Using nslookup: Using dig: Advanced Usage: Other Examples: The Domain Name System (DNS) is a hierarchical decentralized naming system for computers, services, or other resources connected to the Internet or a private network. DNS translates domain names to IP addresses, allowing users to access websites and other resources using human-readable names. Tools like nslookup and dig can be used to perform DNS lookups and query DNS servers for information about domain names.
nslookup example.com # Perform a DNS lookup for a domain
nslookup 192.168.1.1 # Perform a reverse DNS lookup for an IP address
nslookup example.com server # Perform a DNS lookup for a domain using a specific server
nslookup -type=mx example.com # Perform a DNS lookup for the MX records of a domain
nslookup -type=ns example.com # Perform a DNS lookup for the NS records of a domain
dig example.com # Perform a DNS lookup for a domain
dig -x 192.168.1.1 # Perform a reverse DNS lookup for an IP address
dig domain mx # Perform a DNS lookup for the MX records of a domain
dig domain ns # Perform a DNS lookup for the NS records of a domain
dig @server example.com # Perform a DNS lookup for a domain using a specific server
dig +trace example.com # Perform a DNS trace lookup for a domain
dig +short example.com # Perform a short DNS lookup for a domain
dig +noall +answer example.com # Perform a DNS lookup and display only the answer section
dig +noall +answer +comments example.com # Perform a DNS lookup and display only the answer section with comments
nslookup -type=txt example.com # Perform a DNS lookup for the TXT records of a domain
nslookup -type=soa example.com # Perform a DNS lookup for the SOA records of a domain
dig domain any # Perform a DNS lookup for all records of a domain
dig domain aaaa # Perform a DNS lookup for the AAAA records of a domain
Using curl: Using wget: Advanced Usage: Other Examples: Advanced POST Requests: HTTP requests are used to communicate with web servers and retrieve information from websites. Tools like curl and wget can be used to send HTTP requests and download files from URLs. Curl is a command-line tool that supports various request methods, headers, cookies, and authentication methods. Wget is a command-line tool that can download files from the web and supports resuming partial downloads, recursive downloads, and downloading prerequisites.
# -X specifies the request method, -d specifies the data to send, -H specifies the headers to include
# -b specifies the cookie to send, -c specifies the cookie to save,
# -u specifies the username and password for authentication,
# -I sends a HEAD request, -v enables verbose output, -k allows insecure SSL connections,
# -L follows redirects, -O saves the output to a file, -X POST -d 'data' sends a POST request with data,
# -X POST -H 'Content-Type: application/json' -d '{"key": "value"}' sends a POST request with JSON data
curl http://example.com:8080 # Send an HTTP GET request to a URL on a specific port
curl -X POST http://example.com:8080 # Send an HTTP POST request to a URL on a specific port
curl http://localhost:5000 # Send an HTTP GET request to a URL on a specific port
curl http://example.com # Send an HTTP GET request to a URL
curl -X POST http://example.com # Send an HTTP POST request to a URL
curl -X PUT http://example.com # Send an HTTP PUT request to a URL
curl -X DELETE http://example.com # Send an HTTP DELETE request to a URL
curl -I http://example.com # Send a HEAD request to a URL and display the headers
curl -v http://example.com # Send a request to a URL and display verbose output
curl -k https://example.com # Send a request to a URL with insecure SSL
curl -k https://localhost:5000 # Send an HTTPS GET request to a URL on a specific port
curl -L http://example.com # Follow redirects when sending a request to a URL
curl -O http://example.com/path/to/file.zip # Download a file from a URL
wget url # Download a file from a URL
wget -O output url # Download a file from a URL and save it as a specific name
wget -c url # Resume a partial download
wget -r url # Download a URL recursively
wget -p url # Download a URL and its prerequisites
curl -u username:password url # Send an authenticated request to a URL
curl -H 'Header: Value' url # Send a request with a custom header
curl -d 'data' url # Send a POST request with data
curl -F 'key=value' url # Send a POST request with form data
curl -b 'cookie' url # Send a request with a cookie
curl -s url | jq . # Send a request to a URL and format the JSON output using jq
curl -s url | python -m json.tool # Send a request to a URL and format the JSON output using Python
wget -qO- url | jq . # Download a file from a URL and format the JSON output using jq
wget -qO- url | python -m json.tool # Download a file from a URL and format the JSON output using Python
curl -X POST -d 'data' url # Send a POST request with data
curl -X POST -H 'Content-Type: application/json' -d '{"key": "value"}' url # Send a POST request with JSON data
wget --no-check-certificate url # Download a file from a URL without certificate validation
wget --user=username --password=password url # Download a file from a URL with authentication
Using OpenSSL: Using OpenSSL to Verify Certificates: Using OpenSSL to Generate Certificates: Using OpenSSL to Convert Certificates: Retrieve the public key/public certificate from a server: SSL certificates are used to secure communication between clients and servers over the internet. OpenSSL is a command-line tool for working with SSL certificates: you can use it to connect to a host on a specific port, display certificate information, generate new certificates, and convert certificates between formats. SSL/TLS is fundamental to secure network communication, and problems such as expired certificates, mismatched domain names, and unsupported cipher suites are common causes of service interruptions and security vulnerabilities, so knowing how to inspect certificates is a core troubleshooting skill.
openssl s_client -connect host:port # Connect to a host on a specific port
openssl s_client -connect host:port -showcerts # Connect to a host on a specific port and display certificates
openssl s_client -connect host:port -servername example.com # Connect to a host on a specific port with SNI
openssl s_client -connect host:port -servername example.com -showcerts # Connect to a host on a specific port with SNI and display certificates
openssl x509 -in certificate.crt -text # Display information about a certificate
openssl x509 -in certificate.crt -noout -text # Display information about a certificate without the header and footer
openssl x509 -in certificate.crt -noout -issuer # Display the issuer of a certificate
openssl x509 -in certificate.crt -noout -subject # Display the subject of a certificate
openssl x509 -in certificate.crt -noout -dates # Display the validity dates of a certificate
openssl x509 -in certificate.crt -noout -enddate # Display the expiration date of a certificate
openssl x509 -in certificate.crt -noout -text | grep DNS # Display the DNS names in the certificate
openssl req -new -newkey rsa:2048 -nodes -keyout key.pem -out csr.pem # Generate a new private key and CSR
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365 # Generate a self-signed certificate
openssl genrsa -out key.pem 2048 # Generate a new RSA private key
openssl req -new -key key.pem -out csr.pem # Generate a CSR using an existing private key
openssl x509 -req -in csr.pem -signkey key.pem -out cert.pem # Sign a CSR with an existing private key
openssl x509 -in certificate.crt -out certificate.pem # Convert a certificate from CRT to PEM format
openssl x509 -in certificate.pem -out certificate.crt # Convert a certificate from PEM to CRT format
openssl x509 -in certificate.pem -out certificate.der -outform DER # Convert a certificate from PEM to DER format
# echo -n will suppress the newline character at the end of the output, sed will extract the certificate between the BEGIN and END lines
echo -n | openssl s_client -connect example.com:443 2>/dev/null | sed -n '/-----BEGIN CERTIFICATE-----/,/-----END CERTIFICATE-----/p' > example.com.crt
# Or, in Chrome, click on the padlock icon in the address bar, then click on "Certificate (Valid)" to view the
# certificate details. Click on the "Details" tab and then "Copy to File" or "Export" to save the certificate.
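To practice certificate troubleshooting without touching a live server, you can generate a throwaway self-signed certificate and inspect its expiry with the same x509 options; the /tmp paths and the CN are arbitrary:

```shell
# Generate a throwaway self-signed certificate, then inspect it the same way
# you would inspect one retrieved from a server
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-key.pem \
  -out /tmp/demo-cert.pem -days 30 -subj "/CN=demo.example"
openssl x509 -in /tmp/demo-cert.pem -noout -enddate   # Show the notAfter expiry date
openssl x509 -in /tmp/demo-cert.pem -noout -subject   # Show the subject (CN=demo.example)
openssl x509 -in /tmp/demo-cert.pem -noout -checkend 86400 \
  && echo "certificate valid for at least one more day"
```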
Network Configuration
Checking Network Interfaces: Checking Routing Tables: Checking DNS Configuration: Checking Network Statistics: Checking Listening Ports: Netstat Examples: Configuring Network Interfaces: Networking commands can be used to check network interfaces, routing tables, DNS configuration, and network statistics. Commands like ifconfig, ip, and netstat can provide information about network interfaces, IP addresses, routing tables, and network statistics. Note that ifconfig, route, and netstat come from the legacy net-tools package; ip and ss are their modern replacements. Understanding these commands can help troubleshoot network connectivity issues and configure network settings.
ifconfig # Display network interface configuration
ip addr show # Display IP address information
ip link show # Display link layer information
route # Display routing table
ip route show # Display IP routing table
netstat -r # Display routing table
ss -tulwn # List all listening TCP and UDP ports
netstat -tuln # List all listening TCP and UDP ports
lsof -i -P -n | grep LISTEN # List all listening ports
netstat -a # Display all listening and non-listening sockets
netstat -l # Display all listening sockets
netstat -t # Display all TCP connections
netstat -u # Display all UDP connections
netstat -n # Display numerical addresses instead of resolving hostnames
netstat -p # Display the PID and name of the program to which each socket belongs
netstat -c # Display continuously updated information
netstat -i # Display a table of network interfaces and their statistics
netstat -r # Display the kernel routing table
netstat -s # Display network statistics
netstat -tuln # Display all listening TCP and UDP ports
netstat -tulnp # Display all listening TCP and UDP ports with the PID and name of the program
ifconfig interface ip_address netmask mask # Configure a network interface with an IP address and netmask
ip addr add ip_address/mask dev interface # Configure a network interface with an IP address and netmask
ip addr del ip_address/mask dev interface # Remove an IP address from a network interface
ip link set interface up # Bring a network interface up
ip link set interface down # Bring a network interface down
Using iptables: Using firewalld: UFW (Uncomplicated Firewall): Using iptables for Port Forwarding: Using firewalld for Port Forwarding: Firewalls are used to control incoming and outgoing network traffic based on a set of security rules. Tools like iptables, firewalld, and UFW can be used to configure and manage firewall rules on Linux systems. Iptables is a command-line utility that allows system administrators to configure the IP packet filter rules of the Linux kernel firewall. Firewalld is a dynamic firewall manager that provides a way to configure firewall rules in a more user-friendly way. UFW (Uncomplicated Firewall) is a front-end for iptables that simplifies the process of configuring a firewall.
iptables -L # List all firewall rules
iptables -A INPUT -s ip_address -j DROP # Block incoming traffic from a specific IP address
iptables -A INPUT -p tcp --dport port -j DROP # Block incoming traffic on a specific port
iptables -A INPUT -s ip_address -p tcp --dport port -j DROP # Block incoming traffic from a specific IP address on a specific port
iptables -A INPUT -s ip_address -p tcp --dport port -j ACCEPT # Allow incoming traffic from a specific IP address on a specific port
iptables -A INPUT -s ip_address -p tcp --dport port -j REJECT # Reject incoming traffic from a specific IP address on a specific port
iptables -A INPUT -s ip_address -p tcp --dport port -j LOG # Log incoming traffic from a specific IP address on a specific port
iptables -A INPUT -s ip_address -p tcp --dport port -j REJECT --reject-with tcp-reset # Reject incoming traffic from a specific IP address on a specific port with a TCP reset
firewall-cmd --list-all # List all firewall rules
firewall-cmd --zone=public --add-port=port/tcp --permanent # Allow incoming traffic on a specific port
firewall-cmd --zone=public --remove-port=port/tcp --permanent # Remove incoming traffic on a specific port
firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="ip_address" port port="port" protocol="tcp" reject' --permanent # Block incoming traffic from a specific IP address on a specific port
firewall-cmd --reload # Reload firewall rules
ufw status # Display the status of UFW
ufw enable # Enable UFW
ufw disable # Disable UFW
ufw allow port # Allow incoming traffic on a specific port
ufw deny port # Deny incoming traffic on a specific port
ufw allow from ip_address to any port port # Allow incoming traffic from a specific IP address on a specific port
ufw deny from ip_address to any port port # Deny incoming traffic from a specific IP address on a specific port
iptables -t nat -A PREROUTING -p tcp --dport port -j DNAT --to-destination internal_ip:port # Forward incoming traffic on a specific port to an internal IP address and port
iptables -t nat -A POSTROUTING -s internal_ip -j SNAT --to-source external_ip # Change the source IP address of outgoing traffic to an external IP address
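firewalld's port-forwarding equivalents of the iptables rules above can be sketched as follows; this assumes a running firewalld daemon, and internal_ip is a placeholder:

```shell
firewall-cmd --zone=public --add-forward-port=port=80:proto=tcp:toport=8080 --permanent                    # Forward port 80 to 8080 on the same host
firewall-cmd --zone=public --add-forward-port=port=80:proto=tcp:toport=8080:toaddr=internal_ip --permanent # Forward port 80 to another host
firewall-cmd --zone=public --add-masquerade --permanent  # Enable masquerading (required when forwarding to another host)
firewall-cmd --reload                                    # Apply the permanent rules
```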
Using iftop: Using nload: Using iptraf: Using vnstat: Using tcpdump: Other Examples: Other Tools: Network monitoring tools can be used to monitor network traffic, bandwidth usage, and network statistics. Tools like iftop, nload, iptraf, vnstat, and tcpdump can provide real-time and historical data about network activity. Monitoring network traffic can help identify performance issues, security threats, and abnormal behavior on the network.
iftop # Display bandwidth usage on an interface
iftop -i interface # Display bandwidth usage on a specific interface
nload # Display network traffic in real-time
nload -u K # Display network traffic in kilobytes
nload -u M # Display network traffic in megabytes
iptraf # Display network statistics
iptraf -i interface # Display network statistics for a specific interface
vnstat # Display network traffic statistics
vnstat -i interface # Display network traffic statistics for a specific interface
tcpdump -i interface # Capture and display network packets on an interface
tcpdump -i interface -c count # Capture and display a specific number of network packets on an interface
tcpdump -i interface -w output.pcap # Capture and save network packets to a file
tcpdump -r input.pcap # Read and display network packets from a file
tcpdump -i interface host ip_address # Capture and display network packets from a specific IP address
tcpdump -i interface port port # Capture and display network packets on a specific port
tcpdump -i interface src ip_address # Capture and display network packets with a specific source IP address
tcpdump -i interface dst ip_address # Capture and display network packets with a specific destination IP address
Regular Expressions
Basic Syntax: Character Classes: Quantifiers: Common Patterns: Demonstration: Regular expressions (regex) provide a powerful way to search and manipulate strings using a declarative syntax. They are used for pattern matching and text processing in various programming and scripting languages. Understanding basic regex syntax is crucial for performing complex text searches and manipulations efficiently. These patterns form the foundation of regex and are essential for various scripting and programming tasks.
. # Matches any single character except newline
# Example: grep 'h.t' file.txt to find 'hat', 'hot', etc.
^ # Matches the start of a string
# Example: grep '^start' file.txt to find lines starting with 'start'
$ # Matches the end of a string
# Example: grep 'end$' file.txt to find lines ending with 'end'
[abc] # Matches any one character from the set {a, b, c}
# Example: grep '[abc]' file.txt to find lines with 'a', 'b', or 'c'
[^abc] # Matches any one character not in the set {a, b, c}
# Example: grep '[^abc]' file.txt to find lines without 'a', 'b', or 'c'
\d # Matches any decimal digit; equivalent to [0-9] (Perl-compatible; plain grep needs [0-9])
# Example: grep -P '\d' file.txt to find lines with digits
\D # Matches any non-digit character; equivalent to [^0-9] (Perl-compatible; plain grep needs [^0-9])
# Example: grep -P '\D' file.txt to find lines containing non-digit characters
\w # Matches any alphanumeric character; equivalent to [a-zA-Z0-9_]
# Example: grep '\w' file.txt to find lines with word characters
\W # Matches any non-alphanumeric character; equivalent to [^a-zA-Z0-9_]
# Example: grep '\W' file.txt to find lines with non-word characters
\s # Matches any whitespace character; equivalent to [ \t\r\n\f]
# Example: grep '\s' file.txt to find lines with whitespace characters
\S # Matches any non-whitespace character; equivalent to [^ \t\r\n\f]
# Example: grep '\S' file.txt to find lines containing non-whitespace characters (non-blank lines)
* # Matches zero or more repetitions of the preceding element
# Example: grep 'ab*' file.txt to find 'a' followed by zero or more 'b's (note that 'a*' alone matches every line)
+ # Matches one or more repetitions of the preceding element (extended regex; use grep -E)
# Example: grep -E 'a+' file.txt to find lines with one or more 'a's
? # Matches zero or one repetition of the preceding element (extended regex; use grep -E)
# Example: grep -E 'ab?' file.txt to find 'a' optionally followed by 'b'
{n} # Matches exactly n occurrences of the preceding element (extended regex; use grep -E)
# Example: grep -E 'a{2}' file.txt to find lines with two consecutive 'a's
{n,} # Matches n or more occurrences of the preceding element
# Example: grep -E 'a{2,}' file.txt to find lines with two or more consecutive 'a's
{n,m} # Matches between n and m occurrences of the preceding element
# Example: grep -E 'a{2,4}' file.txt to find lines with 2 to 4 consecutive 'a's
^\d+ # Matches a line beginning with one or more digits
# Example: grep -P '^\d+' file.txt to find lines starting with numbers
\w+$ # Matches one or more word characters at the end of a line
# Example: grep -P '\w+$' file.txt to find lines ending with word characters
\b\w+\b # Matches whole words only
# Example: grep -P '\b\w+\b' file.txt to find complete words
# Demonstration 1: Using basic regex patterns with `grep`
echo "find all lines that start with a number" | grep '^[0-9]'
echo "exclude lines that start with a hashtag" | grep -v '^#'
ls | grep '\.txt$' # List files ending with .txt
echo "select words starting with 'a'" | grep '\<a\w*\>'
echo "find empty lines" | grep '^$'
# Demonstration 2: Using basic regex in `sed`
echo "replace 'cat' with 'dog'" | sed 's/cat/dog/'
echo "delete digits from input" | sed 's/[0-9]//g'
echo "capitalize words starting with 'm'" | sed 's/\bm\w*/\u&/g'
echo "delete all whitespace" | sed 's/\s//g'
echo "append 'end' at the end of each line" | sed 's/$/ end/'
# Demonstration 3: Using basic regex in `awk`
echo "extract first word of each line" | awk '{print $1}'
echo "delete first word of each line" | awk '{$1=""; print $0}'
echo "print lines longer than 20 characters" | awk 'length($0) > 20'
echo "print lines where the second field is greater than 100" | awk '$2 > 100'
echo "replace commas with semicolons" | awk '{gsub(/,/, ";"); print}'
Advanced Matching: Lookaheads and Lookbehinds: Common Patterns: Detailed Examples: Demonstration: This intermediate regex guide provides more complex examples and common patterns used in data validation and parsing tasks. Understanding these patterns is essential for efficient text manipulation and validation in various applications.
(abc) # Captures a group. Matches the characters abc and saves them as a group.
# Example: grep -E '(abc)' file.txt to find lines with 'abc' (in basic regex, write \(abc\))
\1 # Backreference to the first captured group. Matches the same text as previously matched by the first group.
# Example: grep '\(abc\).*\1' file.txt to find lines with 'abc' followed by the same text
[a-z] # Matches any lowercase letter from a to z.
# Example: grep '[a-z]' file.txt to find lines with lowercase letters
[^a-z] # Matches any character that is not a lowercase letter.
# Example: grep '[^a-z]' file.txt to find lines without lowercase letters
(a|b) # Matches either a or b.
# Example: grep -E '(yes|no)' file.txt to find lines with 'yes' or 'no'
(?:abc) # Non-capturing group. Matches the characters abc but does not capture the group.
# Example: grep -P '(?:abc)' file.txt to find 'abc' without capturing it
^\w+@\w+$ # Matches email-like patterns, ensuring they start with word characters, followed by an '@' and more word characters.
# Example: grep -P '^\w+@\w+$' file.txt to find email-like patterns
(?=abc) # Positive lookahead. Asserts that what immediately follows the current position in the string is abc. Lookarounds require Perl-compatible regex (grep -P).
# Example: grep -P 'foo(?=bar)' file.txt to find 'foo' only when followed by 'bar'
(?!abc) # Negative lookahead. Asserts that what immediately follows the current position is not abc.
# Example: grep -P 'foo(?!bar)' file.txt to find 'foo' not followed by 'bar'
(?<=abc) # Positive lookbehind. Asserts that what immediately precedes the current position in the string is abc.
# Example: grep -P '(?<=abc)def' file.txt to find 'def' preceded by 'abc'
(?<!abc) # Negative lookbehind. Asserts that what immediately precedes the current position is not abc.
# Example: grep -P '(?<!abc)def' file.txt to find 'def' not preceded by 'abc'
\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3} # Matches IP addresses like 192.168.1.1
[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,} # Matches email addresses
\+?\d{1,3}[-.\s]?\(?\d{1,3}\)?[-.\s]?\d{1,4}[-.\s]?\d{1,4}[-.\s]?\d{1,9} # International phone numbers w. optional country code
^\d+$ # Matches lines that contain only digits.
\b\w{6}\b # Matches exactly six alphanumeric characters surrounded by word boundaries.
\d+,? # Matches one or more digits followed by an optional comma.
\d{2,4} # Matches between 2 and 4 digits.
\w+@\w+\.\w+ # Basic pattern for matching email addresses.
https?://(?:www\.)?\w+\.\w+ # Matches HTTP and HTTPS URLs.
^\s+|\s+$ # Matches leading and trailing whitespace.
"([^"]*)" # Matches a string inside double quotes.
\b\d{2}-\d{2}-\d{4}\b # Matches dates in the format DD-MM-YYYY.
[a-zA-Z]+:\/\/[^\s]* # Matches URLs like http://example.com.
\.(jpg|png|gif)$ # Matches file extensions for images.
\d{3}-\d{2}-\d{4} # Matches a Social Security number like 123-45-6789.
\b([01]?\d|2[0-3]):([0-5]?\d)\b # Matches time in HH:MM format, 24-hour clock.
[^\w\s] # Matches any non-word, non-space character.
\b[aeiouAEIOU] # Matches any word beginning with a vowel.
[^b]at # Matches 'at' preceded by any character except 'b'.
\d{5}-\d{4} # Matches U.S. ZIP+4 codes like 90210-1234.
(?i)hello # Case-insensitive matching of 'hello'.
\b(?!\d+\b)\w+ # Matches whole words that do not consist solely of digits.
# Demonstration 1: Using intermediate regex patterns with `grep`
# Using -P Perl-compatible regex to match email addresses, -o flag to print only the matched text.
echo "match email addresses" | grep -Po '[\w.%+-]+@[\w.-]+\.[a-zA-Z]{2,6}'
echo "extract all IP addresses" | grep -Po '\b(?:\d{1,3}\.){3}\d{1,3}\b'
echo "find lines with exactly three words" | grep -P '^\w+\s\w+\s\w+$'
echo "capture words after 'user:'" | grep -Po '(?<=user:)\w+'
echo "match multiline patterns" | grep -Pzo '(?s)start.*?end'
# Demonstration 2: Using intermediate regex in `sed`
echo "rename file extensions from .htm to .html" | sed -r 's/\.htm$/\.html/'
echo "swap first two words in a line" | sed -r 's/^(\w+)\s(\w+)/\2 \1/'
echo "mask first part of email" | sed -r 's/[[:alnum:]._%+-]+@/***@/'
echo "delete comments from code" | sed -r '/^\s*#.*$/d'
echo "convert CSV to tab-separated" | sed -r 's/,/\t/g'
# Demonstration 3: Using intermediate regex with `awk`
echo "validate date format MM/DD/YYYY" | awk '/^[0-9]{2}\/[0-9]{2}\/[0-9]{4}$/' # awk has no \d; use [0-9]
echo "sum the numbers in text" | awk '{s=0; while(match($0, /[0-9]+/)) {s+=substr($0, RSTART, RLENGTH); $0=substr($0, RSTART+RLENGTH);} print s}'
echo "extract last field after delimiter ':'" | awk -F':' '{print $NF}'
echo "split lines into columns by comma" | awk -F, '{print $1, $2, $3}'
echo "filter lines where field 5 is not 'NULL'" | awk '$5 != "NULL"'
Advanced Regex with Command-line Tools: Combining grep with sed and awk: Using regex with find: Complex Grep Examples: Regex in log analysis: Advanced Pattern Matching: This advanced regex guide demonstrates the use of regex in combination with other Unix command-line tools to perform sophisticated text processing tasks, highlighting their applicability in data extraction, log analysis, and system administration.
grep -P '^\d{3}-\d{2}-\d{4}$' file.txt # Uses Perl-compatible regex to match Social Security numbers formatted as 123-45-6789 in a file.
echo "test data" | grep -oP '\b\w+\b' # Prints each word on a new line using Perl regex.
grep -P '^(?=.*\d)(?=.*[a-z])(?=.*[A-Z]).{8,}$' users.txt # Matches passwords that contain at least one digit, one lowercase, one uppercase letter, and are at least 8 characters long.
grep -P '\b\d{3}-\d{3}-\d{4}\b' file.txt | sed 's/-/./g' # Finds phone numbers and replaces dashes with dots.
grep -Po 'https?://\S+' web.log | awk '{print $1}' | sort | uniq -c # Extracts URLs, prints them, sorts, and counts unique occurrences.
grep -Po '(?<=id=")\d+' file.html # Extracts numeric IDs following 'id="' using a positive lookbehind.
grep -Po 'class="[^"]+"' file.css | cut -d'"' -f2 # Extracts class names from a CSS file.
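The headings above also mention using regex with find; here is a runnable sketch with GNU find (the -regextype option is a GNU extension, and /tmp/redemo is a throwaway directory):

```shell
# Using regex with GNU find: -regex matches against the whole path, not just the file name
mkdir -p /tmp/redemo && touch /tmp/redemo/a.jpg /tmp/redemo/b.png /tmp/redemo/notes.txt
find /tmp/redemo -regextype posix-extended -regex '.*\.(jpg|png|gif)$'  # Match image files by extension
find /tmp/redemo -regextype posix-extended -regex '.*/[a-z]\.jpg'       # Single-letter .jpg names
find /tmp/redemo -name '*.txt'                                          # Plain glob, for comparison
```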
Scripting Intro & Concepts
Introduction to Bash Scripting: Variables and Data Types: Control Structures: Basic Functions: Combining Scripts and Functions: Loops, Conditions, and Case Statements: This guide provides a foundation for bash scripting, introducing you to scripts, variables, control structures, and functions. It aims to equip you with the basics needed to start writing useful bash scripts for automation and system management.
#!/bin/bash
# This line is called a shebang. It tells the system this file is a bash script.
echo "Hello, World!" # Prints "Hello, World!" to the console.
greeting="Welcome to bash scripting!"
user=$(whoami) # Command substitution, captures the output of 'whoami'.
echo "$greeting, $user!" # Displays the greeting with the user name.
# If statement
if [ "$user" == "root" ]; then
echo "You are the root user."
else
echo "You are not the root user."
fi
# For loop
for i in {1..5}
do
echo "Iteration $i"
done
# Defining a function
function greet {
echo "Hello, $1" # $1 is a positional parameter, representing the first argument passed to the function.
}
# Calling a function
greet "Visitor"
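The loop forms named above also include while and until, which are not shown; a minimal sketch:

```shell
# While loop: repeats as long as the test succeeds.
count=1
while [ "$count" -le 3 ]; do
    echo "while pass $count"
    count=$((count + 1))
done
# Until loop: the mirror image, repeats as long as the test fails.
until [ "$count" -gt 5 ]; do
    echo "until pass $count"
    count=$((count + 1))
done
```

Use while when you know the continue condition, until when the stop condition reads more naturally.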
Parameter Passing and User Input: Advanced Function Usage: Conditional Execution: This intermediate bash scripting guide deepens your understanding of parameter handling, user interaction, and advanced function definitions, essential for writing more complex scripts and managing runtime conditions effectively.
#!/bin/bash
# Accessing passed arguments with special variables.
echo "Script Name: $0" # Displays the script's filename.
echo "First Parameter: $1" # Displays the first passed parameter.
echo "Second Parameter: $2" # Displays the second passed parameter.
echo "All Parameters: $@" # Displays all passed parameters.
echo "Total Number of Parameters: $#" # Displays the number of parameters passed.
# Reading user input during script execution.
read -p "Enter your name: " name # Prompts the user to enter their name.
echo "Hello, $name!" # Greets the user with the entered name.
# A function that calculates the sum of two numbers.
function add {
local sum=$(( $1 + $2 )) # Adds the first and second arguments passed to the function.
echo "Sum: $sum" # Outputs the sum.
}
# Calling the function with user-provided arguments.
add 5 7 # Calls the add function with 5 and 7 as arguments.
# A function to check if a file exists.
function check_file {
local file="$1" # Takes the first argument as the filename.
if [ -e "$file" ]; then # Checks if the file exists.
echo "File exists."
else
echo "File does not exist."
fi
}
# Using the function to check for a specific file.
check_file "/path/to/your/file.txt" # Replace with the actual file path you want to check.
# Using logical conditions with if statements.
if [ "$1" -gt "$2" ]; then # Checks if the first parameter is greater than the second (quoted so empty arguments fail cleanly).
echo "$1 is greater than $2"
elif [ "$1" -eq "$2" ]; then # Checks if the parameters are equal.
echo "$1 is equal to $2"
else
echo "$1 is less than $2"
fi
# Using a case statement to respond based on user input.
read -p "Do you like bash scripting? (yes/no): " answer # Asks the user if they like bash scripting.
case $answer in
yes|YES|Yes)
echo "That's great!"
;;
no|NO|No)
echo "That's okay, it's not for everyone."
;;
*)
echo "Please answer yes or no."
;;
esac
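Conditional execution also covers the short-circuit operators && and ||, which chain commands without a full if statement; a brief sketch (paths are illustrative):

```shell
# && runs the right-hand command only if the left one succeeds.
mkdir -p /tmp/demo_dir && echo "Directory is ready."
# || runs the right-hand command only if the left one fails.
[ -e /tmp/no_such_file ] || echo "File is missing."
# Combined: one message on success, another on failure.
grep -q "^root:" /etc/passwd && echo "root entry found" || echo "no root entry"
```

These are handy one-liners, but note the combined form runs the || branch if *either* preceding command fails; prefer if/else when that distinction matters.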
Advanced Data Structures and Script Optimization: Complex Function Definitions: Signal Handling and Script Debugging: This advanced bash scripting guide covers complex data structures, deeper function usage, signal handling, and debugging techniques. These concepts are pivotal for developing robust, efficient, and maintainable bash scripts that handle complex tasks.
#!/bin/bash
# Working with arrays.
fruits=('apple' 'banana' 'cherry') # Defines an array of fruits.
echo "${fruits[0]}" # Accesses the first element of the array.
fruits[3]='orange' # Adds another element to the array.
echo "${fruits[@]}" # Prints all elements of the array.
echo "${#fruits[@]}" # Prints the number of elements in the array.
# Associative arrays (hash maps).
declare -A capitals # Declares an associative array.
capitals["France"]="Paris" # Adds an element to the associative array.
capitals["Germany"]="Berlin" # Adds another element.
for country in "${!capitals[@]}"; do # Iterates over keys of the associative array.
echo "The capital of $country is ${capitals[$country]}"
done
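A detail worth knowing with associative arrays is how to test whether a key exists; a sketch (the -v test on array elements needs bash 4.3+):

```shell
declare -A capitals=([France]=Paris [Germany]=Berlin)
if [[ -v capitals[France] ]]; then       # True when the key is present.
    echo "France is present"
fi
echo "${capitals[Spain]:-unknown}"       # Fall back to a default for a missing key.
```

The `${var:-default}` expansion also keeps scripts using `set -u` from aborting on absent keys.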
# A function that uses recursion to calculate factorial.
function factorial {
local num=$1 # Local scope for the function's argument.
if [ $num -le 1 ]; then # Base case: factorial of 1 or 0 is 1.
echo 1
else
echo $(( num * $(factorial $((num - 1))) )) # Recursion step.
fi
}
echo "Factorial of 5: $(factorial 5)" # Calls the factorial function.
# A function to process text files.
function process_files {
local file=$1
while IFS= read -r line; do # Reads a file line by line; -r keeps backslashes literal, IFS= keeps leading whitespace.
echo "Processing: $line"
done < "$file"
}
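A quick way to exercise a line-by-line processor like process_files is with a throwaway input file; the function is repeated here so the sketch is self-contained, and the path is illustrative:

```shell
process_files() {
    local file=$1
    while IFS= read -r line; do   # -r keeps backslashes; IFS= keeps leading spaces.
        echo "Processing: $line"
    done < "$file"
}
printf 'alpha\nbeta\n' > /tmp/sample.txt   # Throwaway sample input.
process_files /tmp/sample.txt
```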
# Trap signals.
trap 'echo "Signal SIGHUP received";' SIGHUP # Traps SIGHUP signal.
trap 'echo "Exiting..."; exit;' SIGINT # Traps SIGINT signal and exits.
# Set options for script debugging.
set -e # Exit immediately if any command returns a nonzero status.
set -u # Treat use of an unset variable as an error and exit.
set -x # Prints commands and their arguments as they are executed.
set -o pipefail # Causes a pipeline to return the exit status of the last command in the pipe that failed.
# Debugging example.
debug_function() {
echo "Starting debug..."
set -x # Enable debugging.
local temp=$1
echo "Value: $temp"
set +x # Disable debugging.
}
debug_function "Test"
Script Examples & Cron Jobs
System Health Check Script: Backup and Restore Script: Network Scanner Script: These scripts are tailored for real-world applications in DevOps, engineering, and cybersecurity. They provide practical solutions for system monitoring, data management, and network scanning to enhance operational efficiency and security.
#!/bin/bash
# This script checks the health of your server by inspecting disk usage, load average, and system uptime.
echo "Checking disk usage..."
df -h | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }' | while read -r output;
do
echo $output
usep=$(echo $output | awk '{ print $1}' | cut -d'%' -f1 )
partition=$(echo $output | awk '{ print $2 }' )
if [ $usep -ge 90 ]; then
echo "Running out of space \"$partition ($usep%)\" on $(hostname) as on $(date)"
fi
done
echo "Checking load average..."
uptime | awk -F'load average:' '{print $2}' # Splitting on the label is more reliable than counting colons, which varies with uptime.
echo "Checking system uptime..."
uptime | awk '{print $3,$4}' | cut -f1 -d,
#!/bin/bash
# This script creates a compressed backup of a specified directory and can restore it.
function backup {
tar czf "/backup/$(date +%Y%m%d_%H%M%S)_$(basename "$1").tar.gz" "$1" # basename keeps slashes in the source path out of the archive filename.
echo "Backup of $1 completed successfully."
}
function restore {
tar xzf "$1" -C "$2"
echo "Restore completed successfully."
}
case $1 in
backup)
backup "$2"
;;
restore)
restore "$2" "$3"
;;
*)
echo "Usage: $0 {backup|restore} [source] [target]"
;;
esac
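The Network Scanner Script named in the header is not shown above; a minimal sketch using bash's built-in /dev/tcp, so no nmap is required. The target host, port list, and 1-second timeout are assumptions to adjust for your network:

```shell
#!/bin/bash
scan_port() {   # scan_port HOST PORT -> reports whether the TCP port accepts connections.
    local host=$1 port=$2
    if timeout 1 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
        echo "$host:$port open"
    else
        echo "$host:$port closed"
    fi
}
for port in 22 80 443; do
    scan_port 127.0.0.1 "$port"
done
```

/dev/tcp is a bash feature, not a real device, so the inner command must run under bash.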
Scheduling Tasks with Cron: Common Cron Commands: Cron Job Examples: Cron jobs are essential for automating repetitive tasks, system maintenance, and scheduled operations. Understanding cron syntax and common commands enables you to efficiently manage and schedule tasks on Unix-based systems.
# Cron job syntax: minute hour day_of_month month day_of_week command
# Example: Run a script every day at 3:30 AM
30 3 * * * /path/to/script.sh
# Example: Run a script every Monday at 8:00 PM
0 20 * * 1 /path/to/script.sh
# Example: Run a script every 15 minutes
*/15 * * * * /path/to/script.sh
# Example: Run a script at 2:00 AM on the first day of every month
0 2 1 * * /path/to/script.sh
crontab -e # Edit the current user's crontab file.
crontab -l # List the current user's crontab entries.
crontab -r # Remove all crontab entries for the current user.
crontab -u username -l # List crontab entries for a specific user.
crontab -u username -e # Edit crontab entries for a specific user.
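Crontab entries can also be managed from a script rather than the interactive editor, by writing the jobs to a file and handing it to crontab. The schedules and script paths below are placeholders; the install step is commented out because it replaces the live crontab:

```shell
{
    echo "0 0 * * * /path/to/backup.sh"
    echo "*/15 * * * * /path/to/monitor.sh"
} > mycron
# crontab mycron   # Uncomment to install the file as the current user's crontab.
cat mycron
```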
# Backup script running daily at midnight
0 0 * * * /path/to/backup.sh
# Log cleanup script running every Sunday at 3:00 AM
0 3 * * 0 /path/to/cleanup.sh
# System update script running every Friday at 2:00 AM
0 2 * * 5 /path/to/update.sh
# Monitoring script running every 15 minutes
*/15 * * * * /path/to/monitor.sh
# Cron job to restart a service every hour
0 * * * * systemctl restart some-service
# Monitoring script that checks the health of a web server every 5 minutes
*/5 * * * * /path/to/server_health_check.sh
# Clean temporary files every day at midnight (use with care: files still in use by running programs may be removed)
0 0 * * * rm -rf /tmp/*
Advanced Scripting Examples: Monitor and Log Network Latency: Security Audit Script Example: These advanced scripting examples demonstrate practical use cases for monitoring system resources, network latency, and automating tasks. They showcase the versatility of scripting for system administration, monitoring, and alerting in real-world scenarios.
# Script to monitor system resources and send an email alert if thresholds are exceeded.
#!/bin/bash
cpu_threshold=90
mem_threshold=80
disk_threshold=90
cpu_usage=$(top -bn1 | grep "Cpu(s)" | sed "s/.*, *\([0-9.]*\)%* id.*/\1/" | awk '{print 100 - $1}')
mem_usage=$(free | awk '/Mem/{printf("%.2f"), $3/$2*100}')
disk_usage=$(df | awk '$NF=="/"{printf("%.2f"), $5}')
if (( $(echo "$cpu_usage > $cpu_threshold" | bc -l) )); then
echo "CPU usage is above threshold: $cpu_usage%" | mail -s "High CPU Usage Alert" admin@example.com
fi
if (( $(echo "$mem_usage > $mem_threshold" | bc -l) )); then
echo "Memory usage is above threshold: $mem_usage%" | mail -s "High Memory Usage Alert" admin@example.com
fi
if (( $(echo "$disk_usage > $disk_threshold" | bc -l) )); then
echo "Disk usage is above threshold: $disk_usage%" | mail -s "High Disk Usage Alert" admin@example.com
fi
#!/bin/bash
# Script to monitor and log network latency
while true; do
ping -c 1 google.com | grep -oP 'time=\K[0-9.]+' >> latency.log # Log just the millisecond value; awk field positions vary between ping output formats.
sleep 300 # Wait for 5 minutes
done
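A log like latency.log can then be summarized with awk; the values written below are stand-ins for real measurements:

```shell
printf '%s\n' 10.2 12.8 11.0 > latency.log   # Stand-in measurements (milliseconds).
# sum/NR gives the average; max tracks the worst round-trip seen.
awk '{ sum += $1; if ($1 > max) max = $1 } END { printf "avg=%.1f max=%.1f\n", sum/NR, max }' latency.log
```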
# Automatically ssh into a server and run commands
ssh user@server "uptime; df -h"
# Download files from a list of URLs
cat urls.txt | xargs -n 1 wget
# Send an alert if a server is not responding
ping -c 3 example.com || echo "Server down!" | mail -s "Server Down Alert" admin@example.com
#!/bin/bash
# Find all .sh files and check their permissions
find / -type f -name "*.sh" -exec ls -l {} \; 2>/dev/null | awk '$1 !~ /^-rwxr-xr-x$/ {print $9}'
# Check for unauthorized SSH access attempts
grep "Failed password" /var/log/auth.log | awk '{print $11}' | sort | uniq -c | sort -nr
# Scan for open ports on a local machine
netstat -tuln | grep LISTEN
# List all users with UID 0 (root privileges)
awk -F: '$3 == 0 {print $1}' /etc/passwd
# Check for no-password sudoers
grep NOPASSWD /etc/sudoers
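Another common audit check in the same spirit is hunting for world-writable files; /tmp is used as a safe example target here, while real audits usually scan / with -xdev to stay on one filesystem:

```shell
audit_world_writable() {   # List world-writable regular files under a directory.
    find "$1" -xdev -type f -perm -0002 -print 2>/dev/null
}
audit_world_writable /tmp
```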