Cheatsheet

= ARP vs MAC Table =

 * ARP table: maps IP addresses to MAC addresses (Layer 3 to Layer 2); kept by hosts and routers, built from ARP replies.
 * MAC (CAM) table: maps MAC addresses to switch ports; kept by switches, built by learning the source MAC of incoming frames.
= Fragmentation =

 * Before fragmentation: one IP datagram; Identification set, MF (More Fragments) flag 0, Fragment Offset 0.
 * After fragmentation: each fragment carries the same Identification, its own Fragment Offset (in 8-byte units), and MF = 1 on every fragment except the last.
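The offset arithmetic above can be sketched in a few lines. This is a minimal illustration (the function name `fragment` and the return shape are my own, not from any library); it assumes a 20-byte IP header and that every non-final fragment carries a payload that is a multiple of 8 bytes.

```python
def fragment(payload_len, mtu, ihl=20):
    """Split an IPv4 payload into (offset_in_8_byte_units, length, more_fragments) tuples."""
    max_data = (mtu - ihl) // 8 * 8   # fragment data must be a multiple of 8, except the last
    frags = []
    offset = 0
    while offset < payload_len:
        length = min(max_data, payload_len - offset)
        more = (offset + length) < payload_len   # MF flag: set on all but the last fragment
        frags.append((offset // 8, length, more))
        offset += length
    return frags

print(fragment(4000, 1500))
# [(0, 1480, True), (185, 1480, True), (370, 1040, False)]
```

A 4000-byte payload over a 1500-byte MTU yields two 1480-byte fragments and a final 1040-byte fragment, with offsets 0, 185 and 370 (in 8-byte units).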

= Headers =

 * ARP Header

Hardware type | Protocol type | Hardware address length | Protocol address length | Operation | Source MAC | Source IP | Dest MAC | Dest IP

 * ICMP Header

Type | Code | Checksum | Rest of Header
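The Checksum field in the ICMP header is the standard Internet checksum (RFC 1071): the one's-complement of the one's-complement sum of the message's 16-bit words, computed with the checksum field zeroed. A minimal sketch (function name `inet_checksum` is my own):

```python
def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit big-endian words."""
    if len(data) % 2:
        data += b'\x00'                # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF
```

The defining property: if the computed checksum is appended to the data, the checksum of the whole then verifies to 0.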

= DNS =


 * Record Types

A      Address record               Returns a 32-bit IPv4 address
AAAA   IPv6 address record          Returns a 128-bit IPv6 address
CNAME  Canonical name record        Alias of one name to another; the DNS lookup continues by retrying the lookup with the new name
LOC    Location record              Specifies a geographical location associated with a domain name
MX     Mail exchange record         Maps a domain name to a list of message transfer agents for that domain
NS     Name server record           Delegates a DNS zone to use the given authoritative name servers
PTR    Pointer record               Pointer to a canonical name. Unlike a CNAME, DNS processing stops and just the name is returned. Most commonly used for reverse DNS lookups
SOA    Start of [a zone of] authority record   Specifies authoritative information about a DNS zone: primary name server, email of the domain administrator, zone serial number, etc.
SRV    Service locator              Generalized service location record, used for newer protocols instead of protocol-specific records such as MX
TXT    Text record                  Originally arbitrary human-readable text; now more often machine-readable data (opportunistic encryption, Sender Policy Framework, etc.)
*      All cached records           Returns all cached records of all types known to the name server; if it has no information on the name, the request is forwarded on
AXFR   Authoritative Zone Transfer  Transfers the entire zone file from the master name server to secondary name servers
IXFR   Incremental Zone Transfer    Requests a zone transfer containing only the differences from a previous serial number


 * Glue Record


 * A glue record is an address record served by a DNS server that is not authoritative for the zone, to break an otherwise circular dependency in delegation (e.g. the name servers for example.com living inside example.com itself).
 * Glue records let the TLD's servers include extra information in their referral for the example.com zone: the IP addresses configured for its name servers.
 * The glue is not authoritative, but it points to the authoritative servers, allowing the loop to be resolved.
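The PTR record above is queried under the special in-addr.arpa tree: the IPv4 octets are reversed and suffixed. A one-line sketch of how that query name is built (function name `ptr_name` is my own):

```python
def ptr_name(ipv4: str) -> str:
    """Build the in-addr.arpa name queried for a reverse (PTR) lookup."""
    return '.'.join(reversed(ipv4.split('.'))) + '.in-addr.arpa'

print(ptr_name('192.0.2.1'))  # 1.2.0.192.in-addr.arpa
```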

= TCP =

 * Parameters determined during handshake: MSS, WSF (Window Scale Factor), SACK Permitted


 * MTU vs MSS: MTU is the largest IP packet the link accepts (1500 bytes for Ethernet); MSS is the largest TCP segment payload, i.e. MTU minus IP and TCP headers (typically 1500 - 40 = 1460 bytes).




 * RTO (Retransmission Timeout): if an ACK does not arrive before the retransmission timer expires, the sender retransmits the segment. The timer value is derived from smoothed RTT measurements.

 * Fast Retransmission

- Waiting for the RTO is slow, since it usually has a larger value
- If the sender receives four ACKs with the same value (three duplicates) - i.e. ACKs acknowledging the same packet, not piggybacked on data and not changing the receiver's advertised window
- The segment expected by all of these ACKs is resent immediately, without waiting for the RTO

 * Fast Recovery: after a fast retransmit, skip slow start and resume sending from the halved congestion window (Reno behavior, below).


 * Congestion Control

 * Slow Start - Exponential Increase

- Sender starts with cwnd = 1 MSS; cwnd increases by 1 MSS each time an ACK arrives
- The rate therefore grows exponentially per RTT (1, 2, 4, 8, ...) until a threshold (ssthresh) is reached


 * Congestion Avoidance - Additive Increase

- cwnd increases additively: when a whole "window" is ACKed, cwnd is increased by 1 MSS (window = number of segments transmitted during one RTT)
- The increase is per RTT, not per arriving ACK
- cwnd keeps increasing additively until congestion is detected


 * Congestion Detection - Multiplicative Decrease

- If congestion occurs, the window size must be decreased
- The sender learns about congestion via an RTO or via 3 duplicate ACKs
- ssthresh is dropped to half the current window

 * Tahoe

- On either an RTO or 3 duplicate ACKs, TCP reacts strongly: cwnd is reduced back to 1 segment and the slow start phase begins again

 * Reno

- On an RTO, reacts like Tahoe (cwnd back to 1 segment, slow start again)
- On 3 duplicate ACKs, TCP has a weaker reaction: it halves cwnd and enters the congestion avoidance phase - this is called fast retransmit and fast recovery


 * Both consider RTO and Duplicate ACKs as packet loss events.
 * The behavior of Tahoe and Reno differs primarily in how they react to duplicate ACKs.
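The rules above can be condensed into one state-transition function over (cwnd, ssthresh) in units of MSS. This is a minimal sketch of the textbook behavior (function name `next_cwnd` and the per-RTT 'ack' event granularity are my simplifications, not an actual TCP implementation):

```python
def next_cwnd(cwnd, ssthresh, event, variant='reno'):
    """One step of the cwnd state machine (units of MSS), per the Tahoe/Reno rules."""
    if event == 'ack':                    # one RTT of successful ACKs
        if cwnd < ssthresh:
            return cwnd * 2, ssthresh     # slow start: exponential increase
        return cwnd + 1, ssthresh         # congestion avoidance: additive increase
    if event == 'timeout':                # RTO: strong reaction for both variants
        return 1, max(cwnd // 2, 2)
    if event == '3dupack':
        if variant == 'tahoe':            # Tahoe treats duplicate ACKs like a timeout
            return 1, max(cwnd // 2, 2)
        half = max(cwnd // 2, 2)          # Reno: fast recovery, resume from half
        return half, half
```

For example, Reno at cwnd=16 seeing 3 duplicate ACKs continues at cwnd=8, while Tahoe drops back to cwnd=1.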


 * Silly Window Syndrome: Sender creates data slowly or Receiver consumes slowly or both.

Syndrome due to Sender - Nagle's Algorithm: send the first piece of data immediately, then accumulate data in the output buffer; send again only when an ACK arrives or 1 MSS of data has accumulated

Syndrome due to Receiver:
- Clark's Solution: announce window size 0 until (1) there is enough space for 1 MSS in the buffer, or (2) half the receive buffer is empty
- Delayed Acknowledgment: a segment is not acknowledged immediately; the sending TCP does not slide its window, which reduces traffic, but the sender may unnecessarily retransmit; the delay must not exceed 500 ms


 * Persistence Timer

- Resolves the deadlock created by a lost ACK that should have reopened a window previously advertised as 0
- The sending TCP sends a special segment (1 byte of new data) called a probe, which causes the receiving TCP to resend the ACK
- If there is no reply, another probe is sent and the value of the persistence timer is doubled and reset
- The sender keeps sending probes, doubling and resetting the persistence timer, until the value reaches a threshold (generally 60 s)
- After that, the sender sends one probe segment every 60 s until the window is reopened
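The doubling-then-capped probe schedule can be sketched directly (function name `probe_schedule` and the 5-second starting interval are illustrative assumptions; the 60 s cap is from the text above):

```python
def probe_schedule(initial, cap=60, n=6):
    """Intervals (seconds) between zero-window probes: double each time, capped (commonly 60 s)."""
    intervals, t = [], initial
    for _ in range(n):
        intervals.append(min(t, cap))
        t = min(t * 2, cap)   # exponential backoff up to the threshold
    return intervals

print(probe_schedule(5))  # [5, 10, 20, 40, 60, 60]
```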

= VPN Messages =

 * Phase 1 - Main Mode (6 messages)

1. Cookie, Proposal List
2. Cookie, Accepted Proposal
3. DH Key, Nonce
4. DH Key, Nonce
5. ID, ID Hash
6. ID, ID Hash

 * Phase 1 - Aggressive Mode (3 messages)

1. ID, Proposal List, DH Key, Nonce
2. ID, Accepted Proposal, DH Key, Nonce, ID Hash
3. ID Hash

 * Phase 2 - Quick Mode (3 messages)

1. Ph1 Hash, Message ID, Proposal List, Nonce, DH Key, Proxy-ID
2. Ph1 Hash, Message ID, Accepted Proposal, Nonce, DH Key, Proxy-ID
3. Ph1 Hash, Message ID, Nonce

= HTTP =


 * HTTP Status Codes

1xx Informational
2xx Success (200 OK)
3xx Redirection (301 Moved Permanently, 302 Found, 304 Not Modified)
4xx Client error (400 Bad Request, 401 Unauthorized, 403 Forbidden, 404 Not Found)
5xx Server error (500 Internal Server Error, 502 Bad Gateway, 503 Service Unavailable, 504 Gateway Timeout)


 * HTTP1.0 vs HTTP1.1

HTTP/1.0:


 * Uses a new connection for each request/response exchange
 * The connection is closed after every request.
 * Supports GET, POST, HEAD request methods

HTTP/1.1:


 * A connection may be used for one or more request/response exchanges
 * Uses persistent connections, saving bandwidth and reducing latency since it does not require a new TCP handshake for every file download (images, CSS, etc.)
 * HTTP pipelining: the client sends multiple requests without waiting for each response.
 * Supports OPTIONS, PUT, DELETE, TRACE, CONNECT request methods

 * HTTP Request Methods

GET:      Retrieve data
HEAD:     Headers only, without response body
POST:     Submits data (to a DB, web form, etc.)
PUT:      Replaces the target resource with the uploaded content
DELETE:   Removes the target resource given by the URI
CONNECT:  Used when the client wants to establish a transparent connection to a remote host, usually to tunnel SSL-encrypted communication (HTTPS) through an HTTP proxy
OPTIONS:  Returns the HTTP methods that the server supports for the specified URL
TRACE:    Performs a message loop-back test to see what (if any) changes or additions have been made by intermediate servers
PATCH:    Applies partial modifications to a resource

 * PUT vs PATCH

The PUT method only allows a complete replacement of a document. PATCH is used to make changes to part of the resource at a location.
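The replace-vs-modify distinction above can be illustrated with plain dictionaries. This is only an analogy (the `put`/`patch` helpers are mine, and `patch` follows a merge-patch style, not a full JSON Patch):

```python
resource = {"name": "bob", "email": "bob@example.com", "role": "user"}

def put(resource, body):
    """PUT: the uploaded body replaces the target resource entirely."""
    return dict(body)

def patch(resource, body):
    """PATCH (merge-style): only the supplied fields are modified."""
    return {**resource, **body}

print(put(resource, {"name": "bob"}))      # email and role are gone
print(patch(resource, {"role": "admin"}))  # role updated, other fields kept
```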

 * Cookie types

 * Session cookie - exists only in memory and is deleted when the browser closes
 * Persistent cookie - expires at a specific date or after a specific length of time
 * Secure cookie - only transmitted over HTTPS
 * Http-only cookie - not accessible to client-side scripts (JavaScript); mitigates cookie theft via XSS
 * Same-site cookie - only sent with requests originating from the same site; mitigates CSRF
 * Third-party cookie - set for a domain other than the one in the address bar (typically by embedded ads/trackers)
 * Supercookie - a cookie with an origin of a top-level domain (e.g. .com), or more broadly any tracking mechanism outside regular cookie storage
 * Zombie cookie - automatically recreated after deletion, from data stashed outside normal cookie storage

= FTP =

 * Uses two connections: control (port 21) and data (port 20 in active mode).
 * Active mode: the client sends PORT and the server connects back to the client for data - often blocked by client-side firewalls/NAT.
 * Passive mode: the client sends PASV and opens the data connection itself, to a port the server announces.
= SSL Handshake =

 * ClientHello: client sends supported protocol versions, cipher suites, and a random value.
 * ServerHello: server picks the version and cipher suite, sends its own random value and its Certificate (then ServerHelloDone).
 * Key exchange: client verifies the certificate and sends a pre-master secret encrypted with the server's public key (or DH parameters); both sides derive the session keys from it and the two random values.
 * ChangeCipherSpec + Finished: each side switches to the negotiated keys and sends an encrypted Finished message; encrypted application data follows.
= NetScaler =

 * LB Methods:

Least Connection    = Service with the fewest active connections
Round Robin         = Rotates through a list of services
Least Response Time = Fewest active connections & lowest average response time
Least Bandwidth     = Service serving the least traffic, measured in Mbps
Least Packets       = Service that received the fewest packets
Source IP Hash      = Hash of the source IP selects the service
Destination IP Hash = Hash of the destination IP selects the service

 * Persistence Methods:

SOURCE IP     = Connections from the same source IP belong to the same persistence session
COOKIE Insert = Connections carrying the same HTTP cookie (inserted via a Set-Cookie directive from the server) belong to the same persistence session
SSL Session   = Connections having the same SSL session ID
RULE          = All connections matching a user-defined rule
URL Passive   = Requests having the same server ID (hexadecimal of server IP & port) of the service to which the request is to be forwarded
Dest IP       = Connections to the same destination IP
SRC IP DST IP = Connections with the same source and destination IP pair
CALL ID       = Same Caller ID in the SIP header


 * What is Stateful & Stateless Persistence? Which one is more scalable/Efficient?

Stateless session persistence (cookie inserted by the ADC) is more efficient because no table is needed: NS inserts the cookie and forgets it; on the next request it reads the cookie value, decrypts it, and forwards the request accordingly.
Stateful session persistence: the server inserts the cookie, NS hashes it and forwards based on the hash value, but it must keep a table in memory with all hashes & IP addresses. The same is true for source-IP-based persistence, which is also inefficient behind NAT (many clients share one IP).
The server sets the cookie with a Set-Cookie header (name & value fields); the client returns it in the Cookie header. Whoever generates the cookie is able to read it.

= OSPF =

 * States

Down     No hellos received yet
Attempt  (NBMA networks only) unicast hello sent, no reply yet
Init     Hello sent out all interfaces
2-Way    Hello received containing own RID in the neighbor list
ExStart  Determine master/slave
Exchange Master sends DBD first, then slave
Loading  Compare DBDs, send LSR for missing LSAs
Full     LSDBs of the neighbors are fully synced

 * LSA Types

Type 1 - Router LSA           Sent by a router to the other routers in the same area; has info about the router's interfaces in the area, interface IPs, adjacent routers
Type 2 - Network LSA          Generated by the DR on a multi-access segment; similar to a Type 1 LSA
Type 3 - Network Summary LSA  Generated by ABRs; contains the subnets & costs
Type 4 - ASBR Summary LSA     Same as a summary LSA except the destination advertised is the ASBR; the ABR in the same area as the ASBR originates the Type 4 LSA
Type 5 - AS External LSA      Generated by ASBRs; flooded throughout the AS to advertise a route external to OSPF
Type 7 - NSSA External LSA    Generated by the ASBR in an NSSA area; converted into a Type 5 LSA by the ABR when leaving the area

 * Packet Types

Type 1 - Hello
Type 2 - Database Description (DBD)
Type 3 - Link-State Request (LSR)
Type 4 - Link-State Update (LSU, contains LSAs)
Type 5 - Link-State Acknowledgment (LSAck)

 * Neighbor Requirements:

Same area
Same authentication config
Same subnet
Same hello/dead intervals
Matching stub flags


 * LSA Details




 * OSPF path selection order: O (intra-area) > O IA (inter-area) > E1 > E2 (and N1 > N2 for NSSA routes).
 * "area range" summarizes Type 3 LSAs (configured on an ABR).
 * "summary-address" summarizes Type 5 & 7 LSAs (configured on an ASBR).
 * Auto-cost reference bandwidth (default = 100 Mb/s): cost = 100,000,000 / interface bandwidth in bps.
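The cost formula above in code (function name `ospf_cost` is mine; the integer division and the minimum cost of 1 mirror router behavior, where anything at or above the reference bandwidth gets cost 1):

```python
def ospf_cost(interface_bw_bps, reference_bw_bps=100_000_000):
    """OSPF interface cost = reference bandwidth / interface bandwidth, minimum 1."""
    return max(reference_bw_bps // interface_bw_bps, 1)

print(ospf_cost(10_000_000))     # 10 Mb/s Ethernet  -> 10
print(ospf_cost(100_000_000))    # FastEthernet      -> 1
print(ospf_cost(1_000_000_000))  # GigE is also 1 unless the reference BW is raised
```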

= BGP =


 * Route Selection Criteria (evaluated top to bottom; first difference wins)

1. Highest Weight (Cisco-local attribute)
2. Highest Local Preference
3. Locally originated routes
4. Shortest AS path
5. Lowest Origin (IGP < EGP < Incomplete)
6. Lowest MED
7. eBGP over iBGP
8. Lowest IGP metric to the next hop
9. Oldest route, then lowest Router ID

 * BGP States

Idle          Not yet trying to connect
Active        Attempting to connect
Connect       TCP session established
OpenSent      Open message sent
OpenConfirm   Response received
Established   Adjacency established

 * BGP Messages

Open           Opens the session and negotiates parameters
Update         Advertises or withdraws routes
Keepalive      Sent every 60 seconds
Notification   Always indicates something is wrong; closes the session

 * Directions

AS-path prepend: applied outbound; impacts incoming traffic. The shorter the AS-path length, the higher the preference, so prepending your AS number to the path of a subnet you advertise makes that path less attractive - a way of telling the outside world not to prefer this path.

Local preference: applied on traffic coming inside; impacts traffic going out. Non-transitive: it propagates only within the same AS. The higher the local preference value, the higher the preference.

MED (Multi-Exit Discriminator): used when your AS connects to the same neighboring AS over multiple links. Say you have 2 subnets behind your routers: MED lets you indicate which networks should be reached through which link. It is advertised outbound and impacts incoming traffic. Semi-transitive: it propagates only one AS away. The lower the MED value, the higher the preference. MED should be used carefully as it reduces network resiliency.
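The attributes above feed the best-path comparison. A minimal sketch of a *subset* of the decision process (function name `best_path` and the dict layout are my own; a real implementation checks many more steps, per the selection criteria listed earlier):

```python
def best_path(paths):
    """Pick the best candidate path: highest local-pref, then shortest AS path, then lowest MED."""
    return min(paths, key=lambda p: (-p['local_pref'], len(p['as_path']), p['med']))

paths = [
    {'via': 'ISP-A', 'local_pref': 100, 'as_path': [65001, 65010], 'med': 50},
    {'via': 'ISP-B', 'local_pref': 100, 'as_path': [65002], 'med': 200},
]
print(best_path(paths)['via'])  # ISP-B: equal local-pref, shorter AS path wins
```

Note how prepending (lengthening `as_path`) on ISP-B's advertisement would flip the decision, which is exactly the lever described above.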

= VPN Monitor vs DPD vs IKE Heartbeat =

= SRX Architecture =

 * First Path (new sessions):

Screens -> Static NAT | Dest NAT -> Route -> Forwarding Lookup -> Zones -> Policy -> Reverse Static NAT | Source NAT -> Service ALG -> Session

 * Fast Path (existing sessions):

Screens -> TCP checks -> NAT -> Service ALG

= ScreenOS =

 * ScreenOS Flow Order

Sanity Check -> Screening -> Session Lookup -> Route Lookup -> Policy Lookup -> Session Creation -> ARP Lookup

 * Route Preference Order

Policy Based Routing -> Source Interface Based Routing -> Source Routing -> Destination Routing

 * NAT Preference Order

Mapped IP -> Virtual IP -> Policy Based NAT (NAT-Src & NAT-Dst) -> Interface Based NAT

= SYN Flood Protection =

Threshold       = Proxy connections above this limit. If SYN cookie is enabled, no sessions are established directly between client & firewall or firewall & server
Alarm Threshold = Above this, raise an alarm/alert (to log)
Queue Size      = The number of proxied connections held in the queue; beyond this the firewall starts rejecting new connection requests
Timeout         = Maximum time before a half-completed connection is dropped from the queue; range 0-50 s, default 20 s

= Linux =

Linux Booting
Source: technochords.com

The following are the 6 high level stages of a typical Linux boot process:


 * 1) BIOS
 * 2) MBR
 * 3) GRUB
 * 4) Kernel
 * 5) Init
 * 6) Runlevel programs


 * BIOS (Basic Input/Output System) - loads and executes the MBR boot loader.
 * Performs some system integrity checks (POST - Power On Self Test).
 * Searches for, loads, and executes the boot loader program.
 * It looks for the boot loader on floppy, CD-ROM, or hard drive.
 * You can press a key (typically F12 or F2, but it depends on your system) during BIOS startup to change the boot sequence.
 * Once the boot loader program is detected and loaded into memory, BIOS gives control to it.


 * MBR (Master Boot Record) - loads and executes the GRUB boot loader.
 * It is located in the 1st sector of the bootable disk, typically /dev/hda or /dev/sda.
 * The MBR is 512 bytes in size and has three components:
 * 1) primary boot loader info in the first 446 bytes,
 * 2) partition table info in the next 64 bytes (4 entries of 16 bytes each),
 * 3) a magic number as MBR validation check in the last 2 bytes.
 * It contains information about GRUB (or LILO on old systems).

 * GRUB (Grand Unified Bootloader) - loads and executes the kernel and initrd images.
 * It is a multiboot boot loader: if you have multiple kernel images installed, you can choose which one to execute.
 * GRUB displays a splash screen and waits a few seconds; if you don't enter anything, it loads the default kernel image specified in the GRUB configuration file.
 * GRUB has knowledge of the filesystem (the older Linux loader LILO didn't understand filesystems).
 * The GRUB configuration file is /boot/grub/grub.conf (/etc/grub.conf is a link to it). Sample:

default=0
timeout=5
splashimage=(hd0,0)/boot/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-194.el5PAE)
    root (hd0,0)
    kernel /boot/vmlinuz-2.6.18-194.el5PAE ro root=LABEL=/
    initrd /boot/initrd-2.6.18-194.el5PAE.img

 * As you can see from the sample above, it specifies the kernel and initrd images.


 * Kernel
 * Control is then given to the kernel, the central part of the OS, which acts as a mediator between hardware and software.
 * Once loaded into RAM, the kernel resides there until the machine is shut down.
 * The first thing the kernel does after starting its operations is execute the init process.

 * Init (initialization)
 * Looks at the /etc/inittab file to decide the Linux run level.
 * The following run levels are available:

0 - halt
1 - Single user mode
2 - Multiuser, without NFS
3 - Full multiuser mode
4 - unused
5 - X11
6 - reboot


 * Init identifies the default run level from /etc/inittab and uses that to load the appropriate programs.
 * Execute 'grep initdefault /etc/inittab' on your system to identify the default run level.
 * Typically you would set the default run level to either 3 or 5.

 * Runlevel programs
 * When the Linux system is booting up, you might see various services getting started; for example, "starting sendmail .... OK".
 * Those are the runlevel programs, executed from the run level directory defined by your run level.
 * Depending on your default init level setting, the system executes the programs from one of the following directories:

Run level 0 - /etc/rc.d/rc0.d/
Run level 1 - /etc/rc.d/rc1.d/
Run level 2 - /etc/rc.d/rc2.d/
Run level 3 - /etc/rc.d/rc3.d/
Run level 4 - /etc/rc.d/rc4.d/
Run level 5 - /etc/rc.d/rc5.d/
Run level 6 - /etc/rc.d/rc6.d/


 * Note that there are also symbolic links for these directories directly under /etc; e.g. /etc/rc0.d is linked to /etc/rc.d/rc0.d.
 * Under the /etc/rc.d/rc*.d/ directories you will see programs that start with S and K.
 * 1) Programs starting with S are used during startup. S for startup.
 * 2) Programs starting with K are used during shutdown. K for kill.
 * 3) The numbers right next to S and K are the sequence in which the programs should be started or killed.
 * 4) For example, S12syslog starts the syslog daemon (sequence number 12) and S80sendmail starts the sendmail daemon (sequence number 80).
 * So syslog will be started before sendmail.
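The S/K/sequence naming convention above can be parsed mechanically. A small sketch (function name `parse_rc` is mine; note that for two-digit sequence numbers a plain lexical sort of the filenames already yields the numeric start order, which is exactly why the convention uses zero-padded numbers):

```python
import re

def parse_rc(name):
    """Split an rc script name like 'S12syslog' into (action, sequence, service)."""
    m = re.match(r'([SK])(\d+)(.*)', name)
    action = 'start' if m.group(1) == 'S' else 'kill'
    return action, int(m.group(2)), m.group(3)

scripts = ['S80sendmail', 'K30postfix', 'S12syslog']
start_order = sorted(s for s in scripts if s[0] == 'S')
print([parse_rc(s) for s in start_order])
# [('start', 12, 'syslog'), ('start', 80, 'sendmail')]
```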

Manually Boot using Grub

 * Locate where the vmlinuz and initrd.* files are located:

grub> ls
(hd0) (hd0,msdos5) (hd1) (hd1,msdos0)

 * Boot the system:

grub> linux (hd1,msdos1)/install/vmlinuz root=/dev/sdb1
grub> initrd (hd1,msdos1)/install/initrd.gz
grub> boot

File system layout

/           - The root directory
/bin        - Essential command binaries
/boot       - Boot loader files
/dev        - Device files
/etc        - Configuration files
/home       - Home directories
/lib        - Essential libraries
/lost+found - Recovered files
/media      - Removable media devices
/mnt        - Temporarily mounted filesystems
/opt        - Optional software packages
/proc       - Kernel & process information
/root       - Root home directory
/sbin       - System binaries
/selinux    - Security-Enhanced Linux
/srv        - Service data
/sys        - Virtual filesystem (sysfs)
/tmp        - Temporary files
/usr        - Binaries, documentation, source code, libraries
/var        - Variable files (logs, spools)

ProcFS

 * Procfs (/proc) is a special filesystem under Linux that presents process and kernel information.
 * From kernel 2.6 on, much of the kernel-level information has been moved to "sysfs", generally mounted under /sys.
 * /proc is stored in memory.

 * On multi-core CPUs, /proc/cpuinfo contains the fields "siblings" and "cpu cores":

"siblings"  = (hyper-threads per CPU package) * (# of cores per CPU package)
"cpu cores" = (# of cores per CPU package)


 * A CPU package means physical CPU which can have multiple cores (single core for one, dual core for two, quad core for four).
 * This allows a distinction between hyper-threading and dual-core, i.e. the number of hyper-threads per CPU package can be calculated by siblings / CPU cores.
 * If both values for a CPU package are the same, then hyper-threading is not supported.
 * For instance, a CPU package with siblings=2 and "cpu cores"=2 is a dual-core CPU but does not support hyper-threading.
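The siblings/cores division above is easy to do from the raw text. A minimal sketch (function name `ht_per_package` is mine; the sample string is fabricated for illustration and parses only the key: value layout /proc/cpuinfo uses):

```python
def ht_per_package(cpuinfo_text):
    """Hyper-threads per core = siblings / 'cpu cores', parsed from /proc/cpuinfo text."""
    fields = {}
    for line in cpuinfo_text.splitlines():
        if ':' in line:
            k, v = line.split(':', 1)      # keep only the last value of each repeated key
            fields[k.strip()] = v.strip()
    return int(fields['siblings']) // int(fields['cpu cores'])

sample = "model name : ExampleCPU\nsiblings : 4\ncpu cores : 2\n"
print(ht_per_package(sample))  # 2 -> hyper-threading enabled
```

A result of 1 means siblings == cores, i.e. no hyper-threading, matching the rule stated above.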

/proc/cmdline      - Kernel command line.
/proc/consoles     - Current consoles, including tty.
/proc/crypto       - Available cryptographic modules.
/proc/devices      - Device drivers configured in the running kernel.
/proc/diskstats    - Disk I/O statistics per device.
/proc/dma          - Current DMA channels.
/proc/fb           - Framebuffer devices.
/proc/filesystems  - Filesystems supported by the kernel.
/proc/iomem        - Current system memory map for devices.
/proc/ioports      - Registered port regions for I/O communication with devices.
/proc/kmsg         - Messages output by the kernel.
/proc/loadavg      - System load average.
/proc/locks        - Files currently locked by the kernel.
/proc/meminfo      - Summary of how the kernel is managing memory.
/proc/misc         - Drivers registered on the miscellaneous major device.
/proc/modules      - Currently loaded kernel modules.
/proc/mounts       - All mounts in use by the system.
/proc/partitions   - Partitions available to the system.
/proc/pci          - Every PCI device.
/proc/scsi         - Devices connected via a SCSI or RAID controller.
/proc/stat         - Various statistics kept since last reboot.
/proc/swaps        - Swap space information.
/proc/tty          - Current terminals.
/proc/uptime       - Uptime (in seconds).
/proc/version      - Kernel version, gcc version, and distribution.

/proc/PID/cmdline  - Command line arguments.
/proc/PID/cpu      - Current and last CPU on which it executed.
/proc/PID/cwd      - Link to the current working directory.
/proc/PID/environ  - Values of environment variables.
/proc/PID/exe      - Link to the executable of this process.
/proc/PID/fd       - Directory containing all file descriptors.
/proc/PID/maps     - Memory maps to executables and library files.
/proc/PID/mem      - Memory held by this process.
/proc/PID/root     - Link to the root directory of this process.
/proc/PID/stat     - Process status.
/proc/PID/statm    - Process memory status information.
/proc/PID/status   - Process status in human-readable form (e.g. GID, UID, etc.)
/proc/PID/limits   - Limits of the process.

Usage: ls -l /proc/$(pgrep -n python)/exe

Inode Number
Source: linoxide.com


 * An inode is an entry in the inode table containing metadata about a regular file or directory.
 * An inode is a data structure on a traditional Unix-style file system such as ext3 or ext4.
 * Linux extended filesystems such as ext2 or ext3 maintain an array of these inodes: the inode table.
 * This table lists all files in that filesystem.
 * Each inode in the table has a number that is unique within that filesystem: the inode number.
 * The inode holds metadata about the file, such as its size, ownership, permissions, timestamps, etc.
 * This metadata structure is known as an inode (index node).


 * There is no entry for file name in the Inode, file name is kept as a separate entry parallel to Inode number.
 * This is for maintaining hard-links to files.


 * Copy a file: cp allocates a free inode number and places a new entry in the inode table.
 * Move or rename a file: if the destination is on the same filesystem as the source, the inode number is unchanged; only the timestamps in the inode table change.
 * Delete a file: deleting a file decrements the link count and, when it reaches zero, frees the inode number for reuse.


 * A Directory cannot hold two files with same name because it cannot map one name with two different inode numbers.
 * The inode number of / directory is fixed, and is always 2.


 * The number of inodes in a filesystem is decided at creation time by an algorithm that takes into account the size of the filesystem and the average file size.
 * The user can tweak the number of inodes while creating the filesystem.


 * An inode holds the following attributes:

File type:           Regular file, directory, pipe, etc.
Permissions:         Read, write, execute
Link count:          The number of hard links pointing to the inode
User ID:             Owner of the file
Group ID:            Group owner
Size of file:        Or major/minor number in the case of some special files
Time stamps:         Access time, modification time and (inode) change time
Attributes:          'Immutable', for example
Access control list: Permissions for special users/groups
Link to the location of the file's data blocks
Other metadata about the file

 * Check info:

df -i                               ==> Inodes on all filesystems
df -i /dev/vda1                     ==> Inodes on one filesystem
ls -il myfile.txt                   ==> Show inode number of file
find /home/rahul -inum 1150561      ==> Find file by inode number
stat unetbootin.bin                 ==> Show all details of file
stat --format=%i unetbootin.bin     ==> Show only the inode number

 * Manipulate the filesystem metadata

List the contents of the filesystem superblock:
tune2fs -l /dev/sda6 | grep inode

Make sure files on the filesystem are not being accessed:
mount -o remount /yourfilesystem

debugfs /dev/sda1                   ==> Manipulate the FS here

You can use debugfs to undelete a file by using its inode and indicating a file name.

 * Free Inodes on a Filesystem

If the inodes are exhausted, you need to remove unused files from the filesystem to free inodes. There is no way to increase or decrease the number of inodes on a disk; the count is fixed when the filesystem is created.

Soft links vs Hard links

 * Links and index numbers in Linux
 * In the output of ls -l, the column following the permissions and before the owner is the link count:

drwxr-xr-x 6 aman aman    4096 Mar 30 11:50  Documents
drwxr-xr-x 3 aman aman    4096 Sep 15 19:11  Downloads
           ^
 * The link count is the number of hard links to a file.
 * A link is a pointer to another file.
 * There are two types of links:

 * Symbolic links (or soft links)
 * A separate file whose contents point to the linked-to file.
 * When creating a symlink, first give the name of the original file and then the name of the link:

ln -s /home/bob/sync.sh filesync

 * Editing the symlink is like directly editing the original file.
 * If we delete or move the original file, the link is broken and our filesync file is no longer usable.

 * ls -l shows that the resulting file is a symbolic link:

ls -l filesync
lrwxrwxrwx 1 root root 20 Apr 7 06:08 filesync -> /home/bobbin/sync.sh


 * The contents of a symbolic link are just the name of the target file.
 * The permissions on the symbolic link are completely open; permissions are not managed on the link itself, the target's permissions apply.
 * The original file is a name connected directly to the inode, while the symbolic link refers to the name.
 * The size of the symbolic link is the number of bytes in the name of the file it refers to, because no other information is stored in it.

 * Find symlinks:

find . -type l -ls
ls -la | grep "\->"


 * Hard links


 * The identity of a file is its inode number, not its name.
 * A hard link is a name that references an inode.
 * It means that if file1 has a hard link named file2, then both of these files refer to same inode.
 * So, when you create a hard link for a file, all you really do is add a new name to an inode.
 * There is no difference between the original file and the link: they are just two names connected to the same inode.

 * Create a hard link:

ln /home/bob/sync.sh synchro

 * Compare:

ls -il /home/bob/sync.sh synchro
517333 -rw-r--r-- 2 root root 5 Apr 7 06:09 /home/bob/sync.sh
517333 -rw-r--r-- 2 root root 5 Apr 7 06:09 synchro


 * Directories cannot be hard linked; Linux does not permit this, to maintain the acyclic tree structure of the directory hierarchy.
 * A hard link cannot be created across filesystems: both files must be on the same filesystem, because different filesystems have independent inode tables (two files on different filesystems with the same inode number are different files).
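The same-inode behavior is easy to observe programmatically. A minimal sketch using only the standard library (the file names `sync.sh`/`synchro` just echo the shell example above; on Linux this runs as-is):

```python
import os
import tempfile

# In a scratch directory, create a file, hard-link it, and compare inode numbers.
with tempfile.TemporaryDirectory() as d:
    orig = os.path.join(d, 'sync.sh')
    link = os.path.join(d, 'synchro')
    with open(orig, 'w') as f:
        f.write('echo hi\n')
    os.link(orig, link)                         # hard link: a new name for the same inode
    same = os.stat(orig).st_ino == os.stat(link).st_ino
    nlink = os.stat(orig).st_nlink              # link count is now 2
    print(same, nlink)  # True 2
```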

 * How to find hard links in Linux:
 * 1) find / -inum 517333

/home/bob/sync.sh
/root/synchro


 * Removing files
 * When the rm command is issued, it first checks the link count of the file.
 * If the link count is greater than 1, it removes that directory entry and decrements the link count; the data and the inode remain untouched.
 * When the link count is 1, the inode is deleted from the inode table, the inode number becomes free, and the data blocks the file occupied are added to the free data block list.

Hosts file

 * All operating systems with network support have a hosts file to translate hostnames to IP addresses.
 * The file /etc/hosts started in the old DARPA days as the resolution file for all hosts connected to the internet (before DNS existed).
 * It has maximum priority, ahead of any other name system.

 * The order of name resolution is actually defined in /etc/nsswitch.conf, which usually has this entry:

hosts:         files dns


 * This means "try files (/etc/hosts); and if it fails, try DNS."
 * i.e. If the host name is not found there, then consult the remote DNS name servers identified by the /etc/resolv.conf file.
 * This order could be changed or expanded.


 * As a single file, it doesn't scale well: the size of the file becomes too big very soon.
 * That is why the DNS system was developed, a hierarchical distributed name system.
 * It allows any host to find the numerical address of some other host efficiently.


 * On Linux and Mac OS it is located here: /etc/hosts
 * On Windows it is under: Windows\System32\drivers\etc\


 * The hosts file contains lines of text consisting of an IP address field followed by One or More Host names.
 * Each field is separated by white space – tabs or spaces.
 * Comment lines are indicated by an octothorpe (#) in the first position.
 * Entirely blank lines in the file are ignored.
 * One name may resolve to several addresses (e.g. 192.168.0.8 and 10.0.0.27); which one is used depends on the routes (and their priorities) set on the computer.

 * By editing the hosts file, you can:

Block a website
Handle an attack or resolve a prank
Create an alias for locations on your local server
Override addresses that your DNS server provides
Control access to network traffic


 * IP-to-hostname conversion usually display only the first name found:

192.168.10.12 server.example.com myftp.example.com myhost myftp

$ ping myftp
PING myhost.example.com (192.168.10.12) 56(84) bytes of data.
64 bytes from myhost.example.com (192.168.10.12): icmp_seq=1 ttl=64 time=0.023 ms
64 bytes from myhost.example.com (192.168.10.12): icmp_seq=2 ttl=64 time=0.028 ms

Note that we pinged myftp but results come from host myhost. This is a reliable hint that you are addressing an alias, not the actual host.
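The hosts-file format described above (one IP, one or more names, `#` comments, blank lines ignored) can be parsed with a few lines. A minimal sketch (function name `parse_hosts_line` is my own):

```python
def parse_hosts_line(line):
    """Parse one /etc/hosts line into (ip, [names]); returns None for comments/blanks."""
    line = line.split('#', 1)[0].strip()     # strip trailing comments and whitespace
    if not line:
        return None
    ip, *names = line.split()                # fields are separated by any whitespace
    return ip, names

print(parse_hosts_line('192.168.10.12 server.example.com myftp.example.com myhost myftp'))
# ('192.168.10.12', ['server.example.com', 'myftp.example.com', 'myhost', 'myftp'])
```

The first name after the IP is the canonical name; the rest are aliases, which is why the ping above reported myhost rather than myftp.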

File permission

 * Linux File Permission Basics


 * The first character represents the type of file.
 * The remaining nine characters, in groups of three, represent the permissions for user, group, and others respectively.

     File Type        User   Group   Global
d    Directory        rwx    r-x     r-x
-    Regular file     rw-    r--     r--
l    Symbolic link    rwx    rwx     rwx

 * Permissions Meaning

Permission       On a file                     On a directory
r (read)         read file content (cat)       read directory content (ls)
w (write)        change file content (vi)      create file in directory (touch)
x (execute)      execute the file              enter the directory (cd)

 * Targeted Users:

Who (Letter)   Meaning
u              user
g              group
o              others
a              all

 * Permissions Table:

Binary   Octal   Permission
000      0       ---
001      1       --x
010      2       -w-
011      3       -wx
100      4       r--
101      5       r-x
110      6       rw-
111      7       rwx

 * chmod Command Syntax and Options

chmod [who][+,-,=][permissions] filename

 * Example:

chmod g+w ~/group-project.txt

 * The + operator grants permissions whereas the - operator takes away permissions.
 * Copying permissions is also possible:

chmod g=u ~/group-project.txt


 * The parameter g=u means grant group permissions to be same as the user’s.

 * Multiple permissions can be specified by separating them with a comma, as in the following example:

chmod g+w,o-rw,a+x ~/group-project-files/


 * Owner of the file is referred to as the user (e.g. u+x).

 * The -R option applies the modification recursively to the directory specified:

chmod -R +w,g=rw,o-rw ~/group-project-files/

 * Restrict file access: remove all group and world permissions

chmod 600 .msmtprc
chmod g-rwx,o-rwx .fetchmail


 * Octal Notation for File Permissions:

 * The permissions to be set for the file:

chmod u=rwx,g=rx,o= group-project.txt
chmod 750 group-project.txt

 * Disregarding the first character, each position occupied by a - can be replaced with a 0, while r, w, or x is represented by a 1:

111 101 000 -> rwx r-x ---


 * This is called octal notation because each binary triplet is converted to base-8 using the digits 0 to 7.

Allows r,w,x permissions for the owner; r permission for the group and "world" users
 * Typical default permission: 744
 * Other common default permissions are 600 or 644
 * For executable files, the equivalent settings would be 700 and 755
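The symbolic and octal forms can be cross-checked with stat; a small sketch (the filename is illustrative):

```shell
# Symbolic and octal forms of the same mode; stat -c %a prints the octal value
touch app.sh
chmod u=rwx,g=r,o=r app.sh   # symbolic form of 744
stat -c %a app.sh            # → 744
chmod 644 app.sh             # octal form directly
stat -c %a app.sh            # → 644
```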


 * umask
 * Known as User Mask or User File creation MASK.
 * When a file or directory is created, a default set of permissions is applied.
 * These default permissions can be viewed with the umask command.
 * For safety reasons, Unix systems do not grant execute permission to newly created files.
 * The 'mkdir -m' command can be used to set the mode explicitly.

mkdir -m 777 dir1
mkdir -m 000 dir2
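The umask arithmetic can be seen directly: new files start from 666 and new directories from 777, with the mask's bits removed. A sketch, run in a subshell so the umask change doesn't leak into the current session:

```shell
# umask 022 clears group/other write bits on newly created files and directories
(
  umask 022
  touch newfile.txt && mkdir newdir
  stat -c %a newfile.txt   # → 644 (666 & ~022; no execute bit on new files)
  stat -c %a newdir        # → 755 (777 & ~022)
)
```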

cp -p list dupli.txt
 * Preserves the permissions and time stamps from source file:

CPU
lscpu
lshw -C CPU
hardinfo           ==>  sudo apt install hardinfo
nproc
sudo dmidecode -t 4
cpuid
cat /proc/cpuinfo
cat /proc/cpuinfo | grep processor | wc -l
 * CPU Info


 * The number of processors shown by /proc/cpuinfo might not be the actual number of physical cores.
 * For example, a processor with 2 cores and hyperthreading would be reported as 4 processors.
 * Counting distinct core ids reveals the physical cores; e.g. 4 different core ids indicate 4 actual cores.

core id        : 0
core id        : 2
core id        : 1
core id        : 3
 * cat /proc/cpuinfo | grep 'core id'
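The core-counting idea above can be sketched in one pipeline (this assumes a single-socket machine; on multi-socket systems the core id must be paired with the physical id):

```shell
# Distinct core ids ≈ physical cores (single-socket assumption)
grep 'core id' /proc/cpuinfo | sort -u | wc -l
# processor lines = logical CPUs (includes hyperthreads)
grep -c '^processor' /proc/cpuinfo
```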

top -o %CPU
htop
vmstat
sar 1 3       ==>  yum install sysstat
iostat        ==>  yum install sysstat
 * CPU Usage

top - 01:07:37 up 2:40,  1 user,  load average: 0.37, 0.37, 0.39
Tasks: 286 total,  1 running, 285 sleeping,   0 stopped,   0 zombie
%Cpu(s): 4.7 us,  1.6 sy,  0.0 ni, 93.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :  15935.7 total,   9403.3 free,   3045.2 used,   3487.1 buff/cache
MiB Swap:  4100.0 total,   4100.0 free,      0.0 used. 11720.3 avail Mem
 * Top Command

  PID USER     PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
 6865 aman     20   0  982620  85280  53716 S   6.2   0.5   2:52.77 Xorg
10082 aman     20   0 3537624 285448 118848 S   6.2   1.7   5:45.24 gnome-shell

CPU Section

us     user cpu time           % CPU time spent in user space
sy     system cpu time         % CPU time spent in kernel space
ni     user nice cpu time      % CPU time spent on low priority processes
id     idle cpu time           % CPU time spent idle
wa     io wait cpu time        % CPU time spent in wait (on disk)
hi     hardware irq            % CPU time spent servicing/handling hardware interrupts
si     software irq            % CPU time spent servicing/handling software interrupts
st     steal time              % CPU time stolen from a virtual machine

Main Section:
%MEM   directly related to RES; percentage of total physical memory used by the process
VIRT   total memory the process has access to: shared memory, mapped pages, swapped-out pages, etc.
RES    total physical memory (shared or private) that the process is using
SHR    total physical shared memory that the process has access to

RES is the closest measure of the memory actually used by the process, excluding what's swapped out. It includes SHR (shared physical memory), which means part of it may be counted against other processes as well.
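The same numbers can be read per-process from /proc: VmRSS in /proc/&lt;pid&gt;/status corresponds to top's RES, and on recent kernels it is broken down into anonymous, file-backed, and shmem parts (the Rss* field names assume a reasonably modern kernel). A sketch against the shell's own process:

```shell
# VmRSS ≈ top's RES; RssFile + RssShmem ≈ SHR (recent-kernel field names)
grep -E '^(VmRSS|RssAnon|RssFile|RssShmem)' /proc/self/status
```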

pgrep -n python
pidof chrome              - return all PIDs
pidof -s chrome           - return only 1 PID
ps -C chrome -o pid=      - C = CMD
 * Obtain the PID:

Memory
dmidecode -t 17
 * Info

cat /proc/meminfo  ==> egrep --color 'Mem|Cache|Swap' /proc/meminfo
top -o %MEM
free -m
              total       used        free      shared  buff/cache   available
Mem:          15935       3046        9470         767        3418       11787
Swap:          4099          0        4099
 * Usage

vmstat
vmstat -s  ==> More detailed
htop

ps -o pid,user,%mem,command ax | sort -b -k3 -r
sudo pmap 917              ==> Memory usage by libraries, other files, etc.
sudo pmap 917 | tail -n 1  ==> Total used by this process
 * Per Process usage check

HDD
du -h          ==> space by dir, including all subdirs in the dir tree
du -sh /etc/   ==> total disk space used by dir, suppressing subdirs
du -ah /etc/   ==> see all files, not just directories

df -h
df -T -h       ==>  List filesystem type as well
Filesystem    Type      Size  Used Avail Use% Mounted on
/dev/sda4     ext4       77G   51G   22G  71% /
df -t ext4     ==>  Only see ext4 filesystems
df -a          ==>  Also list filesystems that have a size of zero blocks
df -i          ==>  Display filesystem inodes

lsblk          ==>  Lists all storage blocks, including disk partitions and optical drives
NAME  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda     8:0    0   1.8T  0 disk
├─sda1  8:1    0   500M  0 part /boot/efi
├─sda2  8:2    0   128M  0 part

sudo fdisk -l  ==> Partition & FS type details
parted         ==> List partitions and modify them

IP
ip addr show  (ip a)
ifconfig
hostname -I
ip route get 8.8.8.8 | head -1 | awk '{print $7}'
ip route get 8.8.8.8 | head -1 | cut -d' ' -f7

DNS Config Info
cat /etc/resolv.conf
nmcli dev show | grep DNS
systemd-resolve --status
resolvectl status | grep -1 'DNS Server'

Host Command
host google.com
host -t a google.com
host -t mx google.com
host -t soa google.com
host -t cname files.google.com
host -t txt google.com
host google.com ns2.google.com     ==> Query a particular name server
host -t any google.com

DIG Command
dig google.com a
dig google.com mx
dig google.com ns
dig google.com txt
dig @ns1.google.com google.com a   ==> Query a particular name server
dig @4.2.2.2 google.com soa        ==> SOA record
dig +nssearch google.com           ==> SOA record
dig +short google.com              ==> only IP address
dig +noall +answer google.com      ==> Just answer line
dig +noall +answer google.com any  ==> Just answers for all records

NSLOOKUP
nslookup yahoo.com                 ==> Find A record
nslookup 209.191.122.70            ==> Reverse domain lookup
nslookup -query=mx www.yahoo.com   ==> Query MX (Mail Exchange) record
nslookup -query=ns www.yahoo.com   ==> NS (Name Server) record
nslookup -query=any yahoo.com      ==> Query all available DNS records
nslookup -debug yahoo.com          ==> Verbose information like TTL, etc.

CURL
curl -I http://domain.com                                  Get HTTP header information
curl -i http://domain.com                                  Get HTTP header + body
curl -L http://domain.com                                  Handle URL redirects
curl -v http://domain.com                                  Debug-level details
curl -x proxy.sr.com:3128 http://domain.com                Use a proxy to download a file
curl -k https://domain.com                                 Ignore the SSL certificate warning
curl -A "Mozilla/5.0" http://domain.com                    Spoof user agent
curl -L -H "user-agent: Mozilla/5.0" https://aman.info.tm  Custom headers
curl smtp://example.com:2525
curl ftp://example.com
curl example.com:21
curl example.com:7822                                      Troubleshooting SSH: SSH-2.0-OpenSSH_5.3
time curl google.com
curl -i https://site1.lab.com --cert /root/ca/domains/ubnsrv01-cert.pem --key /root/ca/domains/ubnsrv01-key.pem
curl -v -X OPTIONS https://site3.lab.com
curl -v -X TRACE https://site3.lab.com
curl --sslv2 https://yoururl.com
curl --tlsv1 https://yoururl.com
curl -H 'X-My-Custom-Header: 123' https://httpbin.org/get  Using httpbin tool; shows header info
curl -e google.com yoururl.com                             Referrer
curl --data "name=bool&last=word" https://httpbin.org/post POST data
curl -X POST https://httpbin.org/post                      Empty POST request
curl -H 'Host: aman.info.tm' 128.199.139.216               If server uses virtual hosting

Post JSON Data
curl --data '{"email":"test@example.com", "name": ["Boolean", "World"]}' -H 'Content-Type: application/json' https://httpbin.org/post

Time Breakdown
curl https://www.booleanworld.com/ -sSo /dev/null -w 'namelookup:\t%{time_namelookup}\nconnect:\t%{time_connect}\nappconnect:\t%{time_appconnect}\npretransfer:\t%{time_pretransfer}\nredirect:\t%{time_redirect}\nstarttransfer:\t%{time_starttransfer}\ntotal:\t\t%{time_total}\n'

IPtables
iptables -L                          ==>  List rules
iptables -F                          ==>  Flush (delete) all rules
iptables -nvL                        ==>  Check stats
iptables --flush MYCHAIN             ==>  Flush chain
iptables -X MYCHAIN                  ==>  Delete empty chain
iptables -A INPUT -p tcp --dport ssh -j ACCEPT          ==>  Allow SSH
iptables -A INPUT -p tcp --dport 80 -j ACCEPT           ==>  Allow incoming web traffic
iptables -A INPUT -j DROP                               ==>  Block all other incoming traffic
iptables -A INPUT -i ens160 -s 10.140.198.7 -j DROP     ==>  Block traffic from a host
iptables -I INPUT 1 -i lo -j ACCEPT                     ==>  Allow loopback
iptables -I INPUT 5 -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7  ==> Logging

TCPDump
sudo tcpdump -s 0 -i ens160 host 10.1.1.1 -v -w /tmp/packet_capture.cap
sudo tcpdump -s 0 -i ens160 host 10.1.1.1 and port 22 -v -w /tmp/packet_capture.cap
sudo tcpdump -s 0 -i ens160 host 10.1.1.1 and port not 22 and port not 80 -v -w /tmp/packet_capture.cap
sudo tcpdump -s 0 -i ens160 host 10.1.1.1 and tcp port not 22 and tcp port not 80 -v -w /tmp/packet_capture.cap

for i in `find . -type f | egrep "All.pcap"`; do echo $i; tcpdump -r $i '((host 1.1.1.1 or host 2.2.2.2) and host 3.3.3.3) and port 445'; echo -e "\n"; done

MTR
Provides the functionality of both the ping and traceroute commands. Prints information about the entire route.

mtr google.com
mtr -n google.com          Display numeric IP addresses
mtr -b google.com          Both hostnames and numeric IP addresses
mtr --tcp google.com       Use TCP SYN packets
mtr --udp google.com       Use UDP datagrams

Traceroute
traceroute 4.2.2.2            ==> Uses UDP
traceroute -n 4.2.2.2         ==> Do not resolve hostnames
sudo traceroute -nI 4.2.2.2   ==> Use ICMP packets
sudo traceroute -nT 4.2.2.2   ==> Use TCP SYN (port 80)

Netstat
netstat -a    List all ports (both TCP and UDP)
netstat -at   List TCP port connections
netstat -au   List UDP port connections
netstat -l    List all LISTENING connections
netstat -lt   List all TCP listening ports
netstat -s    Show statistics by protocol
netstat -st   Show statistics for TCP
netstat -tp   Display service name with PID
netstat -r    Display kernel IP routing table
netstat -anp
netstat -ant

PS
ps -aux                                             Display all processes in BSD format
ps -eo pid,ppid,user,cmd
ps -e --forest                                      Print process tree
ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head
ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%cpu | head

LS
Append a character to each file name indicating the file type:
ls -F
ls --classify

*  Executable files
/  Directories
@  Symbolic links
|  FIFOs
=  Sockets
>  Doors
   (nothing for regular files)

List Symbolic Links:

ls -la
lrwxrwxrwx  1 root  root   11 Sep 13 14:57 mounts -> self/mounts
dr-xr-xr-x  3 root  root    0 Sep 13 14:57 mpt
-rw-r--r--  1 root  root    0 Sep 13 14:57 mtrr

Troubleshooting
Source: scoutapm.com

1: Check I/O wait and CPU idle time
Look for "wa" (I/O wait) and "id" (CPU idle time).
I/O wait represents the time the CPU spends waiting for disk or network I/O.
Anything above 10% I/O wait should be considered high.
CPU idle time is a metric you WANT to be high.
If your idle time is consistently above 25%, consider it "high enough".

2: I/O wait is low and idle time is low: check CPU user time
Look for the %us column (first column), then find the process or processes doing the damage.
If user time is high, see which program is monopolizing the CPU.
By default, top sorts the process list by %CPU, so you can just look at the top process or processes.
If the situation seems anomalous: kill/restart the offending processes.
If the situation seems typical given history: upgrade the server or add more servers.

3: I/O wait is low and idle time is high
Your slowness isn't due to CPU or I/O problems, so it's likely an application-specific issue.
Slowness might be caused by another server in your cluster or by an external service like a DB.
If you suspect another server in your cluster, use strace and lsof:
strace will show you which file descriptors are being read or written to.
lsof can give you a mapping of those file descriptors to network connections.

4: I/O wait is high: check your swap usage
Use top or free -m.
Cache swaps will monopolize the disk, and processes with legitimate I/O needs will be starved for disk access.
In other words, checking swap separates "real" I/O wait problems from RAM problems that merely "look like" I/O wait problems.

5: Swap usage is high
High swap usage means that you are actually out of RAM.

6: Swap usage is low
Low swap usage means you have a "real" I/O wait problem.
iotop is a great tool for identifying I/O offenders.

7: Check memory usage
Once top is running, press the M key to sort processes by memory used.
Important: don't look at the "free" memory -- it's misleading.
To get the actual memory available, subtract the "cached" memory from the "used" memory, because Linux caches things liberally and that memory can often be freed up when needed.
A memory leak can be satisfactorily addressed by a one-time or periodic restart of the process.
If memory usage seems anomalous: kill the offending processes.
If memory usage seems business-as-usual: add RAM to the server, or split high-memory services onto other servers.
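On modern kernels (3.14+) this arithmetic is already done for you: /proc/meminfo exposes MemAvailable, which free reports in its "available" column. A quick check:

```shell
# MemAvailable already accounts for reclaimable caches (kernel >= 3.14)
awk '/^MemAvailable/ {print $2 " " $3}' /proc/meminfo
```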

= Flows =


 * Complete Flow of PC opening a Website:


 * 1) Check NW config
 * 2) DHCP if not configured
 * 3) Check Domain name in Browser Cache
 * 4) Check Domain name in OS Cache
 * 5) Check if an entry exists in Hosts File
 * 6) If not Found in any cache, Prepare to send UDP DNS query to DNS Server
 * 7) If DNS Server configured is in same Network Check MAC address in ARP Table
 * 8) If not found, send ARP for MAC Address
 * 9) Forward DNS Query to DNS Server and wait for reply containing IP address of Website
 * 10) If DNS server configured is not in same subnet, check Gateway config(IP & MAC address)
 * 11) If MAC address not found in ARP Table, send ARP request
 * 12) After getting reply, fwd the DNS query to gateway
 * 13) After getting DNS response, start TCP 3-way handshake S-SA-A.
 * 14) Start SSL Handshake if SSL/TLS configured
 * 15) Send GET Request
 * 16) Server sends ACK & Body containing HTML Data
 * 17) If HTTP 1.0, Server sends FIN & closes connection
 * 18) Client sends FIN-ACK
 * 19) Server sends ACK


 * Complete Flow of DNS Traffic


 * 1) Check NW config
 * 2) DHCP if not configured
 * 3) Check Domain name in Browser Cache
 * 4) Check Domain name in OS Cache
 * 5) Check if an entry exists in Hosts File
 * 6) If not Found in any cache, Prepare to send UDP DNS query to DNS Server
 * 7) If DNS Server configured is in same Network Check MAC address in ARP Table
 * 8) If not found, send ARP for MAC Address
 * 9) Forward DNS Query to DNS Server and wait for reply containing IP address of Website
 * 10) If DNS server configured is not in same subnet, check Gateway config(IP & MAC address)
 * 11) If MAC address not found in ARP Table, send ARP request
 * 12) After getting reply, fwd the DNS query to gateway
 * 13) DNS Server ??
 * 14) DNS Server ?? Iterative? Recursive? TLD? Authoritative
 * 15) DNS Server ??
 * 16) After getting DNS response, start TCP 3-way handshake S-SA-A.
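The lookup-order steps above (caches, hosts file, then DNS) can be observed with getent, which consults the hosts database in the order configured in /etc/nsswitch.conf (typically "files" before "dns"):

```shell
# getent follows /etc/nsswitch.conf order: /etc/hosts first, then DNS
getent hosts localhost   # served from /etc/hosts, no DNS query needed
```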

[PC1]-[Hub]-[Switch]-[Router]--[Router]--[PC2]
 * Complete Flow of Traffic passing through below scenario:


 * 1) Check NW config
 * 2) DHCP if not configured
 * 3) Check if PC2 in same Subnet(not in this scenario as routers present)
 * 4) If in Same Subnet, check if MAC address is there in ARP Table
 * 5) Else send ARP Request
 * 6) Once MAC address is known, directly send Packet to PC2
 * 7) If PC2 is in Different Subnet(True for above scenario), Check Gateway IP address & MAC address
 * 8) If MAC address is not known, send an ARP request.
 * 9) Hub is directly connected, will receive & Flood packet on all Ports.
 * 10) Switch will receive packet and check its CAM Table for the MAC to Port bindings
 * 11) If MAC entry is not found in CAM table, Switch will Flood the ARP packet on all ports.
 * 12) Other destinations will drop the ARP Request packet as they do not have the IP address requested in ARP Header.
 * 13) Only Router will accept the packet as it has the requested IP address matching its own MAC address.
 * 14) It will reply with an ARP Reply message.
 * 15) Switch will add an entry for this MAC address & port number in its CAM Table once the reply packet passes through it.
 * 16) Hub will flood the packet through all ports.
 * 17) ARP Reply will reach PC1, it will add entry to its ARP Table
 * 18) Then send a packet destined to PC2 with the destination MAC address set to the Router's interface MAC address received in the ARP reply.

= Sorting Algorithms =

 * Quicksort is a good default choice. It tends to be fast in practice, and with some small tweaks its dreaded O(n^2) worst-case time complexity becomes very unlikely. A tried and true favorite.
 * Heapsort is a good choice if you can't tolerate a worst-case time complexity of O(n^2) or need low space costs. The Linux kernel uses heapsort instead of quicksort for both of those reasons.
 * Merge sort is a good choice if you want a stable sorting algorithm. It can easily be extended to handle data sets that can't fit in RAM, where the bottleneck cost is reading and writing the input on disk, not comparing and swapping individual items.
 * Radix sort looks fast, with its O(n) worst-case time complexity. But if you're using it to sort binary numbers, there's a hidden constant factor that's usually 32 or 64 (depending on how many bits your numbers are). That's often way bigger than O(lg(n)), meaning radix sort tends to be slow in practice.
 * Counting sort is a good choice in scenarios where there is a small number of distinct values to be sorted. This is pretty rare in practice, and counting sort doesn't get much use.
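To illustrate why counting sort wants a small value range, a bash sketch over small non-negative integers (the input array is arbitrary example data):

```shell
# Counting sort sketch: bucket counts indexed by value, then emit indices in order.
# The count array is as long as the largest value, which is why a small range matters.
input=(3 1 4 1 5 9 2 6)
declare -a count
for n in "${input[@]}"; do
  count[$n]=$(( ${count[$n]:-0} + 1 ))
done
sorted=()
for i in "${!count[@]}"; do            # indexed-array keys iterate in ascending order
  for ((c = 0; c < count[i]; c++)); do
    sorted+=("$i")
  done
done
echo "${sorted[@]}"   # → 1 1 2 3 4 5 6 9
```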


 * Which sorting algorithm has the best asymptotic run time complexity?