Topics

Stream        Topics                                      % Done
Networking    OSI layers                                  100%
              TCP, IP, UDP, ICMP, ARP, DHCP, NAT, FTP     100%
              HTTP, SSL                                   100%
              Caching, Cookies, Certificates
              DNS                                         Done
              SMTP
              Latency, Tail Latency
              BGP, OSPF                                   Done
              Load Balancing
Linux         Shell scripting
              Kernel, libraries, system calls             Done
              Memory Management
              Permissions, file systems                   Done
              Linux Commands Brushup                      Done
Programming   Python Brushup
              Algorithms
              HTML, CSS
              Databases, SQL, Indexing
Google        Technical Questions
              Behavioral Questions
              Leadership Questions
Designing     Medium-sized Campus Design
              Cheatsheet                                  Done

Preparations

  • 1st round
  • Starting 1st week of December
  • Video interview, evaluated on:
      coding
      tech
      communication
      thinking
  • 1 hour:
      1st: troubleshooting code
      Distributed systems
      Web & network technologies
  • F2F - Bangalore office
      2 tech rounds: networking / general web troubleshooting
      1 manager round
  • Share use cases
  • Working with code (15 min)
  • What the code is, the issue, and how to improve it
  • Ask questions about the question
  • Break it down
  • No IDE
  • No libraries

Topics 2

  • Semaphores
  • Mutex
  • Threads
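These three primitives can be sketched in Python (the brush-up language listed above); the shared counter and the 2-slot worker pool are illustrative examples, not from any specific interview question:

```python
import threading

counter = 0
lock = threading.Lock()         # mutex: mutual exclusion, one owner at a time
slots = threading.Semaphore(2)  # counting semaphore: at most 2 concurrent holders

def increment(n):
    global counter
    for _ in range(n):
        with lock:              # critical section: read-modify-write must be atomic
            counter += 1

def limited_work(results, i):
    with slots:                 # at most 2 threads run this body simultaneously
        results[i] = i * i

results = [0] * 4
threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
threads += [threading.Thread(target=limited_work, args=(results, i)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter, results)  # 40000 [0, 1, 4, 9]
```

Without the lock, concurrent `counter += 1` updates could be lost; the semaphore differs from the mutex only in allowing a configurable number of holders.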


Questions

  • What are alternative solutions to Spanning Tree?
Port isolation
VLAN isolation
Loopback detection
  • Tail latency
200 ms -> initial RTO used by most OSs
Count only RTOs, not ordinary retransmissions
Exponential backoff: the RTO doubles with each retry -> 400 ms, 800 ms, 1600 ms
Causes: congestion; path MTU not honored; ICMP rate-limited or PMTUD reply packets dropped
OS issues -> zero window -> high CPU (slow responses), high memory, disk issues
  • 4K video file edit latency
RTP typically runs over UDP; look for network-level issues; take traces on both sides to see where the latency is
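The RTO doubling above can be sketched in Python (200 ms initial RTO as noted; the retry count is illustrative):

```python
def rto_schedule(initial_rto_ms=200, retries=5):
    """Exponential backoff: each successive retransmission timeout doubles."""
    schedule = []
    rto = initial_rto_ms
    for _ in range(retries):
        schedule.append(rto)
        rto *= 2
    return schedule

print(rto_schedule())  # [200, 400, 800, 1600, 3200]
```

A packet recovered only on the 3rd RTO has already waited 200 + 400 + 800 = 1400 ms, which is how a handful of RTOs comes to dominate tail latency.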


Networking Followup

Moving files between APAC & EMEA

Only 1 Mbps throughput -- what's wrong?
Multiple streams
Peering
Latency
Congestion
Bandwidth-delay product (BDP)
TCP window size
Packet capture

SSH to remote server

GCP VM
Timed out

SSH Config File:

/etc/ssh/sshd_config      # sshd server configuration file
Port 22
ListenAddress 0.0.0.0
PermitRootLogin prohibit-password
PubkeyAuthentication yes
PasswordAuthentication yes
PermitEmptyPasswords no
KerberosAuthentication no
UsePAM yes
Banner none
TCPKeepAlive yes
/etc/ssh/ssh_config       # ssh client configuration file

Restart SSHD:

sudo systemctl restart sshd.service
sudo systemctl status sshd.service

Wrong IP:

ssh: connect to host 192.168.1.51 port 22: No route to host

Wrong Port:

ssh: connect to host 192.168.1.50 port 2222: Connection refused
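The failure modes above (plus the silent timeout you get through a firewall) can be told apart programmatically; a small Python sketch, with the probed host/port purely illustrative:

```python
import errno
import socket

def check_tcp(host, port, timeout=3.0):
    """Classify a TCP connect attempt the way the ssh error messages do."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"                  # something (e.g. sshd) is listening
    except socket.timeout:
        return "timed out"             # packets silently dropped (firewall/filtered)
    except ConnectionRefusedError:
        return "connection refused"    # host is up, nothing listening on that port
    except OSError as e:
        if e.errno == errno.EHOSTUNREACH:
            return "no route to host"  # wrong IP / no ARP reply on the LAN
        return e.strerror
    finally:
        s.close()

print(check_tcp("127.0.0.1", 2222))    # "connection refused" unless something
                                       # happens to listen on 2222 locally
```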

Netstat:

netstat -ant | grep ':22 '
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN

Journalctl:

journalctl -xe

System Design

Past system design you have done
Trade-offs you made in it
Scaling this specific network
3-tier network design in a public cloud: how would you design it?
Will you need a load balancer? Why?

How can you establish secure connectivity to public cloud

VPN tunnel?
How do you ensure High Availability for this?
Webmin
SSH? Using Key? Using Password?
VNC? Encryption?
RDP? Encryption?
TeamViewer
Dataplicity
SSH + DynDNS

Re-Evaluate the design

Room for improvement?

Answers

  • Network throughput is impacted by TCP window size, latency, and congestion
  • Window size
Maximum amount of data a sender can have in flight before receiving an acknowledgement. Standard (unscaled) TCP window = 65,535 bytes (~64 KB)


  • It's not just about latency: TCP doesn't like congestion
Adding more traffic produces a negative marginal effect above roughly 30% utilization
  • Can the application even generate 10 Gbps of traffic? OS limits: CPU, memory, NIC speed?
  • Window scaling changes the TCP window to:
64 KB × 2^n (n = window scale factor)
A window scale factor of 7 gives a TCP window of 64 KB × 128 = 8 MB
Single-flow throughput is limited to:
TCP window size / RTT
Without window scaling, TCP is limited to:
64 KB / 100 ms ≈ 5 Mbps
With the CloudBridge default window scale, TCP is limited to:
8 MB / 100 ms ≈ 670 Mbps
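These window-limited figures can be checked with a quick calculation (assuming the 65,535-byte base window and a scale factor of 7; the exact Mbps value depends on whether "8 MB" means 10^6 or 2^20 bytes):

```python
def max_throughput_mbps(window_bytes, rtt_s):
    """Single-flow TCP throughput ceiling: window / RTT, in Mbit/s."""
    return window_bytes * 8 / rtt_s / 1e6

base = 65535            # unscaled TCP window (16-bit header field)
scaled = base * 2 ** 7  # window scale factor 7 -> ~8 MB

print(round(max_throughput_mbps(base, 0.100), 1))    # 5.2
print(round(max_throughput_mbps(scaled, 0.100), 1))  # 671.1
```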


Take packet captures
Check for fragmentation
Window scaling: 64 KB -> 8 MB
SACK to minimize data that is resent
Fast retransmit to reduce delay before resending

Bandwidth Delay Product

Amount of data that can be in transit (flight) in the network
Includes data in queues if they contributed to the delay
BDP (bytes) = total_available_bandwidth (bps) x round_trip_time (sec) / 8
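For example, plugging a 1 Gbps path with 100 ms RTT into the formula above (numbers assumed for illustration):

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bytes that can be in flight on the path."""
    return bandwidth_bps * rtt_s / 8

# 1 Gbps path, 100 ms RTT -> the TCP window must be at least this
# large to keep the pipe full.
print(bdp_bytes(1e9, 0.100))  # ≈ 12,500,000 bytes = 12.5 MB
```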


BIC TCP (Binary Increase Congestion control)

  • BIC TCP for faster recovery from packet loss
Allows bandwidth probing to be more aggressive initially when the difference from the current window size to the target window size is large, and become less aggressive as the current window size gets closer to the target window size.
A unique feature of the protocol is that its increase function is logarithmic; it reduces its increase rate as the window size gets closer to the saturation point.
  • BIC is optimized for high speed networks with high latency: so-called "long fat networks".
  • For these networks, BIC has significant advantage over previous congestion control schemes in correcting for severely underutilized bandwidth.
  • BIC implements a unique congestion window (cwnd) algorithm.
  • This algorithm tries to find the maximum cwnd by searching in three parts: binary search increase, additive increase, and slow start.
  • When packet loss occurs, BIC uses multiplicative decrease to reduce the cwnd.
  • BIC TCP is implemented in the Linux kernel and was the default from version 2.6.8.
  • The default was changed to CUBIC TCP in version 2.6.19.

CUBIC TCP

  • CUBIC is an implementation of TCP with an optimized congestion control algorithm for high bandwidth networks with high latency (LFN: long fat networks).
  • CUBIC TCP is implemented and used by default in Linux kernels 2.6.19 and above, as well as Windows 10 & Windows Servers.
  • It is a less aggressive and more systematic derivative of BIC TCP, in which the window size is a cubic function of time since the last congestion event, with the inflection point set to the window size prior to the event.
  • Because it is a cubic function, there are two components to window growth.
  • The first is a concave portion where the window size quickly ramps up to the size before the last congestion event.
  • Next is the convex growth where CUBIC probes for more bandwidth, slowly at first then very rapidly.
  • CUBIC spends a lot of time at a plateau between the concave and convex growth region which allows the network to stabilize before CUBIC begins looking for more bandwidth.
  • Another major difference between CUBIC and standard TCP flavors is that it does not rely on the cadence of RTTs to increase the window size.
  • CUBIC's window size is dependent only on the last congestion event.
  • With standard TCP, flows with very short round-trip delay times (RTTs) will receive ACKs faster and therefore have their congestion windows grow faster than other flows with longer RTTs.
  • CUBIC allows for more fairness between flows since the window growth is independent of RTT.
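The cubic growth function described above is, per RFC 8312, W(t) = C·(t − K)³ + W_max with K = ∛(W_max·(1 − β)/C); a small sketch with the RFC default constants (C = 0.4, β = 0.7) and an assumed W_max:

```python
# CUBIC window growth (RFC 8312): concave up to W_max, plateau, then convex.
C = 0.4     # scaling constant (RFC 8312 default)
BETA = 0.7  # multiplicative decrease factor (RFC 8312 default)

def cubic_window(t, w_max):
    """Window size t seconds after the last congestion event (w_max assumed)."""
    k = (w_max * (1 - BETA) / C) ** (1 / 3)  # time needed to climb back to w_max
    return C * (t - k) ** 3 + w_max

w_max = 100.0  # segments at the last loss event (illustrative)
for t in [0, 2, 4, 6, 8]:
    # grows concavely toward w_max=100 until t≈4.2 s, then convexly beyond it
    print(f"t={t}s  cwnd={cubic_window(t, w_max):.1f}")
```

At t = 0 the window is β·W_max = 70 segments (the post-loss window), and growth depends only on elapsed time since the loss event, not on RTT, which is the fairness property noted in the last bullet.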