Ace the Web Development Interview — Step 1

Prepare expeditiously for FAANG!

Vishal Bhushan

--

FAANG (Facebook, Apple, Amazon, Netflix, Google), Microsoft

This story (with its upcoming parts) unveils some of the top interview topics covering general aspects of web development. Let’s start by going over networking, web security, optimization techniques for browser loading and rendering performance, SEO (Search Engine Optimization), APIs, and system design. Overall, these are topics that are often overlooked, but they can really make you stand out from your peers and help you ace the interview.

1. Network

a) From Entering a URL to Receiving a Root File

  • Resolve the URL to an IP address (DNS servers handle the URL-to-IP mapping).
  • Get a response (the root HTML file) from the server.
  • Make requests for the external resources referenced by the root file.
  • Parse the HTML, CSS, and JS.
  • Render the website in the browser.
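
Here is a minimal Node.js sketch of the first two steps, DNS resolution and fetching the root file (the hostname is just an example):

const dns = require("dns").promises;
const https = require("https");

async function fetchRoot(hostname) {
  // Step 1: DNS resolution maps the hostname to an IP address.
  const { address } = await dns.lookup(hostname);
  console.log(`${hostname} resolves to ${address}`);

  // Step 2: request the root file ("/") from the server over HTTPS.
  https.get({ hostname, path: "/" }, (res) => {
    let html = "";
    res.on("data", (chunk) => (html += chunk));
    res.on("end", () => console.log(`Received ${html.length} characters of HTML`));
  });
}

fetchRoot("example.com");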

b) HTTP, Servers, and Clients

Clients obtain content (such as videos) and/or services (such as an online calculator) from servers.

Servers: The server controls access to a centralized resource or service, such as a website. In other words, a server has a website, and it can show that website to other processes, called clients, when asked.

Clients: Client processes use the Internet to consume content and use services. Client processes almost always initiate connections to servers, while server processes wait for requests from clients. So, if you go to google.com, your browser is the client.

A pizza delivery centre

A good analogy for a server and client architecture is a 24/7 pizza delivery place and its customers. The pizza place (a server) is generally always open, and has pizza (a website) which they can give to the customers (the clients) who ask for it.

c) The Anatomy of a URL

A URL, or Uniform Resource Locator, is used to locate files that exist on servers. URLs consist of the following parts:

  • The protocol in use
  • The hostname of the server
  • The location of the file
  • The arguments (query string) passed to the file

Anatomy of a URL
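
You can see these parts in code with the built-in URL API, available in browsers and Node.js (the URL below is just an example):

const url = new URL("https://www.example.com/path/to/page.html?user=alice&lang=en");

console.log(url.protocol); // "https:" (the protocol in use)
console.log(url.hostname); // "www.example.com" (the hostname of the server)
console.log(url.pathname); // "/path/to/page.html" (the location of the file)
console.log(url.search);   // "?user=alice&lang=en" (the arguments to the file)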

d) HTTP is a Request-Response Protocol (Stateless in Nature)

The host first needs to establish a ‘TCP connection’ with the server. This TCP connection requires a ‘handshake’ that involves at least three back-and-forth messages between the client and the server.

e) HTTP: Request Messages

The anatomy of an HTTP request line

The request line consists of three parts: the method, the URL, and the HTTP version.

The anatomy of an HTTP request
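
For instance, a GET request for a site’s home page might look roughly like this (the host and header values are illustrative):

GET /index.html HTTP/1.1
Host: www.example.com
Connection: keep-alive
Accept: text/html
User-Agent: Mozilla/5.0

The first line is the request line (method GET, URL /index.html, version HTTP/1.1); the lines after it are header lines.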

f) HTTP: Response Messages

It has three parts: an initial status line, some header lines, and an entity body.

Anatomy of an HTTP response message

Here is a list of some common status codes and their meanings:

  • 200 OK: the request was successful.
  • 404 Not Found: the requested object doesn’t exist on the server.
  • 400 Bad Request: the request was in a format the server could not comprehend.
  • 500 Internal Server Error: the server encountered an unexpected error.
  • 505 HTTP Version Not Supported: the request’s HTTP version is not supported by the server.

Header lines

The response body contains the file requested. These are some of the header lines that you must know about:

  • Cache-Control is used to configure resource caching. Directives like must-revalidate, no-cache, public, private, and max-age=<seconds> can be used with it.

  • ETag: an identifier for a specific version of a resource, such as 3147526947, used to check the freshness of a cached resource. The browser can ask the server for the latest ETag, and if it matches the ETag of the cached version of the resource, the browser can keep using the cached copy; otherwise a fresh version of the resource must be requested.

  • Transfer-Encoding is used to tell how the message body has been encoded. The following directives can be used, separated by a comma: chunked, compress, deflate, gzip, identity. Note that HTTP/2 does not support chunked.
  • X-Frame-Options is used to tell a browser if it can render a page in a <frame>, <iframe>, <embed>, or <object> tag. This can be used to prevent clickjacking attacks that get users to click on hidden elements.
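
Putting it together, a response carrying these headers might look roughly like this (the values are illustrative):

HTTP/1.1 200 OK
Cache-Control: public, max-age=86400
ETag: "3147526947"
X-Frame-Options: DENY
Content-Type: text/html

<html> ... the entity body, i.e. the requested file ... </html>

The first line is the status line, followed by header lines, a blank line, and the entity body.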

g) AJAX (Asynchronous JavaScript and XML) is a combination of web development techniques that allows us to:

  • Update a web page without reloading it.
  • Request, send, and receive data from the server after page loading and rendering.
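
A minimal sketch with the fetch API (the endpoint and element id are hypothetical):

fetch("/api/latest-news")
  .then((response) => response.json())
  .then((articles) => {
    // Update one part of the page without reloading it.
    document.querySelector("#news").textContent = articles[0].title;
  })
  .catch((err) => console.error("Request failed:", err));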

WebSockets

The WebSocket API opens up ‘full-duplex’ connections that allow bi-directional communication between clients and servers (messages can be sent and received simultaneously over one connection).

WebSockets are useful for real-time applications where the UI needs to be updated without reloading the page. They are also useful for gaming and chat applications.
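
A minimal client-side sketch (the server URL is just a placeholder):

const socket = new WebSocket("wss://chat.example.com/socket");

// Send a message as soon as the full-duplex connection is open.
socket.addEventListener("open", () => {
  socket.send(JSON.stringify({ type: "chat", text: "Hello!" }));
});

// Receive messages pushed by the server at any time, with no page reload.
socket.addEventListener("message", (event) => {
  console.log("Server says:", event.data);
});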

2. Web Security

a) HTTPS is encrypted HTTP, which makes it more secure. With initiatives like HTTPS Everywhere, it is incredibly important that your website is served fully over HTTPS.

b) Cross-Origin Resource Sharing (CORS): when a website can access a resource or execute commands on another domain via HTTP requests, the process is called cross-origin resource sharing. This is a problem if left unrestricted, because it can be abused.

Protecting your website’s resources from other domains is important!
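
On the server, cross-origin access is granted explicitly via response headers. A hedged Node.js sketch (the allowed origin and port are placeholders):

const http = require("http");

http
  .createServer((req, res) => {
    // Only this one trusted origin may read the response from a cross-origin request.
    res.setHeader("Access-Control-Allow-Origin", "https://trusted.example.com");
    res.setHeader("Content-Type", "application/json");
    res.end(JSON.stringify({ message: "shared with the allowed origin only" }));
  })
  .listen(3000);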

c) Clickjacking and cross-site scripting attacks are extremely common, and as a front-end engineer you need to know about them in order to prevent them!

Clickjacking attack: an attacker gets you to click on elements of an invisible page embedded in an iframe on their site.

Solution: the X-Frame-Options header can be sent in HTTP responses; it tells browsers whether they may load a page in an iframe.

XSS (cross-site scripting) attack: an attacker injects and executes malicious scripts in your browser, which may, for example, steal the cookies needed to log in to an e-commerce site, bank account, etc.

Solution: input sanitization (escape all text entered by users to ensure none of it is treated as executable markup or script).
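
A minimal sketch of such escaping (a hand-rolled helper; real projects usually rely on a templating library or framework that escapes by default):

// Replace HTML metacharacters so user input is rendered as text, not executed as markup.
function escapeHtml(input) {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

console.log(escapeHtml('<script>steal(document.cookie)</script>'));
// -> &lt;script&gt;steal(document.cookie)&lt;/script&gt;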

3. Optimizing Browser Loading Performance

Performance can make or break a website.

a) Performance Metrics: These days, two types of metrics are popular: time to first byte and page load time.

Page load time and time to first byte
  • Time to first byte is the time from the issuance of the HTTP request until the web browser receives the first byte of the response.
  • Page load time is the time from the issuance of the HTTP request until the page has finished loading completely.
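
Both metrics can be read in the browser with the Navigation Timing API; a sketch (run after the window load event so loadEventEnd is populated):

window.addEventListener("load", () => {
  // Defer slightly so loadEventEnd has been recorded.
  setTimeout(() => {
    const [nav] = performance.getEntriesByType("navigation");

    // Time to first byte: request sent until the first byte of the response arrives.
    console.log("TTFB (ms):", nav.responseStart - nav.requestStart);

    // Page load time: start of navigation until the load event finishes.
    console.log("Page load (ms):", nav.loadEventEnd - nav.startTime);
  }, 0);
});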

b) Optimising Requests: a total of at least five messages (including those of the TCP connection handshake) are exchanged for a single request-response cycle. This is a considerable overhead, and optimizing it deserves attention.

Solution: HTTP/2 multiplexing

With HTTP/2, the constraint that the server must send responses in the order it received the requests is gone. Multiplexing lets many requests and responses be in flight over one connection, with each response mapped back to its original request.

So, the server can immediately respond with whichever request is processed first, and HTTP/2 will map each response to the original request (through one TCP connection).

Server push using HTTP/2
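
A hedged sketch of server push with Node’s built-in http2 module (certificate and file paths are placeholders; the push is issued before the main response):

const http2 = require("http2");
const fs = require("fs");

const server = http2.createSecureServer({
  key: fs.readFileSync("server-key.pem"),
  cert: fs.readFileSync("server-cert.pem"),
});

server.on("stream", (stream, headers) => {
  if (headers[":path"] === "/") {
    // Push the stylesheet proactively; the browser has not asked for it yet.
    stream.pushStream({ ":path": "/style.css" }, (err, pushStream) => {
      if (!err) pushStream.respondWithFile("style.css", { "content-type": "text/css" });
    });
    // Then answer the original request for the page itself.
    stream.respondWithFile("index.html", { "content-type": "text/html" });
  }
});

server.listen(8443);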

c) Network Optimisations can be achieved with the following techniques:

  • Reduce time to first byte
  • Speed up DNS resolution
  • Enable browser/server caching
  • Reduce redirects
  • Serve assets from multiple domains

d) A CDN (Content Delivery Network) is a network of servers in addition to the main server. They are replicas of the main server, meant to serve content quickly.

A CDN puts your content in many places at once, serving each user from whichever server is closest to them or gives them the best coverage.

If any of the following is true for your website, you should definitely opt for a CDN:

  • Large amounts of traffic
  • A scattered, international user-base
  • Expected growth
  • Lots of media content, such as images and videos

e) Client-side vs Server-side rendering:

A website that uses client-side rendering will have the server send a mostly empty HTML file to begin with. It will have links to CSS in the <head> and links to JavaScript in <script> tags just before the closing </body> tag; the <body> is otherwise empty. This stops JavaScript loading from blocking content loading: the content is loaded first, and the scripts are loaded next.

Pros : JavaScript bundles can be cached to speed things up in the future.

Cons : Loading content may take a while because requests have to travel all the way to the server, which can be very far away.

SEO takes a hit because all that search engines see is an empty HTML file.

In server-side rendering, the whole web page is compiled on the server. The HTML is completely populated with the content, which is sent to the client.

Pros : Search engines will be able to crawl the site, resulting in better SEO because the pages will be populated with content.

Cons

  • A page has to be rendered on the server every time a new page on the site is visited, which leads to full page reloads.
  • The server receives frequent requests, which can easily flood it and slow it down.
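
A minimal server-side rendering sketch, here using Express purely as an example (the routes and data are hypothetical):

const express = require("express");
const app = express();

app.get("/", (req, res) => {
  const products = ["Laptop", "Phone", "Headphones"]; // would normally come from a database
  // The HTML arrives at the client fully populated with content.
  res.send(`
    <html>
      <head><title>Shop</title></head>
      <body>
        <h1>Products</h1>
        <ul>${products.map((p) => `<li>${p}</li>`).join("")}</ul>
      </body>
    </html>
  `);
});

app.listen(3000);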

f) Progressive Rendering can make a world of difference for the speed of your apps.

You might have noticed that most modern websites render gradually. They don’t appear all at once, but in parts. This is because the page rendering process is not done all in one go. The browser starts receiving, parsing, and rendering whatever HTML it can, even if it has not received all of it.

This method is called progressive rendering, and is sort of in-between server-side and client-side rendering. The components of the page are rendered on the server and sent in order of priority. So, the highest priority components are rendered on the server, sent to the browser, and painted on the browser first.

g) Service Workers:

PWAs (progressive web apps) are websites that work like native apps. Service workers are essential for building smooth progressive web apps.

Why progressive web apps? For one, they are reachable from any device and any browser, making them platform independent. Users also don’t have to download a native app.

Web workers are JavaScript files that run independently of the website off of the main thread of the app. Any heavy non-interactive calculation or anything else that can run in parallel to the site can be offloaded to a web worker.

A service worker is a specific type of web worker that acts as a proxy between the browser and the server. It also acts as a proxy between the browser and the cache. So, service workers act as a caching agent, and can store content for offline use.

They also give you more control over network requests and allow you to handle push-messaging, too. They run independently of your app, meaning that they can run even when the app is not open. Progressive web apps use service workers, and can thus work offline or on very slow networks.
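
A minimal sketch of a caching service worker (file names and the cache name are placeholders):

// main.js (on the page): register the service worker.
if ("serviceWorker" in navigator) {
  navigator.serviceWorker.register("/sw.js");
}

// sw.js (the service worker): act as a proxy between the browser and the network/cache.
self.addEventListener("fetch", (event) => {
  event.respondWith(
    caches.open("offline-cache-v1").then((cache) =>
      cache.match(event.request).then(
        (cached) =>
          cached ||
          fetch(event.request).then((response) => {
            // Store a copy so the resource is available offline next time.
            cache.put(event.request, response.clone());
            return response;
          })
      )
    )
  );
});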

h) Prefetching & Preloading Resources

Prefetching assets and DNS prefetching can help improve the overall user experience of a website.

<html>
<head>
  <!-- Hint to the browser to fetch the logo during idle time, before it is needed -->
  <link rel="prefetch" href="/udata/1kMwYObLWGB/blue_logo.png"/>
</head>
<body>
  <a href="/udata/1kMwYObLWGB/blue_logo.png">Logo</a>
</body>
</html>

4. Search Engine Optimization (SEO)

SEO is simply the process of using known methods to ‘optimize’ your content/website so that a search engine’s (usually that means Google’s) ranking algorithm picks it to be among the first few results. That would result in more traffic, and hence more revenue.

Optimizing for search engines can make or break your website. It’s an essential skill for anyone creating content for the Internet.

“The best hiding place for a dead body is really on the second page of Google search results.”🤣

SEO comes in two types: on-page SEO and off-page SEO.

This is the end of Step 1 of this story. I will continue with more critical web development interview topics soon in Ace the Web Development Interview: FAANG (Facebook, Apple, Amazon, Netflix, Google) Preparation — Step 2.
