Content Delivery Network
Tags: Server, CDN

A Content Delivery Network or Content Distribution Network (CDN) is a geographically distributed network of proxy servers and their data centers. The goal is to provide high availability and performance ('speed') by distributing the service spatially relative to end users.

1. Introduction

CDNs came into existence in the late 1990s as a means of alleviating the performance bottlenecks of the internet as the internet was starting to become a mission-critical medium for people and enterprises. Since then, CDNs have grown to serve a large portion of the internet content today, including web objects (text, graphics, and scripts), downloadable objects (media files, software, documents), applications, live streaming media, on-demand streaming media, and social media sites.

CDNs are a layer in the internet ecosystem. Content owners such as media companies and e-commerce vendors pay CDN operators to deliver their content to their end users. In turn, a CDN pays internet service providers (ISPs), carriers, and network operators for hosting its servers in their data centers.

CDN is an umbrella term spanning different types of content delivery services: video streaming, software downloads, web and mobile content acceleration, licensed/managed CDN, transparent caching, and services to measure CDN performance, load balancing, multi-CDN switching, analytics, and cloud intelligence. CDN vendors may also cross over into adjacent industries such as security (DDoS protection and web application firewalls (WAF)) and WAN optimization.

Notable content delivery service providers include Akamai Technologies, Edgio, Cloudflare, Amazon CloudFront, Fastly, and Google Cloud CDN.

2. Technology

CDN nodes are usually deployed in multiple locations, often over multiple internet backbones. Benefits include reducing bandwidth costs, improving page load times, and increasing the global availability of content. The number of nodes and servers making up a CDN varies depending on the architecture: some reach thousands of nodes with tens of thousands of servers on many remote points of presence (PoPs), while others build a global network with a small number of geographical PoPs.

Requests for content are typically algorithmically directed to nodes that are optimal in some way. When optimizing for performance, locations that are best for serving content to the user may be chosen. This may be measured by choosing locations with the fewest hops, the lowest number of network seconds away from the requesting client, or the highest availability regarding server performance (both current and historical), to optimize delivery across local networks. When optimizing for cost, the least expensive locations may be chosen instead. In an optimal scenario, these two goals tend to align, as edge servers close to the end user at the edge of the network may have an advantage in performance or cost.
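As a rough illustration of performance- versus cost-driven steering, the Python sketch below scores candidate PoPs by measured round-trip time, current load, and delivery cost. The PoP names, metrics, and scoring formula are illustrative assumptions, not a description of how any particular CDN routes requests.

```python
# Minimal sketch of request routing (hypothetical PoPs and metrics).
# A real CDN would combine DNS- or anycast-based steering with live telemetry.

from dataclasses import dataclass

@dataclass
class PoP:
    name: str
    rtt_ms: float        # measured round-trip time from the client's network
    load: float          # current utilization, 0.0 (idle) to 1.0 (saturated)
    cost_per_gb: float   # delivery cost from this location

def pick_pop(pops: list[PoP], optimize_for: str = "performance") -> PoP:
    """Choose a PoP either for lowest latency (penalizing loaded nodes) or lowest cost."""
    if optimize_for == "cost":
        return min(pops, key=lambda p: p.cost_per_gb)
    # Performance: prefer low RTT, but avoid nearly saturated nodes.
    return min(pops, key=lambda p: p.rtt_ms * (1 + p.load))

pops = [
    PoP("fra-1", rtt_ms=18, load=0.85, cost_per_gb=0.08),
    PoP("ams-2", rtt_ms=25, load=0.30, cost_per_gb=0.07),
    PoP("lon-1", rtt_ms=31, load=0.10, cost_per_gb=0.05),
]
print(pick_pop(pops).name)              # ams-2: nearby and lightly loaded
print(pick_pop(pops, "cost").name)      # lon-1: cheapest delivery
```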

Most CDN providers offer their services over a varying, defined set of PoPs, depending on the coverage desired, such as United States, International or Global, or Asia-Pacific. These sets of PoPs are often called "edges", "edge nodes", "edge servers", or "edge networks", as they are the CDN assets closest to the end user.

3. Security and Privacy

CDN providers profit either from direct fees paid by the content providers using their network, or from the user analytics and tracking data collected as their scripts are loaded onto customers' websites inside the visitors' browser origin. As such, these services have been pointed out as potential privacy intrusions for the purpose of behavioral targeting, and solutions have been created to restore single-origin serving and caching of resources.

In particular, a website using a CDN may violate the EU's General Data Protection Regulation (GDPR). For example, in 2021 a German court forbade the use of a CDN on a university website, because embedding it caused the transmission of the user's IP address to the CDN, which violated the GDPR.

CDNs serving JavaScript have also been targeted as a way to inject malicious content into the pages using them. The subresource integrity (SRI) mechanism was created in response, to ensure that the page loads only a script whose content is known and constrained to a hash referenced by the website author.
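To make the mechanism concrete, the integrity value is a base64-encoded cryptographic digest of the script file. The short Python sketch below (the file path and CDN URL are hypothetical) shows how a site author might compute that attribute for a CDN-hosted script.

```python
# Compute a Subresource Integrity (SRI) value for a script served from a CDN.
# The resulting string goes into the integrity="" attribute of the <script> tag;
# the browser refuses to execute the file if its hash no longer matches.
import base64
import hashlib

def sri_sha384(path: str) -> str:
    with open(path, "rb") as f:
        digest = hashlib.sha384(f.read()).digest()
    return "sha384-" + base64.b64encode(digest).decode()

# Hypothetical local copy of the library the page references on the CDN:
print(sri_sha384("vendor/library.min.js"))
# e.g. <script src="https://cdn.example.com/library.min.js"
#              integrity="sha384-..." crossorigin="anonymous"></script>
```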

4. Content Networking Techniques

The internet was designed according to the end-to-end principle. This principle keeps the core network relatively simple and moves the intelligence as much as possible to the network end-points; the hosts and clients. As a result, the core network is specialized, simplified, and optimized to only forward data packets.

Content Delivery Networks augment the end-to-end transport network by distributing a variety of intelligent applications employing techniques designed to optimize content delivery. The resulting tightly integrated overlay uses web caching, server load balancing, request routing, and content services.

Web caches store popular content on servers that have the greatest demand for the content requested. These shared network appliances reduce bandwidth requirements, reduce server load, and improve the client response times for content stored in the cache. Web caches are populated based on requests from users (pull caching) or based on preloaded content disseminated from content servers (push caching).
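A minimal sketch of pull caching, assuming an in-memory store, a fixed TTL, and a placeholder origin fetch, might look like this:

```python
# Minimal sketch of a pull cache: content is fetched from the origin on the first
# request (a cache miss) and served from memory afterwards until its TTL expires.
# The origin fetch and URLs are hypothetical stand-ins.
import time

CACHE_TTL_SECONDS = 300
_cache: dict[str, tuple[float, bytes]] = {}   # url -> (expiry timestamp, body)

def fetch_from_origin(url: str) -> bytes:
    # Placeholder for an HTTP request to the origin server.
    return f"origin response for {url}".encode()

def get(url: str) -> bytes:
    now = time.time()
    entry = _cache.get(url)
    if entry and entry[0] > now:              # cache hit, still fresh
        return entry[1]
    body = fetch_from_origin(url)             # cache miss: pull from origin
    _cache[url] = (now + CACHE_TTL_SECONDS, body)
    return body
```

Push caching would instead preload the same store from the content servers before any client requests the content.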

Server-load balancing uses one or more techniques, including service-based (global load balancing) or hardware-based (i.e., layer 4-7 switches, also known as web switches, content switches, or multilayer switches), to share traffic among several servers or web caches. Here the switch is assigned a single virtual IP address. Traffic arriving at the switch is then directed to one of the real web servers attached to the switch. This has the advantages of balancing load, increasing total capacity, improving scalability, and providing increased reliability through server health checks and by redistributing the load of a failed web server.
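A simplified sketch of the dispatch logic behind such a virtual IP, assuming plain round-robin selection over hypothetical real-server addresses and externally driven health checks, could look like this:

```python
# Sketch of the dispatch logic behind a single virtual IP: round-robin across the
# real servers that currently pass their health check. Addresses are hypothetical.
import itertools

class VirtualService:
    def __init__(self, servers: list[str]):
        self.servers = servers
        self.healthy: set[str] = set(servers)
        self._rr = itertools.cycle(servers)

    def mark_down(self, server: str) -> None:
        self.healthy.discard(server)          # failed health check: stop sending traffic

    def mark_up(self, server: str) -> None:
        self.healthy.add(server)              # recovered: put it back in rotation

    def route(self) -> str:
        for _ in range(len(self.servers)):
            candidate = next(self._rr)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy servers behind the virtual IP")

vip = VirtualService(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
vip.mark_down("10.0.0.12")
print([vip.route() for _ in range(4)])        # traffic skips the failed server
```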

A content cluster or service node can be formed using a layer 4 to 7 switch to balance load across a number of servers or a number of web caches within the network.

CDNs use a variety of methods of content delivery including, but not limited to, manual asset copying, active web caches, and global hardware load balancers.

4.1 Content Service Protocols

Several protocol suites are designed to provide access to a wide variety of content services distributed throughout a content network. The Internet Content Adaptation Protocol (ICAP) was developed in the late 1990s to provide an open standard for connecting application servers. A more recently defined and robust solution is provided by the Open Pluggable Edge Services (OPES) protocol. This architecture defines OPES service applications that can reside on the OPES processor itself or be executed remotely on a callout server. Edge Side Includes (ESI) is a small markup language for edge-level dynamic web content assembly. It is fairly common for websites to have generated content, whether because of changing content such as catalogs or forums, or because of personalization. This creates a problem for caching systems; to overcome it, a group of companies created ESI.
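As a rough sketch of edge-level assembly, an edge server might replace ESI include tags in a cached template with fragments fetched separately, which lets the shared template and the personalized pieces be cached with different lifetimes. The fragment store, URLs, and tag handling below are illustrative assumptions rather than a complete ESI implementation.

```python
# Sketch of edge-side assembly: the cached page template contains ESI include tags,
# and the edge server splices in per-fragment content (here from a hypothetical
# in-memory fragment store) before sending the response to the client.
import re

FRAGMENTS = {
    "/fragments/header": "<header>Shared, cacheable header</header>",
    "/fragments/cart":   "<div>3 items in your cart</div>",   # personalized, short TTL
}

ESI_TAG = re.compile(r'<esi:include\s+src="([^"]+)"\s*/>')

def assemble(template: str) -> str:
    # Replace each include tag with its fragment; missing fragments become empty strings.
    return ESI_TAG.sub(lambda m: FRAGMENTS.get(m.group(1), ""), template)

page = ('<html><body><esi:include src="/fragments/header"/>'
        '<esi:include src="/fragments/cart"/></body></html>')
print(assemble(page))
```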

4.2 Peer-to-Peer CDNs

In peer-to-peer (P2P) CDNs, clients provide resources as well as use them. This means that, unlike client-server systems, P2P content networks can actually perform better as more users begin to access the content (especially with protocols such as BitTorrent that require users to share). This property is one of the major advantages of using P2P networks, because it keeps the setup and running costs very small for the original content distributor.

4.3 Private CDNs

If content owners are not satisfied with the options or costs of a commercial CDN service, they can create their own CDN. This is called a private CDN. A private CDN consists of PoPs (points of presence) that are only serving content for their owner. These PoPs can be caching servers, reverse proxies, or application delivery controllers. It can be as simple as two caching servers, or large enough to serve petabytes of content.

Large content distribution networks may even build and set up their private network to distribute copies of content across cache locations. Such private networks are usually used in conjunction with public networks as a backup option in case the capacity of the private network is not enough or there is a failure that leads to capacity reduction. Since the same content has to be distributed across many locations, a variety of multicasting techniques may be used to reduce bandwidth consumption. Over private networks, it has also been proposed to select multicast trees according to network load conditions to more efficiently utilize available network capacity.

Author: Mikhail
