Transcription

Airlift: Video Conferencing as a Cloud Service using Inter-Datacenter Networks

Yuan Feng, Baochun Li
Department of Electrical and Computer Engineering, University of Toronto

Bo Li
Department of Computer Science, Hong Kong University of Science and Technology

Abstract—It is typical for enterprises to rely on services from cloud providers in order to build a scalable platform with abundant available resources to satisfy user demand, and for cloud providers to deploy a number of datacenters inter-connected with high-capacity links, across different geographical regions. In this paper, we propose that video conferencing, even with its stringent delay constraints, should also be provided as a cloud service, taking full advantage of the inter-datacenter network in the cloud. We design Airlift, a new protocol designed for the inter-datacenter network, tailored to the needs of a cloud-based video conferencing service. Airlift delivers packets in live video conferences to their respective destination datacenters, with the objective of maximizing the total throughput across all conferences, yet without violating end-to-end delay constraints. In order to simplify our protocol design in Airlift, we use intra-session network coding and the concept of conceptual flows, such that the optimization problem can be conveniently formulated as a linear program. Our real-world implementation of Airlift has been deployed over the Amazon EC2 cloud. We show that Airlift delivers a substantial performance advantage over state-of-the-art peer-to-peer solutions.

This research was supported in part by the SAVI NSERC Strategic Networks Grant, by a grant from NSFC/RGC under the contract N_HKUST610/11, by grants from HKUST under the contracts RPC11EG29, SRFI11EG17-C and SBI09/10.EG01-C, and by a grant from ChinaCache Int. Corp. under the contract CCNT12EG01. 978-1-4673-2447-2/12/$31.00 © 2012 IEEE.

I. INTRODUCTION

The gist of cloud computing is to maximize the sharing of resources with statistical multiplexing, while keeping users of the cloud satisfied. To provide cloud services with a higher quality, it is customary for cloud providers to deploy a number of datacenters across different geographical regions, inter-connected with high-capacity links. Enterprises, such as Netflix, are moving their entire platform to the cloud [1] to take advantage of its abundant resources that are available on demand.

From the perspective of both bandwidth demand and end-to-end delay constraints, multi-party video conferencing may be one of the most demanding multimedia applications. Existing conferencing solutions in the literature have traditionally focused on the use of peer-to-peer (P2P) [2], [3] or client-server architectures (e.g., Microsoft Lync). With abundant bandwidth between datacenters, one would naturally wonder if it is feasible to take full advantage of inter-datacenter networks in the cloud to support higher bit rates in video conferences, while still maintaining acceptable delays.

Fig. 1. Packets from User 1 in a 5-party conference are being transmitted in an inter-datacenter network with 5 datacenters, via a combination of direct paths (e.g., D1 → D4) and relay paths (e.g., D1 → D3 → D5).

In this paper, we promote the use of inter-datacenter networks to support live multi-party video conferencing as a cloud service.
Our protocol and real-world implementation, collectively referred to as Airlift, is designed from the ground up to support multiple live conferences with an inter-datacenter network operated by a cloud provider. As its name suggests, the unique advantage of Airlift is to provide low-latency end-to-end paths among participants in multiple conferences, without the "hustle and bustle" of the public Internet. With Airlift, packets in conferences can be routed through a high-capacity inter-datacenter network, as if they are traveling around the world in chartered private flights with minimal congestion, rather than cruise ships with long lines waiting for embarkation.

A highlight of this paper is our design of a new application-layer protocol for inter-datacenter networks. Its original features are two-fold. First, to be more scalable, it aggregates user-initiated conferences into a smaller number of multicast sessions among datacenters. Second, it is designed to maximize the total throughput across all the sessions, while maintaining basic fairness across different conferences, and making sure that stringent delay constraints are not violated.

Due to the multicast nature of aggregated sessions, traditional wisdom resorts to Steiner tree packing [3] in order to maximize the video flow rate from a single source to the remaining participants in a video conference. Since the problem is NP-Complete, existing works [3], [4] pack only depth-1 and depth-2 trees. With a large number of conferences served concurrently in an inter-datacenter network, packing Steiner trees for each source and in each conference is computationally prohibitive, even with trees of limited depth.

To solve this problem, we use intra-session network coding as an integral part of both our protocol design and our real-world implementation. Thinking from the perspective of conceptual flows [5], the upshot of network coding is its power to resolve conflicts among flows competing for bandwidth resources on bottleneck links.

With the help of network coding, we are now able to formulate the problem of maximizing the total throughput across all aggregated sessions as a linear program, easily solvable using a standard LP solver. Its optimal solution serves as the foundation of the Airlift protocol.

Finally, using Airlift as a video conferencing cloud service is simple. In our design, the cloud service can be treated as a full-service broker: a participating user with a video source in a conference only needs to transmit its packets to one of the datacenters in the cloud, and to process acknowledgments from the cloud service. Fig. 1 shows an illustrative example of a 5-party video conference, supported by Airlift.

We evaluate the validity and performance of Airlift as a cloud service with our real-world implementation, with 17,000 lines of code in C++. Our implementation has been developed with performance in mind: to maximize packet processing rates, asynchronous networking has been used; to minimize the computational overhead of network coding, our network coding implementation is accelerated with the Intel/AMD SSE2 instruction set.

Our real-world experimental results over PlanetLab and the Amazon EC2 cloud have shown substantially (3 to 24 times) higher throughput as compared to Celerity [3], a state-of-the-art P2P solution, yet without any disadvantage in end-to-end delays that can be perceptible to end users. To the best of our knowledge, this paper presents the first design and implementation of a cloud-based solution for video conferencing.

The remainder of this paper is organized as follows. In Sec. II, we motivate the Airlift cloud service and discuss our design objectives and choices. In Sec. III, we present our analytical study on maximizing the total throughput in an inter-datacenter network, which serves as the foundation of our protocol design. In Sec. IV, we present details of our protocol, designed based on results from our theoretical analysis. In Sec. V, we present our real-world implementation of Airlift, and evaluate its validity and performance in the Amazon EC2 cloud. We discuss related work and conclude the paper in Sec. VI and Sec. VII, respectively.

II. AIRLIFT: MOTIVATION AND DESIGN OBJECTIVES

A. Conferencing via the Cloud: Motivation

The Achilles' heel of peer-to-peer video conferencing solutions, such as Celerity [3], is the challenge of computing the flow rate on each overlay link between users who participate in the same conference. Such a challenge comes from the fact that overlay links may compete for the same physical link in the layer-3 Internet topology; yet due to the lack of complete knowledge about the underlying layer-3 topology, it would be infeasible to determine how overlay links share common physical links.

Such a challenge is present even in a simple "dumbbell" topology, illustrated in Fig. 2. Without the knowledge that a physical bottleneck exists between the two user pairs, the number of overlay links competing for the bottleneck may not be optimal, if incorrect trees are formed to route packets. To address such a challenge, Celerity resorts to a complex mix of algorithms, including decentralized optimization to converge to optimal overlay link rates based on loss rates and queueing delays, as well as spanning tree packing at each source to compute its overlay trees.

In contrast, if we take advantage of the high-capacity inter-datacenter network in the cloud, the pairs of users on both sides of the dumbbell topology can each transmit to their respective datacenters, as Fig. 3 shows, with user 1 as an example.
Each datacenter is responsible for aggregating incoming video flows from both users, and forwarding them to the other datacenter. With such aggregation, the number of video flows sharing the bottleneck, which resides between the two datacenters in the cloud, is naturally minimized, without the complexity of Celerity. If the inter-datacenter link has a higher capacity (e.g., due to private peering relationships), the gain in the total throughput of the conference is even more substantial.

Fig. 2. With P2P conferencing, more than the minimum number of overlay links may compete for the same physical bottleneck.

Fig. 3. Conferencing via the cloud: flow rates over the physical bottleneck are aggregated and minimized.

But are end-to-end delays sacrificed by routing packets through the cloud? We answer this question with results from real-world experiments, using PlanetLab nodes as video sources. Table I shows the measured throughput values and end-to-end delays between three pairs of conference participants with diverse geographic locations, comparing the performance of a P2P overlay link with that of routing through the Amazon EC2 cloud. In the latter case, each participating user connects to its closest datacenter in the EC2 cloud, forming a three-hop path. For example, users in Beijing and Seoul will connect to the datacenter located in Japan, and users in Cambridge, UK and Moscow will connect to the datacenter in Ireland. As is self-explanatory in the table, when routed through the cloud, all three pairs have enjoyed higher throughput values (substantially higher in two of the three pairs), yet this is achieved with similar or even shorter end-to-end delays, as compared to the overlay link in a P2P solution.

TABLE I
CONFERENCING WITH P2P OVERLAYS OR VIA THE CLOUD? A COMPARISON OF THROUGHPUT AND END-TO-END DELAY.

Cloud/P2P              Throughput (Mbps)   Delay (msec)
Toronto-Beijing        2.202/0.179         171.6/148.8
Cambridge-Sao Paulo    1.687/1.432         103.4/204.6
Seoul-Moscow           7.189/1.103         201.7/436.9

We will revisit such a comparison between Airlift and Celerity with a more elaborate set of experiments in Sec. V. Suffice it to say, there exists a clear performance advantage to providing video conferencing as a cloud service.

B. Airlift: Design Objectives and Choices

Towards the design of a new application-layer protocol in the inter-datacenter network, we target a number of important objectives.

Performance. The best possible incentive that can be touted to attract users to establish video conferences using Airlift is its superior performance, with respect to higher video flow rates from each of the participants in a live conference, while still maintaining an acceptable end-to-end delay. Airlift should, first and foremost, be designed with performance in mind.

Simplicity. As a cloud service, Airlift should be conceptually simple to use, and work as a full-service broker. A participating user in a conference should only need to connect to the "cloud," and to start transmitting packets from its video source after a connection is established. The "cloud" should provide informative feedback to the user as packets arrive, so that the user can adequately increase or throttle its video flow rate by varying parameters of its video codec. In this sense, as long as a packet is acknowledged by the "cloud," the user will have complete "peace of mind" that the packet will be delivered intact and on time to other participants in the conference, subject to a typical end-to-end delay constraint.

But what is the "cloud" that a participating user should connect to? Our design in Airlift has intentionally left open the decision of which datacenter a user should connect to, as existing work has already covered this complementary problem quite well. It is typical to select an appropriate datacenter by taking advantage of the customized IP address returned by DNS servers. Alternatively, users can outsource datacenter selection to third parties [6], with customizable mapping policies. Since video conferencing is sensitive to end-to-end delays, the recommended mapping policy is to choose the "closest" datacenter with respect to delay, using any of the existing selection protocols that can be tailored to consider client proximity (e.g., [6]).

Scalability. Datacenters operated by a cloud provider are often inter-connected with high-capacity links. As such, each inter-datacenter link may be able to carry thousands of video flows from different sources and in different conferences simultaneously. This brings the challenge of scalability to the spotlight, in that any online algorithm in the Airlift protocol needs to complete its computation in real time, so that a large number of conferences can be routed through the inter-datacenter network efficiently and without much fanfare.

To be more scalable, we believe that all the video flows from different participants — in their respective conferences — need to be aggregated, provided that these participants connect to the same source datacenter, and are destined to the same subset of destination datacenters, which, in turn, are responsible for delivering them to all other participants in each of the conferences. To put it simply, we wish to aggregate all the video flows routed through the same source datacenter and transmitted to the same subset of destination datacenters, regardless of which conference they belong to. Each of these aggregated sessions is inherently a multicast session in the inter-datacenter network.

Considering only aggregated sessions, rather than individual conferences that use the cloud as a service, makes Airlift much more scalable.
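As an illustration, such aggregation can be pictured as grouping flows by the key (source datacenter, set of destination datacenters). The following is a minimal sketch in C++; the identifiers and types are hypothetical and are not taken from the Airlift implementation.

```cpp
// Sketch: grouping per-conference video flows into aggregated sessions,
// keyed by (source datacenter, set of destination datacenters). All names
// here are illustrative assumptions, not Airlift's actual API.
#include <map>
#include <set>
#include <tuple>
#include <vector>

using DatacenterId = int;
using FlowId = int;

struct SessionKey {
    DatacenterId source;                  // datacenter the sender connects to
    std::set<DatacenterId> destinations;  // datacenters of the other participants
    bool operator<(const SessionKey& o) const {
        return std::tie(source, destinations) <
               std::tie(o.source, o.destinations);
    }
};

// Flows from different conferences that share the same key are carried as
// one aggregated multicast session inside the inter-datacenter network.
std::map<SessionKey, std::vector<FlowId>> sessions;

void add_flow(FlowId flow, DatacenterId source,
              const std::set<DatacenterId>& destinations) {
    sessions[SessionKey{source, destinations}].push_back(flow);
}
```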
For example, in order to maximize the total throughput of all conferences routed through the inter-datacenter network, we only need to maximize the total throughput across all aggregated sessions in our problem formulation, with a significantly reduced number of variables that need to be determined. To be more precise, in an inter-datacenter network with $N$ datacenters, the maximum number of aggregated sessions is $\sum_{i=2}^{N} \left[ i \cdot \binom{N}{i} \right]$ (each session is determined by choosing the $i \ge 2$ datacenters it involves, and then the source among them). With 7 datacenters in the Amazon EC2 cloud, the maximum number of simultaneous aggregated sessions is only 441, which may be an order of magnitude smaller than the total number of participants in all the concurrent conferences routed through the Airlift cloud service.

III. MAXIMIZING TOTAL THROUGHPUT IN THE CLOUD

In a nutshell, a key idea in the design of Airlift is to take full advantage of the available inter-datacenter capacity in the cloud, so that the total throughput across all conferences is maximized, subject to delay and fairness constraints. We precede our protocol design with a theoretical formulation of this problem.

A. Feasible Paths Satisfying a Delay Bound

Let us consider an inter-datacenter network with multiple datacenters that are geographically distributed around the world, operated by the same cloud provider. These datacenters form a complete directed graph $G = (V, E)$, where $V$ indicates the set of datacenters, and $E$ indicates the set of directed edges inter-connecting them. For each directed edge $e \in E$, we use a positive real-valued function $C(e)$ to denote its available capacity, which is the maximum available rate of packet transmission on $e$.

We use $S^i$ to denote the source datacenter in an aggregated session $i$, and $R_j^i$, $j = 1, 2, \ldots, k_i$ to denote the set of $k_i$ destination datacenters in session $i$. If we overlook fairness concerns for a moment, our objective is to maximize the total throughput of all the aggregated sessions in $G$, as long as the end-to-end delays from $S^i$ to each of $R_j^i$, $j = 1, \ldots, k_i$ are acceptable, i.e., they do not violate a certain delay bound, $D_{\max}$.

Let us now examine such a delay constraint with a microscope. Each directed edge $e$ in $E$ has a corresponding propagation delay, $d(e)$, which is readily measurable in practice. Assuming that queueing delays on a relaying datacenter are minimal with the use of small buffers, the end-to-end delay from $S^i$ to each $R_j^i$ can be estimated as the sum of all propagation delays on the edges along the acyclic path that packets follow. Considering our objective of maximizing the total throughput of all aggregated sessions as a variant of the maximum flow problem, it is conceivable that packets from $S^i$ to each $R_j^i$ may need to follow multiple acyclic paths, rather than a single one. We need to make sure that the end-to-end delay on any of these acyclic paths does not violate the delay bound that we impose.

In other words, we need to exclude paths that violate such a bound, and only consider the set of feasible paths — denoted by $\mathcal{P}_j^i$ — that do not. More formally:

$$\mathcal{P}_j^i = \left\{ p \;\middle|\; p \text{ is an acyclic path from } S^i \text{ to } R_j^i \text{ such that } \sum_{e \in p} d(e) \le D_{\max} \right\}.$$

Given the inter-datacenter graph $G$ and the delay bound $D_{\max}$, one can easily find the set of all feasible paths $\mathcal{P}_j^i$ from $S^i$ to $R_j^i$ using a simple variant of the depth-first search algorithm, where the search only continues if, with the path obtained so far, there are no cycles and the delay bound $D_{\max}$ has not yet been violated. In our subsequent formulation of the problem, we have the convenience of only considering the set of feasible paths $\mathcal{P}_j^i$.

B. The Problem of Maximizing Total Throughput

On the surface, it appears that the problem of maximizing the total throughput of all aggregated sessions in $G$ corresponds to the traditional multi-commodity maximum flow problem. Unfortunately, this is not the case, simply because aggregated sessions¹ are aggregated multicast sessions by nature, rather than unicast sessions between source-destination pairs as in the multi-commodity maximum flow problem. In essence, packet transmission in a multicast session is more efficient than in multiple unicast sessions, due to the ability of a datacenter to replicate and forward packets to its downstream datacenters in a multicast tree.

To maximize the throughput of a multicast session in $G$, traditional wisdom resorts to Steiner tree packing [3]. As an NP-Complete problem, Steiner tree packing seeks to find the maximum number of pairwise edge-disjoint Steiner trees, in each of which the datacenters involved in the session remain connected. To reduce its complexity, existing work on P2P video conferencing [3] packs only depth-1 and depth-2 trees. However, packing Steiner trees within each session is still computationally prohibitive, due to the large number of concurrent sessions.

Fortunately, the concept of network coding provides us with a way out of the woods. Having been studied extensively in the past decade, network coding [7] extends the capabilities of nodes in a network session: from basic forwarding (as in the maximum flow problem) and replication (as in multicast), to coding in Galois fields. For a multicast session in any directed acyclic graph, if a rate $x$ can be achieved from the source to each of the destinations independently, it can also be achieved for the entire multicast session [7]. In other words, network coding has the power of resolving the competition among different source-destination pairs for edge capacities. To take advantage of such power, Li et al. [5] introduced the concept of conceptual flows, defined as network flows that co-exist in the network without contending for edge capacities if they are destined to different destinations; each conceptual flow is from a source to a destination, transmitted in a coded form.

¹ When it is clear from the context of our discussions, aggregated sessions in the inter-datacenter network are simply referred to as sessions from this point onwards.

To our surprise, inspired by [5], the idea of conceptual flows allows us to formulate the problem of maximizing total throughput as the following linear program, which can be solved by a standard LP solver:
$$
\begin{aligned}
\max \quad & X \\
\text{s.t.} \quad & X \le \sum_{p \in \mathcal{P}_j^i} x_j^i(p) / w_i, && \forall i,\; j = 1, \ldots, k_i, && (1)\\
& \sum_{p \in \mathcal{P}_j^i(e)} x_j^i(p) \le x^i(e), && \forall i,\; j = 1, \ldots, k_i,\; \forall e \in E, && (2)\\
& \sum_{i} x^i(e) \le C(e), && \forall e \in E, && (3)\\
& x_j^i(p) \ge 0,\; x^i(e) \ge 0,\; X \ge 0, && \forall p \in \mathcal{P}_j^i,\; \forall i,\; j = 1, \ldots, k_i,\; \forall e \in E. && (4)
\end{aligned}
$$

The objective of this linear program is to maximize the total throughput, which is the sum of flow rates in all the multicast sessions, $X$. In each session, its flow rate is the minimum of the flow rates that can be independently achieved from the source to each of the destinations in the session. In constraint (1), $w_i$ is used to provide weighted proportional fairness across different sessions, and $x_j^i(p)$ represents the conceptual flow rate from $S^i$ to $R_j^i$ along an acyclic path $p$ in the set of feasible paths $\mathcal{P}_j^i$. Since the flow rate is specified along a particular path $p$, the flow conservation constraint for a conceptual flow is implicitly satisfied.

Since conceptual flows destined to different destinations within the same session do not compete with one another for edge capacity, the effective flow rate of a session $i$ on edge $e$ is $x^i(e) = \max_j \sum_{p \in \mathcal{P}_j^i(e)} x_j^i(p)$, where $\mathcal{P}_j^i(e)$ represents the set of paths in $\mathcal{P}_j^i$ that use edge $e$. Since the max function is not linear, this constraint is relaxed to constraint (2). Finally, constraint (3) reflects the fact that the summation of the effective flow rates of different sessions should not exceed the capacity of an edge, as they contend with one another for edge capacities.

A feasible solution to our linear program provides the conceptual flow rates $x_j^i(p)$ along all feasible paths for each destination, within every session. The effective flow routing scheme $x^i(e)$ for each session, as well as the feasible total throughput $X$, are all guaranteed to be non-negative, with constraint (4). Since only feasible paths are considered in our linear program, the delay constraint is naturally satisfied.
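To make the feasible-path enumeration of Sec. III-A concrete, the following is a minimal sketch of the bounded depth-first search that collects acyclic paths within the delay bound. The adjacency representation and identifiers are assumptions made for illustration, not code from the Airlift implementation.

```cpp
// Sketch: enumerate all acyclic paths from src to dst whose total
// propagation delay stays within the bound Dmax (Sec. III-A).
// The adjacency representation and identifiers are illustrative assumptions.
#include <vector>

using Node = int;
using Path = std::vector<Node>;

// delay[u][v] > 0 if a directed edge u -> v exists; 0 otherwise.
void search(const std::vector<std::vector<double>>& delay,
            Node current, Node dst, double budget,
            std::vector<bool>& visited, Path& partial,
            std::vector<Path>& feasible) {
    if (current == dst) {                 // a complete feasible path
        feasible.push_back(partial);
        return;
    }
    for (Node next = 0; next < static_cast<Node>(delay.size()); ++next) {
        double d = delay[current][next];
        // Prune if there is no such edge, a cycle would form, or the
        // remaining delay budget would be violated by this edge.
        if (d <= 0.0 || visited[next] || d > budget) continue;
        visited[next] = true;
        partial.push_back(next);
        search(delay, next, dst, budget - d, visited, partial, feasible);
        partial.pop_back();
        visited[next] = false;
    }
}

std::vector<Path> feasible_paths(const std::vector<std::vector<double>>& delay,
                                 Node src, Node dst, double Dmax) {
    std::vector<bool> visited(delay.size(), false);
    visited[src] = true;
    Path partial{src};
    std::vector<Path> feasible;
    search(delay, src, dst, Dmax, visited, partial, feasible);
    return feasible;
}
```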

As an example using the inter-datacenter network that we have shown previously in Fig. 1, Fig. 4 shows the optimal solution obtained by solving our linear program using a standard LP solver. To keep such a conceptual example simple, we assume that all the edge capacities are 10 Mbps. The value labeled on each edge $e$ in the figure indicates its propagation delay $d(e)$, which is the same in both directions. Let us now consider two sessions, S1 and S2. In S1, video flows are transmitted from D1 to D4 and D5; and in S2, they are transmitted from D2 to D3 and D5.

Fig. 4. Maximizing the total throughput in an inter-datacenter network: an example of the optimal solution obtained by solving the linear program. (The figure distinguishes conceptual and effective flows in sessions S1 and S2.)

If the delay constraint $D_{\max}$ is set to be 100 milliseconds, some of the paths need to be excluded from the set of feasible paths. The conceptual flows along all the feasible paths in each session are shown in the figure, where their widths indicate the corresponding conceptual flow rates.

As we can see, the feasible path in S1 to D4 is D1 → D2 → D4, with a flow rate of 10 Mbps for the corresponding conceptual flow; the feasible paths in S1 to D5 are D1 → D2 → D3 → D5 and D1 → D2 → D5, with 3.9 Mbps and 6.1 Mbps, respectively. Similarly, the feasible paths in S2 to D3 are D2 → D3 and D2 → D5 → D3, with 6.1 Mbps and 3.9 Mbps, respectively; and the feasible paths in S2 to D5 are D2 → D5 and D2 → D3 → D5, with 3.9 Mbps and 6.1 Mbps, respectively. In the optimal solution, the total throughput in both sessions is 20 Mbps in this example, and edge capacities along the feasible paths have been saturated.

To better illustrate the concept of conceptual flows, we examine the edge from D2 to D5. Though two conceptual flows in S2 — each with a rate of 3.9 Mbps — pass through this edge, the effective flow rate of S2 on this edge remains 3.9 Mbps, since the power of network coding guarantees that conceptual flows destined to different destinations in the same session do not compete for edge capacities. On the other hand, the effective flow rate of S1 on this edge is 6.1 Mbps, competing with the effective flow of S2 for the edge capacity.

With the use of conceptual flows, the optimal solution of our linear program is quite expressive. It may impose that an incoming flow be replicated, be split, or that multiple incoming flows be merged using network coding, and then forwarded along outgoing edges. In our example, consider datacenter D2. Its incoming flow of 10 Mbps in session S1 is not only forwarded to D4 directly, but also split and forwarded at the same time to D3 and D5, with outgoing flow rates of 3.9 Mbps and 6.1 Mbps, respectively. The flexibility and power of the optimal solution in expressing a wide variety of forwarding strategies at each datacenter have provided a solid foundation for Airlift, yet they also pose a challenge to our protocol design, to make sure that the optimal solution can be realized faithfully in practice.

IV. AIRLIFT: PROTOCOL DESIGN

With the available capacity and propagation delay on each inter-datacenter edge as input, the optimal solution along the set of feasible paths provides the complete plan to start actual packet transmission: for each conceptual flow, the optimal solution computes its flow rate $x_j^i(p)$ along the path $p$ in a session $i$ from the source $S^i$ to the destination $R_j^i$ that packets will follow.

The design objective of the Airlift protocol is to faithfully realize the complete plan that the optimal solution provides in a real-world implementation, with as little gap between theory and practice as possible. As we shall soon observe, such a goal is challenging to achieve; and subsequent experimental evaluations of our Airlift implementation will focus on how tradeoffs in our design contribute to the gap between theory and reality.
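As a rough illustration of what such a "complete plan" might look like as a data structure, here is a minimal sketch; the identifiers are hypothetical assumptions and do not come from the Airlift implementation. It also restates, in code, the relation $x^i(e) = \max_j \sum_{p \in \mathcal{P}_j^i(e)} x_j^i(p)$ used in Sec. III-B.

```cpp
// Sketch: a per-session transmission plan derived from the optimal LP
// solution of Sec. III-B: for every destination R_j^i of a session, the
// feasible paths to use and the conceptual flow rate x_j^i(p) on each.
// All identifiers are illustrative assumptions, not Airlift's actual code.
#include <algorithm>
#include <cstddef>
#include <map>
#include <utility>
#include <vector>

using DatacenterId = int;
using Path = std::vector<DatacenterId>;     // e.g., {D1, D2, D5}
using RatedPath = std::pair<Path, double>;  // (path, conceptual rate in Mbps)

struct SessionPlan {
    DatacenterId source;                                   // S^i
    std::map<DatacenterId, std::vector<RatedPath>> flows;  // keyed by R_j^i
};

// The effective rate a session injects on edge (from -> to) is the maximum,
// over destinations, of the conceptual flow rates crossing that edge.
double effective_rate(const SessionPlan& plan,
                      DatacenterId from, DatacenterId to) {
    double rate = 0.0;
    for (const auto& [dst, paths] : plan.flows) {
        double per_dst = 0.0;
        for (const auto& [path, r] : paths)
            for (std::size_t h = 0; h + 1 < path.size(); ++h)
                if (path[h] == from && path[h + 1] == to) per_dst += r;
        rate = std::max(rate, per_dst);
    }
    return rate;
}
```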
A. Transport with Network Coding

Before we discuss challenges in our design, we first present a primer on how network coding can be implemented in practice. With random linear network coding [8], [9], a generation of live video is divided into $n$ packets (called the generation size) $\mathbf{b} = [b_1, b_2, \ldots, b_n]^T$, where each packet has a fixed number of bytes, $k$. To produce a new coded packet $x_j$, the source first independently and randomly chooses a set of coding coefficients $[c_{j1}, c_{j2}, \ldots, c_{jn}]$ in $GF(2^8)$, one for each original or coded packet it has buffered. It then produces one coded packet $x_j = \sum_{i=1}^{n} c_{ji} \cdot b_i$. The destination decodes as soon as it has received $n$ linearly independent coded packets $\mathbf{x} = [x_1, x_2, \ldots, x_n]^T$. It first forms an $n \times n$ coefficient matrix $C$, using the coding coefficients embedded in each received packet; each row in $C$ corresponds to the coefficients of one coded packet. It then recovers the original packets $\mathbf{b} = [b_1, b_2, \ldots, b_n]^T$ as $\mathbf{b} = C^{-1} \mathbf{x}$. Gauss-Jordan elimination is used in such a decoding process, performed progressively as coded packets are being received. The inversion of $C$ is only possible when its rows are linearly independent, i.e., $C$ is full rank.

Airlift uses UDP as its transport protocol on each inter-datacenter edge, the rate of which is controlled by an implementation of TCP-Friendly Rate Control (TFRC) [10] at the application layer. In such a context, network coding has been applied extensively in the Airlift protocol design. This is not only because our problem formulation hinges upon the concept of conceptual flows made possible by network coding, but also because random linear codes are rateless erasure codes, and coded packets — each with its own vector of randomly chosen coefficients — can be generated ad infinitum. As long as $n$ linearly independent packets are received, they are sufficient to recover the original generation. This is a perfect match to UDP as a transport protocol: losing some coded packets is no longer a concern, as more from the source will be arriving soon, provided that the source receives some form of feedback.
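As a concrete illustration of the encoding step in the primer above, the following is a minimal sketch over $GF(2^8)$ using log/exp tables; the actual Airlift implementation accelerates coding with the SSE2 instruction set, which is not shown here, and all identifiers below are illustrative assumptions. Decoding, as described above, collects $n$ linearly independent coded packets and recovers the generation by Gauss-Jordan elimination.

```cpp
// Sketch: random linear coding of one generation over GF(2^8), as in
// Sec. IV-A. Log/exp tables are used here for clarity; Airlift's actual
// implementation is SSE2-accelerated. Identifiers are illustrative.
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <vector>

static uint8_t gf_exp[512], gf_log[256];

// Build log/exp tables for GF(2^8) with the primitive polynomial 0x11d.
// gf_init() must be called once before encode() is used.
static void gf_init() {
    int x = 1;
    for (int i = 0; i < 255; ++i) {
        gf_exp[i] = static_cast<uint8_t>(x);
        gf_log[x] = static_cast<uint8_t>(i);
        x <<= 1;
        if (x & 0x100) x ^= 0x11d;
    }
    for (int i = 255; i < 512; ++i) gf_exp[i] = gf_exp[i - 255];
}

static uint8_t gf_mul(uint8_t a, uint8_t b) {
    if (a == 0 || b == 0) return 0;
    return gf_exp[gf_log[a] + gf_log[b]];
}

// Produce one coded packet x_j = sum_i c_ji * b_i from a generation of n
// original packets of k bytes each. The random coefficients are returned so
// that they can be embedded in the packet header; the destination uses them
// to build the coefficient matrix C and decodes once C has full rank.
std::vector<uint8_t> encode(const std::vector<std::vector<uint8_t>>& generation,
                            std::vector<uint8_t>& coefficients) {
    const std::size_t n = generation.size();
    const std::size_t k = generation.front().size();
    coefficients.resize(n);
    std::vector<uint8_t> coded(k, 0);
    for (std::size_t i = 0; i < n; ++i) {
        coefficients[i] = static_cast<uint8_t>(std::rand() % 256);
        for (std::size_t b = 0; b < k; ++b)
            coded[b] ^= gf_mul(coefficients[i], generation[i][b]);
    }
    return coded;
}
```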

Unfortunately, one seemingly trivial question — on an implementation detail when network coding is used at the source — puts the very idea of using network coding at risk.

B. Bandwidth Overhead vs. Delay: A Dilemma

It is the question of what an appropriate generation size, $n$, is, which the source datacenter should use when it applies network coding. In other words, shall we use a smaller number of packets in each generation, or a larger number of them?

Let us consider the outcome of using a smaller number of (say, 5) packets. In the example illustrated in Fig. 5(a), we can observe that a small generation size will lead to significant bandwidth overhead. At the time when the source finishes sending all 5 packets, the acknowledgement, to be sent when the entire generation is completely received and decoded at the destination, has not yet arrived at the source. Such an acknowledgement may only be received by the source after a round-trip time since the last coded packet in the generation has been sent. During such a period of time, the source will have no choice but to either stop sending, in which case its instantaneous flow rate is throttled to zero and outgoing bandwidth is idled; or to keep sending more coded packets, in which case these packets are redundant and useless when received by the destination, leading to significant bandwidth overhead. As the number of packets in a generation becomes smaller, the overhead of such redundant packets, as a percentage, will be even more significant.

[Fig. 5(a): a source-destination timeline showing the original packets of a generation, the round-trip time (RTT) before the acknowledgement (ACK) returns, and which of the transmitted packets are useful.]
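As a rough back-of-the-envelope illustration of this effect (our own example, not a calculation from the paper), suppose the source keeps sending at rate $r$ bits per second while waiting for the acknowledgement, with packets of $k$ bytes and a round-trip time of $\mathrm{RTT}$ seconds. The number of redundant packets sent per generation, and their share of a generation of $n$ packets, are then approximately

$$\underbrace{\frac{r \cdot \mathrm{RTT}}{8k}}_{\text{redundant packets}} \qquad\Longrightarrow\qquad \text{relative overhead} \;\approx\; \frac{r \cdot \mathrm{RTT}}{8 k n}.$$

For instance, with $r = 2$ Mbps, $\mathrm{RTT} = 100$ ms and $k = 1000$ bytes, about 25 redundant packets are sent per generation: 500% of a generation of $n = 5$ packets, but only 25% of one with $n = 100$ packets. The flip side, as the title of this subsection suggests, is that a larger generation takes longer to receive and decode, which increases delay.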
