Multimedia Networking
#8 P2P Streaming
Semester Ganjil (Odd Semester) 2012, PTIIK Universitas Brawijaya
Schedule of Class Meeting
1. Introduction
2. Applications of MN
3. Requirements of MN
4. Coding and Compression
5. RTP
6. IP Multicast
7. IP Multicast (cont'd)
8. Overlay Multicast
9. CDN: Solutions
10. CDN: Case Studies
11. QoS on the Internet: Constraints
12. QoS on the Internet: Solutions
13. Discussion
14. Summary
Today's Outline
- Ways to distribute video online
  - Client-server
  - IP Multicast
  - P2P Media Streaming
  - CDN (Content Delivery Networks)
Application Category
- Real-time, many-to-many: online meeting, online game, conference
- Real-time, one-to-many: TV broadcast, live event
- Non-real-time, one-to-many: e-learning, CDN, stored contents / VoD (e.g., YouTube)
- Non-real-time, virtually many-to-many: writable contents?
Client-Server
- Application-layer solution
  - Single media server unicasts to all clients
- Needs very high capacity to serve a large number of clients
  - CPU
  - Memory
  - Bandwidth
- Expensive for millions of simultaneous viewers

[Figure: unicast — the server sends 10 flows of the same packet, one per client]
Multicast
- Basic idea: when the same data needs to reach multiple receivers, avoid transmitting it once for each receiver
  - Particularly useful if the access link has bandwidth limitations
  - Can be implemented at the link, network, and application layers
  - e.g., a mailing list
- IP Multicast: network-layer solution
  - Routers responsible for multicasting
IP Multicast
- Network-layer solution
  - Routers responsible for multicasting
- Efficient bandwidth usage
- Requires per-group state in routers
  - Scalability concern
  - Violates the end-to-end design principle
- Slow deployment
  - IP multicast is often disabled in routers
- Difficult to support higher-layer functionality
Peer-to-Peer Networks (P2P)

P2P Technology
- The servers serve only a handful of clients
- Each of those clients in turn propagates the stream to more downstream clients, and so on
- This moves the distribution cost from the channel owner to the users
- Many P2P applications since the 1990s
  - File sharing: Napster, Gnutella, KaZaA, BitTorrent
  - Internet telephony: Skype
  - Internet television: PPLive, CoolStreaming, Joost
Why P2P?
- Every node is both a server and a client
  - Easier to deploy applications at endpoints
  - No need to build and maintain expensive infrastructure
  - Potential for both performance improvement and additional robustness
  - Additional clients create additional servers, aiding scalability
P2P Overview
- Application-layer approach
- Clients send contents to each other
- Uses an overlay network!

Overlay Network
- Consists of application-layer links
- An application-layer link is a logical link consisting of one or more links in the underlying network
- Used by both CDNs and pure P2P systems
Overlay Networks
[Figure: overlay networks focus at the application level, above the physical network]
Overlay Networks
- A logical network built on top of a physical network
  - Overlay links are tunnels through the underlying network
- Many logical networks may coexist at once
  - Over the same underlying network
  - Each providing its own particular service
- Nodes are often end hosts
  - Acting as intermediate nodes that forward traffic
  - Providing a service, such as access to files
- Who controls the nodes providing service?
  - The party providing the service (e.g., Akamai)
  - A distributed collection of end users (e.g., peer-to-peer)
IP Tunneling
- An IP tunnel is a virtual point-to-point link
  - Illusion of a direct link between two separated nodes

[Figure: logical view — a direct tunnel link between two nodes; physical view — the tunnel traverses intermediate nodes (A, B, E, F)]

- Encapsulation of the packet inside an IP datagram
  - Node B sends a packet to node E
  - ... containing another packet as the payload
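The encapsulation step can be sketched in a few lines: a tunnel endpoint prepends a fresh outer IPv4 header, with protocol number 4 (IP-in-IP, per RFC 2003), around the original packet. This is a minimal sketch (checksum left as zero, no IP options), not a complete implementation:

```python
import socket
import struct

def encapsulate_ipip(inner_packet: bytes, src: str, dst: str) -> bytes:
    """Wrap an existing IP packet in a new outer IPv4 header, as a
    tunnel endpoint would (IP-in-IP, protocol number 4)."""
    version_ihl = (4 << 4) | 5          # IPv4, 20-byte header, no options
    total_length = 20 + len(inner_packet)
    header = struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl,
        0,                              # DSCP/ECN
        total_length,
        0, 0,                           # identification, flags/fragment
        64,                             # TTL
        4,                              # protocol 4 = IP-in-IP (RFC 2003)
        0,                              # checksum left 0 in this sketch
        socket.inet_aton(src),
        socket.inet_aton(dst),
    )
    return header + inner_packet

# Illustrative tunnel endpoints; the inner bytes stand in for an
# already-built IP packet.
inner = bytes(28)
outer = encapsulate_ipip(inner, "10.0.0.1", "10.0.0.2")
assert len(outer) == 20 + len(inner)
```

The receiving tunnel endpoint simply strips the outer 20-byte header and forwards the inner packet as-is.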
Server Distributing a Large File
[Figure: a server with upload rate u_s sends over the Internet to receivers with download rates d_1, d_2, d_3, d_4]
Server Distributing a Large File
- Sending an F-bit file to N receivers
  - Transmitting N·F bits at rate u_s
  - ... takes at least N·F / u_s
- Receiving the data at the slowest receiver
  - Slowest receiver has download rate d_min = min_i {d_i}
  - ... takes at least F / d_min
- Download time: max{ N·F / u_s , F / d_min }
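The bound above can be checked numerically; the file size and rates below are made-up values for illustration:

```python
def client_server_time(F, N, u_s, d):
    """Lower bound on client-server distribution time for an F-bit file,
    N receivers, server upload rate u_s, receiver download rates d:
    max(N*F/u_s, F/min(d))."""
    return max(N * F / u_s, F / min(d))

# Illustrative numbers: 1 Gbit file, 100 clients, 100 Mbit/s server
# uplink, every client downloading at 10 Mbit/s.
t = client_server_time(F=1e9, N=100, u_s=100e6, d=[10e6] * 100)
# The server-side term N*F/u_s = 1000 s dominates the receiver-side
# term F/d_min = 100 s, so the bound is 1000 s.
```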
Speeding Up the File Distribution
- Increase the server upload rate
  - Higher link bandwidth at the server
  - Multiple servers, each with their own link
- Alternative: have the receivers help
  - Receivers get a copy of the data
  - ... and redistribute it to other receivers
  - To reduce the burden on the server
[Figure: the server uploads at rate u_s; receivers now also upload, with rates u_1 ... u_4, in addition to downloading at rates d_i]
Peers Help Distributing a Large File
- Components of distribution latency
  - Server must send each bit: minimum time F / u_s
  - Slowest peer must receive each bit: minimum time F / d_min
- Upload time using all upload resources
  - Total number of bits: N·F
  - Total upload bandwidth: u_s + Σ_i u_i
- Total: max{ F / u_s , F / d_min , N·F / (u_s + Σ_i u_i) }
Peer-to-Peer is Self-Scaling
- Download time grows slowly with N
  - Client-server: max{ N·F / u_s , F / d_min }
  - Peer-to-peer: max{ F / u_s , F / d_min , N·F / (u_s + Σ_i u_i) }
- But...
  - Peers may come and go
  - Peers need to find each other
  - Peers need to be willing to help each other
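A quick numeric comparison (with illustrative rates) shows the self-scaling effect: the client-server bound grows linearly in N, while the P2P bound approaches F divided by the per-peer upload rate:

```python
def client_server_time(F, N, u_s, d_min):
    # max{ N*F/u_s, F/d_min }
    return max(N * F / u_s, F / d_min)

def p2p_time(F, N, u_s, d_min, u):
    # max{ F/u_s, F/d_min, N*F / (u_s + sum of peer upload rates) }
    return max(F / u_s, F / d_min, N * F / (u_s + sum(u)))

# Made-up rates: 1 Gbit file, 100 Mbit/s server, 10 Mbit/s slowest
# download, 5 Mbit/s upload per peer.
F, u_s, d_min, u_i = 1e9, 100e6, 10e6, 5e6
for N in (10, 100, 1000):
    cs = client_server_time(F, N, u_s, d_min)
    pp = p2p_time(F, N, u_s, d_min, [u_i] * N)
    print(f"N={N:5d}  client-server {cs:8.1f} s   p2p {pp:6.1f} s")
```

With these numbers the client-server time grows 10x for every 10x in N, while the P2P time levels off below F/u_i = 200 s, because every new downloader also contributes upload capacity.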
Locating the Relevant Peers
- Three main approaches
  - Central directory (Napster)
  - Query flooding (Gnutella)
  - Hierarchical overlay (KaZaA, modern Gnutella)
- Design goals
  - Scalability
  - Simplicity
  - Robustness
  - Plausible deniability
Peer-to-Peer Networks: BitTorrent
- BitTorrent history
  - 2002: Bram Cohen debuted BitTorrent
- Emphasis on efficient fetching, not searching
  - Distribute the same file to many peers
  - Single publisher, many downloaders
- Preventing free-loading
  - Incentives for peers to contribute
BitTorrent: Simultaneous Downloads
- Divide the file into many chunks (e.g., 256 KB)
  - Replicate different chunks on different peers
  - Peers can trade chunks with other peers
  - A peer can (hopefully) assemble the entire file
- Allows simultaneous downloading
  - Retrieving different chunks from different peers
  - ... while uploading chunks to other peers
  - Important for very large files
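The chunking idea can be sketched directly: chunk indices let pieces arrive from different peers, in any order, and still reassemble into the original file.

```python
CHUNK_SIZE = 256 * 1024  # 256 KB, as in the slide

def split_into_chunks(data: bytes, size: int = CHUNK_SIZE):
    """Split a file into fixed-size chunks; the last may be shorter."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def assemble(chunks_by_index: dict) -> bytes:
    """Reassemble once every chunk index has been retrieved,
    possibly from different peers and out of order."""
    return b"".join(chunks_by_index[i] for i in sorted(chunks_by_index))

data = bytes(600 * 1024)           # a 600 KB file
chunks = split_into_chunks(data)   # 3 chunks: 256 KB, 256 KB, 88 KB
out_of_order = {2: chunks[2], 0: chunks[0], 1: chunks[1]}
assert assemble(out_of_order) == data
```

Real BitTorrent additionally verifies each piece against a SHA-1 hash from the .torrent file before accepting it; that check is omitted here.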
BitTorrent: Tracker
- Infrastructure node
  - Keeps track of the peers participating in the torrent
  - A peer registers with the tracker when it arrives
- Tracker selects peers for downloading
  - Returns a random set of peer IP addresses
  - So the new peer knows whom to contact for data
- Can have a "trackerless" system
  - Using distributed hash tables (DHTs)
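The tracker's core behavior — register an arriving peer, hand back a random sample of other peers — fits in a few lines. This is a sketch of the idea, not the real BitTorrent tracker wire protocol; the class and addresses are invented for illustration:

```python
import random

class Tracker:
    """Minimal tracker sketch: peers announce themselves and receive a
    random subset of the other peers known for this torrent."""
    def __init__(self, sample_size=50):
        self.peers = set()
        self.sample_size = sample_size

    def announce(self, addr):
        others = list(self.peers - {addr})
        self.peers.add(addr)   # register the arriving peer
        return random.sample(others, min(self.sample_size, len(others)))

tracker = Tracker(sample_size=2)
tracker.announce("10.0.0.1:6881")
tracker.announce("10.0.0.2:6881")
peer_list = tracker.announce("10.0.0.3:6881")  # whom to contact for data
```

Returning a random subset rather than the full list keeps responses small and spreads new peers evenly across the swarm.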
Definition: Peer
- Leecher
  - A peer that is both client and server
  - In the context of content delivery: has a partial copy of the content
- Seed
  - A peer that is only a server
  - In the context of content delivery: has a full copy of the content
BitTorrent: Overall Architecture
[Figure sequence: a web server hosts a web page linking to the .torrent file. A new peer (the downloader) fetches the .torrent, contacts the tracker, and receives a list of peers; it then shakes hands with them and exchanges pieces with peer A (a seed) and peers B and C (leechers).]
P2P Streaming
[Figure: end hosts Stan1 and Stan2 (Stanford), Berk1 and Berk2 (Berkeley), Gatech, and Purdue, connected by a "dumb" network, build an overlay tree among themselves for streaming from the source]
P2P Streaming
- Tree-based
  - Parent-child relationships
  - Push-based
  - Uplink bandwidth not utilized at the leaves
    - Data can be divided and disseminated along multiple trees (e.g., SplitStream)
  - Trees must be repaired and maintained to avoid interruptions
  - Example: End System Multicast (ESM)
P2P Multicast
- Mesh-based
  - Data-driven
  - Pull-based
  - Peers periodically exchange data availability with random partners and retrieve new data
  - Unlike BitTorrent, must consider real-time constraints
  - Example: CoolStreaming
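The pull step can be sketched as follows: after two partners exchange buffer maps (the sets of chunk indices each holds), a peer requests the chunks it lacks, restricted to the window still useful for playback. The window size and chunk numbering below are illustrative:

```python
def chunks_to_pull(my_buffer, partner_buffer, playback_point, window=20):
    """Return chunk indices to request from this partner: chunks the
    partner has and we lack, still at or ahead of the playback point
    and inside the sliding window (the real-time constraint)."""
    wanted = partner_buffer - my_buffer
    return sorted(c for c in wanted
                  if playback_point <= c < playback_point + window)

mine = {10, 11, 12, 15}
partner = {10, 11, 13, 14, 16, 40}
print(chunks_to_pull(mine, partner, playback_point=12))
# -> [13, 14, 16]; chunk 40 lies beyond the window, and chunks behind
# the playback point would be useless even if missing.
```

This is what distinguishes streaming meshes from BitTorrent: a chunk that misses its playback deadline is worthless, so urgency, not rarity alone, drives the request schedule.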
Overlay Performance
- Even a well-designed overlay cannot be as efficient as IP Multicast
- But the performance penalty can be kept low
- Trade off some performance for other benefits

[Figure: overlay forwarding between Stanford, Berkeley, and Gatech over the dumb network causes duplicate packets on shared links (bandwidth wastage) and increased delay]
Case Study: PPLive
- Very popular P2P IPTV application
  - Free for viewers
  - Over 100,000 simultaneous viewers and 400,000 viewers daily
  - Over 200 channels
  - Windows Media Video and Real Video formats
Case Study: PPLive
- Gossip-based protocols
  - Peer management
  - Channel discovery
- Recommended papers are measurement studies of PPLive
Case Study: PPLive
1. Contact the channel server for available channels
2. Retrieve the list of peers watching the selected channel
3. Find active peers on the channel to share video chunks with

Source: "Insights into PPLive: A Measurement Study of a Large-Scale P2P IPTV System" by Hei et al.
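The three-step join above can be sketched end to end. All class and method names here are invented for illustration; PPLive's actual protocol is proprietary and was only inferred by the measurement studies cited above:

```python
import random

class ChannelServer:
    """Stand-in for step 1: knows the available channels."""
    def __init__(self, channels):
        self._channels = set(channels)
    def list_channels(self):
        return sorted(self._channels)

class PeerRegistry:
    """Stand-in for step 2: maps a channel to the peers watching it."""
    def __init__(self, peers_by_channel):
        self._peers = peers_by_channel
    def peers_for(self, channel_id):
        return list(self._peers.get(channel_id, []))

def join_channel(channel_server, registry, channel_id, max_partners=5):
    channels = channel_server.list_channels()       # step 1
    if channel_id not in channels:
        raise ValueError("unknown channel")
    peers = registry.peers_for(channel_id)          # step 2
    # step 3: pick a few partners to probe for video chunks
    return random.sample(peers, min(max_partners, len(peers)))

server = ChannelServer(["cctv1", "news"])
registry = PeerRegistry({"news": ["1.2.3.4", "5.6.7.8", "9.9.9.9"]})
partners = join_channel(server, registry, "news", max_partners=2)
```

From here the gossip protocols on the previous slide take over: partners keep exchanging peer lists and chunk availability as the viewer watches.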